I am a beginner in OpenGL, trying to run some tests. There are plenty of fragment shaders available in the GLSL Sandbox Gallery, and I would like to try them in GLES, reusing the code. But most of the shaders do not seem to work.
Among the shaders I tried to run, the only one that worked in GLES is this one, for some reason, and even then I had to eliminate the shader's time dependence to accomplish that.
It seems that some variable names differ between WebGL and GLES. If that's the case, which ones, exactly? If not, what exactly is the procedure for translating from the former to the latter?
Here is one example: a simple fragment shader that produces only a black screen. In my particular case, I am running it alongside this minimal vertex shader:
precision mediump float;
uniform mat4 uMVPMatrix;
attribute vec4 aPosition;
attribute vec2 aTextureCoord;
varying vec2 vTextureCoord;

void main() {
    vTextureCoord = aTextureCoord;
    gl_Position = uMVPMatrix * aPosition;
}
This issue isn't a difference between WebGL and OpenGL ES, it's a difference between the shader programming environment provided by sites like the GLSL Sandbox Gallery and Shadertoy and the corresponding environment (or lack thereof) on the OpenGL ES platform of your choice.
The various WebGL fragment shader sandboxes you see on the web provide inputs to your shader code via uniform variables. When you develop your own OpenGL ES app on another platform, you'll need to provide those inputs yourself.
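To illustrate, a typical GLSL Sandbox fragment shader declares inputs like the following at the top (this snippet is an illustrative sketch, not one of the gallery's shaders; the exact set of uniforms varies):

```glsl
precision mediump float;

// Uniforms the sandbox's JavaScript host fills in every frame:
uniform float time;        // seconds since the page loaded
uniform vec2  mouse;       // cursor position, normalized to [0, 1]
uniform vec2  resolution;  // canvas size in pixels

void main() {
    vec2 p = gl_FragCoord.xy / resolution;  // pixel coords -> [0, 1]
    gl_FragColor = vec4(p, 0.5 + 0.5 * sin(time), 1.0);
}
```

If your app never sets one of these uniforms, it reads as zero: a shader that divides by `resolution` or animates with `time` then renders black or static, which matches the symptoms you describe.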
The sandbox site you linked to provides the time, mouse, resolution and backbuffer uniforms in its JavaScript code by calculating the appropriate values itself and passing them to gl.uniform1f or similar functions (after first compiling the shaders and then looking up the numeric location for each uniform name in the compiled program). On another platform, you'll need to do the same using the OpenGL ES bindings that platform provides.