
To implement this idea, I wrote the following two versions of my vertex and fragment shaders:

// Vertex:
precision highp int;
precision highp float;

uniform   vec4 r_info;
attribute vec2 s_coords;
attribute vec2 r_coords;
varying   vec2 t_coords;

void main (void) {
    // Flatten the 2D input coordinate into a linear index, then re-wrap
    // that index into a row/column pair within a region of width w.
    int w = int(r_info.w);
    int x = int(r_coords.x) + int(r_coords.y) * int(r_info.y);
    int y = x / w;              // row within the region
        x = x - y * w;          // column, i.e. x mod w
        y = y + int(r_info.x);  // shift down to the region's first row
    t_coords    = vec2(x, y) * r_info.z; // scale texel coords by r_info.z
    gl_Position = vec4(s_coords, 0.0, 1.0);
}

// Fragment:
precision highp float;

uniform sampler2D sampler;
uniform vec4      color;
varying vec2      t_coords;

void main (void) {
    // Keep the uniform color, modulated by the texture's alpha channel.
    gl_FragColor = vec4(color.rgb, color.a * texture2D(sampler, t_coords).a);
}

vs.

// Vertex:
precision highp float;

attribute vec2 s_coords;
attribute vec2 r_coords;
varying   vec2 t_coords;

void main (void) {
    t_coords    = r_coords; // pass the raw coordinates through unchanged
    gl_Position = vec4(s_coords, 0.0, 1.0);
}

// Fragment:
precision highp float;
precision highp int;

uniform vec4      r_info;
uniform sampler2D sampler;
uniform vec4      color;
varying vec2      t_coords;

void main (void) {
    // Same index math as the first vertex shader, but executed
    // per-fragment on the interpolated raw coordinates.
    int w = int(r_info.w);
    int x = int(t_coords.x) + int(t_coords.y) * int(r_info.y);
    int y = x / w;              // row within the region
        x = x - y * w;          // column, i.e. x mod w
        y = y + int(r_info.x);  // shift down to the region's first row

    gl_FragColor = vec4(color.rgb, color.a * texture2D(sampler, vec2(x, y) * r_info.z).a);
}

The only difference between them (I hope) is where the texture coordinates are transformed: in the first version the math happens in the vertex shader, in the second it happens in the fragment shader.

Now, the official OpenGL ES SL 1.0 Specification states that "[t]he vertex language must provide an integer precision of at least 16 bits, plus a sign bit" and "[t]he fragment language must provide an integer precision of at least 10 bits, plus a sign bit" (section 4.5.1). If I understand this correctly, it means that even on a minimal implementation, the integer precision available in the vertex shader should be better than that in the fragment shader. For some reason, though, the second version of the code works correctly, while the first version produces a bunch of rounding errors. Am I missing something?
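To make my expectation concrete (with made-up numbers, not values from my actual code): those minimums guarantee integer magnitudes up to 2^16 = 65536 in the vertex shader, but only up to 2^10 = 1024 in the fragment shader. So an intermediate index like x + y * width = 100 + 20 * 256 = 5220 fits comfortably within the vertex minimum while already exceeding the fragment minimum, which is why I expected the vertex-shader version to be the safer of the two.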


1 Answer


It turns out I fundamentally misunderstood how things work... Maybe I still do, but let me answer my own question based on my current understanding:

I thought that for every pixel that is rendered, first the Vertex Shader and then the Fragment Shader is executed. But, if I now understand correctly, the Vertex Shader is only called once for each vertex of the triangle primitives (which kind of makes sense given its name, too...).

So, the first version of my code above only calculates the correct texture coordinate at the actual corner points (vertices) of the triangles I'm drawing. For all other pixels in the triangle, the texture coordinate is simply a linear interpolation between those corner values. Since my formula isn't linear (it involves rounding and a modulo operation), this yields the wrong texture coordinates for each individual pixel.
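A made-up example (numbers chosen for illustration, not taken from my actual data): say w = 4 and the two endpoints of an edge carry the linear indices 3 and 5. My formula maps those to the texels (3, 0) and (1, 1). Halfway along the edge, interpolation averages the two outputs and yields (2.0, 0.5), but applying the formula to the midpoint index 4 would give (0, 1), a completely different texel.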

The second version, though, applies the non-linear transformation to the texture coordinates at each pixel location, giving the correct texture coordinates everywhere.

So, the generalized learning (and the reason I didn't just delete the question):

All non-linear texture-coordinate transformations must be done in the fragment shader.
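As a minimal sketch of the pattern (with generic, hypothetical names; not the exact shaders from my question): keep the vertex shader a pass-through for the raw values, and put anything involving floor, mod, or integer division in the fragment shader. Purely linear steps (a uniform scale or offset) are still safe in the vertex shader, because applying them before or after interpolation gives the same result.

// Vertex: linear work only (safe to interpolate)
precision highp float;

attribute vec2 s_coords;
attribute vec2 raw_coords;
varying   vec2 v_raw;

void main (void) {
    v_raw       = raw_coords;              // pass through unchanged
    gl_Position = vec4(s_coords, 0.0, 1.0);
}

// Fragment: the non-linear wrap happens per-pixel
precision highp float;

uniform sampler2D sampler;
uniform vec2      u_wrap;   // hypothetical: (region width, texel scale)
varying vec2      v_raw;

void main (void) {
    float i = v_raw.x;                       // interpolated linear index
    vec2  t = vec2(mod(i, u_wrap.x),         // column: non-linear (mod)
                   floor(i / u_wrap.x));     // row:    non-linear (floor)
    gl_FragColor = texture2D(sampler, t * u_wrap.y);
}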
