
I'm rendering a texture to screen with this code:

        if (beganDraw)
        {
            beganDraw = false;
            GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);
            if (CameraMaterial != null)
            {
                GL.BindBuffer(BufferTarget.ArrayBuffer, screenMesh.VBO);
                GL.BindVertexArray(VAO);
                GL.BindBuffer(BufferTarget.ElementArrayBuffer, screenMesh.VEO);
                CameraMaterial.Use();
                screenMesh.ApplyDrawHints(CameraMaterial.Shader);
                GL.DrawElements(PrimitiveType.Triangles, 6, DrawElementsType.UnsignedInt, 0);
                GL.BindBuffer(BufferTarget.ElementArrayBuffer, 0);
                GL.BindVertexArray(0);
                GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
                GL.UseProgram(0);
            }
        }

As you can see, there is no transformation matrix involved.

I create the mesh to render the surface like this:

        screenMesh = new Mesh();
        screenMesh.SetVertices(new float[] {
            -1,-1,
            1,-1,
            1,1,
            -1,1
        });
        screenMesh.SetIndices(new uint[] {
            2,3,0,
            0,1,2
        });

And my question is: why do I have to go from -1 to 1 in order to fill the screen? Shouldn't it default to 0 to 1? Also, how can I make it go from 0 to 1? Or is that even advised?

This is the shader:

[Shader vertex]
#version 150 core

in vec2 pos;
out vec2 texCoord;
uniform float _time;
uniform sampler2D tex;

void main() {
    gl_Position = vec4(pos, 0, 1);
    texCoord = pos/2+vec2(0.5,0.5);
}


[Shader fragment]
#version 150 core
#define PI 3.1415926535897932384626433832795

out vec4 outColor;
uniform float _time;
uniform sampler2D tex;
in vec2 texCoord;
//
void main() {
    outColor = texture(tex, texCoord); // 'texture' replaces the deprecated texture2D in GLSL 1.50 core
}
pixartist

1 Answer

The OpenTK GL.* calls are just a thin layer on top of OpenGL, so your question is really about OpenGL coordinate systems.

After you have applied all the transformations in your vertex shader and assigned the resulting vertex coordinates to gl_Position, those coordinates are in what the OpenGL documentation calls clip coordinates. They are then divided by the w component of the 4-component coordinate vector to obtain normalized device coordinates (commonly abbreviated NDC).

NDC are in the range [-1.0, 1.0] for each coordinate direction. I don't know what the exact reasoning was, but this is just the way it was defined when OpenGL was originally designed. I always thought it was kind of natural to have the origin of the coordinate system in the center of the view for 3D rendering, so it seems at least as reasonable as anything else that could have been used.

Since you're not applying any transformations in your vertex shader, and the w component of your coordinates is 1.0, your input positions need to be in NDC, which means a range of [-1.0, 1.0].
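
As a rough illustration, here is a conceptual sketch of the step OpenGL performs itself after your vertex shader runs (not code you would add; the function and parameter names are only illustrative):

// Conceptual sketch only: OpenGL does this divide internally after the
// vertex shader; clipPos stands for the value written to gl_Position.
vec3 ndcFromClip(vec4 clipPos)
{
    return clipPos.xyz / clipPos.w; // normalized device coordinates, each in [-1.0, 1.0]
}

Because your w component is 1.0, that divide changes nothing, so your input positions are effectively interpreted as NDC directly.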

You can easily use a different range if you apply the corresponding mapping/transformation in your vertex shader. If you would like to use [0.0, 1.0] for your x and y ranges, simply add that mapping to your vertex shader by changing the value you assign to gl_Position:

gl_Position = vec4(2.0 * pos - 1.0, 0.0, 1.0);

This linearly maps 0.0 to -1.0, and 1.0 to 1.0.
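
Putting that single change into the vertex shader from the question, a minimal sketch of the [0.0, 1.0] variant could look like this (note that the texture coordinate then simplifies to pos, since the positions already span [0, 1]):

[Shader vertex]
#version 150 core

in vec2 pos;          // now expected in the range [0, 1]
out vec2 texCoord;
uniform float _time;
uniform sampler2D tex;

void main() {
    gl_Position = vec4(2.0 * pos - 1.0, 0.0, 1.0); // map [0, 1] to NDC [-1, 1]
    texCoord = pos;                                // [0, 1] positions double as texture coordinates
}

The vertex data passed to SetVertices would then change accordingly, e.g. 0,0, 1,0, 1,1, 0,1 instead of the -1/1 values above.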

Reto Koradi