
I have been working on the beginnings of an engine as an educational pursuit, and I've run into an OpenGL concept which I THOUGHT I understood but whose behavior I cannot explain. The issue is with the depth buffer. Also, understand that I have already fixed the issue, and at the end of my post I will explain what fixed the problem; however, I do not understand why my solution fixed it. First I initialize GLUT & GLEW:

//Initialize openGL
glutInit(&argc, argv);
//Set display mode and window attributes
glutInitDisplayMode(GLUT_DOUBLE | GLUT_DEPTH | GLUT_RGB);

//Size and position attributes can be found in constants.h
glutInitWindowSize(WINDOW_WIDTH, WINDOW_HEIGHT);
glutInitWindowPosition(WINDOW_XPOS, WINDOW_YPOS);
//Create window
glutCreateWindow("Gallagher");

// Initialize GLEW
glewExperimental = true;
glewInit();

//Initialize Graphics program
Initialize();
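
(Side note: as written, the glewInit() return value is ignored. A minimal check, sketched here only as an illustration and assuming <cstdio> is available for fprintf, would be:)

//Check whether GLEW initialized successfully instead of discarding the result
GLenum glewStatus = glewInit();
if (glewStatus != GLEW_OK)
    fprintf(stderr, "GLEW initialization failed: %s\n", glewGetErrorString(glewStatus));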

Then I initialize my program (leaving segments out for readability and lack of relevance):

//Remove cursor
glutSetCursor(GLUT_CURSOR_NONE);

//Enable depth buffering
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glDepthFunc(GL_LESS);
glDepthRange(0.0f, 1.0f);

//Set back color
glClearColor(0.0,0.0,0.0,1.0);

//Set scale and dimensions of orthographic viewport
//These values can be found in constants.h
//Program uses a right handed coordinate system.
glOrtho(X_LEFT, X_RIGHT, Y_DOWN, Y_UP, Z_NEAR, Z_FAR);

Anything beyond that point just initializes various engine components, loads the .obj files, and sets up instances of a ModularGameObject class with meshes attached to them; nothing that touches any relevant GLUT/GLEW state. However, before I go on, it may be important to specify the following values:

X_LEFT = -1000;
X_RIGHT = 1000;
Y_DOWN = -1000;
Y_UP = 1000;
Z_NEAR = -0.1;
Z_FAR = -1000;

Which causes my viewport to follow a right handed coordinate system. The final segment of code which seems to be involved in the problem is my vertex shader:

#version 330 core
//Position of vertices in attribute 0
layout(location = 0) in vec4 _vertexPosition;
//Vertex Normals in attribute 1
layout(location = 1) in vec4 _vertexNormal;

//Model transformations
//Uniform location of model transformation matrix
uniform mat4 _modelTransformation;

//Uniform location of camera transformations
//Camera transformation matrix
uniform mat4 _cameraTransformation;
//Camera perspective matrix
uniform mat4 _cameraPerspective;

//Uniform location of inverse screen dimensions
//This is used because GLSL normalizes viewport from -1 to 1
//So any vector representing a position in screen space must be multiplied by this vector before display
uniform vec4 _inverseScreenDimensions;

//Output variables
//Indicates whether a vertex is valid or not, non valid vertices will not be drawn.
flat out int _valid;        // 0 = valid vertex
//Normal to be sent to fragment shader
smooth out vec4 _normal;

void main()
{
    //Initiate transformation pipeline

    //Transform to world space
    vec4 vertexInWorldSpace = vec4(_modelTransformation * _vertexPosition);

    //Transform to camera space
    vec4 vertexInCameraSpace = vec4(_cameraTransformation * vertexInWorldSpace);

    //Project to screen space
    vec4 vertexInScreenSpace = vec4(_cameraPerspective * vertexInCameraSpace);

    //Transform to device coordinates and store
    vec4 vertexInDeviceSpace = vec4(_inverseScreenDimensions * vertexInScreenSpace);
    //Store transformed vertex
    gl_Position = vertexInScreenSpace;

}

This code results in all transformations and normal calculations (not included) being done correctly; however, every face of my models is constantly fighting to be above all the others. The only time I have no issues is when standing inside the first model being drawn; then nothing flickers and I can view the inside of Suzanne's head just as I should be able to.

After weeks of trying anything I could possibly think of I finally worked my way to a solution which involves changing/adding a mere two lines of code. First, I added this line to the end of my main function in my vertex shader:

gl_Position.z = 0.0001 + vertexInScreenSpace.z;

The addition of this line of code caused every bit of z-fighting to go away, except now the depth buffer was completely backwards: vertices further away were reliably drawn on top of vertices in front. This is my first question: why would this line of code cause more reliable behavior?

Now that I had reliable behavior and no more depth-fighting, it was a matter of reversing the draw order, so I changed my call to glDepthRange to the following:

glDepthRange(1.0f, 0.0f);

I was under the assumption that glDepthRange(0.0f, 1.0f) would cause objects closer to my Z_NEAR (-0.1) to store depth values closer to 0 and objects closer to my Z_FAR (-1000) to store values closer to 1. Then, having my depth test set to GL_LESS would make perfect sense; as a matter of fact, this should be the case regardless of what my Z_NEAR and Z_FAR are, because of the way glDepthRange maps the values, if I'm not mistaken.
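
To spell out the mapping I am assuming (this is the standard formula from the GL spec relating NDC z to window-space z via the two glDepthRange arguments n and f, not something in my code):

z_window = n + (f - n) * (z_ndc + 1) / 2

So with glDepthRange(0.0f, 1.0f), an NDC z of -1 (near plane) maps to 0 and an NDC z of +1 (far plane) maps to 1.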

I must be mistaken though, because this change would mean that objects closer to me store a value closer to 1 in the depth buffer and objects further away store a value closer to 0, rendering a backwards draw order. But sure enough, it works like a charm.

If anybody can point me in the direction of why my assumptions are wrong, and what I am failing to factor into my understanding of GLSL and depth buffering, I would greatly appreciate it. I would rather not move on with progress of my engine until I completely understand the functioning of its foundation.

Edit: The contents of my _cameraPerspective matrix are as follows:

AspectX     0           0               0
0           AspectY     0               0
0           0           1               0
0           0           1/focalLength   0

Where AspectX is 16 and AspectY is 9. The focal length defaults to 70, however controls were added to change this during runtime.

As pointed out by derhass, this does not explain how any of the information passed to glOrtho() is taken into account by the shader. Since I am not using the standard pipeline & matrix stack, the viewport dimensions are accounted for through _inverseScreenDimensions. This is a vec4 which contains [1/X_RIGHT, 1/Y_UP, 1/Z_FAR, 1], or, for lack of variable names, [1/1000, 1/1000, -1/1000, 1].

Multiplying the screen-space coordinate vector by this in my vertex shader results in an X value between -1 and 1, a Y value between -1 and 1, a Z value between 0 and 1 (if the object was in front of the camera, it had a negative z coordinate to begin with), and a W of 1.

If I'm not mistaken, this would be the final step to reach "device coordinates", followed by drawing the mesh.
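
For concreteness, the uniform is filled in roughly like this (the variable names here are illustrative, not my exact code):

//Send the reciprocal viewport dimensions to the vertex shader
GLfloat invDims[4] = { 1.0f / X_RIGHT, 1.0f / Y_UP, 1.0f / Z_FAR, 1.0f };
glUniform4fv(glGetUniformLocation(shaderProgram, "_inverseScreenDimensions"), 1, invDims);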

Please keep in mind the original question: I know this is not streamlined, and I know I'm not using GLM or the other libraries most often used for this. My question is not "Hey guys, fix this!" My question is: why was this fixed by the changes I made?

Mr. Nex
  • The code you provided so far is not complete enough. I do have some suspicions, but cannot say anything with reasonable confidence. How do you set up your matrices? Especially, how does that `glOrtho()` call, which modifies the top element of the currently selected matrix stack, end up in `_cameraPerspective`? – derhass May 27 '14 at 19:33
  • What are you doing on a per-frame basis? Do you clear the depth buffer every frame? – Gyan aka Gary Buyn May 27 '14 at 19:47
  • Wait - don't worry about that last comment. @derhass is right, we don't have enough information. Your best bet is to reduce your code to the smallest possible executable example (http://www.sscce.org/). If that example has the same problem, post it in your question. If it doesn't then start adding things back in one by one until the problem appears. Then you have the cause :) – Gyan aka Gary Buyn May 27 '14 at 20:02
  • @derhass The (Mat)rices and (Vec)tors are all held in my own Matrix and Vector classes. Originally, this project started with those as an operator overloading / memory management exercise to familiarize myself with C++. Once the math was working, I created a software renderer (with just a simple flat shader). After that worked correctly, I used the same logic in my software renderer integrated into GLSL. This being said, I'm not using the provided matrix stack. I'm copying the contents of my Mats/Vecs into arrays of GLfloats and creating uniforms sent to my shader. Will add Matrix structures. – Mr. Nex May 27 '14 at 20:17
  • The depth range merely applies to the way your projection's near and far values map to window-space. Ordinarily, zNear becomes **0.0** in window-space and zFar becomes **1.0**. You ***can*** sometimes more evenly distribute depth buffer precision (that is, cancel out the inherent bias toward giving values near the near-clip plane more precision when using a perspective projection) by inverting the depth range, the clear depth value and the depth test direction and using a floating-point depth buffer. The latter three things all apply to window-space Z. That is a lot of effort, and I think you would neither want, nor benefit from, the added effort in this situation. – Andon M. Coleman May 27 '14 at 21:32

2 Answers


Using the following matrix as projection matrix:

AspectX     0           0               0
0           AspectY     0               0
0           0           1               0
0           0         1/focalLength     0

is going to completely destroy the depth value.

When this is applied to a vector (x,y,z,w)^T, you will get z' = z and w' = z/focalLength as the clip space components. After the perspective divide, you will end up with an NDC z component of z'/w', which is just focalLength and completely independent of the eye space z value. So you project everything to the same depth, which totally explains the behavior you have seen.
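
Writing the multiplication out makes this explicit. Applying the matrix above to (x, y, z, w)^T gives:

x' = AspectX * x
y' = AspectY * y
z' = z
w' = z / focalLength

so the perspective divide produces z'/w' = z / (z/focalLength) = focalLength for every vertex, regardless of its eye space z.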

This page explains how projection matrices are typically built and in particular offers many details of how the z value is mapped.

With the line gl_Position.z = 0.0001 + vertexInScreenSpace.z; you actually get some kind of "working" depth, since then the NDC Z coordinate will be (0.0001 + z')/w', which is focalLength * (1 + 0.0001/z) and finally at least a function of the eye space z, as it should be. One could calculate what near and far values that mapping would actually produce, but carrying out that calculation is quite pointless for this answer. You should make yourself familiar with the math for computer graphics projections, especially linear algebra and projective spaces.

The reason why the depth test is inverted is that your projection matrix effectively negates the z coordinate. Usually, the view matrix is constructed in such a way that the viewing direction is -z, and the projection matrix has (0 0 -1 0) as its last row, while you have (0 0 1/focalLength 0), which in effect multiplies z by -1.
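
For comparison, a standard perspective matrix (the one gluPerspective builds, with f = cot(fovy/2); shown here only as a reference point, not as your required fix) looks like:

f/aspect    0       0                           0
0           f       0                           0
0           0       (zFar+zNear)/(zNear-zFar)   2*zFar*zNear/(zNear-zFar)
0           0       -1                          0

Note the -1 in the last row and the constant term in the third row; together they make the post-divide depth a (nonlinear) function of eye space z instead of a constant.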

derhass
  • Makes perfect sense. I forgot about the perspective transformation vs. the perspective divide, and what effect that divide would have on the Z coordinate. Furthermore, by changing .0001 to _inverseScreenDimensions.z the depth testing became even more accurate, just as your example would suggest, forcing a "working" depth onto the z coordinate. I really cannot thank you enough for explaining this in a way I understood. Next semester I will be able to take my first C++, graphics programming introduction, and linear algebra classes for a more formal education on these topics. – Mr. Nex May 27 '14 at 22:10

Near and far are the distances to the near and far planes along the direction you are looking, and therefore should usually both be positive numbers. Negative numbers would put the clipping planes behind the view origin, which is probably not what you want.
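
For example, keeping the other values from the question, the call would look something like this (the exact distances here are only an illustration):

glOrtho(X_LEFT, X_RIGHT, Y_DOWN, Y_UP, 0.1, 1000.0);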

Richard Critten
  • You're right. I missed the fact that near and far are the distances, not displacements. This being said, making the adjustment does not affect the behavior at all. The problem still exists and is still solved by the same two changes. – Mr. Nex May 27 '14 at 21:21