4

I'm having a really weird issue with depth testing. I'm rendering a simple mesh in an OpenGL 3.3 core profile context on Windows, with depth testing enabled and glDepthFunc set to GL_LESS. On my machine (a laptop with an NVIDIA GeForce GTX 660M), the depth test works as expected and the result looks like this:

[screenshot: mesh rendered correctly with the depth test working]

Now, if I run the program on a different PC, a tower with a Radeon R9 280, it looks more like this:

[screenshot: the same mesh on the Radeon, rendered as if the depth test were disabled]

Strangely enough, when I call glEnable(GL_DEPTH_TEST) every frame before drawing, the result is correct on both machines. Since it works in that case, I figure the depth buffer is correctly created on both machines; it just seems that the depth test is somehow being disabled before rendering when I enable it only once at initialization. Here's the minimal code that could somehow be part of the problem:

Code called at initialization, after a context is created and made current:

glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);

Code called every frame before the buffer swap:

glClearColor(0.4f, 0.6f, 0.8f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// mShaderProgram->getID() simply returns the handle of a simple shader program
glUseProgram(mShaderProgram->getID());  

glm::vec3 myColor = glm::vec3(0.7f, 0.5f, 0.4f);
GLuint colorLocation = glGetUniformLocation(mShaderProgram->getID(), "uColor");
glUniform3fv(colorLocation, 1, glm::value_ptr(myColor));

glm::mat4 modelMatrix = glm::mat4(1.0f);
glm::mat4 viewMatrix = glm::lookAt(glm::vec3(0.0f, 3.0f, 5.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
glm::mat4 projectionMatrix = glm::perspectiveFov(60.0f, (float)mWindow->getProperties().width, (float)mWindow->getProperties().height, 1.0f, 100.0f);
glm::mat4 inverseTransposeMVMatrix = glm::inverseTranspose(viewMatrix*modelMatrix);

GLuint mMatrixLocation = glGetUniformLocation(mShaderProgram->getID(), "uModelMatrix");
GLuint vMatrixLocation = glGetUniformLocation(mShaderProgram->getID(), "uViewMatrix");
GLuint pMatrixLocation = glGetUniformLocation(mShaderProgram->getID(), "uProjectionMatrix");
GLuint itmvMatrixLocation = glGetUniformLocation(mShaderProgram->getID(), "uInverseTransposeMVMatrix");

glUniformMatrix4fv(mMatrixLocation, 1, GL_FALSE, glm::value_ptr(modelMatrix));
glUniformMatrix4fv(vMatrixLocation, 1, GL_FALSE, glm::value_ptr(viewMatrix));
glUniformMatrix4fv(pMatrixLocation, 1, GL_FALSE, glm::value_ptr(projectionMatrix));
glUniformMatrix4fv(itmvMatrixLocation, 1, GL_FALSE, glm::value_ptr(inverseTransposeMVMatrix));

// Similar to the shader program, mMesh.gl_vaoID is simply the handle of a vertex array object
glBindVertexArray(mMesh.gl_vaoID);

glDrawArrays(GL_TRIANGLES, 0, mMesh.faces.size()*3);

With the above code, I get the wrong output on the Radeon. Note: I'm using GLFW3 for context creation and GLEW for the function pointers (and obviously GLM for the math). The vertex array object contains three attribute array buffers, for positions, UV coordinates and normals. Each of these should be correctly configured and sent to the shaders, as everything works fine when the depth test is enabled every frame.

I should also mention that the Radeon machine runs Windows 8 while the nVidia machine runs Windows 7.
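
For reference, the workaround I mentioned is simply adding the enable call at the top of the per-frame code:

glEnable(GL_DEPTH_TEST);    // re-enabling this every frame gives correct output on both machines
glClearColor(0.4f, 0.6f, 0.8f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... the rest of the per-frame code stays unchanged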

Edit: By request, here's the code used to load the mesh and create the attribute data. I do not create any element buffer objects, as I am not using indexed draw calls.

std::vector<glm::vec3> positionData;
std::vector<glm::vec2> uvData;
std::vector<glm::vec3> normalData;
std::vector<meshFaceIndex> faces;

std::ifstream fileStream(path);
if (!fileStream.is_open()){
    std::cerr << "ERROR: Could not open file '" << path << "'!\n";
    return;
}
std::string lineBuffer;
while (std::getline(fileStream, lineBuffer)){
    std::stringstream lineStream(lineBuffer);
    std::string typeString;
    lineStream >> typeString;   // Get line token
    if (typeString == TOKEN_VPOS){  // Position
        glm::vec3 pos;
        lineStream >> pos.x >> pos.y >> pos.z;
        positionData.push_back(pos);
    }
    else if (typeString == TOKEN_VUV){   // UV coord
        glm::vec2 UV;
        lineStream >> UV.x >> UV.y;
        uvData.push_back(UV);
    }
    else if (typeString == TOKEN_VNORMAL){   // Normal
        glm::vec3 normal;
        lineStream >> normal.x >> normal.y >> normal.z;
        normalData.push_back(normal);
    }
    else if (typeString == TOKEN_FACE){  // Face
        meshFaceIndex faceIndex;
        char interrupt;
        for (int i = 0; i < 3; ++i){
            lineStream >> faceIndex.positionIndex[i] >> interrupt
                >> faceIndex.uvIndex[i] >> interrupt
                >> faceIndex.normalIndex[i];
        }
        faces.push_back(faceIndex);
    }
}
fileStream.close();     

std::vector<glm::vec3> packedPositions;
std::vector<glm::vec2> packedUVs;
std::vector<glm::vec3> packedNormals;

for (auto f : faces){
    Face face;  // Expand this indexed face into packed per-vertex data
    for (auto i = 0; i < 3; ++i){
        if (!positionData.empty()){
            face.vertices[i].position = positionData[f.positionIndex[i] - 1];
            packedPositions.push_back(face.vertices[i].position);
        }
        else
            face.vertices[i].position = glm::vec3(0.0f);
        if (!uvData.empty()){
            face.vertices[i].uv = uvData[f.uvIndex[i] - 1];
            packedUVs.push_back(face.vertices[i].uv);
        }
        else
            face.vertices[i].uv = glm::vec2(0.0f);
        if (!normalData.empty()){
            face.vertices[i].normal = normalData[f.normalIndex[i] - 1];
            packedNormals.push_back(face.vertices[i].normal);
        }
        else
            face.vertices[i].normal = glm::vec3(0.0f);
    }
    myMesh.faces.push_back(face);
}

glGenVertexArrays(1, &(myMesh.gl_vaoID));
glBindVertexArray(myMesh.gl_vaoID);

GLuint positionBuffer;  // positions
glGenBuffers(1, &positionBuffer);
glBindBuffer(GL_ARRAY_BUFFER, positionBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::vec3)*packedPositions.size(), &packedPositions[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

GLuint uvBuffer;    // uvs
glGenBuffers(1, &uvBuffer);
glBindBuffer(GL_ARRAY_BUFFER, uvBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::vec2)*packedUVs.size(), &packedUVs[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);

GLuint normalBuffer;    // normals
glGenBuffers(1, &normalBuffer);
glBindBuffer(GL_ARRAY_BUFFER, normalBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(glm::vec3)*packedNormals.size(), &packedNormals[0], GL_STATIC_DRAW);
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);


glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);

The .obj loading routine is mostly adapted from this one: http://www.limegarden.net/2010/03/02/wavefront-obj-mesh-loader/

JWki
  • The issue has been solved, thanks to datenwolf. For further reference, explicitly setting the stride parameter in the glAttribPointer calls solved the problem (see the sketch below these comments). – JWki Jul 28 '14 at 13:19
  • I refer to the glVertexAttribPointer calls of course. – JWki Jul 28 '14 at 13:25
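
For easier reference, here is a minimal sketch of that fix (assuming, as in the loading code above, that the glm vectors are tightly packed): the buffers and data stay exactly the same, only the stride argument of each glVertexAttribPointer call changes from 0 to the explicit element size.

// Explicit strides instead of 0 (the fix described in the comments above)
glBindBuffer(GL_ARRAY_BUFFER, positionBuffer);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(glm::vec3), (void*)0);

glBindBuffer(GL_ARRAY_BUFFER, uvBuffer);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(glm::vec2), (void*)0);

glBindBuffer(GL_ARRAY_BUFFER, normalBuffer);
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, sizeof(glm::vec3), (void*)0);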

2 Answers

5

This doesn't look like a depth testing issue to me, but more like misalignment in the vertex / index array data. Please show us the code in which you load the vertex buffer objects and the element buffer objects.

datenwolf
  • That was my instinct, but I also thought that if calling `glEnable(GL_DEPTH_TEST)` makes a difference, possibly there's an array overrun or an early dealloc or something like that so the reliance is on the classic undefined behaviour? That's the only way I could think of that an empirical observation of changing a logically unrelated factor could change incorrect output. I'm clutching at straws, possibly. – Tommy Jul 28 '14 at 08:04
  • I added the code used to create the vertex buffer objects to my post. I also suspected some issue with the vertex data, but as it looks perfectly alright on my nVidia and also works on the Radeon with the mentioned calling of glEnable, I figured that it's unlikely. – JWki Jul 28 '14 at 08:11
  • @JWki: The NVidia drivers are known to be very lenient on applications not following required OpenGL practice. As such, they're not a good test subject to see if an OpenGL program works properly. What the AMD drivers lack in stability and sometimes performance, they make up for with OpenGL compliance; they follow the OpenGL spec to the letter, and then some. So if a program is broken on AMD, it likely has some issues. If a program works as expected with AMD's OpenGL implementation, you know it's going to work everywhere. – datenwolf Jul 28 '14 at 10:43
  • @JWki: Your code actually looks very good. That's very odd, because your "broken" screenshot clearly looks like something is off in the vertex ordering (there are faces connecting vertices which do not share a face in the properly rendered frame). I don't think re-enabling the depth test every frame "fixes" the problem; it just makes it harder to notice. My suggestion would be to supply explicit stride lengths. It's a shot in the dark, but I can't think of anything else right now. – datenwolf Jul 28 '14 at 10:53
  • @datenwolf, well thanks for calling my code good-looking first of all. I knew about the issues with the NVidia drivers, that's why I'm trying to test everything I write on Radeon cards asap. Unfortunately I'm bound to my NVidia for pretty much the rest of the year. Thank you very much for the suggestion with the stride lengths, I'll try that out as soon as I have access to the Radeon machine later today (I hope). Do you have any suggestions for how to choose the lengths? I haven't used the stride parameter before and am rather unsure how it's supposed to be set. – JWki Jul 28 '14 at 11:36
  • @JWki: The stride length is the distance in bytes between the vectors in the array. A foolproof way to get the stride is to perform the following: `((char*)&packed…[1]) - ((char*)&packed…[0])`. The result of this operation is of type `ptrdiff_t`, which trivially casts to `GLsizei`. – datenwolf Jul 28 '14 at 11:50
  • Well thank you very much, I'll try that out and see if it helps. – JWki Jul 28 '14 at 12:54
  • @JWki: This leaves two possible scenarios: the AMD driver has a bug, or the GLM vectors don't get tightly packed and for some reason the NVidia driver ends up using the same alignment. In either case something is seriously broken. As a quick check, you can verify that `((char*)&packed…[1]) - ((char*)&packed…[0]) == sizeof(GLfloat)*n`, where n is the number of components per vector (see the sketch after these comments). There's a neat trick to have this asserted at build time. – datenwolf Jul 28 '14 at 13:23
  • @JWki: You can use this macro for build-time assertions (it must be used within a function scope): `#define build_assert(cond) ((void)sizeof(char[1 - 2*!(cond)]))`; if the assertion fails, the compiler generates an "array index negative" error. – datenwolf Jul 28 '14 at 13:29
  • Let us [continue this discussion in chat](http://chat.stackoverflow.com/rooms/58169/discussion-between-jwki-and-datenwolf). – JWki Jul 28 '14 at 13:30
  • @datenwolf: Include the solution from this discussion in this answer, for easier reference. – Peter O. Jul 01 '15 at 05:14
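
Putting the suggestions from these comments together, a minimal sketch of the build-time packing check and the runtime stride computation (using the packedPositions vector from the question; it assumes the vector holds at least two elements):

// Build-time check that glm::vec3 really is three tightly packed floats
// (the macro must be used inside a function scope).
#define build_assert(cond) ((void)sizeof(char[1 - 2*!(cond)]))
build_assert(sizeof(glm::vec3) == sizeof(GLfloat) * 3);

// Runtime stride: byte distance between two consecutive elements of the packed array.
GLsizei positionStride = (GLsizei)((char*)&packedPositions[1] - (char*)&packedPositions[0]);

glBindBuffer(GL_ARRAY_BUFFER, positionBuffer);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, positionStride, (void*)0);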
0

It is because of the function ChoosePixelFormat.

In my case, ChoosePixelFormat returned pixel format ID 8, which provides a 16-bit depth buffer instead of the required 24 bits.

One simple fix was to set the ID manually to 11 instead of 8, which gave the application a suitable pixel format with a 24-bit depth buffer.
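
Rather than hardcoding a pixel format ID (the IDs are driver-specific), a safer sketch is to request 24 depth bits explicitly and then verify what was actually chosen; this assumes a plain WGL setup with a valid device context hdc. With GLFW, as used in the question, the equivalent is calling glfwWindowHint(GLFW_DEPTH_BITS, 24) before creating the window (24 is already the default, but the hint makes the requirement explicit).

#include <windows.h>

PIXELFORMATDESCRIPTOR pfd = {};
pfd.nSize = sizeof(pfd);
pfd.nVersion = 1;
pfd.dwFlags = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cDepthBits = 24;    // explicitly request a 24-bit depth buffer
pfd.cStencilBits = 8;

int formatID = ChoosePixelFormat(hdc, &pfd);

// ChoosePixelFormat may still return a format with fewer depth bits than requested,
// so check the format that was actually picked instead of trusting a hardcoded ID.
PIXELFORMATDESCRIPTOR chosen = {};
DescribePixelFormat(hdc, formatID, sizeof(chosen), &chosen);
if (chosen.cDepthBits < 24) {
    // report an error or search the available formats for one with enough depth bits
}
SetPixelFormat(hdc, formatID, &chosen);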

  • Furthermore, this error seems to occur only on AMD graphics cards. On NVIDIA and Intel UHD, for example, the driver chooses the correct pixel format ID. – Sveti007 May 05 '20 at 10:59