
Our application crashes on old Nvidia drivers.

Debug code is here

Looking around, I found that this kind of crash is often attributed to an incorrect vertex attribute setup.

This is how I set up my VBO and VAO:

        /**
         * Init Vbo/vao.
         */
        float[] vertexData = new float[]{
            0, 0,
            1, 0,
            1, 1};

        debugVbo = new int[1];
        gl3.glGenBuffers(1, debugVbo, 0);

        gl3.glBindBuffer(GL3.GL_ARRAY_BUFFER, debugVbo[0]);
        {
            FloatBuffer buffer = GLBuffers.newDirectFloatBuffer(vertexData);

            gl3.glBufferData(GL3.GL_ARRAY_BUFFER, vertexData.length * Float.BYTES, buffer, GL3.GL_STATIC_DRAW);
        }
        gl3.glBindBuffer(GL3.GL_ARRAY_BUFFER, 0);

        debugVao = new int[1];
        gl3.glGenVertexArrays(1, debugVao, 0);
        gl3.glBindVertexArray(debugVao[0]);
        {
            gl3.glBindBuffer(GL3.GL_ARRAY_BUFFER, debugVbo[0]);
            {
                gl3.glEnableVertexAttribArray(0);
                {
                    gl3.glVertexAttribPointer(0, 2, GL3.GL_FLOAT, false, 0, 0);
                }
            }
            gl3.glBindBuffer(GL3.GL_ARRAY_BUFFER, 0);
        }
        gl3.glBindVertexArray(0);
    }

And this is how I render:

public static void render(GL3 gl3) {

    gl3.glClear(GL3.GL_DEPTH_BUFFER_BIT | GL3.GL_COLOR_BUFFER_BIT);

    gl3.glUseProgram(textureProgram);
    {
        gl3.glBindVertexArray(debugVao[0]);
        {
            gl3.glActiveTexture(GL3.GL_TEXTURE0);
            gl3.glBindTexture(GL3.GL_TEXTURE_2D, texture[0]);
            gl3.glBindSampler(0, EC_Samplers.pool[EC_Samplers.Id.clampToEdge_nearest_0maxAn.ordinal()]);
            {
                gl3.glDrawArrays(GL3.GL_TRIANGLES, 0, 3);
            }
            gl3.glBindTexture(GL3.GL_TEXTURE_2D, 0);
            gl3.glBindSampler(0, 0);
        }
        gl3.glBindVertexArray(0);
    }
    gl3.glUseProgram(0);
}

This is my VS:

#version 330

layout (location = 0) in vec2 position;

uniform mat4 modelToCameraMatrix;
uniform mat4 cameraToClipMatrix;

out vec2 fragmentUV;

void main()
{
    gl_Position = cameraToClipMatrix * modelToCameraMatrix * vec4(position, 0, 1);

    fragmentUV = position;
}

And my FS:

#version 330

in vec2 fragmentUV;

out vec4 outputColor;

uniform sampler2D textureNode;

void main()
{
    outputColor = texture(textureNode, fragmentUV);
}

I have read and re-read the same code for two days now and I can't find anything wrong. I also tried defining a stride of 2*4 = 8, but got the same outcome.
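
For reference, the explicit-stride call I tried looked roughly like this (a sketch; for a tightly packed buffer it should be equivalent to passing 0):

    // 2 floats per vertex * 4 bytes per float = 8 bytes of stride;
    // equivalent to 0 for tightly packed data.
    gl3.glVertexAttribPointer(0, 2, GL3.GL_FLOAT, false, 2 * Float.BYTES, 0);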

  • So far your code looks correct, but I suggest you spray it with glGetError calls (one after each regular OpenGL call) to see where exactly it fails. Something weird is going on. Also, for a quick test I suggest you do a full vertex attrib pointer setup in the *drawing* code, instead of relying on the VAO; if that works, you know something is going on with the VAO. – datenwolf Oct 14 '15 at 09:46
  • I already tried a full vertex attrib pointer setup. I'll try the glGetError spraying, thanks – elect Oct 14 '15 at 09:48
  • Who -1ed, do you mind explaining? – elect Oct 14 '15 at 13:34
  • Hit and run moron, typical.. – elect Oct 16 '15 at 09:30
  • elect: wasn't me (the -1 I mean). Nice to see you figured out the problem. For what it's worth, setting a texture parameter to an invalid value should not lead to a driver crash. This is clearly a driver bug and should be reported to Nvidia. – datenwolf Oct 16 '15 at 12:15
  • Yeah, don't worry, I never thought it was you, datenwolf :). Anyway, as I said, the driver is quite old, somewhere between the 180 and 250 release versions; I guess even if I report it they'd never really fix it... However, I also created a thread on the Nvidia dev forum, so if anyone ever has this problem again hopefully they will read about it here or there – elect Oct 16 '15 at 12:26
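
Following the glGetError suggestion from the comments, a minimal error-check helper sprinkled after each GL call could look something like this (a sketch, not the actual debug code; the checkError name is made up):

    private static void checkError(GL3 gl3, String where) {
        // Report the first error recorded since the last glGetError call,
        // labelled with the call site so the failing GL call is easy to locate.
        int error = gl3.glGetError();
        if (error != GL3.GL_NO_ERROR) {
            System.err.println("GL error 0x" + Integer.toHexString(error) + " after " + where);
        }
    }

    // Usage, after any GL call:
    // checkError(gl3, "glVertexAttribPointer");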

1 Answer


I can't believe it.

The problem lay somewhere else, in the code where I was initializing my samplers:

public static void init(GL3 gl3) {

    pool = new int[Id.size.ordinal()];

    gl3.glGenSamplers(Id.size.ordinal(), pool, 0);

    gl3.glSamplerParameteri(pool[Id.clampToEdge_nearest_0maxAn.ordinal()],
            GL3.GL_TEXTURE_WRAP_S, GL3.GL_CLAMP_TO_EDGE);
    gl3.glSamplerParameteri(pool[Id.clampToEdge_nearest_0maxAn.ordinal()],
            GL3.GL_TEXTURE_WRAP_T, GL3.GL_CLAMP_TO_EDGE);
    gl3.glSamplerParameteri(pool[Id.clampToEdge_nearest_0maxAn.ordinal()],
            GL3.GL_TEXTURE_MIN_FILTER, GL3.GL_NEAREST);
    gl3.glSamplerParameteri(pool[Id.clampToEdge_nearest_0maxAn.ordinal()],
            GL3.GL_TEXTURE_MAG_FILTER, GL3.GL_NEAREST);
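    // The problematic call: max anisotropy set to 0 with the integer variant (see below).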
    gl3.glSamplerParameteri(pool[Id.clampToEdge_nearest_0maxAn.ordinal()],
            GL3.GL_TEXTURE_MAX_ANISOTROPY_EXT, 0);
}

The crash was caused by setting the max anisotropy to 0; setting it to 1 resolved the crash.

PS: it should also be glSamplerParameterf instead of glSamplerParameteri, since the max anisotropy value is a float.
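
With both fixes applied, the anisotropy line would look roughly like this (a sketch, reusing the names from the code above):

    // Use the float variant, since GL_TEXTURE_MAX_ANISOTROPY_EXT is a float
    // parameter, and pass 1.0f, the minimum valid value, instead of 0.
    gl3.glSamplerParameterf(pool[Id.clampToEdge_nearest_0maxAn.ordinal()],
            GL3.GL_TEXTURE_MAX_ANISOTROPY_EXT, 1.0f);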

Anyway, it is weird, because that code had been there for a long time and never triggered the violation before. Maybe some later code modification changed things in a way that the Nvidia driver could no longer detect the problem and work around it by itself, who knows.

  • Setting invalid values should not cause OpenGL drivers to crash. Produce errors: Yes. Crashing: Big NO! – datenwolf Oct 16 '15 at 12:16