I've created a couple of floating-point RGBA textures...

glBindTexture( GL_TEXTURE_2D, texid[k] );
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
glTexImage2D(GL_TEXTURE_2D, 0, 4, width, height, 0, GL_RGBA,
             GL_FLOAT, data);

and then I double-buffer: each frame renders into one texture while the shader program samples the other (a sketch of the per-frame flow follows the shaders below)

    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, 
                       GL_TEXTURE_2D, texid[i], 0)

...

    state_tex_loc = glGetUniformLocation( program, "state_tex" )
    glUniform1i( state_tex_loc, 0 )
    glActiveTexture( GL_TEXTURE0 )
    glBindTexture( GL_TEXTURE_2D, texid[1-i] )

...

    uniform sampler2D state_tex;   // bound to texture unit 0 above
    uniform float xscale, yscale;  // assumed uniforms; declarations elided in my excerpt
    void main( void )
    {
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        vec2 sample_pos = gl_Vertex.xy / vec2( xscale, yscale );
        vec4 sample = texture2D( state_tex, sample_pos.xy );
        sample.rgb = sample.rgb + vec3( 0.5, 0.5, 0.5 );
        if ( sample.r > 1.1 )
            sample.rgb = vec3( 0.0, 0.0, 0.0 );
        gl_FrontColor = sample;
    }

...

    void main( void )
    {
        gl_FragColor = gl_Color;
    }
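
Putting the excerpts above together, each frame does roughly the following. This is a sketch, not my exact code; fbo and draw_quad() are hypothetical placeholders:

    glBindFramebuffer( GL_FRAMEBUFFER, fbo )
    glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                            GL_TEXTURE_2D, texid[i], 0 )   # render into texture i
    glUseProgram( program )
    glActiveTexture( GL_TEXTURE0 )
    glBindTexture( GL_TEXTURE_2D, texid[1-i] )             # sample the other texture
    glUniform1i( state_tex_loc, 0 )
    draw_quad()                                            # hypothetical draw call
    i = 1 - i                                              # swap roles for the next frame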

Notice the check in the vertex shader for sample.r being greater than 1.1. That branch never executes. It seems that either the call to texture2D or the output of the fragment shader clamps the value of sample.rgb to [0.0, 1.0]. And yet, my understanding is that the textures themselves store full floating-point values.

Is there any way to avoid this clamping?

UPDATE:

As per the instructions below, I've fixed my glTexImage2D() call to use GL_RGBA32F_ARB, but I still don't get a value greater than 1.0 out of the sampler.

UPDATE 2:

I just tried initializing the textures to values larger than 1.0, and it works! texture2D() returns the initial, greater-than-1.0 values. So perhaps the problem is in the write path, where the fragment shader's output is stored to the texture?
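
For reference, the initialization looks roughly like this (a sketch; the 2.0 fill value is arbitrary, and data is the array passed to glTexImage2D above):

    import numpy

    # Fill the initial state with values above 1.0 so any clamping is visible.
    data = numpy.full( (height, width, 4), 2.0, dtype=numpy.float32 )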

UPDATE 3:

I've tried changing the shaders, and this works:

    uniform sampler2D state_tex;   // assumed uniforms; declarations elided in my excerpt
    uniform float xscale, yscale;
    varying vec4 out_color;
    void main( void )
    {
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        vec2 sample_pos = gl_Vertex.xy / vec2( xscale, yscale );
        vec4 sample = texture2D( state_tex, sample_pos.xy );
        sample.rgb = sample.rgb + vec3( 0.5, 0.5, 0.5 );
        if ( sample.r > 1.1 )
            sample.rgb = vec3( 0.0, 0.0, 0.0 );
        out_color = sample;
    }

...

    varying vec4 out_color;
    void main( void )
    {
        gl_FragColor = out_color;
    }

Why does using a custom varying work, but using the built-in varying gl_FrontColor/gl_Color not work?

Ted Middleton

1 Answer


I've created a couple of floating-point RGBA textures...

No, you did not.

glTexImage2D(GL_TEXTURE_2D, 0, 4, width, height, 0, GL_RGBA, GL_FLOAT, data);

This statement does not create a floating-point texture. Well, maybe it does if you're using OpenGL ES, but it certainly doesn't in desktop GL. (And I'm fairly sure OpenGL ES doesn't let you use "4" as the internal format anyway.)

In desktop GL, the third parameter to glTexImage2D defines the internal format. It is this parameter that tells OpenGL whether the data is floating-point, integer, or whatever. When you use "4", you're telling OpenGL that you want 4 unsigned normalized integer components. (You should never pass a bare component count like that; it's a terrible, legacy way to specify the internal format. Always use a real internal format enum.)

The last three parameters specify the format, data type, and location of the pixel data that you want to upload to the texture. In desktop GL, they have no effect on how the texture is stored; you're just telling OpenGL what your input pixels look like. The OpenGL ES specification unwisely changes this: there, the last three parameters do have some effect on the internal format of the texture.

In any case, if you want 32-bit floats, you should ask for them:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, data);

Why does using a custom varying work, but using the built-in varying gl_FrontColor/gl_Color not work?

Because it's built-in. I haven't used built-in GLSL stuff in years, so I never even noticed that.

The 3.3 compatibility spec has a function glClampColor that defines vertex (and fragment) color clamping behavior. It only affects the built-ins. Personally? I'd avoid it and just not use built-in stuff at all.
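
If you do want the built-ins to pass unclamped values, the call looks roughly like this (on a 2.1 context, the ARB_color_buffer_float equivalents glClampColorARB and GL_CLAMP_VERTEX_COLOR_ARB apply):

    # Disable clamping of the built-in vertex colors (gl_FrontColor -> gl_Color);
    # GL_CLAMP_FRAGMENT_COLOR controls the fragment side the same way.
    glClampColor( GL_CLAMP_VERTEX_COLOR, GL_FALSE )
    glClampColor( GL_CLAMP_FRAGMENT_COLOR, GL_FALSE )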

Nicol Bolas
  • This is sort of what I was afraid of. When I try to use GL_RGBA32F as the 'internalFormat', GL complains of an 'invalid enumerant'. I thought that perhaps the actual storage format had to be specified with the later 'format' parameter. Do you know of any way to check whether this is supported by my version of GL (which should be 2.1 NVidia on OSX)? – Ted Middleton Feb 22 '12 at 17:26
  • If glGetString(GL_VERSION) returns >= 3.0, then GL_RGBA32F should work. If it returns <= 2.1, then you need to check for the GL_ARB_texture_float extension; if it is present, use GL_RGBA32F_ARB as the internal format. Alternatively, check for the GL_NV_float_buffer extension, and if it is present, use GL_FLOAT_RGBA32_NV. If none of these extensions are present, then your driver/card doesn't support floating-point textures. (See the sketch after these comments.) – Mārtiņš Možeiko Feb 22 '12 at 18:06
  • Well, my version of GL definitely doesn't support GL_RGBA32F, and it doesn't support GL_FLOAT_RGBA32_NV either, but it does seem to support GL_RGBA32F_ARB; at least I don't get the 'invalid enumerant' error. Problem is, I tried using GL_RGBA32F_ARB and it still doesn't work: the color in the above code still never gets reset to 0. – Ted Middleton Feb 23 '12 at 04:35
  • @TedMiddleton: What you've said is not possible. `GL_RGBA32F` is defined to be the *same value* as `GL_RGBA32F_ARB`. It is impossible for OpenGL to tell which one you pass because they are the same number: 0x8814. So there's a good chance that something else is going on. What OpenGL headers or loading library are you using? – Nicol Bolas Feb 23 '12 at 04:40
  • I'm actually doing this through pyopengl, which is of course using GL through SWIG bindings. I'm doing this on OS X 10.7, which has OpenGL 3.2, but only when it's enabled when creating the context; otherwise you get 2.1. Because pyopengl doesn't provide bindings to easily create a 3.2 context, I guess on OS X they didn't bother building bindings for all the constants in gl3.h, which is where GL_RGBA32F lives; glext.h only has GL_RGBA32F_ARB. – Ted Middleton Feb 23 '12 at 06:57
  • So yes, if I was coding this in C/C++, those two constants would be identical. But my language bindings are a bit peculiar. And unfortunately, even when using GL_RGBA32F_ARB, I still can't get a value greater than 1.0 out of the sampler in my shader. – Ted Middleton Feb 23 '12 at 06:57
  • @TedMiddleton: See my edit, down at the bottom of the answer. – Nicol Bolas Feb 23 '12 at 07:37
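
A rough sketch of the version/extension check described in the comments above (PyOpenGL; the ARB/NV module paths are assumptions based on PyOpenGL 3.x's layout, and GL_RGBA32F_ARB is used throughout since it is numerically identical to GL_RGBA32F):

    from OpenGL.GL import *
    from OpenGL.GL.ARB.texture_float import GL_RGBA32F_ARB
    from OpenGL.GL.NV.float_buffer import GL_FLOAT_RGBA32_NV

    def pick_float_internal_format():
        # GL_RGBA32F is core from 3.0 onward; older contexts need an extension.
        version = glGetString( GL_VERSION ).decode().split()[0]
        if float( version[:3] ) >= 3.0:
            return GL_RGBA32F_ARB      # same numeric value as GL_RGBA32F (0x8814)
        extensions = glGetString( GL_EXTENSIONS ).decode().split()
        if "GL_ARB_texture_float" in extensions:
            return GL_RGBA32F_ARB
        if "GL_NV_float_buffer" in extensions:
            return GL_FLOAT_RGBA32_NV
        raise RuntimeError( "no floating-point RGBA texture support detected" )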