
What is the correct way to convert from unsigned int texture to a normalized float and back again?

As a test, I am currently trying to render an unsigned int texture to a standard RGB context. The following is working, but it doesn't feel right.

Relevant Draw Code:

            ShaderPropertySetter.SetUniform(gl, "uTexture_us2", 0);
            ShaderPropertySetter.SetUniform(gl, "maxIntensity_u", MaxIntensity);
            ShaderPropertySetter.SetUniform(gl, "minIntensity_u", MinIntensity);
            ShaderPropertySetter.SetUniformMat4(gl, "uModelMatrix_m4", modelMatrix);

            canvas.Bind(gl);// this binds the vertex buffer

            gl.BindTexture(OpenGL.GL_TEXTURE_2D, texture);

            gl.DrawArrays(OpenGL.GL_QUADS, 0, 4);
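
One small point about the binding: the draw code relies on GL_TEXTURE0 being the default active texture unit, which is why setting the sampler uniform to 0 works without an explicit ActiveTexture call. A minimal sketch of making that explicit (assuming SharpGL exposes the usual ActiveTexture wrapper):

            // explicitly select unit 0 so it matches the value passed
            // to the uTexture_us2 sampler uniform
            gl.ActiveTexture(OpenGL.GL_TEXTURE0);
            gl.BindTexture(OpenGL.GL_TEXTURE_2D, texture);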

Texture Creation

    public static void FillTextureDataWithUintBuffer(OpenGL gl, uint[] buffer, int width, int height)
    {

        unsafe
        {
            fixed (uint* dataptr = buffer)
            {
                IntPtr pixels = new IntPtr(dataptr);
                const int GL_R32UI = 0x8236; //GL_R32UI is not currently defined in SharpGL

                gl.TexImage2D(OpenGL.GL_TEXTURE_2D,
                    0,
                    GL_R32UI,
                    width,
                    height,
                    0,
                    OpenGL.GL_RED_INTEGER,
                    OpenGL.GL_UNSIGNED_INT,
                    pixels);
            }
        }
        OpenGLTesting.CheckForFailure(gl);
    }
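
One detail that is easy to miss with integer textures: they cannot be linearly filtered, and the default minification filter expects a mipmap chain, so a single-level GL_R32UI texture can end up incomplete and sample as zero. A rough sketch of the parameters that would go right after the TexImage2D call (assuming SharpGL's TexParameter wrapper accepts the GL constants directly):

        // integer textures only support NEAREST filtering, and the
        // default MIN_FILTER would require mipmaps that were never uploaded
        gl.TexParameter(OpenGL.GL_TEXTURE_2D, OpenGL.GL_TEXTURE_MIN_FILTER, OpenGL.GL_NEAREST);
        gl.TexParameter(OpenGL.GL_TEXTURE_2D, OpenGL.GL_TEXTURE_MAG_FILTER, OpenGL.GL_NEAREST);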

Current GLSL Code

---UPDATED--- (fixed stupid errors that commenters kindly pointed out)

    #version 150 core

    in vec2 pass_texCord;

    uniform usampler2D uTexture_us2;
    uniform uint maxIntensity_u;
    uniform uint minIntensity_u;

    float linearNormalize(float value, in float max, in float min)
    {
        //normalized = (x-min(x))/(max(x)-min(x))
        return (value - min) / (max - min);
    }

    void main(void)
    {
        uvec4 value = texture(uTexture_us2, pass_texCord);
        float valuef = float(value.r);
        float max = float(maxIntensity_u);
        float min = float(minIntensity_u);

        float normalized = linearNormalize(valuef, max, min);

        gl_FragColor = vec4(normalized, normalized, normalized, 1);
    }
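
A side note on the output variable: gl_FragColor is deprecated in GLSL 1.50 and removed under a strict core profile, so if the shader is ever rejected on a stricter driver, the fragment output has to be declared explicitly. A minimal sketch of that variant of main (the output name fragColor is my own, not from the original shader):

    out vec4 fragColor;

    void main(void)
    {
        uvec4 value = texture(uTexture_us2, pass_texCord);
        float normalized = linearNormalize(float(value.r),
                                           float(maxIntensity_u),
                                           float(minIntensity_u));
        fragColor = vec4(vec3(normalized), 1.0);
    }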

So I am not very happy with the current state of the GLSL code (especially as it doesn't work :p) because I am performing the float cast, which seems to defeat the point.

Reason:

I am working on a compositor where some of the textures are stored as single-channel unsigned int and others are stored as triple-channel float. When one gets blended with another, I want to convert the "blendee" to a normalized float first.

Note: I am using SharpGL

chrispepper1989
  • Which values do minIntensity and maxIntensity contain? – BDL Dec 11 '14 at 09:58
  • I can set them in the test harness; usually I set minIntensity to 0 and maxIntensity to the highest value found in the unsigned int data set – chrispepper1989 Dec 11 '14 at 10:23
  • Just saw another thing: you're setting "maxIntensity_u" in your C# code, but the variable in GLSL is named "maxIntensity". Could it be that you're never setting these variables and get a divide by 0 in your shader? – BDL Dec 11 '14 at 10:27
  • You have a signed/unsigned mismatch in your shader. You're sampling from an unsigned integer texture and storing the result in a signed integer. Shouldn't you be using `uvec4` instead of `ivec4`? – Andon M. Coleman Dec 11 '14 at 13:31
  • Thanks guys, there were a bunch of stupid errors in my GLSL; fixing them does seem to have generated a result, however I am still concerned about the cast from unsigned int to float. I am going to post the updated GLSL – chrispepper1989 Dec 11 '14 at 15:06
  • I really wonder why you are using unnormalized integer textures in the first place? Why not use a 32-bit UNORM format and float uniforms? – derhass Dec 11 '14 at 21:46
  • @derhass honestly I am starting to wonder the same thing. Would unorm textures be accessed using just "sample2d" instead of usample2d? Don't suppose you know of any handy tutorials :) – chrispepper1989 Dec 16 '14 at 09:58
  • I can't find unorm on the texture man page so I am a bit confused how I would set up an unormalised floating point texture, would I use the following: internalFormat: GL_R32F , format: GL_RED_SNORM, type: GL_FLOAT – chrispepper1989 Dec 16 '14 at 10:11
  • @chrispepper1989: Hmm, I stand corrected. There actually isn't a 32-bit normalized unsigned integer format (I thought there was, as there are also the 16-bit ones like `GL_R16`). So you'll either have to use `GL_R32UI` as you did before, or convert the data to the normalized float range [0,1] and use `GL_R32F`. – derhass Dec 16 '14 at 12:12
  • @chrispepper1989: The reason you cannot find UNORM formats in the manual pages is because they are given names like `GL_R8`. For many years, the ***only*** kind of texture image format was unsigned normalized. When GL added signed normalized support it added the suffix `_SNORM` but left the original constants the way they were (e.g. `GL_R8`). D3D actually uses the term `UNORM` in its constants, but that API has the liberty of completely reinventing itself every few years and does not have to re-use old names :P – Andon M. Coleman Dec 17 '14 at 01:32
  • I was hoping the numbers would be un-normalised, and have just realized I am massively over-complicating the issue; a floating point texture would do the job fine (see the sketch below): http://stackoverflow.com/questions/5709023/what-exactly-is-a-floating-point-texture
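
For reference, the GL_R32F route the comments converge on (normalize on the CPU into [0,1], upload a float texture, and sample it with a plain sampler2D) might look roughly like the sketch below. The helper name and the explicit min/max parameters are hypothetical additions, not part of the original code:

    public static void FillTextureDataWithFloatBuffer(OpenGL gl, uint[] buffer, int width, int height,
                                                      uint minIntensity, uint maxIntensity)
    {
        // normalize on the CPU so the shader can use a regular sampler2D
        // and no integer-to-float conversion is needed at sampling time
        float range = (float)(maxIntensity - minIntensity);
        float[] normalized = new float[buffer.Length];
        for (int i = 0; i < buffer.Length; ++i)
        {
            normalized[i] = (buffer[i] - minIntensity) / range;
        }

        unsafe
        {
            fixed (float* dataptr = normalized)
            {
                IntPtr pixels = new IntPtr(dataptr);
                const int GL_R32F = 0x822E; // like GL_R32UI, not defined in SharpGL

                gl.TexImage2D(OpenGL.GL_TEXTURE_2D,
                    0,
                    GL_R32F,
                    width,
                    height,
                    0,
                    OpenGL.GL_RED,
                    OpenGL.GL_FLOAT,
                    pixels);
            }
        }
        OpenGLTesting.CheckForFailure(gl);
    }

With that, uTexture_us2 becomes a regular sampler2D in the shader and the red channel can be used directly, without the cast or the min/max uniforms.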

0 Answers