
I'm doing some customized 2D rendering into a depth buffer that is attached to a texture with internal format GL_DEPTH_STENCIL. In the fragment shader, a normalized Z value (only the 0.0 to 1.0 range is used, I'm lazy) is written explicitly from some process:

in float some_value;
uniform float max_dist;
void main()
{
    float dist = some_process( some_value );
    gl_FragDepth = clamp( dist / max_dist, 0.0, 1.0 );
}

Now I need to do further processing on the resulting bitmap on the CPU side. However, glGetTexImage gives you the GL_UNSIGNED_INT_24_8 binary format for depth-stencil data. What should I do with the 24-bit depth component? How does the normalized floating-point Z value of [-1.0, 1.0] map to the 24-bit integer?
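Roughly what I have in mind on the CPU side (a rough sketch, not tested; `depth_tex`, `width` and `height` stand in for my actual texture handle and its size, and any GL loader would do):

// Read back a depth/stencil texture and split each packed
// GL_UNSIGNED_INT_24_8 texel into its depth and stencil parts.
#include <cstdint>
#include <vector>
#include <GL/glew.h> // or whatever GL loader you use

std::vector<float> read_depth(GLuint depth_tex, int width, int height)
{
    std::vector<uint32_t> packed(size_t(width) * height);
    glBindTexture(GL_TEXTURE_2D, depth_tex);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_DEPTH_STENCIL,
                  GL_UNSIGNED_INT_24_8, packed.data());

    std::vector<float> depth(packed.size());
    for (size_t i = 0; i < packed.size(); ++i)
    {
        uint32_t d24 = packed[i] >> 8;            // upper 24 bits: depth
        // uint32_t stencil = packed[i] & 0xFFu;  // lower 8 bits: stencil
        depth[i] = d24 / 16777215.0f;             // assuming 0xFFFFFF maps back to 1.0
    }
    return depth;
}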

jiandingzhe
  • Look up [in the documentation](https://www.khronos.org/registry/OpenGL-Refpages/es3.0/html/glTexImage2D.xhtml). The first 24 bits are depth, the latter 8 bits are stencil. – Alexey S. Larionov Feb 22 '22 at 10:38
  • I suppose the actual depth value 0..1 is multiplied by 2^24-1, giving a 24-bit integer value. So you can convert the integer back to a float on the CPU by dividing by `(float)(2^24-1)`. – Alexey S. Larionov Feb 22 '22 at 10:42
  • The depth is a floating point value in the range [0.0, 1.0]. All you have to do is assign the depth value to `gl_FragDepth`. The value is then encoded in the 24 bits: 0x000000 is 0.0 and 0xFFFFFF is 1.0. – Rabbid76 Feb 22 '22 at 11:43
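
A small sketch of the round-trip the comments above describe (the helper names are made up; it assumes the value written to `gl_FragDepth` is already clamped to [0.0, 1.0], as in the shader):

#include <cmath>
#include <cstdint>

// 24-bit fixed-point depth encoding: 0.0 -> 0x000000, 1.0 -> 0xFFFFFF.
uint32_t depth_to_d24(float d)    { return uint32_t(std::lround(d * 16777215.0)); }
float    d24_to_depth(uint32_t i) { return float(i) / 16777215.0f; }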

0 Answers