The `gl_Position` written in the vertex shader is a clip space coordinate. The hardware then divides by `w` to generate normalized device coordinates, where the visible range is [-1,1] in OpenGL (by default; since GL 4.5 / `ARB_clip_control` this can be changed via `glClipControl`). The NDC `z` value is then transformed according to the currently set `glDepthRange` parameters to finally get the window space `z` value, which is in the range [0,1].
The depth buffer just has to store these values, and, very much like color values, which are often stored with only 8 bits per channel, an integer depth buffer is used to represent fixed-point values in that range.
Quoting from section 13.6 "Coordinate Transformations" of the OpenGL 4.5 core profile spec (emphasis mine):
> *z_w* may be represented using either a fixed-point or floating-point representation. However, a floating-point representation must be used if the draw framebuffer has a floating-point depth buffer. If an *m*-bit fixed-point representation is used, we assume that it represents each value *k*/(2^*m* − 1), where *k* ∈ {0, 1, ..., 2^*m* − 1}, **as *k*** (e.g. 1.0 is represented in binary as a string of all ones).
So the window space `z_w` value (which is in [0,1]) is simply multiplied by 2^*m* − 1, rounded to the nearest integer, and the result is stored in the buffer.