
I want to use the depth buffer in a slightly unorthodox way, and I got heavily confused by all the normalization, scaling and whatever else is going on there.

My plan is to implement a spatial hashing algorithm by some guys from AMD (link to pdf).

tl;dr version: Speed up nearest-neighbor search by discretizing 3D vertices into an array of (flat 2D) depth textures, setting the depth to the VertexID. The reason for using depth textures is that some clever depth testing is used so the results even come out in sorted order, but that's less important here.

My problem is that the VertexID is obviously an integer, ranging from 0 to the total number of vertices ParticleCount, but it can't be used directly, since the output of the vertex shader has to be normalized to [-1..1) in OpenGL (or [0..1) in DirectX).

My vertex shader therefore does something like this:

float depth = 2.0 * gl_VertexID / ParticleCount - 1.0;
gl_Position = vec4(flatCoords, depth, 1.0);

That kind of works, but I'm confused about which values actually get stored in the depth texture bound to the current framebuffer. I don't quite get the difference between the floating-point depth buffer and the integer version if I can't even output real integers, and when reading from the depth texture later, everything seems to be normalized to [0..1] no matter which internal format I set (DepthComponent24, 32, 32f).

Can someone give me some advice on how to get the VertexIDs back out of these depth textures?

Thanks

Gigo
  • "*normalized to [-1..1) in OpenGL (or [0..1) in DirectX*" The 1 is inclusive in both cases. – Nicol Bolas Jul 26 '13 at 01:31
  • You're right, a slight oversight. But the default behavior of depth testing is clearing the depth buffer to 1 and using "less than" comparison, therefore all points with a depth of 1 are culled. – Gigo Jul 26 '13 at 01:43

1 Answer


The output of the vertex shader in OpenGL is clipped to [-1, 1] after the perspective divide, which means gl_Position.z / gl_Position.w has to be in that range. However, the depth value that is actually stored in the depth buffer gets remapped to the 0..1 range using the current depth range (glDepthRange) values. By default the depth range is 0..1, which translates to

depth_buf_value = 0.5 + 0.5 * gl_Position.z / gl_Position.w;
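
More generally, with a depth range set via glDepthRange(n, f), the mapping becomes the following (just a sketch, using n and f as placeholder names for those two values):

// the default glDepthRange(0.0, 1.0) reduces this to the line above
depth_buf_value = n + (f - n) * (0.5 + 0.5 * gl_Position.z / gl_Position.w);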

So in your case the depth buffer ultimately contains values of float(gl_VertexID) / ParticleCount, and thus:

vertex_id = depth_buf_value * ParticleCount
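
Sampling that depth texture in a later pass could then look roughly like this (a sketch only; it assumes the depth attachment is bound as an ordinary sampler2D named depthTex with GL_TEXTURE_COMPARE_MODE set to GL_NONE, and that ParticleCount is passed in as a uniform):

uniform sampler2D depthTex;    // depth attachment, compare mode disabled
uniform float ParticleCount;   // same count used when writing the depth

int restoreVertexID(vec2 uv)
{
    float depth_buf_value = texture(depthTex, uv).r;     // depth comes back normalized to [0, 1]
    return int(round(depth_buf_value * ParticleCount));  // undo the scaling; round() absorbs small precision errors
}
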
  • Ok, that confirms that what I tried to do should have worked, but it didn't. I will try to get back to this project soon to get it working. – Gigo Aug 07 '13 at 00:02
  • Another thought I had: what about precision? The internal format used for the depth buffer confuses me a bit: it is handled like a float, but stored as a normalized integer? Does that affect the range that can be represented? ParticleCount might very well be a million or more, and I must be able to reliably restore the particle index from the depth buffer. – Gigo Aug 07 '13 at 00:06
  • A floating-point value in the 0..1 range is converted to the given internal format. You should get the same precision with the 24-bit integer format and the floating-point format, since the latter has a 24-bit mantissa (23 bits plus the implicit leading bit), which can represent integers without losing bits. However, since in OpenGL the value goes through that remapping, at least one bit of that precision is lost. That still leaves 23 bits, i.e. more than 8 million distinct values. – camenomizoratojoakizunewake Aug 07 '13 at 06:10