
I'm trying to implement collision detection on the GPU, following this article:

https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch29.html

In step 2 (Grid Generation), we use depth testing to make sure we only write particle IDs greater than the previous one.

I have this working right now by dividing the ID number by the total number of particles:

gl_FragDepth = v_ID/u_totalParticleCount;

But I fear if I get to a point with a lot of particles, I won't have enough accuracy for this.

I tried attaching an RGBA32F texture to my framebuffer's depth attachment, but apparently that's not allowed.

Is there a way to do this? Or is putting my IDs into 0-1 space the only way?

Thanks a lot!

Mog

2 Answers


The window-space depth is clamped to the range specified by glDepthRange, and that function itself clamps the values you pass it to [0, 1].

There's an NVIDIA extension that turns this clamping off: NV_depth_range_float. Otherwise, floating-point depth buffers exist primarily to give you greater precision within [0, 1], not a larger range.

Depth component textures must use depth image formats. They don't store RGBA; they store DEPTH_COMPONENT data. So a 32-bit floating-point image format would be GL_DEPTH_COMPONENT32F.
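For reference, creating and attaching such a texture would look roughly like this (a sketch assuming a current GL 3.0+ context; `width` and `height` are yours, and error checking is omitted):

```c
/* Sketch: attach a 32-bit floating-point depth texture to an FBO. */
GLuint depthTex, fbo;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, width, height,
             0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);
/* A color attachment for the particle IDs would be added here too. */
```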

Nicol Bolas

I'm not sure the depth buffer is the optimal path for this: what you're doing isn't really related to depth in any way, and a lot of hardware (AMD especially) has depth optimizations that you'd be defeating.

There are plenty of framebuffer formats that support the kind of thing you're trying to do with a cleaner solution; for example, a 32-bit unsigned integer format will be fine for up to about 4 billion particles.
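A hedged sketch of that integer-format route (assuming GL 3.0+; the fragment shader would declare `out uint fragID;` and write the particle ID directly, with no need to squeeze it into [0, 1] — though note that integer attachments don't blend and don't depth-test on the ID, so keeping one ID per cell still needs the article's multi-pass approach or another mechanism):

```c
/* Sketch: a 32-bit unsigned-integer color attachment for particle IDs.
 * Assumes 'fbo' is bound and 'width'/'height' are defined. */
GLuint idTex;
glGenTextures(1, &idTex);
glBindTexture(GL_TEXTURE_2D, idTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, width, height,
             0, GL_RED_INTEGER, GL_UNSIGNED_INT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, idTex, 0);
```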

Varrak