I'm trying to visualize (for debugging purposes) the depth of a scene on a textured quad (I want to display the depth of a water pond on the surface of the water). So I render the scene from the camera's point of view into a depth buffer and then bind that same depth buffer as a shader resource to the water shader. But when I sample the texture I always get a completely white result. If I apply pow(value, 100.0) I get a little variation that correctly follows the depth of the terrain below the water, so some values must be there, just compressed into a small range near 1.0. So I tried normalizing the values by inverting the perspective transformation for z (from clip space back to eye space) with the following equation:

float linearDepth = near * far / (far - depth * (far - near)); // eye-space z, in [near, far]

but again, total white. The formula looks correct to me (it's in Frank Luna's book, and I get the same expression when I derive it from the projection matrix myself). The near and far planes are at 0.1 and 300.0, but even playing with those values doesn't get me anywhere.

All this is because I want to implement soft edges on my water, and for that I need the depth values; I just want to visualize them first so I know I'm reading them correctly.
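For reference, the soft-edge effect I'm after is roughly this (a sketch; gFadeDistance and eyeZ are illustrative names, with eyeZ being the water fragment's eye-space z passed down from the vertex shader):

// Fade the water's alpha by the eye-space distance between the terrain
// behind the pixel and the water surface itself.
float terrainZ = gNear * gFar / (gFar - sceneDepth * (gFar - gNear));
float alpha    = saturate((terrainZ - eyeZ) / gFadeDistance);   // 0 at the shoreline, 1 in deep water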

Luca

1 Answer

You're right that with a standard projection matrix, the distribution of depth values will be squeezed very close to 1 (only very close objects show a visible difference). Unless the far plane is really, really close, you're not going to see values appreciably below 1. That's compounded by the fact that the human eye distinguishes small differences between bright values far less well than between dark ones, so you're even less able to see what variation is there.
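To put numbers on it: a standard D3D projection stores d = far * (z - near) / (z * (far - near)), so with the question's near = 0.1 and far = 300 you get

z = 1    →  d ≈ 0.900
z = 10   →  d ≈ 0.990
z = 100  →  d ≈ 0.999

i.e. everything past one unit of eye-space depth is crammed into the top 10% of the range.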

You're definitely on the right track with your linearization of the depth buffer. In the past I got this working (in OpenGL, so YMMV) with something along the lines of the following:

float lineardepth = (2.0f * near) / (far + near - depth * (far - near)); // eye-space z divided by far; in GL, depth here is the NDC z in [-1, 1]

That should be pretty close to something that would work in D3D, too.
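For a D3D-style [0, 1] depth value, the same normalization works out to the question's eye-space formula divided by far (a sketch; the result lands in roughly [near/far, 1]):

float lineardepth = near / (far - depth * (far - near));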

Varrak
  • I found out that my conversion formula gives the z coordinate in eye/camera space, so it ranges from near to far. If I subtract near and divide by (far - near), I get a linear representation of the depth values. – Luca May 15 '18 at 12:05
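In code, the remap described in that comment looks like this (a sketch using the question's variable names):

float zEye = near * far / (far - depth * (far - near)); // eye-space z, in [near, far]
float vis  = (zEye - near) / (far - near);              // remapped to [0, 1] for display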