
I am trying to implement my own depth stencil like functionality in a pixel shader.

I have a normalised depth value, and I want to convert it to an integer in order to use the SM5 InterlockedMin() operation, like so:

uint d_uint = (uint)(dnorm * 4294967295);
uint d_uint_original = 0;

InterlockedMin(depthmask[c], d_uint, d_uint_original);

if (d_uint < d_uint_original) { //if true, we will have written a new depth into the mask, so write the element to the field
    field[c].x = d;
    field[c].yzw = i.n;
}

Where dnorm is the depth value.

However, uint d_uint = (uint)(dnorm * 4294967295); only ever evaluates to 0.

I know this because I can view the buffer in RenderDoc and see that the shader only ever writes 0. If I reduce the constant to, say, 229496729, it writes correctly; if I set it to 2294967295, however, it does not.

I know I am tripping over a floating-point quantisation issue, but I don't know how.

What is the correct way to encode a normalised value in an integer array?

I am aware of DX's UNORM/SNORM, and maybe I need a break, but it's not clear from that documentation how they should be used, specifically how to convert one to a UINT so it can be used with the interlocked functions.

sebf

1 Answer


In your case, since I suppose that dnorm > 0, you can simply use

uint depthAsUint = asuint(dnorm);

since for positive float numbers, comparing the binary representations gives the same ordering as comparing the values themselves, e.g.:

asuint(0.1f) < asuint(0.5f) = true
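
Applied to the code in your question, that would look roughly like the sketch below (assuming, as in your snippet, that depthmask is a uint UAV indexed by c, and that dnorm, d, field and i.n are the same variables you already have):

// reinterpret the positive float bits as a uint; no scaling constant needed
uint d_uint = asuint(dnorm);
uint d_uint_original = 0;

// atomically keep the smallest depth seen so far for this cell
InterlockedMin(depthmask[c], d_uint, d_uint_original);

if (d_uint < d_uint_original) { // we wrote a new minimum, so store the payload too
    field[c].x = d;
    field[c].yzw = i.n;
}

For this to work you would clear depthmask to a large sentinel such as 0xFFFFFFFF before the pass, and if you later need the stored depth back as a float you can reinterpret it with asfloat().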
mrvux