
I've been attempting to encode four 8-bit uints into one float so that I can easily store them in a texture along with a depth value. My code wasn't working, and ultimately I found that the issue boiled down to this:

`asuint(asfloat(uint(x)))` returns 0 in most cases, when it should return x.

In theory, this code should return x (where x is a whole number): the bits of x are reinterpreted as a float and then reinterpreted back, so the same bits end up being read as a uint again. However, I found that the only case where this expression actually returns x is when the bits of x happen to form a very large float. I considered the possibility that this could be a graphics driver issue, so I tried it on two different computers and got the same result on both.
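Here is a minimal sketch of the failing case (the shader setup and the position-derived value of x are just illustrative, not my actual code):

```hlsl
// Minimal repro sketch (illustrative, not the actual shader).
// x is derived from a runtime value so the compiler can't constant-fold
// the round trip away.
float4 frag(float4 pos : SV_Position) : SV_Target
{
    uint x = (uint)pos.x + 1;             // some small, nonzero whole number
    uint roundTrip = asuint(asfloat(x));  // expected: x, observed: 0
    // White where the bits survive the round trip, black where they are lost.
    return (roundTrip == x) ? float4(1, 1, 1, 1) : float4(0, 0, 0, 1);
}
```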

I tested several other variations of this code, and all of these seem to work correctly.

`asfloat(asuint(float(x)))` = x

`asuint(asint(uint(x)))` = x

`asuint(uint(x))` = x

The only case that does not work as intended is the first one mentioned in this post. Is this a bug, or am I doing something wrong? Also, this code is running in a fragment shader in Unity.
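For reference, this is roughly how I compared the variants side by side (a sketch; the color output is just for visualization):

```hlsl
// Sketch: light up one channel per variant that round-trips correctly.
float4 frag(float4 pos : SV_Position) : SV_Target
{
    uint x = (uint)pos.x + 1;
    bool a = asuint(asfloat(uint(x))) == x;          // fails: returns 0
    bool b = asfloat(asuint(float(x))) == float(x);  // works
    bool c = asuint(asint(uint(x))) == x;            // works
    bool d = asuint(uint(x)) == x;                   // works
    return float4(a, b, c, d);
}
```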

  • Also, as a quick note, I'm finding more weird behavior in GLSL (tested on Shadertoy). If I run the equivalent of this code in GLSL, `floatBitsToUint(uintBitsToFloat(uint(x)))`, it only returns x if x is a defined constant, not if x is a declared variable. I really have no idea what is going on here. – 283 Feb 02 '20 at 04:28
  • GLSL does have similar denorm rules to HLSL; however, in this case it sounds like the compiler is making an optimization to your code which, as a side effect, removed the point where it would have flushed the denormalized float. This is just a guess, though. – Baggers Jan 13 '21 at 20:11

1 Answer


After a long time of searching, I found some sort of answer, so I figured I would post it here in case anyone else stumbles across this problem. The reason this code does not work has to do with float denormalization (which I don't completely understand). Denormalized floats were being flushed to 0, so `asuint` of a denormalized float would always return 0.
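Concretely, in IEEE 754 single precision a float is denormalized when all eight exponent bits are zero, which is the case for any uint below 2^23. Using x = 283 as an example:

```hlsl
// Bit view (sign | exponent | mantissa) of x = 283 = 0x0000011B:
//   0 | 00000000 | 00000000000000100011011
// All exponent bits are zero, so asfloat(x) is a denormal (~3.97e-43).
// That denormal gets flushed to +0.0 along the way, and asuint(0.0)
// is 0, not 283. Any x < 0x00800000 (8388608) has an all-zero exponent.
```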

A somewhat acceptable solution may be `asuint(asfloat(x | 1073741824)) & 3221225471` (in hex: `asuint(asfloat(x | 0x40000000)) & 0xBFFFFFFF`). This ensures that the float is normalized; however, it also erases any data stored in bit 30 (the second-highest bit). If anyone has any other solutions that can preserve this bit, let me know!
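Wrapped up as a function, the workaround looks something like this (a sketch; the function name is just illustrative):

```hlsl
// Sketch of the workaround (illustrative name). Forces bit 30 on so the
// exponent field is nonzero and the float stays normalized, then clears
// bit 30 again after the round trip. Whatever x stored in bit 30 is lost.
uint RoundTripThroughFloat(uint x)
{
    float f = asfloat(x | 0x40000000u); // 1073741824: set bit 30
    return asuint(f) & 0xBFFFFFFFu;     // 3221225471: clear bit 30
}
```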

  • You were right that it is related to denormalization. The docs here https://learn.microsoft.com/en-us/windows/win32/direct3d11/floating-point-rules explain the behavior: "Denorms are flushed to sign-preserved zero on input and output of any floating-point mathematical operation. Exceptions are made for any I/O or data movement operation that doesn't manipulate the data" So you can move denorm floats, but you shouldn't produce/process them. – Baggers Jan 13 '21 at 20:08