I've been attempting to encode four 8-bit uints into one float so that I can easily store them in a texture along with a depth value. My code wasn't working, and ultimately I found that the issue boiled down to this:
asuint(asfloat(uint(x))) returns 0 in most cases, when it should return x.
In theory, this expression should return x (where x is a whole number) because asfloat reinterprets the bits of x as a float without converting the value, and asuint then reinterprets those same bits back as a uint, so x should come out unchanged. However, I found that the only case where it actually returns x is when the bits of x happen to form a very large float. I considered the possibility that this could be a graphics driver issue, so I tried it on two different computers and got the same result on both.
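Here is roughly how I'm testing it, reduced to a minimal fragment shader (a simplified sketch, not my exact code; asuint/asfloat need at least #pragma target 4.0):

float4 frag () : SV_Target
{
    uint x = 42;                        // any small whole number
    uint y = asuint(asfloat(x));        // reinterpret the bits: uint -> float -> uint
    // y should equal x, but it comes back as 0 here
    return float4(y == x ? 1 : 0, 0, 0, 1);   // red = 1 means the round trip preserved x
}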
I tested several other variations of this code, and all of these seem to work correctly:
asfloat(asuint(float(x))) = x
asuint(asint(uint(x))) = x
asuint(uint(x)) = x
The only case that does not work as intended is the first one mentioned above. Is this a bug, or am I doing something wrong? Also, this code is being run in a fragment shader in Unity.
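For context, the packing I'm attempting looks roughly like this (a sketch of the idea, not my exact code; the function name is made up):

// Pack four 8-bit values into one uint, then reinterpret those 32 bits as a float
// so the result can be written into a float texture channel.
float Pack4Bytes(uint a, uint b, uint c, uint d)
{
    uint packed = (a & 0xFF) | ((b & 0xFF) << 8) | ((c & 0xFF) << 16) | ((d & 0xFF) << 24);
    return asfloat(packed);   // this reinterpret step is where the value seems to get lost
}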