I am working with data that can contain large numeric values, and the exact values matter.
The highest value seen so far is 89,482, so originally I was going to use unsigned int.
However, working with these numbers as unsigned ints is causing some headaches, namely manipulating them in OpenGL shaders.
Basically things would be a lot simpler if I could use float instead.
However, I don't fully understand the repercussions of storing a number like this as floating point, especially since in the OpenGL case I don't have the option of a single-channel 32-bit floating-point texture, only 16-bit.
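For context, the storage path I'm limited to looks roughly like this (a sketch, assuming a single-channel GL_R16F texture; width, height and data are placeholder names):

    /* Single-channel 16-bit float texture: the driver converts the uploaded
     * floats to half precision, which is where the rounding below happens. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R16F, width, height, 0,
                 GL_RED, GL_FLOAT, data);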
For 16-bit floats, Wikipedia states:
Precision limitations on integer values
Integers between 0 and 2048 can be exactly represented
Integers between 2049 and 4096 round to a multiple of 2 (even number)
Integers between 4097 and 8192 round to a multiple of 4
Integers between 8193 and 16384 round to a multiple of 8
Integers between 16385 and 32768 round to a multiple of 16
Integers between 32769 and 65519 round to a multiple of 32
Integers equal to or above 65520 are rounded to "infinity".
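To sanity-check that table for my value, I put together a small standalone round trip through half precision (a sketch of the IEEE 754 conversion with round-to-nearest-even; NaNs and subnormals are ignored since the data here is plain positive integers):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Float -> IEEE 754 half precision (round-to-nearest-even). */
    static uint16_t float_to_half(float f)
    {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);
        uint32_t sign = (bits >> 16) & 0x8000u;
        int32_t  exp  = (int32_t)((bits >> 23) & 0xFFu) - 127 + 15;
        uint32_t mant = bits & 0x7FFFFFu;

        if (exp >= 31) return (uint16_t)(sign | 0x7C00u); /* overflow -> infinity */
        if (exp <= 0)  return (uint16_t)sign;             /* underflow -> zero */

        uint16_t h   = (uint16_t)(sign | ((uint32_t)exp << 10) | (mant >> 13));
        uint32_t rem = mant & 0x1FFFu;                    /* the 13 bits rounded away */
        if (rem > 0x1000u || (rem == 0x1000u && (h & 1u)))
            h++;                                          /* may carry up to infinity */
        return h;
    }

    /* Half -> float (infinity and zero handled; subnormals ignored). */
    static float half_to_float(uint16_t h)
    {
        uint32_t sign = (uint32_t)(h & 0x8000u) << 16;
        uint32_t exp  = (h >> 10) & 0x1Fu;
        uint32_t mant = (uint32_t)(h & 0x3FFu) << 13;
        uint32_t bits = (exp == 31) ? (sign | 0x7F800000u | mant)
                      : (exp == 0)  ? sign
                      : (sign | ((exp - 15u + 127u) << 23) | mant);
        float f;
        memcpy(&f, &bits, sizeof bits);
        return f;
    }

    int main(void)
    {
        const float samples[] = { 2048.0f, 4097.0f, 65519.0f, 89482.0f };
        for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++)
            printf("%.0f -> %g\n", samples[i], half_to_float(float_to_half(samples[i])));
        return 0; /* prints 2048, 4096, 65504 and inf */
    }

Run locally, 89482 does come back as inf, which seems to match the last row of the table.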
So does this simply mean that if I try to store the number 89,482 in a 16-bit OpenGL float texture, it will be rounded to infinity? If so, what are my options? Stick with unsigned int? And what about when I need to normalise it, can I cast to float?
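For instance, the unsigned-integer fallback I have in mind would look roughly like this (a sketch, assuming GL 3.0 / ES 3.0 integer textures; uData, width, height and values are placeholder names):

    /* Upload as an unsigned integer texture: values are stored exactly and
     * reach the shader unfiltered (integer textures require GL_NEAREST). */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, width, height, 0,
                 GL_RED_INTEGER, GL_UNSIGNED_INT, values);

    /* GLSL side, fetched through a usampler2D:
     *   uint  raw   = texelFetch(uData, coord, 0).r; // exact integer
     *   float value = float(raw);                    // exact below 2^24, so 89482 is safe
     */

Since a 32-bit float represents integers exactly up to 2^24, the cast itself would at least be lossless for my range.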
---EDIT---
I need the value to be un-normalised in the shader.