For most numbers, we know there will be some precision error in any floating-point value. For a 32-bit float, that works out to roughly 6 significant digits that will be accurate before you can expect to start seeing incorrect values.
I'm trying to store a human-readable value that can be read back in to reproduce, bit for bit, the value that was serialized.
For example, the value 555.5555 is stored as 555.55548095703125; but when I serialize 555.55548095703125, I could theoretically serialize it as anything in the range (555.5554504395, 555.555511475) (exclusive) and still get back the same byte pattern. (That's probably not the exact range; I just don't see much value in calculating it more precisely right now.)
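
To make that concrete, here's a quick Python sketch of what I mean (Python's native float is 64-bit, so I'm packing through struct to get 32-bit rounding; the helper name is just something I made up for illustration, and going through a 64-bit float on the way is a theoretical double-rounding corner case that doesn't matter for these values):

```python
import struct

def f32_bits(s: str) -> int:
    """Parse a decimal string and return the byte pattern of the nearest 32-bit float."""
    return struct.unpack('<I', struct.pack('<f', float(s)))[0]

# Several different decimal strings all land on the same float32 byte pattern:
for s in ('555.5555', '555.55548095703125', '555.55546', '555.55551'):
    print(f'{s:>20} -> 0x{f32_bits(s):08X}')
```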
What I'd like is to find the most human-readable string representation of the value -- which I imagine means the fewest digits -- that will deserialize back to the same IEEE float.
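
In case it helps clarify what I'm after: the brute-force approach I can picture is to format with increasing precision until the round trip reproduces the original byte pattern. A rough Python sketch of that idea (shortest_repr_f32 is my own made-up name; it also rounds through a 64-bit float, so treat it as a sketch of the idea rather than a watertight implementation):

```python
import struct

def f32_bits(x: float) -> int:
    """Byte pattern of x rounded to a 32-bit float."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

def shortest_repr_f32(x: float) -> str:
    """Fewest-significant-digit decimal string that parses back to the same float32."""
    target = f32_bits(x)
    for precision in range(1, 10):    # 9 significant digits always round-trip a float32
        s = f'{x:.{precision}g}'      # nearest decimal with this many significant digits
        if f32_bits(float(s)) == target:
            return s
    return repr(x)                    # shouldn't be reached for finite values

# The float32 nearest 555.5555, widened back to a Python float:
x = struct.unpack('<f', struct.pack('<f', 555.5555))[0]
print(x)                     # 555.55548095703125
print(shortest_repr_f32(x))  # 555.5555
```

My thinking is that if any N-digit decimal falls inside the rounding interval, the N-digit decimal nearest the value should too, so trying the correctly-rounded candidate at each length seems sufficient -- but I'd be happy to learn there's a more standard way to do this.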