Let me preface the question by saying that I understand why values such as `0.1`, `3.14`, `0.2`, and other values that cannot be composed from combinations of powers of two are ultimately unrepresentable in IEEE-754 formats, and that they can only be approximated as closely as the precision allows.

What I am having trouble understanding is why attempting to represent the value 2⁻²³ results in a slight margin of error.

2⁻²³ is exactly equal to `1.1920928955078e-7`, or `0.00000011920928955078`.
In single-precision IEEE-754, it can be constructed as follows (see the sketch after the list):
- The sign bit is `0`
- The biased exponent is `104` (`0b01101000` in binary) to account for the 127 bias, making the final exponent value `-23`
- The mantissa's bit field is composed entirely of `0`s, so its ultimate value is `1.0` once the implicit leading `1` bit is accounted for
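
To double-check that I am assembling the bit fields correctly, here is a small Python sketch I put together (my own check, not taken from the site) that packs exactly that bit pattern and reinterprets it as a single-precision float using the standard `struct` module:

```python
import struct

# Assemble the single-precision bit pattern described above:
# sign = 0, biased exponent = 104 (0b01101000), mantissa = all zeros.
sign = 0
biased_exponent = 104          # 104 - 127 = -23
mantissa = 0

bits = (sign << 31) | (biased_exponent << 23) | mantissa
print(f"bit pattern: 0x{bits:08X}")           # 0x34000000

# Reinterpret those 32 bits as an IEEE-754 single-precision float.
value = struct.unpack(">f", struct.pack(">I", bits))[0]
print(value == 2**-23)                        # True: the bits store 2**-23 exactly
```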
However, storing this particular bit sequence in memory and printing it back out in decimal notation, with 25 digits of precision past the decimal point, results in the following:
```
0.0000001192092895507812500
                      ^
                      |
                      margin of error starts here
```
This value contains an error of exactly `1.25e-21`. On this interactive website, this error value is referred to as an "Error due to conversion".
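
For what it is worth, I can reproduce both the printed digits and the reported error value with a quick sketch of my own (the `Decimal` arithmetic here is just how I am measuring the difference, not something the site does):

```python
from decimal import Decimal

x = 2**-23

# Printing with 25 digits past the decimal point, as described above.
print(f"{x:.25f}")                                    # 0.0000001192092895507812500

# Difference between the stored value and 1.1920928955078e-7.
print(Decimal(x) - Decimal("1.1920928955078e-7"))     # 1.25E-21
```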
I am having trouble grasping this, because I understand, for example, why a value such as `+3.14` cannot be exactly represented by a single-precision bit field: no combination of negative powers of two in the mantissa, scaled by the value in the exponent, can exactly represent `3.14`, so the next closest approximation is chosen, and an "error due to conversion" is expected. By contrast, the value 2⁻²³ can be stored exactly in a single-precision bit field, yet when it is converted back to decimal notation, an error appears.
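
As a sanity check on that contrast, here is another small sketch of mine that round-trips both values through a single-precision encoding: `3.14` comes back slightly changed, while 2⁻²³ survives unchanged:

```python
import struct

def roundtrip_float32(x):
    """Encode x as an IEEE-754 single-precision float and decode it back."""
    return struct.unpack(">f", struct.pack(">f", x))[0]

print(roundtrip_float32(3.14))              # not exactly 3.14: the nearest float32 was stored
print(roundtrip_float32(2**-23) == 2**-23)  # True: 2**-23 is stored exactly
```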
There's clearly some sort of misunderstanding on my part, but I can't figure out where exactly.