In addition to what has been said in other answers: Intel floating point units internally use a full 80-bit extended representation, with more bits than a float actually has, so when an intermediate result is finally rounded to the nearest 32-bit float (which I assume is what your output shows), it can look more precise than a float really is and give the impression that it holds all the bits of an int.
IEEE-754 specifies a 32-bit float as a number with 23 bits dedicated to storing the significand. For a normalized number the most significant bit is implicit (not stored, as it is always a 1 bit), so you actually have 24 bits of significand, of the form 1xxxxxxx_xxxxxxxx_xxxxxxxx. This means 2^24-1 (11111111_11111111_11111111) is the last number up to which every integer is representable exactly. After it, you can represent all the even numbers but not the odd ones, as you lack the least significant bit needed to distinguish them. So you are able to represent:
(the trailing dot marks the position of the decimal point)
16777210 == 2^24-6 11111111_11111111_11111010.
16777211 == 2^24-5 11111111_11111111_11111011.
16777212 == 2^24-4 11111111_11111111_11111100.
16777213 == 2^24-3 11111111_11111111_11111101.
16777214 == 2^24-2 11111111_11111111_11111110.
16777215 == 2^24-1 11111111_11111111_11111111.
16777216 == 2^24 10000000_00000000_00000000_. <-- here the leap becomes 2, as the 24 significand bits no longer reach the units position.
16777217 == 2^24+1 10000000_00000000_00000000_. (there should be a 1 bit after the last 0)
16777218 == 2^24+2 10000000_00000000_00000001_.
...
33554430 == 2^25-2 11111111_11111111_11111111_.
33554432 == 2^25 10000000_00000000_00000000__. <-- here the leap becomes 4 as there's another shift
33554436 == 2^25+4 10000000_00000000_00000001__.
...
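You can check this on your own machine. A minimal C sketch (assuming IEEE-754 single precision floats; the comments show the expected output):

    #include <stdio.h>

    int main(void)
    {
        /* Every integer up to 2^24 fits in the 24-bit significand exactly. */
        printf("%.1f\n", (double)(float)16777215);   /* 16777215.0 */
        printf("%.1f\n", (double)(float)16777216);   /* 16777216.0 */

        /* 2^24+1 is odd and would need 25 bits, so it is rounded to a neighbour. */
        printf("%.1f\n", (double)(float)16777217);   /* 16777216.0 */
        printf("%.1f\n", (double)(float)16777218);   /* 16777218.0 */

        /* Above 2^24 the leap between floats is 2, so adding 1.0f gets lost. */
        float f = 16777216.0f;
        float g = f + 1.0f;       /* the sum is rounded back to a representable float */
        printf("%d\n", g == f);   /* 1 (true) */
        return 0;
    }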
If you imagine the problem in base 10, assume we have floating point numbers with just 3 decimal digits of significand and a power-of-ten exponent. When we begin counting from 0, we get this:
1 => 1.00E0
...
8 => 8.00E0
9 => 9.00E0
10 => 1.00E1 <<< see what happened here: the significand is the same as for the first entry, but the exponent of ten has been incremented, which shifts every digit one place to the left.
11 => 1.10E1
...
98 => 9.80E1
99 => 9.90E1
100 => 1.00E2 <<< and here.
101 => 1.01E2
...
996 => 9.96E2
997 => 9.97E2
998 => 9.98E2
999 => 9.99E2
1000 => 1.00E3 <<< exact, but here you no longer have a fourth digit to represent the units.
1001 => 1.00E3 (this number cannot be represented exactly)
...
1004 => 1.00E3 (this number cannot be represented exactly)
1005 => 1.01E3 (this number cannot be represented exactly) <<< here a halfway value has to be rounded; which neighbour it goes to depends on the rounding mode (see the note below).
...
1009 => 1.01E3 (this number cannot be represented exactly)
1010 => 1.01E3 <<< this is the next number that can be represented exactly with three significand digits, so the spacing between representable numbers has jumped from one to ten.
...
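You can mimic this toy decimal format with printf, since %.2e keeps exactly three significant decimal digits (this is only an illustration of the analogy, not of how a binary float stores the value; the exact halfway cases depend on the rounding mode):

    #include <stdio.h>

    int main(void)
    {
        /* %.2e prints 3 significant decimal digits, like the toy format above. */
        double values[] = { 9, 10, 99, 100, 999, 1000, 1001, 1006, 1010 };
        for (int i = 0; i < (int)(sizeof values / sizeof values[0]); ++i)
            printf("%6.0f => %.2e\n", values[i], values[i]);
        return 0;
    }

With the usual rounding this prints 1.00e+03 for both 1000 and 1001, and 1.01e+03 for 1006 and 1010, showing the same ten-by-ten spacing as above.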
Note
The case you show corresponds to the default rounding mode specified by IEEE-754 and implemented by Intel processors: round to nearest, ties to even. The value is rounded to the nearest representable number, and when it falls exactly halfway between two representable numbers, it goes to the one whose least significant significand bit is zero (the "even" one). This avoids a systematic bias towards rounding up, which matters, for example, in banking (banks avoid binary floating point precisely because they need exact control over the rounding).
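For example, two ties only two units apart can go in opposite directions under this mode. A small sketch, assuming the default IEEE-754 round-to-nearest-even mode is in effect:

    #include <stdio.h>

    int main(void)
    {
        /* 16777217 = 2^24+1 is exactly halfway between 16777216 and 16777218;
           16777216 has a zero (even) last significand bit, so the tie goes down. */
        printf("%.1f\n", (double)(float)16777217.0);   /* 16777216.0 */

        /* 16777219 = 2^24+3 is exactly halfway between 16777218 and 16777220;
           here 16777220 is the one with the even last bit, so the tie goes up. */
        printf("%.1f\n", (double)(float)16777219.0);   /* 16777220.0 */
        return 0;
    }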