You can see the binary representation of the floating-point number 1.0 with the following lines of code:
#include <stdio.h>

int main(void) {
    float a = 1.0;
    /* reinterpret the float's bit pattern as an int (but see the update below) */
    printf("in hex, this is %08x\n", *((int*)(&a)));
    printf("the int representation is %d\n", *((int*)(&a)));
    return 0;
}
This results in
in hex, this is 3f800000
the int representation is 1065353216
The format of a 32-bit floating-point number is given by

1 sign bit (s) = 0
8 exponent bits (e) = 0x7F = 127
23 mantissa bits (m) = 0
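If you want to pull those three fields apart in code, here is a quick sketch (it copies the bits with memcpy, which sidesteps the aliasing issue discussed in the update below):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void) {
    float a = 1.0;
    uint32_t bits;
    memcpy(&bits, &a, sizeof bits);           /* grab the raw bit pattern */

    unsigned sign     = bits >> 31;            /* 1 sign bit */
    unsigned exponent = (bits >> 23) & 0xFF;   /* 8 exponent bits */
    unsigned mantissa = bits & 0x7FFFFF;       /* 23 mantissa bits */

    /* for 1.0 this prints: sign = 0, exponent = 0x7F (127), mantissa = 0x000000 */
    printf("sign = %u, exponent = 0x%X (%u), mantissa = 0x%06X\n",
           sign, exponent, exponent, mantissa);
    return 0;
}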
You add an (implied) 1 in front of the mantissa; in the above case the mantissa is all zeros, so the implied value is

1000 0000 0000 0000 0000 0000

This is 2^23, or 8388608. Now you multiply by (-1)^sign, which is 1 in this case.
Finally, you multiply by 2^(exponent - 150). Really, you should express the mantissa as the fraction 1.0000000 and multiply by 2^(exponent - 127), where 127 is the exponent bias; treating the mantissa as the integer 2^23 instead just divides the result by 2^23, which is where 150 = 127 + 23 comes from. Either way, the result is 1.0.
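To check that arithmetic in code, here is a sketch that rebuilds 1.0 from the three fields; ldexp(x, n) from <math.h> computes x * 2^n:

#include <stdio.h>
#include <math.h>

int main(void) {
    unsigned sign = 0, exponent = 127, mantissa = 0;   /* the fields of 1.0 */

    /* integer mantissa with the implied leading 1: 2^23 + m = 8388608 */
    double full_mantissa = (double)((1u << 23) | mantissa);

    /* (-1)^sign * (2^23 + m) * 2^(exponent - 150) */
    double value = (sign ? -1.0 : 1.0) * ldexp(full_mantissa, (int)exponent - 150);

    printf("reconstructed value = %f\n", value);       /* prints 1.000000 */
    return 0;
}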
That should clear it up for you.
UPDATE: it was pointed out in the comments that my code example may invoke undefined behavior, although my gcc compiler generated no warnings or errors. The code below is a more correct way to prove that 1.0 is 1065353216 as an int (for 32-bit int and float ...):
#include <stdio.h>

/* reading a different union member than the one last written
   is the usual way to type-pun in C */
union {
    int i;
    float a;
} either;

int main(void) {
    either.a = 1.0;
    printf("in hex, this is %08x\n", either.i);
    printf("the int representation is %d\n", either.i);
    return 0;
}
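Alternatively, memcpy is well defined in both C and C++ for this kind of bit-level copy, and compilers typically optimize the copy away (a sketch under the same 32-bit int/float assumption):

#include <stdio.h>
#include <string.h>

int main(void) {
    float a = 1.0;
    int i;
    memcpy(&i, &a, sizeof i);   /* copies the raw bit pattern, no aliasing issue */
    printf("in hex, this is %08x\n", i);
    printf("the int representation is %d\n", i);
    return 0;
}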