Let's say that I have a single-precision floating-point variable on my machine and I want to assign to it the result of a given operation. From Wikipedia:
The IEEE 754 standard specifies a binary32 as having:
- Sign bit: 1 bit
- Exponent width: 8 bits
- Significand precision: 24 bits (23 explicitly stored)
This gives from 6 to 9 significant decimal digits precision.
It is not clear to me how the last claim (6 to 9 significant decimal digits) is derived.
In general, given a data type such as `float32` above, or `float64`, how can one find out the precision limit in base 10?
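For instance, here is a quick experiment with NumPy (assuming `numpy` is available) that shows the rounding I am trying to understand, together with the quantity `24 * log10(2)` that I suspect is related to the digit count:

```python
import math
import numpy as np

# Storing a 9-digit decimal in a float32 rounds it: only about
# 7 significant decimal digits survive the 24-bit significand.
x = np.float32(0.123456789)
print(x)  # not equal to 0.123456789

# Machine epsilon hints at the decimal precision of each type.
print(np.finfo(np.float32).eps)  # ~1.19e-07
print(np.finfo(np.float64).eps)  # ~2.22e-16

# 24 significand bits correspond to 24 * log10(2) ~ 7.22
# decimal digits, which seems related to the "6 to 9" claim.
print(24 * math.log10(2))
```

Is this `log10(2)` reasoning the right way to derive the stated decimal precision, and if so, where do the bounds 6 and 9 come from?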