How many bits of precision does a normalized double-precision IEEE floating-point number have? Is there some sort of formula?
What I found was this, but I feel like it is wrong:
• ±2^−1022 to (2 − 2^−52) × 2^1023
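For what it's worth, that expression describes the *range* of normalized doubles rather than their precision. A minimal Python check (assuming CPython, where `float` is an IEEE 754 double on essentially every platform):

```python
import sys

# The quoted expression is the range of normalized doubles, not their precision.
print(sys.float_info.min)                            # 2**-1022, smallest positive normalized double
print(sys.float_info.max)                            # largest finite double
print(sys.float_info.max == (2 - 2**-52) * 2**1023)  # True
```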
Please read https://en.wikipedia.org/wiki/IEEE_754-1985
In particular, it reads:
"Precision is defined as the minimum difference between two successive mantissa representations; thus it is a function only in the mantissa;"
A double-precision IEEE 754 value uses 52 bits for the mantissa. A normalized value has an implicit '1' bit:
The leading 1 bit is omitted since all numbers except zero start with a leading 1; the leading 1 is implicit and doesn't actually need to be stored which gives an extra bit of precision for "free."
Thus, there are 53 bits of precision in an IEEE 754 double-precision floating-point value.
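A short Python sketch illustrating the 53-bit significand (again assuming `float` is an IEEE 754 double):

```python
import sys

# 52 stored fraction bits + 1 implicit leading bit = 53 bits of precision.
print(sys.float_info.mant_dig)              # 53

# Every integer up to 2**53 is exactly representable; 2**53 + 1 is not.
print(float(2**53 - 1) == float(2**53))     # False -- still distinguishable
print(float(2**53) == float(2**53 + 1))     # True  -- 2**53 + 1 rounds back to 2**53

# Machine epsilon, the gap between 1.0 and the next representable double, is 2**-52.
print(sys.float_info.epsilon == 2**-52)     # True
```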