The answer to your question is that an IEEE 754 double-precision number is a 64-bit value:
- 1 bit is the sign
- 11 bits represent the exponent
- 52 bits represent the significand (but due to a little deep magick, namely an implicit leading 1 bit, the significand actually has 53 bits of precision).
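To make the layout concrete, here is a minimal C sketch (the value -6.25 is just an arbitrary example) that reinterprets a double's bit pattern and slices out the three fields:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    double d = -6.25;
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);   /* reinterpret the 64-bit pattern */

    uint64_t sign        = bits >> 63;                  /* 1 bit               */
    uint64_t exponent    = (bits >> 52) & 0x7FF;        /* 11 bits, bias 1023  */
    uint64_t significand = bits & 0xFFFFFFFFFFFFFULL;   /* 52 stored bits      */

    printf("sign=%llu exponent=%llu (unbiased %lld) significand=0x%llx\n",
           (unsigned long long)sign,
           (unsigned long long)exponent,
           (long long)exponent - 1023,
           (unsigned long long)significand);
    return 0;
}
```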
It can represent no more than 2^64 discrete values, the same as a 64-bit integer (and actually fewer, since many bit patterns are spent on NaNs, positive and negative zero, etc.).
Its range, however, is much larger than that of a 64-bit integer: it can represent values roughly from 10^-308 to 10^+308 ... albeit with no more than 15 to 17 significant decimal digits of precision.
Floating point trades precision for range. It's a tradeoff.
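One way to watch that tradeoff happen: with only 53 bits of significand precision, consecutive integers stop being distinguishable above 2^53. A small illustrative sketch in C (2^53 = 9007199254740992):

```c
#include <stdio.h>

int main(void) {
    double below = 9007199254740992.0;  /* 2^53: still exactly representable  */
    double above = below + 1.0;         /* 2^53 + 1: rounds back down to 2^53 */

    printf("%.1f == %.1f ? %s\n", below, above,
           below == above ? "yes" : "no");  /* prints "yes": one ulp is now 2 */
    return 0;
}
```

Both values print the same because 2^53 + 1 has no exact double representation, so it rounds to the nearest representable value, which is 2^53 itself.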
See IEEE-754 Double Precision Binary Floating Point Format for rather more detail.
Even better, read David Goldberg's 1991 paper, "What every computer scientist should know about floating-point arithmetic":
Abstract. Floating-point arithmetic is considered an esoteric subject by many people.
This is rather surprising, because floating-point is ubiquitous in computer systems:
Almost every language has a floating-point datatype; computers from PCs to supercomputers
have floating-point accelerators; most compilers will be called upon to compile floating-point
algorithms from time to time; and virtually every operating system must respond to floating-point
exceptions such as overflow. This paper presents a tutorial on the aspects of floating-point that
have a direct impact on designers of computer systems. It begins with background on floating-point
representation and rounding error, continues with a discussion of the IEEE floating-point standard,
and concludes with examples of how computer system builders can better support floating point.
David Goldberg. 1991. "What every computer scientist should know about floating-point arithmetic".
ACM Comput. Surv. 23, 1 (March 1991), 5-48. DOI=10.1145/103162.103163
http://doi.acm.org/10.1145/103162.103163