The reason is that Python's float type (typically an IEEE 754 double precision floating point number) has no such value as 2.49999999999999992. Floating point numbers are generally of the form mantissa*base**exponent, and in Python you can find the limits for float in particular within sys.float_info. For starters, let's calculate how many digits the mantissa itself can hold:
>>> from sys import float_info
>>> print(float_info.radix**float_info.mant_dig) # How big can the mantissa get?
9007199254740992
>>> print("2.49999999999999992")
2.49999999999999992
>>> 2.49999999999999992
2.5
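To see that mantissa*base**exponent form directly, math.frexp (standard library, though not used above) splits a float into a mantissa and a base-2 exponent. A small sketch, assuming a typical IEEE 754 build:

```python
import math
import sys

x = 2.5
# frexp() returns (m, e) such that x == m * 2**e, with 0.5 <= abs(m) < 1
mantissa, exponent = math.frexp(x)
print(mantissa, exponent)              # 0.625 2, i.e. 0.625 * 2**2 == 2.5
print(math.ldexp(mantissa, exponent))  # 2.5 -- ldexp() reverses frexp()

# The limits discussed above, on a typical IEEE 754 build:
print(sys.float_info.radix, sys.float_info.mant_dig)  # 2 53
```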
Clearly the number we've entered holds more digits than the mantissa can. Just how close to 2.5 can we get before the difference is lost?
>>> print(2.5*float_info.epsilon)
5.551115123125783e-16
The e-16 here means *10**-16, so let's reformat that for comparison:
>>> print("%.17f" % (2.5*float_info.epsilon)); print("2.49999999999999992")
0.00000000000000056
2.49999999999999992
This indicates that at a magnitude of around 2.5, differences smaller than about 5.6e-16 (including the 8e-17 by which our number falls short of 2.5) are lost in the storage itself. The literal is therefore read as 2.5, which then rounds up.
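On Python 3.9 and newer, math.nextafter (not available when this answer was first written) shows the two representable neighbours of 2.5 directly, making it clear why 2.49999999999999992 has nowhere else to go:

```python
import math  # math.nextafter requires Python 3.9+

x = 2.5
below = math.nextafter(x, 0.0)  # nearest representable float below 2.5
above = math.nextafter(x, 4.0)  # nearest representable float above 2.5
print('%.17f < %.17f < %.17f' % (below, x, above))
# 2.49999999999999956 < 2.50000000000000000 < 2.50000000000000044

# 2.49999999999999992 lies between 'below' and 2.5, but closer to 2.5:
print(2.49999999999999992 == 2.5)  # True
```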
We can also calculate an estimate of how many significant digits we can use:
>>> import math, sys
>>> print(math.log10(sys.float_info.radix**sys.float_info.mant_dig))
15.954589770191003
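Incidentally, sys.float_info also reports the floor of this figure directly: float_info.dig is the number of decimal digits guaranteed to be preserved in a float.

```python
import sys

# The number of decimal digits that can always be represented faithfully;
# 15 on a typical IEEE 754 double precision build.
print(sys.float_info.dig)
```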
Very nearly, but not quite, 16. In binary the first digit is always 1, so the number of significant binary digits is a known constant (mant_dig), but in decimal the leading digit can consume anywhere from one to four bits, which means the last decimal digit may be off by more than one. Usually we hide this by printing only with limited precision, but it actually affects lots of numbers:
>>> print('%f = %.17f' % (1.1, 1.1))
1.100000 = 1.10000000000000009
>>> print('%f = %.17f' % (10.1, 10.1))
10.100000 = 10.09999999999999964
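A handy way to see the exact value a float stores, without any %.17f rounding, is to convert it to decimal.Decimal (a side demonstration, not part of the original answer); the conversion from a float is exact:

```python
from decimal import Decimal

# Decimal(float) reproduces the stored binary value exactly:
print(Decimal(1.1))
# 1.100000000000000088817841970012523233890533447265625
print(Decimal(2.49999999999999992))  # 2.5 -- already rounded by the parser
```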
Such is the inherent imprecision of floating point numbers. Types like bigfloat, decimal and fractions (thanks to David Halter for these examples) can push the limits around, but if you start looking at many digits you need to be aware of these limits. Also note that this is not unique to computers; an irrational number, such as pi or sqrt(2), cannot be written exactly in any integer base.
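As a sketch of how those types help (using only the standard library decimal and fractions modules; bigfloat is a third-party package), the trick is to pass the digits as a string, before the float parser has had a chance to round them:

```python
from decimal import Decimal
from fractions import Fraction

# Given as a string, every digit survives:
print(Decimal('2.49999999999999992') < Decimal('2.5'))  # True
print(Fraction('2.49999999999999992'))
# 31249999999999999/12500000000000000
print(Fraction('2.49999999999999992') == Fraction(5, 2))  # False
```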