
I decided to multiply two numbers, 1.0 and 17299991019999108999910899991119999, and as a result I received: 1.729999101999911e+34

Code:

print(1.0*17299991019999108999910899991119999)
# output: 1.729999101999911e+34

Here I computed it on a calculator: *(screenshot of the calculator result)*

Why is Python not calculating this correctly?

  • `>>> print(1.0*17299991019999108999910899991119999.0)` gives `1.729999101999911e+34` – Kucer0043 May 12 '23 at 15:55
  • Floating-point numbers have limited precision; try using [`decimal`](https://docs.python.org/3/library/decimal.html). – bereal May 12 '23 at 15:56
  • Floating point values are only accurate to about the 15th decimal place. – pippo1980 May 12 '23 at 15:57
  • Contrast this with `1 * 17299991019999108999910899991119999` which gives `17299991019999108999910899991119999`, because integers in Python have arbitrary precision. – slothrop May 12 '23 at 16:02
  • `>>> from decimal import *`, then `>>> print(Decimal(1.0) * Decimal(17299991019999108999910899991119999.0))` gives `1.729999101999910967850497553E+34` – Kucer0043 May 12 '23 at 16:02
  • You can see more clearly where the floating point precision is lost by printing all the digits: `"{:.4f}".format(17299991019999108999910899991119999)` gives `'17299991019999109678504975534129152.0000'` – FrontRanger May 12 '23 at 16:02
  • @Kucer0043 the problem with `Decimal(17299991019999108999910899991119999.0)` is that it uses a literal which the parser treats as floating point (so limited precision). You would need `Decimal("17299991019999108999910899991119999.0")`, which lets you construct it losslessly from a string literal. You would also need to set the precision appropriately high before the calculation, e.g. `getcontext().prec = 50`. – slothrop May 12 '23 at 16:09
  • You cannot create a `Decimal` from a float and expect it to be precise, because your `172...99.0` is already a float, which loses precision immediately... – STerliakov May 12 '23 at 16:09
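Pulling the comment thread together: the constructor argument decides whether `Decimal` is exact. A minimal sketch using only the standard-library `decimal` module (the precision value follows slothrop's suggestion):

```python
from decimal import Decimal, getcontext

# Give the context enough significant digits for the 35-digit value.
getcontext().prec = 50

# From a float literal: the parser rounds to 53 bits first, so the
# damage is done before Decimal ever sees the number.
print(Decimal(17299991019999108999910899991119999.0))
# 17299991019999109678504975534129152

# From a string (or an int): constructed losslessly.
print(Decimal("17299991019999108999910899991119999") * Decimal("1.0"))
# 17299991019999108999910899991119999.0
```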

1 Answer


Because `1.0` is a float, and the rules of Python arithmetic give float * int = float.
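You can check the promotion directly; this is just the question's expression with a `type` call around the result:

```python
x = 1.0 * 17299991019999108999910899991119999
print(type(x))  # <class 'float'> -- the int is converted to float before multiplying
```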

As you should know if you want to do any serious mathematical work, a Python float is an IEEE 754 double-precision number, and thus (except for "denormals") has 53 bits (about 15-16 decimal digits) of precision. The number 17299991019999108999910899991119999 is 114 bits long, so it gets rounded off.
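You can see both the size of the number and the rounding with a couple of built-in calls (the exact digits of the rounded float match FrontRanger's comment above):

```python
n = 17299991019999108999910899991119999

print(n.bit_length())     # 114 -- more than the 53 bits a float can hold
print(int(float(n)))      # 17299991019999109678504975534129152
print(int(float(n)) - n)  # 678594075543009153 -- the rounding error
```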

If you need high-precision arithmetic, use an appropriate data type instead of float; a short sketch of all three follows the list.

  • `int` can exactly represent values as large as can fit in your computer's memory. Of course, it only works for integers, so it can't be used for anything with digits after the decimal point.
  • `fractions.Fraction` can exactly represent any rational number. Of course, it won't work for things like square roots, trig functions, and logarithms.
  • `decimal.Decimal` exactly represents decimal fractions (e.g., `Decimal('0.01')` really is 0.01 and not 0.01000000000000000020816681711721685132943093776702880859375), and can approximate arbitrary real numbers to a user-specified accuracy (by default, 28 significant digits). You still get some rounding errors (e.g., 1/3 can't be represented exactly), but less than you get with float.
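A minimal sketch of all three, standard library only (the big literal is the one from the question; the precision of 40 is just an arbitrary value large enough for its 35 digits):

```python
from decimal import Decimal, getcontext
from fractions import Fraction

n = 17299991019999108999910899991119999

# int: arbitrary precision, so the product is exact.
print(n * 1)            # 17299991019999108999910899991119999

# Fraction: exact rational arithmetic.
print(Fraction(n) * 1)  # 17299991019999108999910899991119999

# Decimal: exact as long as the context precision is large enough.
getcontext().prec = 40
print(Decimal(n) * Decimal("1.0"))
# 17299991019999108999910899991119999.0
```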
– dan04