
I have installed Python 3.6 (the Anaconda distribution) on two different machines. I cannot swear that I used the same installer file, although I think I did. When I check the Python, Anaconda and numpy versions I see the same output on both machines:

[screenshot: versions on the server machine]

[screenshot: versions on the local machine]

I was getting small numerical differences. After some debugging I succeeded in reducing the issue to invocations of numpy.exp. Just running the code

import numpy as np

x = -0.1559828702879514361612223
y = np.exp(x)
print("The exponential of %0.25f is %0.25f" % (x, y))

I get

The exponential of -0.1559828702879514361612223 is 0.8555738459791129013609634

on the first ('server') machine and

The exponential of -0.1559828702879514361612223 is 0.8555738459791127903386609

on the second ('local') machine.

I know that floats do not have 25 decimal digits of precision, but these differences propagate through my code and end up affecting roughly the 12th decimal place.
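
If it helps, the two printed results seem to be adjacent doubles, i.e. exactly one ulp apart. A quick check along these lines (treating the printed values as float literals, which round-trips for doubles) appears to confirm it:

import numpy as np

server = 0.8555738459791129013609634   # result printed on the 'server' machine
local  = 0.8555738459791127903386609   # result printed on the 'local' machine

print(server - local)                      # ~1.11e-16, the spacing between doubles near 0.85
print(np.nextafter(local, 1.0) == server)  # True: the two values are neighbouring doubles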

What could be the reason for the different behaviour?

zeycus
  • For what it's worth, I get the same output as your "server" while having a newer numpy and an older Python. Maybe it has to do with the processor? – Ignacio Vergara Kausel Jun 07 '17 at 10:21
  • Try to avoid including code as images, as that puts it out of reach of search engines. – P. Camilleri Jun 07 '17 at 10:35
  • @IgnacioVergaraKausel Thanks. Maybe you are right; I thought results were processor-independent, but maybe not. Probably out of ignorance, I find that unsettling if it is the case: I replace my machine, and then my numbers change?! – zeycus Jun 07 '17 at 11:14
  • I'd be curious to know what processors are in both machines. It's interesting to compare with `Decimal.exp(Decimal(x))`, which returns `0.8555738459791128455724346509` (see the sketch after these comments); it should be more accurate than `numpy` or the built-in `float` type. – Mark Ransom Jul 11 '17 at 03:43
  • @Mark In fact, with Mathematica I checked that the true value is very close to the midpoint of the two values. – zeycus Jul 11 '17 at 10:45
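
For reference, the Decimal computation mentioned in that comment can be sketched roughly as follows (assuming the default 28-digit context; the expected output is the value quoted in the comment):

from decimal import Decimal, getcontext

getcontext().prec = 28            # the default context precision; raise it for a tighter reference
x = -0.1559828702879514361612223
print(Decimal(x).exp())           # 0.8555738459791128455724346509, per the comment above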

1 Answer


This is not really about NumPy but about the results of floating-point operations being system-dependent. You would get the same results without NumPy by using math.exp instead. A simpler example is

import math

math.exp(2**(-53)) - 1

which returns exactly 0 on one of my computers and 2.22e-16 on another. Both of these are equally wrong, as the computation math.expm1(2**(-53)) = 1.11e-16 demonstrates (incidentally, this is why the function expm1 exists).
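
For reference, a small self-contained version of that comparison (the first result may come out as 0.0 or roughly 2.22e-16 depending on the machine, as described above):

import math

x = 2.0 ** -53
print(math.exp(x) - 1)   # loses significance: exp(x) rounds to 1 or 1 + 2**-52, giving 0.0 or ~2.22e-16
print(math.expm1(x))     # exp(x) - 1 computed directly: ~1.11e-16, close to the true value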

In a way, the CPU dependence does you a favour: it clearly shows that the digits which differ between the two systems are worthless. The thing to focus on is arranging the computations to reduce the loss of significance.

  • Thank you @alex, your example is very illuminating. I was not aware at all of such CPU dependence, but I will be from now on. – zeycus Jul 11 '17 at 10:50