
Comparing the results of a floating point computation across a couple of different machines, I find that they consistently produce different results. Here is a stripped-down example that reproduces the behavior:

import numpy as np
from numpy.random import randn as rand

M = 1024
N = 2048
np.random.seed(0)

a = rand(M,N).astype(dtype=np.float32)
w = rand(N,M).astype(dtype=np.float32)

b = np.dot(a, w)
for i in range(10):
    b = b + np.dot(b, a)[:, :1024]
    np.divide(b, 100., out=b)

print(b[0, :3])

Different machines produce slightly different results, for example:

  • [ -2.85753540e-05 -5.94204867e-05 -2.62337649e-04]
  • [ -2.85751412e-05 -5.94208468e-05 -2.62336689e-04]
  • [ -2.85754559e-05 -5.94202756e-05 -2.62337562e-04]

but I can also get identical results, e.g. by running on two MacBooks of the same vintage. This happens between machines that have the same version of Python and numpy but are not necessarily linked against the same BLAS libraries (e.g. the Accelerate framework on Mac, OpenBLAS on Ubuntu). However, shouldn't different numerical libraries all conform to the same IEEE floating point standard and give exactly the same results?
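For illustration (this sketch is not from the original question): one common source of such differences is that two BLAS implementations may accumulate the same dot product in different orders. Even summing identical float32 data sequentially versus with numpy's pairwise summation typically rounds differently:

```python
import numpy as np

np.random.seed(0)
x = np.random.randn(10000).astype(np.float32)

# Strict left-to-right accumulation, forcing float32 rounding at each step.
seq = np.float32(0.0)
for v in x:
    seq = np.float32(seq + v)

# np.sum uses pairwise summation internally, so the additions are
# grouped differently and the rounding errors accumulate differently.
pairwise = np.sum(x)

print(seq, pairwise)  # typically close, but not bit-identical
```

Both results are valid float32 sums of the same data; they simply reflect different association orders, which is exactly what differently vectorized BLAS kernels do.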

Mark Dickinson
Urs

1 Answer

Floating point calculations are not always reproducible.

You may get reproducible results for floating point calculations across different machines only if you use the same executable image, the same inputs, and libraries built with the same compiler and identical compiler settings (switches).

However, if you use a dynamically linked library, you may get different results for numerous reasons. First of all, as Veedrac pointed out in the comments, it might use different algorithms for its routines on different architectures. Second, a compiler might produce different code depending on its switches (various optimization and control settings). Even `a + b + c` can yield different results across machines and compilers, because we cannot be sure about the order of evaluation or the precision of intermediate calculations.
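A minimal sketch of the `a + b + c` point: floating point addition is not associative, so even with three ordinary doubles the grouping changes the rounded result. (Python itself evaluates left to right, but a C compiler or a SIMD BLAS kernel is free to regroup.)

```python
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # 0.1 + 0.2 rounds up to 0.30000000000000004
right = a + (b + c)  # 0.2 + 0.3 rounds exactly to 0.5

print(left == right)  # prints False: the two groupings differ in the last bit
```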

Read here why identical results are not guaranteed across different IEEE 754-1985 implementations. The newer standard (IEEE 754-2008) tries to go further, but it still doesn't guarantee identical results among different implementations, because, for example, it allows implementers to choose when tininess (the underflow exception) is detected.

More information about floating point determinism can be found in this article.

Konstantin
    At Python level, at least, we *can* be sure about order of evaluation and intermediate precisions in something like `a + b + c`: the order of evaluation is deterministic, and intermediate results are forced to memory (so nondeterminism due to unpredictable register spill isn't an issue). There *is* still a possibility of double rounding in a *single* arithmetic operation, though that problem's slowly becoming rarer... – Mark Dickinson May 06 '15 at 15:52