This question is about the precision of computation using NumPy vs. Octave/MATLAB (the MATLAB code below has only been tested with Octave, however). I am aware of a similar question on Stack Overflow, namely this one, but it seems somewhat far from what I'm asking below.

Setup

Everything is running on Ubuntu 14.04.

Python version 3.4.0.

NumPy version 1.8.1 compiled against OpenBLAS.

Octave version 3.8.1 compiled against OpenBLAS.

Sample Code

Sample Python code.

import numpy as np
from scipy import linalg as la

def build_laplacian(n):
  # Tridiagonal matrix with ones on the sub- and super-diagonals.
  lap=np.zeros([n,n])
  for j in range(n-1):
    lap[j+1][j]=1
    lap[j][j+1]=1
  # These two assignments are already covered by the loop above;
  # they are kept here as in the original code.
  lap[n-1][n-2]=1
  lap[n-2][n-1]=1

  return lap

def evolve(s, lap):
  # Apply exp(-i*s*lap) to the first standard basis vector,
  # then replace each amplitude by its squared modulus.
  wave=la.expm(-1j*s*lap).dot([1]+[0]*(lap.shape[0]-1))
  for i in range(len(wave)):
    wave[i]=np.linalg.norm(wave[i])**2

  return wave

We now run the following.

np.min(evolve(2, build_laplacian(500)))

which gives something on the order of 1e-34.
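As an aside, the normalization loop in evolve is equivalent to the vectorized form below; this is purely a stylistic variant (the name evolve_vec is mine) and does not change the numbers.

import numpy as np
from scipy import linalg as la

def evolve_vec(s, lap):
  # Same computation as evolve, without the explicit loop:
  # apply exp(-i*s*lap) to e_0 and take squared moduli.
  d = np.zeros(lap.shape[0]); d[0] = 1
  wave = la.expm(-1j*s*lap).dot(d)
  return np.abs(wave)**2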

We can produce similar code in Octave/MATLAB:

function lap=build_laplacian(n)
  % Same tridiagonal matrix as in the Python code.
  lap=zeros(n,n);
  for i=1:(n-1)
    lap(i+1,i)=1;
    lap(i,i+1)=1;
  end

  lap(n,n-1)=1;
  lap(n-1,n)=1;
end

function result=evolve(s, lap)
  % Apply expm(-i*s*lap) to the first standard basis vector,
  % then take squared moduli.
  d=zeros(length(lap(:,1)),1); d(1)=1;
  result=expm(-1i*s*lap)*d;
  for i=1:length(result)
    result(i)=norm(result(i))^2;
  end
end

We then run

min(evolve(2, build_laplacian(500)))

and get 0. In fact, evolve(2, build_laplacian(500))(60) gives something around 1e-100 or less (as expected).

The Question

Does anyone know what is responsible for such a large discrepancy between NumPy and Octave? (Again, I haven't tested the code with MATLAB, but I'd expect to see similar results.)

Of course, one can also compute the matrix exponential by first diagonalizing the matrix. I have done this and have gotten similar or worse results (with NumPy).

EDITS

My scipy version is 0.14.0. I am aware that Octave/MATLAB use the Padé approximation scheme, and I am familiar with this algorithm. I am not sure what scipy does, but we can try the following.

  1. Diagonalize the matrix with numpy's eig or eigh (in our case the latter works fine since the matrix is Hermitian). This gives a vector w of eigenvalues and a matrix U whose columns are the corresponding eigenvectors, so that the original matrix M is given by U.dot(np.diag(w)).dot(U.T).

  2. Exponentiate the eigenvalues (this is now easy since the matrix is diagonal in the eigenbasis).

  3. Now, if d is the original vector d=[1]+[0]*(n-1), we get scipy.linalg.expm(-1j*s*M).dot(d) = U.dot(np.exp(-1j*s*w) * U.T.dot(d)), as sketched below.
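A minimal sketch of this route (the function name evolve_eigh is mine, not part of the original code):

import numpy as np

def evolve_eigh(s, lap):
  # lap = U @ diag(w) @ U.T, with orthonormal eigenvectors
  # in the columns of U.
  w, U = np.linalg.eigh(lap)
  d = np.zeros(lap.shape[0]); d[0] = 1
  # exp(-1j*s*lap) @ d == U @ (exp(-1j*s*w) * (U.T @ d))
  wave = U.dot(np.exp(-1j*s*w) * U.T.dot(d))
  return np.abs(wave)**2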

Unfortunately, this produces the same result as before. Thus this probably has something to do either with the way numpy.linalg.eig and numpy.linalg.eigh work, or with the way numpy does arithmetic internally.

So the question is: how do we increase numpy's precision? Indeed, as mentioned above, Octave seems to do a much better job in this case.
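For reference, one way to decide which implementation to trust is to recompute a small instance in extended precision. The sketch below assumes mpmath is installed and uses its expm (exact to working precision, but far too slow for n = 500, hence the small n; the name evolve_mp is mine):

import mpmath

def evolve_mp(s, n, dps=50):
  # Rebuild the same tridiagonal matrix in 50-digit arithmetic.
  mpmath.mp.dps = dps
  A = mpmath.zeros(n, n)
  for j in range(n-1):
    A[j+1, j] = 1
    A[j, j+1] = 1
  E = mpmath.expm(mpmath.mpc(0, -s)*A)
  # exp(-i*s*A) applied to e_0 is the first column of exp(-i*s*A).
  return [abs(E[i, 0])**2 for i in range(n)]

# e.g. compare evolve_mp(2, 50) against evolve(2, build_laplacian(50))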

  • You are using `expm` from `scipy`. What version of scipy are you using? – Warren Weckesser Aug 10 '14 at 00:43
  • `expm` is a tricky function. http://www.mathworks.com/help/matlab/ref/expm.html cites an article by Moler about 'Nineteen Dubious Ways'. The Padé approximation depends strongly on the preconditioning strategy. The Octave documentation for `expm` also discusses this approximation issue. – hpaulj Aug 10 '14 at 01:23
  • @WarrenWeckesser: I may be wrong, but I don't think it should matter much, as `scipy` calls `numpy`, no? Anyway, my version is `0.14.0`. – William Aug 10 '14 at 02:44
  • @hpaulj: Yes, I am aware of this, and am also familiar with the Padé approximation algorithm. My point is that Octave and MATLAB produce very good results, whereas `scipy/numpy` is off by quite a few orders of magnitude. We can actually diagonalize the matrix first with `numpy`'s `eig` or `eigh`, and then take the exponential of the diagonal matrix. This produces the same result. I don't know which algorithm `scipy` uses, but it seems this either has to do with the way `numpy` diagonalizes matrices, or with the internal arithmetic workings. Question is, how can we increase `numpy`'s precision? – William Aug 10 '14 at 02:50
  • @William: `expm` is implemented in scipy. The source code for the development version is in https://github.com/scipy/scipy/blob/master/scipy/linalg/matfuncs.py; the `expm` function there is just a thin wrapper of the real implementation in https://github.com/scipy/scipy/blob/master/scipy/sparse/linalg/matfuncs.py – Warren Weckesser Aug 10 '14 at 07:58
  • Try an issue search on https://github.com/scipy/scipy/issues – hpaulj Aug 10 '14 at 12:23
  • @WarrenWeckesser: Thank you for enlightening me on this; I should have done further research! – William Aug 10 '14 at 18:01

1 Answer


The following code

import numpy as np
from scipy import linalg as la
import scipy

print(np.__version__)
print(scipy.__version__)

def build_laplacian(n):
  lap=np.zeros([n,n])
  for j in range(n-1):
    lap[j+1][j]=1
    lap[j][j+1]=1
  lap[n-1][n-2]=1
  lap[n-2][n-1]=1
  return lap

def evolve(s, lap):
  wave=la.expm(-1j*s*lap).dot([1]+[0]*(lap.shape[0]-1))
  for i in range(len(wave)):
    wave[i]=la.norm(wave[i])**2
  return wave

r = evolve(2, build_laplacian(500))
print(np.min(abs(r)))
print(r[59])

prints

1.8.1
0.14.0
0
(2.77560227344e-101+0j)

for me, with OpenBLAS 0.2.8-6ubuntu1.

So it appears your problem is not immediately reproducible. (Your code examples above were not runnable as-is; there were typos.)

As mentioned in the scipy.linalg.expm documentation, the algorithm is from Al-Mohy and Higham (2009), which is different from the simpler scaling-and-squaring Padé approach used in Octave.

As a consequence, the results I get from Octave are also slightly different, although the two are eps-close in the 1, 2, and inf matrix norms. MATLAB uses the Padé approach from Higham (2005), which seems to give the same results as Scipy above.
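For concreteness, a comparison along these lines might look like the sketch below (not the exact code used; it checks scipy's expm against the eigendecomposition route in the 1, 2, and inf matrix norms):

import numpy as np
from scipy import linalg as la

n, s = 500, 2
lap = np.zeros((n, n))
i = np.arange(n-1)
lap[i+1, i] = 1  # sub-diagonal
lap[i, i+1] = 1  # super-diagonal

E_pade = la.expm(-1j*s*lap)                    # Al-Mohy/Higham in scipy
w, U = np.linalg.eigh(lap)
E_eig = U.dot(np.exp(-1j*s*w)[:, None] * U.T)  # U diag(e^{-isw}) U^T

diff = E_pade - E_eig
for p in (1, 2, np.inf):
    print(p, np.linalg.norm(diff, p))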

– pv.
  • This is nice. I will try to reproduce this. I have tried before on two machines (same software specs, different hardware) and was getting bad results (as in the original question). I will try again. – William Aug 10 '14 at 18:03
  • I get the same results, `0j` and `(2.77...e-101+0j)`, with numpy 1.9.0 and scipy 0.13.0. I edited the typos. – hpaulj Aug 10 '14 at 19:49
  • @hpaulj: Thank you! I will reinstall Numpy/Scipy from pip and try again. I am really at a loss as to what might be causing this. Will report back on the results soon. – William Aug 10 '14 at 22:19
  • @hpaulj, pv.: May I ask how you installed `numpy/scipy`? – William Aug 10 '14 at 22:30
  • OK, this beats me. After reinstalling two times, I was getting the same problem. I then uninstalled through pip and manually removed any remaining traces of `numpy/scipy` from the machine. I reinstalled from `pip3`, and voila: it works like a charm. No idea what had gone wrong before. Thanks, pv. and hpaulj, for running my code. I will accept this as an answer. – William Aug 10 '14 at 23:07
  • @William: one possible cause is having a left-over old Scipy version lying somewhere that is statically linked to a buggy old version of OpenBLAS (there have been a number of bugs in OpenBLAS matrix multiplication which can affect expm results). – pv. Aug 11 '14 at 11:39