I am trying to solve eigenvalue problems for large symmetric sparse matrices with Python. My main focus is structural dynamics, so I deal with mass and stiffness matrices and am usually interested in computing the first K eigenvalues (> 0). So far I have tried the scipy.sparse.linalg functions eigsh and lobpcg. I ran the following test:
from pyamg.gallery import poisson
N = 300
K = 9
A = poisson((N,N), format='csr') # <90000x90000 sparse matrix with 448800
# stored elements in CSR format>
First I tried scipy.sparse.linalg.eigsh. Since I look for the eigenvalues closest to zero, I set sigma=0, which switches eigsh into shift-invert mode:
from scipy.sparse.linalg import eigsh
W, V = eigsh(A=A, k=K, sigma=0, tol=1e-8)
%timeit -n10 eigsh(A=A, k=K, sigma=0, tol=1e-8)
# Result: 10 loops, best of 3: 2.42 s per loop
Result:
W = array([ 0.00021787, 0.00054466, 0.00054466, 0.00087145, 0.00108927, 0.00108927, 0.00141606, 0.00141606, 0.00185164])
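To sanity-check the shift-invert call, here is a self-contained sketch on a smaller Poisson matrix built with plain scipy (a stand-in for pyamg.gallery.poisson, with n deliberately reduced so it runs in a fraction of a second):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Small 2-D Poisson matrix assembled with pure scipy
# (stand-in for pyamg.gallery.poisson at a quick-to-run size)
n = 30
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
I = sp.identity(n)
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()

# sigma=0 puts eigsh into shift-invert mode: ARPACK iterates with a
# factorized (A - 0*I)^-1, so the eigenvalues nearest zero converge first
W, V = eigsh(A, k=9, sigma=0)
```

For this model problem the smallest eigenvalue is known analytically (8*sin^2(pi/(2*(n+1)))), which makes the result easy to verify.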
Next, I tried scipy.sparse.linalg.lobpcg, starting from a random block of K vectors:
import numpy as np
from scipy.sparse.linalg import lobpcg
# initial approximation to the K eigenvectors
X = np.random.rand(A.shape[0], K)
W2, V2 = lobpcg(A, X, tol=1e-8, largest=False, maxiter=1000)
However it took the whole 1000 iterations (time = 171.66 s) to get:
W2 = array([ 0.00021787, 0.00054466, 0.00054466, 0.00087145, 0.00108927, 0.00108927, 0.00141606, 0.00141606, 0.00185166])
Afterwards I tried using a preconditioner to speed up the inner solves, based on pyamg.smoothed_aggregation_solver:
from pyamg import smoothed_aggregation_solver
import numpy as np
# create the AMG hierarchy
ml = smoothed_aggregation_solver(A)
# initial approximation to the K eigenvectors
X = np.random.rand(A.shape[0], K)
# preconditioner based on ml
M = ml.aspreconditioner()
and in this case lobpcg converged much faster (31 iterations):
W3, V3 = lobpcg(A, X, M=M, tol=1e-8, largest=False, maxiter=1000)
%timeit -n10 lobpcg(A, X, M=M, tol=1e-8, largest=False, maxiter=1000)
# Result: 10 loops, best of 3: 4.45 s per loop
The result is the same as eigsh:
W3 = array([ 0.00021787, 0.00054466, 0.00054466, 0.00087145, 0.00108927, 0.00108927, 0.00141606, 0.00141606, 0.00185164])
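The same preconditioned-LOBPCG pattern can be exercised without pyamg. The sketch below swaps the AMG hierarchy for a one-off sparse LU factorization wrapped as a LinearOperator, which is affordable at this reduced problem size; at 90000x90000 an AMG V-cycle as above is the practical choice:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg, splu, LinearOperator

rng = np.random.default_rng(0)

# Small 2-D Poisson matrix built with pure scipy (stand-in for
# pyamg.gallery.poisson at a size where a direct factorization is cheap)
n = 30
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
I = sp.identity(n)
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()  # splu expects CSC

# Random block of starting vectors, one column per wanted eigenpair
X = rng.random((A.shape[0], 9))

# Preconditioner: apply A^-1 through a one-off sparse LU factorization;
# this plays the role of ml.aspreconditioner() in the pyamg version
lu = splu(A)
M = LinearOperator(A.shape, matvec=lu.solve)

W, V = lobpcg(A, X, M=M, tol=1e-8, largest=False, maxiter=100)
```

With an exact-inverse preconditioner LOBPCG needs very few iterations; the point of AMG is to get close to that iteration count while keeping the per-application cost of M linear in the matrix size.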
Afterwards, I also tried a preconditioned eigsh, for which an OPinv operator had to be supplied. However, computing an explicit inverse of the matrix consumes too much time and memory and is inefficient.
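One way around the explicit inverse is to factorize the matrix once with scipy.sparse.linalg.splu and hand its solve to eigsh as OPinv through a LinearOperator. A scipy-only sketch at reduced size (the Poisson matrix is rebuilt with kron as a stand-in for pyamg.gallery.poisson):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh, splu, LinearOperator

# Small 2-D Poisson matrix; splu expects CSC format
n = 30
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
I = sp.identity(n)
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()

# Factor A - sigma*I once (sigma = 0 here) and pass the triangular
# solves to ARPACK instead of ever forming an explicit inverse
lu = splu(A)
OPinv = LinearOperator(A.shape, matvec=lu.solve)

W, V = eigsh(A, k=9, sigma=0, OPinv=OPinv, tol=1e-8)
```

Note that eigsh already does an internal sparse LU when sigma is given; supplying OPinv explicitly is the hook where a cheaper solver (e.g. an AMG-preconditioned CG wrapped in the same LinearOperator) could be substituted. Be aware that ARPACK assumes OPinv applies the inverse accurately, so an iterative solve used here must be run to a tight tolerance.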
So my question is: is there a faster or more efficient way to solve the eigenproblem in Python for large sparse matrices? I have read a bit about PETSc (petsc4py) and pysparse, but have not tried them yet.