
I have a very big sparse matrix A, of size 7Mi-by-7Mi. I am using MATLAB's eigs(A,k) function, which can calculate the first k eigenvalues and eigenvectors. I need all of its eigenvectors and eigenvalues, but I can't store all of the eigenvectors because that requires a lot of memory.

Is there any way (in MATLAB or Python) to get the eigenvectors one by one in a for loop, i.e., in the ith iteration I get the ith eigenvector and eigenvalue?

Luqman Saleem
    Most, if not all, algorithms to calculate eigenvalues/-vectors work from largest to smallest eigenvalue; MATLAB's `eigs()` does this for sure. You'll have to find an algorithm which does not depend on previously calculated eigenvalues, and then loop through. (Or buy more RAM, of course.) Also, requiring 7M eigenvalues smells like an [XY problem](https://meta.stackexchange.com/q/66377/325771); if you tell us what you need them for, we might be able to tell you that you don't need them at all and require an alternative solution instead. – Adriaan Apr 25 '19 at 12:47
  • 2
    @Adriaan In the eigs.m file it seems like the used algorithm is Krylov-Schur. Does it really require all the previously calculated eigenvalues for calculating the next or only some? If not, one may be able to do what is asked here. – NoDataDumpNoContribution Apr 25 '19 at 12:55
  • 2
    I don't think this is possible. The eigenvalues depend on the entire matrix; you need the entire matrix to compute each one. If you crop a line, the eigenvalues change. – Ander Biguri Apr 25 '19 at 13:00
  • Did you have a look at `numpy.linalg.eig()`? Maybe you can add to your question why this is not a promising approach (questions should always contain information about already tried-out things and why they fail). – Alfe Apr 25 '19 at 13:25
  • I guess the trick here is that the original sparse matrix only fits into memory because it is so sparse. Maybe all the intermediate steps of computing the eigenvectors also produce only very sparse matrices and thus, if you make the algorithm use the same sparse matrix type, you _can_ work out all eigenvectors in memory? – Alfe Apr 25 '19 at 13:28
  • Some algorithms work from smallest to largest by using shifts; others work from largest to smallest. Choose wisely. – duffymo Apr 25 '19 at 15:29

1 Answer


If you have a good guess about how large the eigenvalue you are looking for is, say lambda_guess, you can use power iteration on

(A - lambda_guess*Id)^-1

This approach is sometimes referred to as shifted inverse iteration (or the shift-and-invert method). The method converges to the eigenvalue closest to lambda_guess, and the better your guess, the faster the convergence. Note that you wouldn't store the inverse explicitly; at each step you only compute the solution of

x_next_iter = solve(A - lambda_guess*Id, x_iter)

possibly itself with an iterative linear solver.
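A minimal sketch of this in Python/SciPy, assuming A is symmetric; `shifted_inverse_power` is an illustrative name, and `scipy.sparse.linalg.factorized` (a sparse LU) stands in for whatever linear solver you prefer (for a 7M-by-7M matrix, a direct factorization may itself be infeasible, and you would swap in an iterative solver as the answer notes):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def shifted_inverse_power(A, lambda_guess, tol=1e-10, max_iter=500):
    """Shifted inverse iteration: converges to the eigenpair of A
    whose eigenvalue is closest to lambda_guess (A symmetric)."""
    n = A.shape[0]
    # Factor (A - lambda_guess*I) once and reuse it every iteration;
    # the inverse itself is never formed.
    shifted = (A - lambda_guess * sp.identity(n, format='csc')).tocsc()
    solve = spla.factorized(shifted)

    x = np.random.default_rng(0).standard_normal(n)
    x /= np.linalg.norm(x)
    lam = lambda_guess
    for _ in range(max_iter):
        x_new = solve(x)                 # x_new = (A - shift*I)^-1 x
        x_new /= np.linalg.norm(x_new)
        lam_new = x_new @ (A @ x_new)    # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:
            return lam_new, x_new
        lam, x = lam_new, x_new
    return lam, x
```

For example, on diag(1, 2, 3, 10) with lambda_guess = 2.8, the iteration converges to the eigenvalue 3 and its eigenvector, since 3 is the eigenvalue closest to the shift.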

I would combine this with a subspace iteration method with a subspace of size at least two. This way, on your first iteration, you can find the smallest and second-smallest eigenvalues, lambda1 and lambda2.

Then you can try lambda_guess = lambda2 + epsilon, so that the first and second eigenvectors output correspond to the second- and third-smallest eigenvalues, respectively. (If the first eigenvalue of this iteration is not the same as the lambda2 from your previous iteration, you need to make epsilon smaller and repeat. In practice you would test that their difference is small enough, to account for round-off error and the fact that iterative methods are never exact.) You repeat this until you reach the eigenvalue number you are looking for. It's going to be slow, but you will only keep two eigenvectors in memory at any time.

NOTE: we assume all eigenvalues are distinct; otherwise this problem does not have a low-memory solution with the usual techniques. In general, if the maximal multiplicity of an eigenvalue is m, you need m vectors in memory for subspace iteration to converge.
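The marching scheme above can be sketched with SciPy's `eigsh` in shift-invert mode, which plays the role of the shifted inverse/subspace iteration; `eigenpairs_one_by_one`, `eps`, and `tol` are illustrative names, and the sketch assumes A is symmetric with distinct, well-separated eigenvalues:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def eigenpairs_one_by_one(A, num_pairs, eps=1e-6, tol=1e-6):
    """Yield (eigenvalue, eigenvector) pairs from smallest upward,
    holding only two eigenvectors in memory at any time."""
    # First step: the two smallest eigenvalues.
    vals, vecs = eigsh(A, k=2, which='SA')
    order = np.argsort(vals)
    vals, vecs = vals[order], vecs[:, order]
    yield vals[0], vecs[:, 0]
    prev = vals[1]
    yield vals[1], vecs[:, 1]
    # March upward: shift just above the last eigenvalue found.
    for _ in range(num_pairs - 2):
        # Shift-invert mode returns the eigenvalues closest to sigma.
        vals, vecs = eigsh(A, k=2, sigma=prev + eps, which='LM')
        order = np.argsort(vals)
        vals, vecs = vals[order], vecs[:, order]
        # Sanity check from the answer: the lower of the two returned
        # eigenvalues should re-find the previous one; if it doesn't,
        # epsilon was too large and needs to be reduced.
        if abs(vals[0] - prev) > tol * max(1.0, abs(prev)):
            raise RuntimeError("shift skipped an eigenvalue; reduce eps")
        prev = vals[1]
        yield vals[1], vecs[:, 1]
```

Each pass re-finds the last eigenvalue and produces the next one up, so only two eigenvectors are ever held, at the cost of one shifted factorization (or iterative solve) per eigenvalue.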

Juan Carlos Ramirez