EDIT: Since this question was asked, I got a PhD on solving linear systems of equations for tomography. As this question still gets a lot of traffic, I want to stress the first sentence from the answer by @sellibitze: There is no simple answer. It highly depends on the nature of the matrix, and almost always you don't want to invert the matrix.

Now, to the original question by this very innocent man who thinks this can be answered easily...


While googling about matrix inversion algorithms, I found that there are several ways (and opinions!) about how to do this in code. I wondered which method is the fastest, or the one with the best performance, and while trying to find that answer I found nothing.

I know that for some cases a pseudo-inverse can be computed (using SVD, Cholesky, ...), I actually use some of those in my code, and I know that several times an inverse just doesn't exist, etc. It is easy to find a specific answer for a specific problem, but not a general intuition for this big (HUGE!) problem that is matrix inversion.

So my question is:

What method is best in performance for small matrices? And in precision? What about big matrices?

My personal case is a 6x6 (EDIT: symmetric) matrix that has to be inverted thousands of times (yes, yes, with different values), and I need high precision, but for sure speed would come in really handy.

Note that I am not looking for code, I will code myself whatever answer fits most to my case, but I think this is a question that lots of programmers would like to know.

Ander Biguri
    Why do you (think you) want to invert a matrix? – Agentlien Jan 16 '13 at 14:53
  • @Agentlien I (think I) need to invert a matrix because I am minimizing the gradient of a vector of 6 elements with a Newton Raphson algorithm, whose derivative is the Hessian matrix, a 6x6 matrix. I don't need to solve a Ax=B type of equation. – Ander Biguri Jan 16 '13 at 14:57
  • @sellibitze but that is not my case! I need to invert this one! (Nice article BTW, I wish I had read that some time ago...) – Ander Biguri Jan 16 '13 at 15:00
    @AnderBiguri Regarding the problem you're solving. I've encountered a [nice article](http://rkb.home.cern.ch/rkb/AN16pp/node147.html) when solving a slightly similar thing, and managed to get by without inverting matrices. Might/might not be applicable to your case, just wanted to share the link. – Angew is no longer proud of SO Jan 16 '13 at 15:03
  • @AnderBiguri: you don't need to invert anything to perform Newton's method. – Alexandre C. Jan 16 '13 at 15:06
    @Agentlien: You don't need the inverse matrix. You probably try to compute inv(A)*some_vector but there is no need to explicitly compute inv(A). You can compute the above product by factorizing A and do some forward/backward substitution. – sellibitze Jan 16 '13 at 15:08
  • In case your Hessian is symmetrical and positive definite, just go for a Cholesky factorization. If it's just symmetrical, use an LDL^T factorization. Both factorizations are offered by the Eigen library by the way. – sellibitze Jan 16 '13 at 15:10
  • @sellibitze o.O Why was I tagged in that comment? – Agentlien Jan 16 '13 at 15:11
  • @sellibitze And how can I make it? I got the numerical Jacobian and Hessian of my vector, how should I proceed next? – Ander Biguri Jan 16 '13 at 15:25
  • @sellibitze I found the answer, thanks anyway! – Ander Biguri Jan 17 '13 at 08:02
    @AnderBiguri: Sorry, I thought I gave you the answer. But I see I addressed the wrong person (Agentlien). – sellibitze Jan 17 '13 at 10:32
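The factor-and-substitute approach described in these comments can be sketched as follows (a minimal sketch in NumPy/SciPy; the Hessian `H` and gradient `g` are made-up illustrations, not taken from the question):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Made-up symmetric positive definite 6x6 "Hessian" and gradient.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
H = M @ M.T + 6.0 * np.eye(6)     # symmetric positive definite by construction
g = rng.standard_normal(6)

# Newton step: solve H * step = -g, never forming inv(H).
c, low = cho_factor(H)            # Cholesky factorization of H
step = cho_solve((c, low), -g)    # forward/backward substitution

# Same result as inv(H) @ (-g), but cheaper and numerically safer.
assert np.allclose(H @ step, -g)
```

If the Hessian is symmetric but not positive definite, an LDL^T factorization (as sellibitze suggests) plays the same role.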

1 Answer


There is no simple answer. Make sure that you've read and understood this article.

For 2x2 matrices, computing the inverse can be done with a simple formula involving the determinant. But for anything larger I would prefer factorizations, for example a pivoted LU factorization. If you're concerned about performance and deal with large sparse matrices, an iterative solver might be appropriate. Alternatively, you can try MUMPS (multifrontal massively parallel solver) and measure the performance. The problem with iterative solvers is that their speed of convergence heavily depends on the conditioning of the problem and on whether you find good preconditioners.
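A hedged sketch of both routes in NumPy/SciPy terms (the matrices here are arbitrary examples): the 2x2 determinant formula, and a pivoted LU factorization computed once and reused across right-hand sides:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# 2x2 case: closed-form inverse via the determinant.
B = np.array([[3.0, 1.0],
              [2.0, 4.0]])
det = B[0, 0] * B[1, 1] - B[0, 1] * B[1, 0]
B_inv = np.array([[ B[1, 1], -B[0, 1]],
                  [-B[1, 0],  B[0, 0]]]) / det
assert np.allclose(B @ B_inv, np.eye(2))

# Larger case: pivoted LU factorization, factored once, reused per solve.
A = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])
lu, piv = lu_factor(A)                 # one O(n^3) factorization
for b in (np.array([1.0, 2.0, 3.0]),
          np.array([0.0, 1.0, 0.0])):
    x = lu_solve((lu, piv), b)         # each solve is only O(n^2)
    assert np.allclose(A @ x, b)
```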

Perhaps you should start with the Eigen library and try pivoted LU factorizations first.
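To make the "don't invert" advice concrete for a small matrix like the 6x6 in the question, here is an illustrative precision check in NumPy (the Hilbert matrix is just a standard ill-conditioned example, not from the question):

```python
import numpy as np

# 6x6 Hilbert matrix: a classic ill-conditioned test case.
n = 6
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = H @ x_true

x_solve = np.linalg.solve(H, b)   # factorize + substitute internally
x_inv = np.linalg.inv(H) @ b      # explicit inverse, then multiply

# Both recover x_true here, but the solve route does less work and is
# generally at least as accurate as going through the explicit inverse.
assert np.linalg.norm(x_solve - x_true) < 1e-6
assert np.linalg.norm(x_inv - x_true) < 1e-3
```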

sellibitze