
I need to solve a large number of linear problems with the same matrix, whose size is a few hundred. The initialization cost is irrelevant, but the run-time cost is critical. Naively, my linear-algebra background tells me I should just invert my matrix and use the cached inverse to solve each of my linear problems. However, the Eigen documentation mentions that this may not be the best approach.
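For concreteness, here is roughly the pattern I have in mind (a minimal sketch with made-up names; `A` is the fixed matrix and `b` is one right-hand side):

    #include <Eigen/Dense>

    void sketch(const Eigen::MatrixXd& A, const Eigen::VectorXd& b) {
        // Option A: cache the explicit inverse once, then multiply per RHS.
        Eigen::MatrixXd Ainv = A.inverse();          // one-time cost
        Eigen::VectorXd xA = Ainv * b;               // per-RHS cost: one matrix-vector product

        // Option B: cache a decomposition once, then call .solve() per RHS.
        Eigen::PartialPivLU<Eigen::MatrixXd> lu(A);  // one-time cost
        Eigen::VectorXd xB = lu.solve(b);            // per-RHS cost: two triangular solves
    }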

So, here are my 2 questions:

  1. What are the speed and accuracy differences between .solve() and just multiplying the inverted matrix by the RHS if the original matrix is well behaved?

  2. Does the answer change if the original matrix is not well behaved but not completely degenerate (none of the eigenvalues is 0, but the ratio of the largest to the smallest is large)?

I am a newbie with Eigen, so I apologize if this is a duplicate question; I did search the archive before posting.

Nick Gnedin

1 Answer


This really depends on a lot of things. For example, if your matrix is very small, the overhead of the divisions required by .solve() is significant; for very large matrices that overhead becomes relatively small. For rank-deficient matrices, .solve() can be faster (depending on the decomposition). But generally, .solve() should be numerically more stable.

The best thing to do is to benchmark both variants and check their accuracy (e.g., by comparing the residual of each method) on matrices that typically occur in your application.
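Such a check could look roughly like this (a rough sketch; in a real benchmark you would loop over many right-hand sides and use matrices from your application instead of random ones):

    #include <Eigen/Dense>
    #include <chrono>
    #include <iostream>

    int main() {
        const int n = 300;  // "a few hundred"; adjust to your case
        Eigen::MatrixXd A = Eigen::MatrixXd::Random(n, n);
        Eigen::VectorXd b = Eigen::VectorXd::Random(n);

        // One-time setup (its cost is irrelevant per the question).
        Eigen::MatrixXd Ainv = A.inverse();
        Eigen::PartialPivLU<Eigen::MatrixXd> lu(A);

        // Per-RHS work of each method, timed once for illustration.
        auto t0 = std::chrono::steady_clock::now();
        Eigen::VectorXd x_inv = Ainv * b;
        auto t1 = std::chrono::steady_clock::now();
        Eigen::VectorXd x_solve = lu.solve(b);
        auto t2 = std::chrono::steady_clock::now();

        // Relative residuals: smaller means more accurate.
        std::cout << "inverse*b residual: " << (A * x_inv - b).norm() / b.norm() << "\n"
                  << ".solve()  residual: " << (A * x_solve - b).norm() / b.norm() << "\n"
                  << "inverse*b time: " << std::chrono::duration<double>(t1 - t0).count() << " s\n"
                  << ".solve()  time: " << std::chrono::duration<double>(t2 - t1).count() << " s\n";
    }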

Another option (especially since setup cost is irrelevant for you) is to decompose and invert in double precision and cast the result to single precision. This will likely be slightly more accurate than decomposing and solving in single precision, but multiplying will be as fast as, or faster than, solving in single precision.
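In Eigen that could look roughly like this (a sketch of the idea; the function names are made up and your setup may differ):

    #include <Eigen/Dense>

    // One-time setup: invert in double precision, then cast the inverse to float.
    Eigen::MatrixXf make_cached_inverse(const Eigen::MatrixXf& A) {
        Eigen::MatrixXd Ainv_d = A.cast<double>().inverse();  // accurate, done once
        return Ainv_d.cast<float>();                           // cached single-precision inverse
    }

    // Per-RHS cost: a single float matrix-vector product.
    Eigen::VectorXf solve_rhs(const Eigen::MatrixXf& Ainv_f, const Eigen::VectorXf& b) {
        return Ainv_f * b;
    }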

chtz