In a problem I'm working on, I need to solve Ax = b, where A is an n × n square matrix (typically n is a few thousand) and b and x are vectors of size n. The trick is that this has to be done many (billions of) times, with A and b changing only very slightly between successive calculations.

Is there a way to reuse an existing approximate solution for x (or perhaps the inverse of A) from the previous calculation instead of solving the equations from scratch?

I'd also be interested in a way to get x to within some (defined) accuracy (e.g., error in any element of x < 0.001), rather than an exact solution (again, reusing the previous calculations).

Alex I

1 Answer

You could use the Sherman–Morrison formula to incrementally update the inverse of matrix A.
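A minimal sketch of this idea, assuming the change to A can be written as a rank-1 update $A_\text{new} = A + u v^T$ (the function name and test sizes below are illustrative, not part of the original answer):

```python
import numpy as np

def sherman_morrison_update(A_inv, u, v):
    """Return (A + u v^T)^{-1} given A^{-1}, without re-factorising A."""
    Au = A_inv @ u                    # A^{-1} u
    vA = v @ A_inv                    # v^T A^{-1}
    denom = 1.0 + v @ Au              # 1 + v^T A^{-1} u (must be nonzero)
    return A_inv - np.outer(Au, vA) / denom

# Example: update the inverse after a small rank-1 perturbation, then solve.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)
A_inv = np.linalg.inv(A)
u = rng.standard_normal(n)
v = rng.standard_normal(n)
b = rng.standard_normal(n)

A_new_inv = sherman_morrison_update(A_inv, u, v)
x = A_new_inv @ b                     # solution of (A + u v^T) x = b
print(np.allclose(x, np.linalg.solve(A + np.outer(u, v), b)))
```

The update costs O(n²) per step instead of the O(n³) of a full re-factorisation, which is what makes it attractive when A changes only slightly.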

To speed up the matrix multiplications, you could use a suitable matrix multiplication algorithm or a library tuned for high-performance computing. Classical matrix multiplication has complexity O(n³); Strassen-type algorithms achieve roughly O(n^2.81) and better.
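A small illustration of the gap between the textbook O(n³) loop and a BLAS-backed library call (the sizes here are illustrative; timings will vary by machine):

```python
import time
import numpy as np

def naive_matmul(A, B):
    """Textbook triple-loop matrix multiplication, O(n^3)."""
    n, k = A.shape
    m = B.shape[1]
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += A[i, p] * B[p, j]
            C[i, j] = s
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
B = rng.standard_normal((200, 200))

t0 = time.perf_counter(); C1 = naive_matmul(A, B); t1 = time.perf_counter()
C2 = A @ B; t2 = time.perf_counter()
print(f"naive: {t1 - t0:.3f}s, BLAS-backed: {t2 - t1:.3f}s, match: {np.allclose(C1, C2)}")
```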

A similar question without a real answer was asked here.

Axel Kemper
  • Axel - this is interesting, but I'm not sure my updated matrix can be written (even approximately) as $A + u v^T$. How about taking a few steps of conjugate gradient? (with or without restart) – Alex I Jan 10 '23 at 02:30
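Following up on the warm-start idea in the comment, here is a minimal sketch of running a few conjugate gradient steps starting from the previous solution rather than from zero. It assumes A is symmetric positive definite (which CG requires; for a general A one would use, e.g., GMRES or BiCGSTAB instead), and the function name and tolerance are illustrative:

```python
import numpy as np

def cg_warm_start(A, b, x0, tol=1e-3, max_iter=100):
    """Solve A x = b by conjugate gradients, starting from x0.

    Stops once the residual norm drops below tol (a proxy for the
    per-element accuracy asked about in the question).
    """
    x = x0.copy()
    r = b - A @ x                     # initial residual
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs_old) < tol:
            break
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# If A and b change only slightly between solves, the previous x is a good
# starting guess, so only a handful of iterations are needed per solve.
```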