3

How does one solve a large system of linear equations efficiently when only a few of the constant terms change? For example:

I currently have the system Ax = b. I compute the inverse of A once, store it in a matrix, and each time any entry of b updates I perform the matrix-vector multiplication A^-1 b to recompute x.

This is inefficient, as only a couple of entries in b will have updated. Are there more efficient ways of solving this system when A^-1 remains constant but specific known values change in b?

I use uBlas and Eigen, but I am not aware of solutions that would address this problem of selective recalculation. Thanks for any guidance.
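A minimal sketch of this baseline approach, assuming Eigen's dense types (the size, values, and updated index are placeholders):

#include <Eigen/Dense>

int main() {
    const int n = 1000;                                   // placeholder size
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(n, n);    // placeholder system
    Eigen::VectorXd b = Eigen::VectorXd::Random(n);

    Eigen::MatrixXd Ainv = A.inverse();                   // computed once
    Eigen::VectorXd x = Ainv * b;                         // full O(n^2) multiply

    b(42) += 1.0;                                         // a single entry of b changes...
    x = Ainv * b;                                         // ...but the whole product is redone
    return 0;
}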

Atlas

4 Answers

3

Compute A^-1. If b_i is the ith component of b, then examine d/db_i (A^-1 b) (the derivative of A^-1 b with respect to the ith component of b) -- it equals a column of A^-1 (in particular, the ith column). And derivatives of linear functions are constant over their domain. So if you have b and b', and they differ only in the ith component, then A^-1 b - A^-1 b' = [d/db_i (A^-1 b)] * (b - b')_i. For multiple components, just add the contributions up (as A^-1 is linear).

Or, in short, you can calculate A^-1 (b'-b) with some optimizations for input components that are zero (which, if only some components change, will be most of the components). A^-1 b' = A^-1 (b'-b) + A^-1 b. And if you know in advance which components will change, you can take a copy of the appropriate column of A^-1, then multiply it by the change in that component of b.
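A minimal sketch of that column trick, assuming Eigen dense types (the function name, changed index, and delta are made up for illustration):

#include <Eigen/Dense>

// Apply the change in a single component of b to x = A^-1 * b:
// x_new = A^-1 * (b + delta * e_i) = x_old + delta * (ith column of A^-1).
void apply_b_update(Eigen::VectorXd& x, const Eigen::MatrixXd& Ainv,
                    int i, double delta) {
    x += delta * Ainv.col(i);    // O(n) instead of a full O(n^2) multiply
}

int main() {
    const int n = 4;                                      // toy size
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(n, n);
    Eigen::VectorXd b = Eigen::VectorXd::Random(n);

    Eigen::MatrixXd Ainv = A.inverse();
    Eigen::VectorXd x = Ainv * b;

    apply_b_update(x, Ainv, 2, 0.5);                      // b(2) changed by 0.5
    return 0;
}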

Yakk - Adam Nevraumont
1

You can take advantage of the problem's linearity:

x0 = A^-1 * b0
x  = A^-1 * b = x0 + A^-1 * db

where db is the difference vector between b and b0. It is mostly filled with zeros, so you can compress it into a sparse vector.

The Eigen library has a lot of useful functionality for sparse matrices and vectors (products, solvers, ...).
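A sketch of that idea with Eigen (the changed indices and values are arbitrary placeholders; only the nonzeros of db are visited):

#include <Eigen/Dense>
#include <Eigen/Sparse>

int main() {
    const int n = 1000;
    Eigen::MatrixXd A    = Eigen::MatrixXd::Random(n, n); // placeholder system
    Eigen::MatrixXd Ainv = A.inverse();
    Eigen::VectorXd b0   = Eigen::VectorXd::Random(n);
    Eigen::VectorXd x0   = Ainv * b0;                      // solved once

    // db holds only the entries of b that actually changed.
    Eigen::SparseVector<double> db(n);
    db.insert(10)  = 0.25;                                 // placeholder updates
    db.insert(500) = -1.0;

    // x = x0 + A^-1 * db, touching only the columns of A^-1 that matter.
    Eigen::VectorXd x = x0;
    for (Eigen::SparseVector<double>::InnerIterator it(db); it; ++it)
        x += it.value() * Ainv.col(it.index());
    return 0;
}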

lucasg
1

Firstly, don't perform a matrix inversion; use a solver library instead. Secondly, pass your initial x to the library as a first guess.

The library will perform some kind of decomposition like LU, and use that to calculate x. If you choose an iterative solver, then it is already doing pretty much what you describe to home in on the solution; it will begin with a worse guess and generate a better one, and any good routine will take an initial guess to speed up the process. In many circumstances you have a good idea of the result anyway, so it makes sense to exploit that.

If the new b is near the old b, then the new x should be near the old x, and it will serve as a good initial guess.
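A sketch of this warm-start approach using one of Eigen's iterative solvers (BiCGSTAB here; whether an iterative solver fits depends on the matrix, and the setup below is just a placeholder):

#include <Eigen/Dense>
#include <Eigen/Sparse>

int main() {
    const int n = 1000;
    Eigen::SparseMatrix<double> A(n, n);
    A.setIdentity();                                       // placeholder; fill A for a real problem

    Eigen::VectorXd b = Eigen::VectorXd::Random(n);

    Eigen::BiCGSTAB<Eigen::SparseMatrix<double>> solver;
    solver.compute(A);
    Eigen::VectorXd x = solver.solve(b);                   // first solve, no guess

    b(7) += 0.1;                                           // a few entries of b change
    x = solver.solveWithGuess(b, x);                       // warm start from the old x
    return 0;
}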

Phil H
0

First, don't compute the matrix inverse; use the LU decomposition instead, or the QR decomposition (slower than LU, but more stable). Such decompositions scale better with the matrix size than inversion in terms of performance, and are usually more stable (especially QR).

There are ways to update the QR decomposition if A changes slightly (e.g. by a rank-one matrix), but if b changes you have to solve again with the new b -- you cannot escape this, and it is O(n^2).

However, if the right-hand side b only changes by a fixed increment, i.e. b' = b + db with db known in advance, you can solve A dx = db once and for all, and the solution x' of A x' = b' is then x + dx.

If db is not known in advance but is always a linear combination of a few vectors db_i, you may solve A dx_i = db_i for each of them, but if you have many such db_i you end up with an O(n^2) process (this in fact amounts to computing the inverse)...
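A sketch of the factor-once / precomputed-correction idea with Eigen's LU (the changed index and value are placeholders):

#include <Eigen/Dense>

int main() {
    const int n = 1000;
    Eigen::MatrixXd A  = Eigen::MatrixXd::Random(n, n);   // placeholder system
    Eigen::VectorXd b  = Eigen::VectorXd::Random(n);
    Eigen::VectorXd db = Eigen::VectorXd::Zero(n);
    db(3) = 2.0;                                           // the known, fixed change in b

    // Factor A once; reuse the factorization for every right-hand side.
    Eigen::PartialPivLU<Eigen::MatrixXd> lu(A);
    Eigen::VectorXd x  = lu.solve(b);
    Eigen::VectorXd dx = lu.solve(db);                     // precomputed correction

    // When b becomes b + db, the new solution is just x + dx.
    Eigen::VectorXd x_new = x + dx;
    return 0;
}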

Alexandre C.