
I need to solve a system of N linear equations as an intermediate step in a numerical optimizer. AFAIK reasonably simple algorithms for doing so exactly are O(N^3) (though I saw a horribly complicated one in some math paper that could do it in something like O(N^2.8) with a huge constant). In some cases N is huge, i.e. several thousand.

Is there any good way to get a decent approximate solution to a system of linear equations in less than O(N^3)?

Edit:

Here are some more details if it helps at all.

  1. My matrix is symmetric, and not sparse.

  2. It's the Hessian (second-derivative matrix) from Newton-Raphson. I'm trying to optimize something in a 2000-dimensional space.

dsimcha
  • why not use proven implementations like LAPACK: http://en.wikipedia.org/wiki/LAPACK – Andrey Feb 06 '11 at 17:58
  • Yes, there are many. Which one is appropriate depends on your particular coefficient matrix. If you tell us what it looks like, we might be able to suggest particular algorithms. – Thomas Feb 06 '11 at 18:03
  • There are several described on [wikipedia](http://en.wikipedia.org/wiki/Iterative_method), although I haven't used any of them. – Nikita Rybak Feb 06 '11 at 18:04
  • Well, it depends on your system `Ax= b`, if it has 1) a special structure in `A`, or 2) `A` will admit a low rank approximation, or 3) it's dense and close to full rank. So in case 3) there is no hope to gain anything significant, but please elaborate more on your details if 1) or 2) is applicable. Thanks – eat Feb 06 '11 at 18:22
  • @Andrey: Two reasons: First, the target language is D, for which there are no mature LAPACK bindings. Second, this is for a small part of a larger open-source library and I'm avoiding third-party dependencies like the plague. – dsimcha Feb 06 '11 at 18:28

3 Answers


There are iterative methods like Jacobi, Gauss-Seidel, CG (conjugate gradient), GMRES, etc.
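
For instance, a minimal Jacobi sketch (Python for illustration, since the thread contains no code; Jacobi converges when `A` is, e.g., strictly diagonally dominant, which may or may not hold for the asker's Hessian):

```python
def jacobi(A, b, iters=100):
    """Jacobi iteration for A x = b.

    A is a dense matrix as a list of rows; converges e.g. when A is
    strictly diagonally dominant. Each sweep is O(N^2).
    """
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # Update every component from the *previous* iterate.
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x
```

Gauss-Seidel is the same idea but uses already-updated components within a sweep, which usually converges faster.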

toochin

For a symmetric matrix, the conjugate gradient method is simple to implement, and will beat most other iterative methods (e.g. Gauss-Seidel, SOR). The main loop consists of a matrix-vector multiply and a few other vector operations.

Once you've got it working, you can use preconditioning to improve the convergence even more.
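
A minimal sketch of that loop (Python for illustration; `matvec` is a hypothetical caller-supplied function computing A·x, and A is assumed symmetric positive definite):

```python
def cg(matvec, b, tol=1e-10, max_iter=None):
    """Conjugate gradient for A x = b, given matvec(v) = A @ v.

    A must be symmetric positive definite. Each iteration costs one
    matrix-vector product (O(N^2) for a dense A) plus a few O(N) vector ops.
    """
    n = len(b)
    if max_iter is None:
        max_iter = n
    x = [0.0] * n
    r = list(b)                 # residual b - A x  (x starts at 0)
    p = list(r)                 # search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs_old / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:  # i.e. ||r|| < tol
            break
        beta = rs_new / rs_old
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x
```

In exact arithmetic CG terminates in at most N iterations, but in practice you stop much earlier at a good approximate solution, which is exactly what's wanted inside an outer Newton loop.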

celion

Yes, if the matrix you build from your coefficients is sparse. For a tridiagonal matrix, for instance, there's a method that runs in O(N) (I know it by a Bulgarian name; in English it's usually called the Thomas algorithm). There are other algorithms that are still O(N^3) but perform extremely well if the matrix conforms to some invariant they require (sparse, diagonally dominant, triangular, etc.).

If you're stuck with a specific method because of your invariant, the only way to speed things up further is to go multi-threaded.
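
A sketch of that O(N) tridiagonal (Thomas) solver, for illustration in Python (the array conventions are my own):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system in O(N) by forward elimination
    and back substitution.

    a: sub-diagonal   (length n, a[0] unused)
    b: main diagonal  (length n)
    c: super-diagonal (length n, c[n-1] unused)
    d: right-hand side (length n)
    """
    n = len(d)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Note this only applies when the matrix really is tridiagonal; the asker's Hessian is dense, so an iterative method is the more relevant option there.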

Try this search.

pnt