
This is a very general question regarding the maximum size of a system of linear equations that can be solved on today's fastest hardware, in the form:

X = AX + B

A: NxN matrix of floats, it is sparse.

B: N-vector of floats.

solve for X.

This becomes (I - A)X = B, which is best solved using factorisation (and not matrix inversion) as I read here:

http://www.johndcook.com/blog/2010/01/19/dont-invert-that-matrix/
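For concreteness, here is a minimal sketch of the factorise-then-solve approach using SciPy; the size, density and values below are made up purely for illustration, my real A would be loaded from elsewhere:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical small example: N, density and entries are placeholders.
N = 10_000
A = sp.random(N, N, density=1e-4, format="csr", dtype=np.float64)
B = np.random.rand(N)

# Solve (I - A) X = B via a sparse LU factorisation, not by forming (I - A)^-1.
M = sp.identity(N, format="csr") - A
lu = spla.splu(M.tocsc())   # sparse LU factorisation (SuperLU)
X = lu.solve(B)

# Sanity check on the residual.
print(np.linalg.norm(M @ X - B))
```

The question is how far this kind of approach scales when N grows by several orders of magnitude.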

Do you know yourselves, or have a reference to a benchmark or paper which gives, some maximum value for N on today's fastest hardware? Most benchmarks I have seen use N < 10,000. I am thinking about N > 10x10^6 or more, to be processed within a month.

Please consider not only the computational cost but also the storage for A. It can be a problem: e.g. assuming N = 1x10^6, storage would be 1x10^12 entries x 4 bytes ≈ 4x10^12 bytes, i.e. roughly 3.6 Terabytes for a totally dense matrix, which is just about manageable I guess.
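Of course A is sparse, so the dense figure above is a worst case. In a compressed format such as CSR only the nonzeros plus index arrays are stored; a rough back-of-the-envelope, assuming (hypothetically) about 100 nonzeros per row, would be:

```python
# Rough CSR storage estimate; the 100 nonzeros/row figure is an assumption.
N = 10_000_000           # 10 million unknowns
nnz_per_row = 100        # assumed average nonzeros per row
nnz = N * nnz_per_row

bytes_values = nnz * 4        # float32 values
bytes_colind = nnz * 4        # int32 column indices
bytes_rowptr = (N + 1) * 8    # int64 row pointers (nnz may exceed 2^31)

total_gib = (bytes_values + bytes_colind + bytes_rowptr) / 1024**3
print(f"~{total_gib:.1f} GiB")   # roughly 7.5 GiB instead of terabytes
```

So storing A itself need not be the bottleneck; the fill-in produced by the factorisation is a separate concern.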

Lastly, can the method used to solve the system be parallelised, so that I can assume that with parallelisation N can be pretty large?
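My understanding (please correct me) is that both direct solvers and iterative Krylov methods can be parallelised, and that for very large N the iterative route is usually preferred because it only needs matrix-vector products with (I - A), which distribute naturally across nodes or GPUs (e.g. via PETSc or Trilinos). A minimal single-node sketch of that idea with SciPy's BiCGSTAB, again with a made-up placeholder matrix, would be:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Placeholder sparse matrix; in practice A would be loaded/assembled.
N = 100_000
A = sp.random(N, N, density=1e-4, format="csr", dtype=np.float64)
B = np.random.rand(N)

M = sp.identity(N, format="csr") - A

# Krylov solver: only needs M @ v products, so it maps well onto
# distributed-memory or GPU implementations of the same algorithm.
X, info = spla.bicgstab(M, B, maxiter=1000)
print("converged" if info == 0 else f"info = {info}")
```

Whether this converges quickly enough for my actual A (conditioning, need for a preconditioner) is exactly the kind of thing I am hoping a benchmark or paper would address.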

thanks in advance, bliako
