
I am working on an FEM program written in C for my undergraduate degree. It needs very large 1D arrays (on the order of a[1 000 000] elements) to store data and then manipulate them, and it also uses 2D arrays of similarly huge size (say a[100 000][100 000]).

The program terminates without showing any error. I have found that it crashes just as it reaches the routine that generates the mesh (which uses the large 2D arrays).

AAK
  • Why do you have a compiler requirement and not a memory requirement? A `100K x 100K float` table would take ~37 GB of RAM (unless I did the maths wrong). A better compiler won't make it better. I think your problem has more to do with data structures or algorithms. – Imanol Luengo Oct 06 '16 at 09:08
  • Why do you think that free compilers are any different from proprietary ones as far as allocating huge arrays is concerned? The algorithm is the problem here (different compilers might have very different vector optimizations), the OS might get in the way (as you might need to get some of that memory off-heap), etc. Memory size alone is not going to differ between compilers, and we don't know enough about your other requirements to suggest anything. – Artur Biesiadowski Oct 06 '16 at 09:10
  • GCC will work; however, your computer might not. – Asoub Oct 06 '16 at 09:21
  • Maybe interesting? http://stackoverflow.com/questions/17241227/sparse-matrix-library-for-c – Ryan Vincent Oct 06 '16 at 09:30
  • I am working on a computer which has 128 GB RAM and an Intel Xeon octa-core processor. The reason I mentioned GCC is that I wondered whether there is some limit defined that restricts the manipulation of such arrays. – AAK Oct 18 '16 at 14:07

2 Answers

3

When you reach such large sizes you should ask whether your matrix really needs all 100 000 x 100 000 entries stored explicitly, or whether most of them are zero. If most entries are zero you should use sparse matrices, which will greatly reduce memory use.
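As a rough illustration (not from any particular library; the sparse_t, sp_new and sp_add names are made up for this sketch), a coordinate-format (COO) sparse matrix stores only the nonzero entries:

#include <stdio.h>
#include <stdlib.h>

/* Coordinate (COO) storage: only the nonzero entries are kept. */
typedef struct {
    long   *row, *col;   /* indices of each stored entry */
    float  *val;         /* value of each stored entry   */
    size_t  nnz, cap;    /* nonzero count and capacity   */
} sparse_t;

static sparse_t sp_new(size_t cap)
{
    sparse_t m;
    m.row = malloc(cap * sizeof *m.row);
    m.col = malloc(cap * sizeof *m.col);
    m.val = malloc(cap * sizeof *m.val);
    m.nnz = 0;
    m.cap = cap;
    return m;
}

static void sp_add(sparse_t *m, long i, long j, float v)
{
    if (m->nnz == m->cap) return;   /* a real implementation would grow the arrays */
    m->row[m->nnz] = i;
    m->col[m->nnz] = j;
    m->val[m->nnz] = v;
    m->nnz++;
}

int main(void)
{
    /* A 100 000 x 100 000 identity matrix has only 100 000 nonzeros,
       so this takes a few megabytes instead of tens of gigabytes dense. */
    sparse_t m = sp_new(100000);
    for (long i = 0; i < 100000; i++)
        sp_add(&m, i, i, 1.0f);
    printf("stored %zu nonzeros\n", m.nnz);
    free(m.row); free(m.col); free(m.val);
    return 0;
}

For an actual FEM stiffness matrix you would typically convert such a triplet list to a compressed format (CSR) before solving, or simply use an existing sparse-matrix library.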

After that you should try to use a matrix decomposition (such as lower-upper, i.e. LU) to solve your system; you should be able to find implementations in your favourite language.
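For illustration only, here is a minimal sketch of Doolittle LU factorisation without pivoting, plus forward/back substitution, on a tiny dense system; a real FEM code would call a tested library (e.g. LAPACK or a sparse direct solver) rather than roll its own:

#include <stdio.h>

#define N 3

/* Doolittle LU factorisation without pivoting: A = L*U with unit diagonal in L.
   Good enough for illustration; production code needs pivoting (or a library). */
static void lu_decompose(double A[N][N], double L[N][N], double U[N][N])
{
    for (int i = 0; i < N; i++) {
        for (int j = i; j < N; j++) {              /* row i of U */
            double s = 0.0;
            for (int k = 0; k < i; k++) s += L[i][k] * U[k][j];
            U[i][j] = A[i][j] - s;
        }
        L[i][i] = 1.0;
        for (int j = i + 1; j < N; j++) {          /* column i of L */
            double s = 0.0;
            for (int k = 0; k < i; k++) s += L[j][k] * U[k][i];
            L[j][i] = (A[j][i] - s) / U[i][i];
        }
    }
}

/* Solve A*x = b using the factors: forward substitution, then back substitution. */
static void lu_solve(double L[N][N], double U[N][N], double b[N], double x[N])
{
    double y[N];
    for (int i = 0; i < N; i++) {                  /* L*y = b */
        double s = b[i];
        for (int k = 0; k < i; k++) s -= L[i][k] * y[k];
        y[i] = s;                                  /* L[i][i] == 1 */
    }
    for (int i = N - 1; i >= 0; i--) {             /* U*x = y */
        double s = y[i];
        for (int k = i + 1; k < N; k++) s -= U[i][k] * x[k];
        x[i] = s / U[i][i];
    }
}

int main(void)
{
    double A[N][N] = {{4, 1, 0}, {1, 4, 1}, {0, 1, 4}};
    double L[N][N] = {{0}}, U[N][N] = {{0}};
    double b[N] = {5, 6, 5}, x[N];
    lu_decompose(A, L, U);
    lu_solve(L, U, b, x);
    printf("x = %.3f %.3f %.3f\n", x[0], x[1], x[2]);   /* expect 1 1 1 */
    return 0;
}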

Other people work with systems this large, so look at how they did it, and take advantage of approximate / iterative solvers.
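As a minimal example of the iterative approach, here is a sketch of Jacobi iteration on a tiny diagonally dominant system (in practice conjugate gradients or similar Krylov methods are the usual choice for the large sparse symmetric systems FEM produces):

#include <stdio.h>

#define N 3

/* Jacobi iteration for A*x = b:
   x_new[i] = (b[i] - sum over j != i of A[i][j]*x[j]) / A[i][i].
   Converges for diagonally dominant matrices; dense storage is used only for brevity. */
static void jacobi(double A[N][N], double b[N], double x[N], int iters)
{
    double xn[N];
    for (int it = 0; it < iters; it++) {
        for (int i = 0; i < N; i++) {
            double s = b[i];
            for (int j = 0; j < N; j++)
                if (j != i) s -= A[i][j] * x[j];
            xn[i] = s / A[i][i];
        }
        for (int i = 0; i < N; i++) x[i] = xn[i];
    }
}

int main(void)
{
    double A[N][N] = {{4, 1, 0}, {1, 4, 1}, {0, 1, 4}};
    double b[N] = {5, 6, 5};
    double x[N] = {0};                  /* initial guess */
    jacobi(A, b, x, 50);
    printf("x = %.3f %.3f %.3f\n", x[0], x[1], x[2]);   /* converges to 1 1 1 */
    return 0;
}

Each sweep only touches the entries of A you actually store, so this kind of solver pairs naturally with the sparse formats mentioned above.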

berna1111
  • + this is exactly the issue I suspect. Finite element arrays are typically *extremely* sparse and can usually be optimized to be reasonably banded as well. You cannot "scale up" a toy research code to large systems without taking advantage of these features. – agentp Oct 20 '16 at 14:21
2

If you compile in 64-bit mode, neither gcc nor Clang will have any problems with such large arrays, provided you allocate them on the heap and remember that the count of elements exceeds the capacity of an int. So:

#include <stdlib.h>

#define MATRIXSIZE (100*1000L)
typedef float row_t[MATRIXSIZE];                      /* one row: 100,000 floats */

row_t *matrix = calloc(MATRIXSIZE, sizeof(*matrix));  /* heap-allocated, zero-initialised; check for NULL */
for (long i = 0; i < MATRIXSIZE; i++)                 /* long index avoids any int-overflow worries */
    matrix[i][i] = 1.0f;

You will need a lot of available memory though.
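For scale: 100 000 × 100 000 floats at 4 bytes each is 4 × 10^10 bytes, roughly 37 GiB for a single dense matrix, so the machine needs well over that much free memory before this is practical.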