Up front: I am mainly interested in the viability of shrinking such an array (described below).
I'm working on a project where I use a single malloc() call to create each of several moderately large 2D arrays. (Each is only a few tens of MiB at the largest.) The thing is that over the life of one of these arrays, its content shrinks dramatically, to less than half its original size. Obviously, I could just leave the array at its full size for the life of the program; it's only x MiB on a system with GiB of RAM available. But more than half of the allocated space falls into disuse well before the program terminates, and, because of how I use the array, all the surviving data ends up in a contiguous set of rows at the beginning of the block. It seems like a waste to hold on to all that RAM if I really don't need it.
I know realloc() can be used to shrink a dynamically allocated array, but a 2D array built this way is more complex. I think I understand its memory layout (I wrote the function that constructs it), but this is pushing the limits of my understanding of the language and of what its implementations guarantee. Obviously, I would have to work in terms of rows (and fix up the row pointers), not merely bytes, and I don't know how predictable the outcome would be. I've put a sketch of what I'm imagining after the allocation code below.
And, yes, I need to create the array with a single malloc(). The object in question has several million rows. I tried using a loop to malloc() each row separately, but the program always froze at around 100,000 malloc()s.
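
For reference, the failed per-row version was essentially the textbook pattern, which I've roughly reconstructed here (this is from memory, not my exact code):

char **mtx = malloc(rnum * sizeof (char *));
for (int i = 0; i < rnum; i++) {
    mtx[i] = malloc(cnum * sizeof (char)); /* stalled around i == 100,000 */
}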
For background, the code I'm using to construct these arrays is as follows:
char **alloc_2d_arr(int cnum, int rnum) {
    /* (bytes for the row pointers) + (bytes for the data) */
    char **mtx = malloc(rnum * sizeof (char *) + rnum * cnum * sizeof (char));
    if (mtx == NULL) {
        return NULL;
    }

    /* Point each row pointer at the first cell of its row */
    char *p = (char *) (mtx + rnum);
    for (int i = 0; i < rnum; i++) {
        mtx[i] = p + i * cnum;
    }
    return mtx;
}
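
And here is my untested sketch of the shrink, keeping the first rnum_new rows (rnum_new and shrink_2d_arr are just names I made up for this question):

#include <stdlib.h>
#include <string.h>

char **shrink_2d_arr(char **mtx, int cnum, int rnum, int rnum_new) {
    /* The pointer block shrinks too, so slide the surviving data back
       so it starts right after the smaller pointer block. The regions
       can overlap, hence memmove(), not memcpy(). */
    memmove((char *) (mtx + rnum_new),
            (char *) (mtx + rnum),
            rnum_new * cnum * sizeof (char));

    /* Hand the tail back to the allocator; realloc() may move the block */
    char **tmp = realloc(mtx, rnum_new * sizeof (char *)
                            + rnum_new * cnum * sizeof (char));
    if (tmp == NULL) {
        /* If this happens, the data has already been moved and the old
           row pointers are stale, so the array is in a broken state */
        return NULL;
    }
    mtx = tmp;

    /* Rebuild the row pointers against the (possibly new) base address */
    char *p = (char *) (mtx + rnum_new);
    for (int i = 0; i < rnum_new; i++) {
        mtx[i] = p + i * cnum;
    }
    return mtx;
}

The call would be something like mtx = shrink_2d_arr(mtx, cnum, rnum, surviving_rows);. Is this well-defined, and is it likely to actually return memory to the system in practice?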