#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <limits.h>

long *safeCalloc(int size)
{
  long *ptr = calloc(sizeof(long), size);
  if (ptr == NULL)
  {
    printf("Error: memory allocation failed\n");
    exit(-1);
  }
  return ptr;
}

int main(int argc, char *argv[])
{
  long lim = 10000000000;
  long *x = safeCalloc(lim);
  printf("%ld\n", x[0]);        //Prints 0
  printf("%ld\n", x[50]);       //Prints 0
  printf("%ld\n", x[lim / 10]); //Prints 0
  printf("%ld\n", x[lim / 5]);  //Segmentation fault
  printf("%ld\n", x[lim - 1]);  //Would be a segmentation fault
  free(x);
  return 0;
}

When running this code I get a segmentation fault when trying to access certain indexes of my array. calloc didn't return a null pointer, and I can access several elements without any issues, but as soon as I try to access anything with an index bigger than about lim / 9, I get a segmentation fault. I also don't get a segmentation fault if lim is smaller, for instance 10^9.

I am inclined to think this means I'm out of memory, but shouldn't calloc return NULL if there isn't enough memory?

Does anyone know what I'm doing wrong?

  • It's 80 GB you are trying to allocate there (assuming an 8-byte long). The OS is *overcommitting* this memory to your program and hoping you are not going to actually use all of it (see the first sketch below). See here: https://stackoverflow.com/questions/48585079/malloc-on-linux-without-overcommitting – Eugene Sh. Oct 19 '20 at 18:03
  • The declaration of calloc is `void *calloc(size_t nitems, size_t size)`; so your calloc line should be `calloc(size, sizeof(long))`, not `calloc(sizeof(long), size)`. I'm not sure, but this may cause problems (see the second sketch below). – ssd Oct 19 '20 at 18:13
  • Thank you @EugeneSh. That does answer my question and I feel kind of silly for not having thought of it. – user14480191 Oct 19 '20 at 18:14
  • I was astonished when I discovered that `malloc` and `calloc` have been "updated" so that they can give false positives. I would have thought it better to introduce new functions for that. – Weather Vane Oct 19 '20 at 18:16
  • @WeatherVane I think `malloc` is not guilty here, it's the underlying OS service – Eugene Sh. Oct 19 '20 at 18:21
  • @EugeneSh. If I have more storage needs than memory will allow, I would make my solution file-based anyway. Perhaps that comes from an MCU background where one is acutely aware of memory constraints. – Weather Vane Oct 19 '20 at 18:23
  • @EugeneSh.: That's rather interesting, though; intuitively you'd expect that only the 4 pages which are touched would need to be allocated, and that won't run out of memory. I suppose it actually allocates all the pages in between as well? The calloc evidently isn't touching the pages, presumably relying on the OS to provide zeroed memory. – Nate Eldredge Oct 19 '20 at 22:54
  • @WeatherVane: Memory overcommit is certainly a controversial feature and there are many well-hashed-out arguments both pro and con. The linked question, and references therein, explains some of them, and also how you can disable it on your system if you don't like it. – Nate Eldredge Oct 19 '20 at 22:57
  • @NateEldredge thanks for that: one answer there does say "The whole situation is a mess". (If and) when I eventually complete my migration, I'll bear that in mind. Meanwhile the 32-bit MS compiler I am using has a fixed maximum total allowance of about 2 GB for all parts. – Weather Vane Oct 20 '20 at 08:23
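
The overcommit behaviour described in the comments can be demonstrated directly. Below is a minimal sketch, assuming a 64-bit Linux system with the default vm.overcommit_memory heuristic; the 10^10 element count and the ~80 GB figure come from the question, everything else is illustrative. calloc succeeds because the kernel only reserves address space; reads of untouched pages return 0 (the kernel can serve them from a shared zero page, which is also why calloc itself never needs to touch them, as Nate Eldredge's comment suggests); and it is the write loop, touching one long per page, that eventually exhausts physical memory and gets the process killed or faults.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
  size_t nmemb = 10000000000ULL;          /* 10^10 longs, ~80 GB */
  long *x = calloc(nmemb, sizeof(long));  /* arguments in (nmemb, size) order */
  if (x == NULL)
  {
    fprintf(stderr, "calloc refused the request up front\n");
    return EXIT_FAILURE;
  }

  /* Reads work everywhere: untouched pages simply read as zero. */
  printf("%ld %ld\n", x[0], x[nmemb - 1]);

  /* Writes commit real pages one by one; this loop is what eventually
     fails once physical memory and swap are exhausted. */
  size_t step = (size_t)sysconf(_SC_PAGESIZE) / sizeof(long);
  for (size_t i = 0; i < nmemb; i += step)
  {
    x[i] = 1;                             /* touch one long per page */
    if (i % (step * 1000000) == 0)
      fprintf(stderr, "touched up to index %zu\n", i);
  }
  free(x);
  return 0;
}

Setting vm.overcommit_memory to 2 (strict accounting) makes the calloc itself fail and return NULL, which is the behaviour the question expected; the question linked in the first comment covers how to do that.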

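Separately, here is a hedged sketch of the fix ssd's comment points at. Besides the swapped arguments, note that safeCalloc takes its count as int: converting 10000000000 to a 32-bit int is implementation-defined, and on common ABIs it comes out as 1410065408, so the program never actually requests 10^10 elements. Taking the count as size_t avoids both issues:

#include <stdio.h>
#include <stdlib.h>

/* Helper with ssd's fixes applied: the count travels as size_t instead
   of int (so 10000000000 is not truncated on the way in), and the
   arguments reach calloc in the documented (nmemb, size) order. */
long *safeCalloc(size_t nmemb)
{
  long *ptr = calloc(nmemb, sizeof(long));
  if (ptr == NULL)
  {
    fprintf(stderr, "Error: memory allocation failed\n");
    exit(EXIT_FAILURE);
  }
  return ptr;
}

With this version the full request reaches calloc; on an overcommitting system it still succeeds, and reads of untouched elements such as x[lim / 5] print 0 instead of faulting.
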
0 Answers