
In many programs I've come across regarding dynamic memory allocation, I keep finding the same piece of code whenever dynamic memory needs to be allocated:

int *pointer = (int *)malloc(sizeof(int));

if(pointer == NULL)
    exit(1); // or return 1

// else the program keeps running

I was wondering why it's not done, or at least I've never seen it, with something like this:

int *pointer;
while((pointer = (int *)malloc(sizeof(int))) == NULL);

I thought about it, and the only thing I came up with is that if you can't allocate new memory, it's because there is none left on the heap, so the second example would turn into an infinite loop. Is that right? Because if the memory allocation fails for some other reason (I don't really know which), then putting the allocation inside a loop would probably solve the problem.

Am I missing something? Which of the two "styles" is preferred?

fedemengo
    The 2nd example would indeed turn into an infinite-loop on failure, whereas the first example would quit immediately... is there a reason you would prefer the infinite loop to the immediate exit? I'm not entirely certain I understand your question... – R_Kapp Dec 01 '16 at 15:21
    Is there any possibility that this will be running in a multi-threaded program? If so, it is conceivable that thread 1 could fail to allocate because thread 2 has grabbed all the space. If thread 2 subsequently frees some of the space it had allocated, then thread 1 might be able to get some of the memory so released — and there might then be point in retrying the malloc, probably with some sort of pause between attempts, rather than hammering the allocator at full speed. Otherwise (single-threaded programs), there is no point in a retry. – Jonathan Leffler Dec 01 '16 at 16:14
  • Possible duplicate of https://stackoverflow.com/questions/11788803/what-if-malloc-fails – 0kcats Apr 29 '20 at 00:54

5 Answers


As per the malloc man page, malloc can fail with only one error, ENOMEM, the out-of-memory error. There is no point in retrying in this case.

The style of retrying in a while loop is usually followed with other system calls, such as read(2), open(2), select(2), etc., which can return -1 to indicate an error and set errno to EINTR, meaning they were interrupted by a signal. You would usually want to retry the call in that case.

Abhinav Upadhyay

The only use case I can imagine for retrying a failed malloc would be a program running in an environment that supports bursts of memory allocation from different programs, each fully releasing its memory soon afterwards. In that case, a program unable to acquire all the memory it needs could just wait until another one releases enough for it to proceed.

But that would require the programmer to be very cautious, because there is an immediate corner case. Suppose the system has 4 GB of free memory when none of the memory-consuming processes are running. Suppose one process has acquired 1.5 GB and needs 1.5 GB more, while another has also acquired 1.5 GB and also needs 1.5 GB more. You are in a nice, clean deadlock, because only 1 GB is available and each process is waiting for the other to release something.

Even if a process that cannot acquire all the necessary memory released everything before looping, you could fall into a livelock where processes repeatedly acquire part of what they need, fail to get the rest, release it all, and loop again, with no process ever able to acquire everything at once.

As we prefer robust systems, programs assume that memory will be available and simply abort if it is not. It then becomes a human being's problem to decide whether it is better to add memory, change the program to use less of it, serialize the processing so that only one big program runs at a time, or...

Jonathan Leffler
Serge Ballesta

If there is no memory left, it would indeed cause your program to hang in an infinite busy-loop, occupying the CPU.

If your computer is running out of memory, the last thing you need is some ghost process remaining in RAM, stealing CPU time.

It is better that your program terminates gracefully and informs the user that it is out of memory, because running out of memory is a rather severe error condition on most computers.

This could in turn be caused by memory leaks, heap corruption and other nasty things that make the execution environment of your program unstable. That probably means bugs in your own process are to blame, which in turn means the process won't recover from the error state by itself.

Lundin

Memory allocations are not retried when they fail because there is no reason to expect that after once failing, they will succeed on any subsequent attempt without anything having changed. That certainly applies to the insufficient memory case, as you observed, but it also applies to any other plausible failure mode you can imagine. After all, if malloc() could recover on its own, then it would not need to fail in the first place.

Thus, a tight memory-allocation loop such as you propose is about as bad as not testing whether the allocation succeeds at all. In the failure case, it is highly likely to put the program into an infinite loop, no matter what the reason for the failure.

John Bollinger

Retrying malloc in a loop actually makes sense (though not in an infinite loop), at least on Windows machines. See my detailed answer here: https://stackoverflow.com/a/30495537/2986286

When the page file is set to grow automatically, a memory allocation may fail while the page file is still growing. Retrying the allocation helps to work around this.

0kcats