When the Standard was written, there were two common ways that programs could use malloc(), and the authors of the Standard wanted to accommodate both.
Some programs would use malloc() when they needed storage, and would expect that either there would be enough memory to satisfy all their needs, or else they would exit with a message saying that insufficient memory was available. There was no need to have such programs try to get by with less memory than they would want to use.
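A minimal sketch of that first pattern, using the conventional (but non-standard) wrapper name xmalloc, might look like this:

    #include <stdio.h>
    #include <stdlib.h>

    /* Allocate storage or terminate; callers never have to check for NULL. */
    static void *xmalloc(size_t size)
    {
        void *p = malloc(size);
        if (p == NULL) {
            fprintf(stderr, "out of memory (requested %zu bytes)\n", size);
            exit(EXIT_FAILURE);
        }
        return p;
    }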
Some programs would use malloc() repeatedly to acquire all the memory they could get, and then use their own memory management code to subdivide that into pieces the application could use. On systems that would only have one program loaded into memory at once, interactive programs using this approach could keep users informed of how much memory was available, and could limit user actions in low-memory conditions to ensure that parts of the code that critically needed memory would always be able to get it.
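A sketch of the acquisition step under that second pattern; the 64K chunk size and the linked-list pool are illustrative assumptions, and the loop relies on malloc() failing cleanly once the heap is exhausted:

    #include <stdlib.h>

    #define CHUNK_SIZE (64u * 1024u)    /* illustrative granularity */

    struct chunk { struct chunk *next; };

    static struct chunk *pool = NULL;

    /* Call malloc() until it fails, keeping every chunk on a list;
     * the program's own allocator would then subdivide the pool. */
    static void acquire_all_memory(void)
    {
        for (;;) {
            struct chunk *c = malloc(CHUNK_SIZE);
            if (c == NULL)
                break;                  /* heap exhausted: stop cleanly */
            c->next = pool;
            pool = c;
        }
    }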
To make the second approach work, it's necessary that unsuccessful calls to malloc() fail cleanly without side effects, since such calls would be expected to occur. When using the first approach, on the other hand, it would be more convenient to have functions that can't allocate the memory they need call a configurable error-trap routine, which might in turn call longjmp() or exit(), than to have them return NULL and require every caller to accommodate that possibility.
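Such a trap might be wired up as follows; the function names and the jmp_buf are illustrative assumptions rather than any established interface:

    #include <setjmp.h>
    #include <stdio.h>
    #include <stdlib.h>

    jmp_buf recovery_point;             /* illustrative longjmp() target */

    static void default_trap(void)
    {
        fprintf(stderr, "out of memory\n");
        exit(EXIT_FAILURE);
    }

    /* Applications may repoint this at a handler that does
     * longjmp(recovery_point, 1) instead of exiting. */
    void (*alloc_error_trap)(void) = default_trap;

    void *checked_malloc(size_t size)
    {
        void *p = malloc(size);
        if (p == NULL)
            alloc_error_trap();         /* may not return */
        return p;
    }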
While the Standard would suggest that malloc() should either fail without side effects or else yield a usable pointer, many Linux systems introduce a third possibility: malloc() may yield a pointer which, though not null, won't actually be usable for writing storage. If the system has only two megabytes of heap storage available just before a program requests five allocations of 500K each, a Linux system might have all five allocations return seemingly-valid pointers, even though their total size exceeds the total storage available, in the hope that either the code won't actually attempt to use all of the allocated storage, or more memory will somehow become available before too much of the storage gets written.
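That behavior can be observed with a sketch along the following lines; the outcome depends on the system's overcommit policy (on Linux, /proc/sys/vm/overcommit_memory), so none of it is guaranteed:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        enum { COUNT = 5, SIZE = 500 * 1024 };
        void *blocks[COUNT];

        /* On an overcommitting system, all five calls may return
         * non-null pointers even when far less storage is available. */
        for (int i = 0; i < COUNT; i++) {
            blocks[i] = malloc(SIZE);
            printf("allocation %d: %s\n", i,
                   blocks[i] ? "non-null" : "NULL");
        }

        /* Writing the storage is what forces pages to be committed;
         * under memory pressure this may invoke the OOM killer, which
         * terminates the process rather than reporting any error. */
        for (int i = 0; i < COUNT; i++)
            if (blocks[i] != NULL)
                memset(blocks[i], 0xAA, SIZE);

        return 0;
    }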
On a system where there's no way of knowing whether an allocation actually succeeded, there is no generally-reliable way of handling allocation failures without terminating the program, so code that tries to recover from such failures gracefully will often offer little value.