13

I've seen a lot of code that checks for NULL pointers on every allocation. This makes the code verbose, and if it's done inconsistently, only when the programmer felt like it, it doesn't even ensure that the program won't crash when the address space runs out. Besides, if the program can't make any more allocations, it wouldn't be able to do its job anyway, right?

So my question is, isn't it better for most programs not to check at all and just let the program crash if memory runs out? At least the code is more readable that way.

Note

I'm talking about desktop apps that run on modern computers (at least 2 GB of address space), and that most definitely don't operate space shuttles, life-support systems, or BP's oil platforms. Most importantly, I'm talking about programs that use malloc but never really go above 5 MB of memory usage.

sashoalm
  • And how would you know why your program crashed? Was it a NULL-pointer dereference or something more subtle? – Matvey Aksenov Oct 29 '11 at 17:00
  • Yes, let's all forget about checking if memory has been successfully allocated. Segfaults are so much more fun! – NullUserException Oct 29 '11 at 17:03
  • Which of the advocates of checking the return of `malloc` has ever seen it trigger in a hosted environment? I'd really like to hear of circumstances where this still makes sense. – Jens Gustedt Oct 29 '11 at 18:41
  • @Jens: It makes sense because you lose nothing at all by doing it, and one day it could prevent a space rocket from exploding, killing eight astronauts aboard. – Lightness Races in Orbit Oct 29 '11 at 18:45
  • @TomalakGeret'kal, space rockets are embedded systems; I was talking about hosted environments. And you lose a lot: code with a lot of `if/else` clauses simply becomes unreadable, and you introduce more errors that way than you will ever catch. – Jens Gustedt Oct 29 '11 at 19:20
  • @JensGustedt: You only have to have one `if` statement in your "allocate or abort" wrapper for `malloc`. Checking the return value of `malloc` doesn't have to imply any readability penalty for client code. – CB Bailey Oct 29 '11 at 20:43
  • @Charles, if you have a wrapper, I agree; if you name it accordingly and define it `inline`, it could be ok. But then you really can't do anything but a generic check-and-abort anyhow. – Jens Gustedt Oct 29 '11 at 21:17
  • @JensGustedt: A generic "check and abort" is precisely what I would argue is much better than an assumption of success. – CB Bailey Oct 29 '11 at 21:27
  • @Charles, I would just call it "chasing a phantom". Mostly harmless, because it wastes just one instruction or so, but not very useful either. Other memory allocation problems are much more severe and happen much more frequently. – Jens Gustedt Oct 29 '11 at 21:35
  • @Jens: Every system is "hosted", even embedded ones. – Lightness Races in Orbit Oct 29 '11 at 23:06
  • @TomalakGeret'kal, "hosted" versus "embedded" are precise terms from the C standard that describe the amount of support that the application can expect from the C library. – Jens Gustedt Oct 30 '11 at 07:02

7 Answers

11

Always check the return value, but for clarity, it's common to wrap malloc() in a function which never returns NULL:

#include <stdio.h>
#include <stdlib.h>

/* malloc that never returns NULL: on failure it reports and exits */
void *
emalloc(size_t amt){
    void *v = malloc(amt);
    if(!v){
        fprintf(stderr, "out of mem\n");
        exit(EXIT_FAILURE);
    }
    return v;
}

Then, later you can use

char *foo = emalloc(56);
foo[12] = 'A';

with a clear conscience.
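
The same check-and-exit pattern extends to the other allocators. As a sketch (the `erealloc` name and body here are illustrative additions, not from the answer itself, and reuse the headers included for `emalloc`):

/* realloc that never returns NULL; on failure it reports and exits,
   so the old block (which realloc leaves intact) is simply abandoned */
void *
erealloc(void *p, size_t amt){
    void *v = realloc(p, amt);
    if(!v){
        fprintf(stderr, "out of mem\n");
        exit(EXIT_FAILURE);
    }
    return v;
}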

Dave
  • Thanks, your answer takes the best of both worlds, it gives a clean exit and at the same time doesn't make the code needlessly verbose. – sashoalm Oct 30 '11 at 07:22
  • @harris `!v` is just as valid as `v == NULL` – Dave Dec 09 '13 at 02:04
8

Yes, you should check for a null return value from malloc. Even if you can't recover from the failure of a memory allocation, you should exit explicitly. Carrying on as though the allocation had succeeded leaves your application in an inconsistent state and is likely to cause "undefined behavior", which should be avoided.

For example, you may end up writing inconsistent data to external storage which may hinder the ability of the next run of the application to recover. It's much safer to exit swiftly in a more controlled fashion.

Many applications that want to exit on allocation failure wrap malloc in a function that checks the return value and explicitly aborts on failure.

Arguably, this is one advantage of C++'s default `new`, which throws an exception on allocation failure: it takes no effort at all to exit on memory allocation failure.
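
To make the danger concrete, here is a minimal sketch of the failure mode also discussed in the comments below; the 20 MB figure is only illustrative:

#include <stdlib.h>

int main(void)
{
    size_t n = 20 * 1024 * 1024;   /* 20 MB, illustrative */
    char *buf = malloc(n);         /* suppose this fails and yields NULL */

    /* Unchecked, this writes to address (char *)0 + (n - 1). That
       address may be mapped to something else entirely, so instead of
       a clean segfault at NULL we may silently corrupt unrelated data. */
    buf[n - 1] = 'x';

    free(buf);
    return 0;
}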

CB Bailey
  • Although dereferencing a null pointer is UB by the standard, all systems that I have ever programmed for behave very nicely with it. Namely, they just crash, not very differently from what you observe when calling `abort`. And null pointer dereferences are usually not so difficult to debug. In any case, I think I have never seen a program crash for that reason; address spaces are gigantic nowadays. – Jens Gustedt Oct 29 '11 at 19:33
  • @JensGustedt: If I attempt to allocate a large block of memory and then write to a block at the end of that space first then I'm not initially going to dereference the null pointer when `malloc` fails. If I don't check the return value of `malloc` I am reasonably likely to be able to scribble over some other memory before causing a segmentation violation. Who knows what might happen. – CB Bailey Oct 29 '11 at 20:25
  • Checking the pointer itself just gives you false security in that case. Allocating such a huge chunk and expecting you can just fill it in random order is more dangerous than ignoring a null pointer. You should first access all pages in it in order to see if any of these accesses produces a fault. And my claim was that this null pointer return of `malloc` never happens. Did you ever see such a thing, honestly, in a modern hosted environment? – Jens Gustedt Oct 29 '11 at 21:25
  • @JensGustedt: I'm not sure what you're arguing for. If I `malloc` 20MB *and it succeeds*, I can access that 20MB in any order that I like. If a system allows me to do dangerous things just because I don't access the allocated memory in order then something has gone seriously wrong. If `malloc` fails and I access the memory at "null + 19MB" I am committing a serious error because it might be valid space in my address space but overlapped with some other crucial data. – CB Bailey Oct 29 '11 at 21:31
  • you didn't reply to my question. Did you see it happen? – Jens Gustedt Oct 29 '11 at 21:36
  • @JensGustedt: Are you just asking if I have seen `malloc` return 0 on a desktop system? Yes, I've seen it happen occasionally, but why does it matter to you what I've seen? – CB Bailey Oct 29 '11 at 21:38
  • I think you are an experienced programmer; perhaps you have some experience different from mine, which is why I am asking for concrete cases where you have seen this happen. A modern system has at least one TiB of address space (on smaller, oldish 32-bit systems this is still 4 GiB). Before your above example of allocating 20 MiB fails, you'd have to have exhausted that address space. And in most cases you are in trouble long before that. – Jens Gustedt Oct 29 '11 at 21:51
  • @JensGustedt: Many modern programs still only have 2GB of address space available to them even if they are run on systems with a much larger system address space. Many, many programs end up processing more data than it was ever envisioned that they would need to process. You don't have to exhaust the address space for a 20MB allocation to fail; you can get it to fail by badly fragmenting the address space. I've seen this happen a lot. – CB Bailey Oct 29 '11 at 22:06
7

This is similar to Dave's approach above, but it adds a macro that automatically passes the file name and line number to the allocation routine, so that information can be reported in the event of a failure.

#include <stdio.h>
#include <stdlib.h>

#define ZMALLOC(theSize) zmalloc(__FILE__, __LINE__, theSize)

/* malloc wrapper that reports the requesting file and line on failure */
static void *zmalloc(const char *file, int line, size_t size)
{
   void *ptr = malloc(size);

   if(!ptr)
   {
      fprintf(stderr, "Could not allocate: %zu bytes (%s:%d)\n", size, file, line);
      exit(EXIT_FAILURE);
   }

   return ptr;
}

int main(void)
{
   /* -- Set 'forceFailure' to a non-zero value in order to observe
         how 'zmalloc' behaves when it cannot allocate the
         requested memory -- */

   size_t bytes     = 10 * sizeof(int);
   int forceFailure = 0;
   int *anArray     = NULL;

   if(forceFailure)
      bytes = (size_t)-1;   /* SIZE_MAX, which malloc cannot satisfy */

   anArray = ZMALLOC(bytes);

   free(anArray);

   return 0;
}

2

But it is much more difficult to troubleshoot if you don't log where the malloc failed.

A message like "failed to allocate memory in line XX" is preferable to just crashing.

AndersK
  • And have you ever seen a program crash because `malloc` failed, and not because the program logic simply missed doing the `malloc` at all? In an embedded system? I'd be curious. – Jens Gustedt Oct 29 '11 at 19:26
1

You should definitely check the return value of malloc. It is helpful in debugging, and the code becomes more robust.

Always check malloc'ed memory?

varunl
1

In a hosted environment, checking the return of malloc makes little sense nowadays. Most machines have a 64-bit virtual address space; you'd need a lot of time to exhaust that. Your program will most likely fail at a completely different place, namely when your physical memory plus swap is exhausted. It will have shown completely ridiculous performance before that, because it was doing nothing but swapping, and the user will have hit Ctrl-C long before you ever get there.

Segfaulting "nicely" on a null pointer reference would be a clear point to see where things fail in a debugger. But in my practice I have never seen a failed malloc as a cause.

When programming for embedded systems the picture changes completely. There you definitely should check for a failed malloc.

Edit: To clarify, after the edit of the question: the kinds of programs/systems described there are clearly not "embedded". I have never seen malloc fail under the circumstances described there.
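
To illustrate why the failure tends to show up elsewhere, here is a minimal sketch, assuming a 64-bit Linux system with the kernel's common default overcommit behavior (the 64 GiB figure is purely illustrative and not from the answer itself):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Far more than most machines' physical memory plus swap. */
    size_t huge = (size_t)64 * 1024 * 1024 * 1024;

    /* Under default Linux overcommit, malloc may well return a
       non-NULL pointer here even though the memory cannot be backed. */
    char *p = malloc(huge);
    if (!p) {
        fprintf(stderr, "malloc returned NULL\n");  /* the checkable case */
        return 1;
    }

    /* The failure surfaces only when the pages are actually touched:
       the machine starts thrashing, and eventually the OOM killer may
       terminate the process; no NULL return is ever observed. */
    memset(p, 0, huge);

    free(p);
    return 0;
}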

Jens Gustedt
0

I'd like to add that edge cases should always be checked, even if you think they are safe or cannot lead to anything worse than a crash. A null pointer dereference can potentially be exploited (http://uninformed.org/?v=4&a=5&t=sumry).

Dpp