
What is the most frequent memory-related error that throws bad_alloc? I understand that it means memory allocation has failed, but what is the most common mistake in code that leads to it?

Andy N.

1 Answer


EDIT: The other commenters have pointed out a few interesting scenarios. I'm adding them to my response for the sake of completeness.

Case 1: Running out of memory

My understanding is that bad_alloc is thrown whenever the operators new and new[] fail to allocate memory for an object or variable. This can happen if you've newed a bunch of objects and forgotten to delete them before the pointers to them went out of scope (i.e., your code leaks like crazy), so that repeated allocations eventually exhaust the free store.
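A minimal sketch of that scenario (the 1 MiB chunk size is an arbitrary choice for illustration; note that on Linux with memory overcommit, the OOM killer may terminate the process before the exception is ever thrown):

```cpp
#include <cstring>   // std::memset
#include <iostream>
#include <new>       // std::bad_alloc

int main() {
    std::size_t leaked = 0;
    try {
        for (;;) {
            char* chunk = new char[1024 * 1024];   // 1 MiB, never delete[]d
            std::memset(chunk, 'x', 1024 * 1024);  // touch it so the pages are really committed
            leaked += 1024 * 1024;
        }
    } catch (const std::bad_alloc& e) {
        std::cerr << "bad_alloc after leaking ~" << leaked / (1024 * 1024)
                  << " MiB: " << e.what() << '\n';
    }
}
```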

Case 2: Allocating huge amounts of memory in one swoop

Allocating a large chunk of memory in one go, as in the case of a 1000 x 1000 x 1000 matrix of doubles (roughly 8 GB), can fail because new[] requests a single contiguous block of that size.

There might be plenty of free memory in total, spread across several free blocks, but no single contiguous region large enough to satisfy the request.
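Something like this sketch shows the idea. Whether it actually throws depends on the platform: a 32-bit process cannot address 8 GB at all, while a 64-bit OS that overcommits may grant the virtual range and only fail later when the pages are touched:

```cpp
#include <iostream>
#include <new>   // std::bad_alloc

int main() {
    try {
        // 1000 * 1000 * 1000 doubles = 8 * 10^9 bytes, i.e. roughly 8 GB,
        // requested as ONE contiguous block of address space.
        double* cube = new double[1000ull * 1000 * 1000];
        std::cout << "Allocation succeeded\n";
        delete[] cube;
    } catch (const std::bad_alloc& e) {
        std::cerr << "bad_alloc: " << e.what() << '\n';
    }
}
```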

Case 3: Passing an invalid value to new[]

bad_alloc is thrown if you pass a negative value as its parameter. More precisely, since C++11 an invalid array bound such as new int[-5] throws std::bad_array_new_length, which is derived from bad_alloc. (Note that operator new itself takes a std::size_t, so a negative value passed to it directly would just convert to a huge unsigned number, turning this into Case 2.)
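A tiny sketch of that case, assuming a C++11 (or later) compiler; the bound is kept in a runtime variable because a literal new int[-5] would be rejected at compile time by many compilers:

```cpp
#include <iostream>
#include <new>   // std::bad_alloc, std::bad_array_new_length (C++11)

int main() {
    int n = -5;  // runtime value, so the compiler can't reject it outright
    try {
        int* p = new int[n];  // negative bound: throws std::bad_array_new_length
        delete[] p;           // never reached
    } catch (const std::bad_array_new_length& e) {
        std::cerr << "bad_array_new_length: " << e.what() << '\n';
    } catch (const std::bad_alloc& e) {
        std::cerr << "bad_alloc: " << e.what() << '\n';  // would also catch it, since it derives from bad_alloc
    }
}
```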

Bo Persson
  • Monster leak is one of the top candidates, but another common error is attempting to allocate a lot more memory than you thought you'd asked for. A 1000x1000x1000 array of `double`s, for example. You might have tonnes of RAM, but how much is free, and are there any 8 GB contiguous blocks? – user4581301 Jun 13 '18 at 00:52
  • *This, however, can only happen if you've newed a bunch of objects and forgot to delete them* -- If you give `new` a negative number, this can also cause a `std::bad_alloc` to be thrown. – PaulMcKenzie Jun 13 '18 at 00:57
  • That's also true. Which I guess is a reminder that you shouldn't allocate big arrays for no other reason than "just being on the safe side of things". Use dynamic memory allocation instead. EDIT: @PaulMcKenzie do you mean writing something like new int[-5]? – Massimo Di Saggio Jun 13 '18 at 00:58
  • @MassimoDiSaggio -- Yes. Try it and you will see that `std::bad_alloc` is thrown, or more exactly a `std::bad_array_new_length`, which is derived from `bad_alloc`. – PaulMcKenzie Jun 13 '18 at 01:01
  • Case 2 is probably misstated. If you are trying to reflect the comment from user4581301, the failure comes from a single _contiguous_ allocation. Even if a computer has the free memory to allocate a billion doubles, that memory may not be in a single, uninterrupted contiguous block. – Drew Dormann Jun 13 '18 at 01:35
  • One more case for you, that I've encountered: when the heap has been corrupted (e.g. by buggy code writing through invalid pointers). That can lead to the new operator throwing a bad_alloc even when there's plenty of RAM available, simply because the munged heap-data-structures have led its algorithm astray. – Jeremy Friesner Jun 13 '18 at 02:26
  • *"bad_alloc is thrown if you pass a negative value as its parameter."* operator new takes `std::size_t`, so negative values are just huge numbers (`-5` becomes `std::numeric_limits<std::size_t>::max() - 5 + 1`). So case 3 is similar to case 2. – Jarod42 Jun 13 '18 at 10:05
  • @DrewDormann On a 32-bit platform, you'll never have 8GB of contiguous address space, virtual or physical. On a 64-bit platform, you'll almost certainly have 8GB of contiguous virtual address space, and that's all that matters – physical pages may well be scattered all about or not even all mapped at the same time. (I'm not aware of a 64-bit platform that doesn't use virtual memory.) – Arne Vogel Jun 13 '18 at 12:32
  • To clarify my previous comment before someone else does :-) this assumes the OS considers itself able to provide 8GB of virtual RAM (backed by physical RAM and/or swap space and possibly taking the risks resulting from memory overcommit). Heap fragmentation may still make an 8GB contiguous allocation impossible even if the total amount of free memory on the heap exceeds 8GB. Dynamic memory management is quite complex nowadays. – Arne Vogel Jun 13 '18 at 12:41