I found this interesting exercise in Bruce Eckel's Thinking in C++, 2nd ed., Vol. 1, Chapter 13:
/* 13. Modify NoMemory.cpp so that it contains an array of int
   and so that it actually allocates memory instead of
   throwing bad_alloc. In main(), set up a while loop like
   the one in NewHandler.cpp to run out of memory and
   see what happens if your operator new does not test to
   see if the memory is successfully allocated. Then add the
   check to your operator new and throw bad_alloc. */
#include <iostream>
#include <cstdlib>
#include <new> // bad_alloc definition
using namespace std;

int count = 0;

class NoMemory {
  int array[100000];
public:
  void* operator new(size_t sz) throw(bad_alloc) {
    void* p = ::new char[sz];
    if(!p) {
      throw bad_alloc(); // "Out of memory"
    }
    return p;
  }
};

int main() {
  try {
    while(1) {
      count++;
      new NoMemory();
    }
  }
  catch(bad_alloc) {
    cout << "memory exhausted after " << count << " allocations!" << endl;
    cout << "Out of memory exception" << endl;
    exit(1);
  }
}
My question is: why does this code not throw bad_alloc when the machine runs completely out of memory (as shown by the Task Manager's resource monitor on Win7)?
I assume the global ::new char[sz] never returns 0, even when memory is exhausted. But why? It even drives the Win7 OS into a numb, non-responsive state once memory runs out, yet the process still keeps trying to allocate new space.
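If I understand the two forms of new correctly, only the nothrow form reports failure by returning a null pointer, while the plain form throws. Here is a minimal sketch of my own (not from the book) contrasting the two:

#include <iostream>
#include <new>

int main() {
  // Plain new: reports failure by throwing std::bad_alloc,
  // so the returned pointer is never null on the success path.
  try {
    char* p = ::new char[100];
    std::cout << "plain new returned " << static_cast<void*>(p) << '\n';
    ::delete[] p;
  } catch(const std::bad_alloc&) {
    std::cout << "plain new threw bad_alloc\n";
  }

  // nothrow new: reports failure by returning a null pointer,
  // which is the only form where an if(!p) test makes sense.
  char* q = ::new (std::nothrow) char[100];
  if(!q) {
    std::cout << "nothrow new returned null\n";
  } else {
    std::cout << "nothrow new returned " << static_cast<void*>(q) << '\n';
    ::delete[] q;
  }
}

If that is right, then the if(!p) test in my operator new is dead code, because ::new char[sz] would throw rather than return 0. But then I would still expect that throw to surface as bad_alloc in main(), which is not what I observe.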
(One interesting addition: I tried it on Ubuntu too. bad_alloc is not thrown there either, but that OS does not freeze; instead it kills this dangerous process before things get that far. Smart, isn't it?)
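For what it's worth, I also tried to make the failure reproducible on Ubuntu by capping the address space with the POSIX setrlimit call (my own experiment, not part of the exercise), so that allocation fails quickly instead of dragging the machine into swapping:

#include <iostream>
#include <new>
#include <cstdio>
#include <sys/resource.h> // setrlimit, RLIMIT_AS (POSIX, so Linux/Ubuntu only)

int main() {
  // Cap the process's address space at ~256 MB so allocation
  // requests start failing long before the system begins to thrash.
  rlimit lim{};
  lim.rlim_cur = 256 * 1024 * 1024;
  lim.rlim_max = 256 * 1024 * 1024;
  if(setrlimit(RLIMIT_AS, &lim) != 0) {
    perror("setrlimit");
    return 1;
  }

  long count = 0;
  try {
    while(true) {
      new int[100000]; // deliberately leaked, as in the exercise
      ++count;
    }
  } catch(const std::bad_alloc&) {
    std::cout << "bad_alloc after " << count << " allocations\n";
  }
}

With a cap like this, my understanding is that the allocation fails as soon as the address space limit is hit, well before the OOM killer gets involved, and the bad_alloc is actually caught.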