Consider the following little program running on Linux:
#include <iostream>
#include <unistd.h>
#include <cstring>

int main() {
    const size_t array_size = 10ull * 1000 * 1000 * 1000;
    const size_t number_of_arrays = 20;
    char* large_arrays[number_of_arrays];
    // allocate more memory than the system can give
    for (size_t i = 0; i < number_of_arrays; i++)
        large_arrays[i] = new char[array_size];
    // the amount of free memory doesn't actually change at this point
    sleep(10);
    // write to that memory, so it is actually used
    for (size_t i = 0; i < number_of_arrays; i++)
        memset(large_arrays[i], 0, array_size);
    sleep(10);
    for (size_t i = 0; i < number_of_arrays; i++)
        delete[] large_arrays[i];
    return 0;
}
It allocates a lot of memory, more than the system can actually give. However, if I monitor the memory usage with top, the amount of free memory doesn't decrease after the allocation. The program waits a bit, then starts writing to the allocated memory, and only then does the amount of available free memory drop... until the system becomes unresponsive and the program is killed by the oom-killer.
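For reference, here is a minimal sketch of a helper that could be dropped into the program around the sleep calls to see the difference between the promised and the actually used memory; it simply reads the VmSize and VmRSS lines from /proc/self/status:

#include <fstream>
#include <iostream>
#include <string>

// Print VmSize (total virtual memory the kernel has promised) and
// VmRSS (pages actually resident in RAM) for the current process.
void print_memory_usage() {
    std::ifstream status("/proc/self/status");
    std::string line;
    while (std::getline(status, line)) {
        if (line.rfind("VmSize:", 0) == 0 || line.rfind("VmRSS:", 0) == 0)
            std::cout << line << '\n';
    }
}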
My questions are:
- Why does Linux promise to allocate more memory than it can actually provide? Shouldn't new[] throw a std::bad_alloc at some point?
- How can I make sure that Linux actually takes a piece of memory without having to write to it? I am writing some benchmarks where I would like to allocate lots of memory fast, but at the same time I need to stay below a certain memory limit (see the sketch after this list).
- Is it possible to monitor the amount of this "promised" memory?
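For the second question, one direction I am considering is replacing new[] with mmap using MAP_POPULATE, which as far as I understand asks the kernel to pre-fault the pages at allocation time instead of on first write. This is only a sketch of what I mean, not something I have verified stays within the memory limit:

#include <sys/mman.h>
#include <cstddef>
#include <new>

// Allocate 'size' bytes of anonymous memory and ask the kernel to
// pre-fault the pages (MAP_POPULATE), so they are backed immediately
// rather than on first write.
char* allocate_populated(std::size_t size) {
    void* p = mmap(nullptr, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);
    if (p == MAP_FAILED)
        throw std::bad_alloc();
    return static_cast<char*>(p);
}
// Such a mapping would be released with munmap(p, size) rather than delete[].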
The kernel version is 3.10.0-514.21.1.el7.x86_64. Maybe it behaves differently on newer versions?