
Consider the following little program running on Linux:

#include <iostream>
#include <unistd.h>
#include <cstring>

int main() {
  size_t array_size = 10ull * 1000 * 1000 * 1000;
  size_t number_of_arrays = 20;
  char* large_arrays[number_of_arrays];

  // allocate more memory than the system can give
  for (size_t i = 0; i < number_of_arrays; i++)
    large_arrays[i] = new char[array_size];

  // amount of free memory didn't actually change
  sleep(10);

  // write on that memory, so it is actually used
  for (size_t i = 0; i < number_of_arrays; i++)
    memset(large_arrays[i], 0, array_size);

  sleep(10);

  for (size_t i = 0; i < number_of_arrays; i++)
    delete [] large_arrays[i];

  return 0;
}

It allocates a lot of memory, more than the system can provide. However, if I monitor memory usage with top, the amount of free memory doesn't actually decrease. The program waits a bit, then starts writing to the allocated memory, and only then does the amount of available free memory drop... until the system becomes unresponsive and the program is killed by the oom-killer.

My questions are:

  • Why does Linux promise to allocate more memory than it can actually provide? Shouldn't new[] throw a std::bad_alloc at some point?
  • How can I make sure that Linux actually takes a piece of memory without having to write to it? I am writing some benchmarks where I would like to allocate lots of memory quickly, but at the same time I need to stay below a certain memory limit.
  • Is it possible to monitor the amount of this "promised" memory?

The kernel version is 3.10.0-514.21.1.el7.x86_64. Maybe it behaves differently on newer versions?

knopers8
    `char* large_arrays[number_of_arrays];` -- This is not valid C++. – PaulMcKenzie Jan 31 '20 at 13:57
  • Is this something that only g++ can understand? It does compile it. – knopers8 Jan 31 '20 at 14:00
  • @knopers8 Yes, Variable Length Arrays are GCC extension to C++ (they are a feature in C). However, you could easily make it valid per standard by making `number_of_arrays` `const` (or `constexpr`) – Yksisarvinen Jan 31 '20 at 14:04
  • @knopers8 By using an extension such as VLA's, which is implemented in whatever way the compiler wants to implement them, you need to change to valid C++ and rerun your tests. – PaulMcKenzie Jan 31 '20 at 14:05
  • Thanks for pointing it out! I don't use it in my tests though, that was just a minimal example to illustrate the problem. – knopers8 Jan 31 '20 at 15:00
  • The reason Linux has this behavior is because many processes ask for far more memory than they actually use. – Eljay Jan 31 '20 at 17:59

1 Answer


Why Linux promises to allocate more memory than it actually can provide?

Because that is how your system is configured. You can change the behaviour with the sysctl vm.overcommit_memory.

Shouldn't new[] throw a std::bad_alloc at some point?

Not if the system overcommits the memory.

How can I make sure that Linux actually takes a piece of memory without having to write to it?

Not that I know of. Linux maps memory on page fault, i.e. when unmapped memory is first accessed.

Is it possible to monitor the amount of this "promised" memory?

I think the "virtual" size of the process memory is what you're looking for: it grows as soon as the allocation is promised, while the resident set size (RSS) only grows once pages are actually touched.

eerorika