
For a long time I thought that C++ (the STL) would throw a bad_alloc when there was no available memory.

However, guided by some common knowledge about Linux (such as "Linux doesn't really reserve memory until you use it"), I decided to test what that implies for the behavior of bad_alloc. It turns out that there are definitely certain uses in which bad_alloc is not thrown, because the actual error happens after the allocator has already done its job.

In the following example, the first loop allocates obviously more memory (1TB) than I have in my Linux Fedora 30 system.

That loop completes, and the second loop runs until I have initialized (constructed) roughly 100GB (= total RAM + swap in my system).

#include<iostream>
#include<memory>
#include<cassert>
#include<vector>

using T = char;

int main(){
    std::size_t block_size = 1000000000; // ~1GB per block
    std::size_t n_blocks = 1000; // number of blocks -> ~1TB in total
    std::allocator<T> A;
    std::vector<T*> ps(n_blocks);
    for(std::size_t i = 0; i != n_blocks; ++i){
        std::cout << i << std::endl; // print index of the block being allocated
        ps[i] = A.allocate(block_size); // ps[i] = (char*)malloc(1000000000);
        assert(ps[i]);
    }
    for(std::size_t i = 0; i != n_blocks; ++i){
        std::cout << "*" << i << std::endl; // "*" marks the block being constructed
        for(long j = 0; j != block_size; ++j){
            A.construct(ps[i] + j, 'z'); // ps[i][j] = 'z'; // hard error "Killed" HERE
        }
    }
    //////////////////////////////// interesting part ends here
    for(std::size_t i = 0; i != n_blocks; ++i){
        for(long j = 0; j != block_size; ++j){
            assert(ps[i][j] == 'z');
            A.destroy(ps[i] + j);
        }
        A.deallocate(ps[i], block_size);
    }
}

I understand that operating systems have their idiosyncrasies, and that many system-related operations involve undefined behavior.

My question is: am I using C++ in the right way? Is there something that can be done to restore the expected bad_alloc behavior? Even if not, is there a way to detect in advance that some memory cannot be touched? Checking for null doesn't seem to cover this case (again, on Linux).

While building this example I thought that, perhaps, construct (i.e. the placement new inside it) would somehow throw or report an error upon detecting something fishy about the raw pointer location, but that didn't happen: Linux just "Killed" the program.
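
For reference, my understanding is that construct boils down to a plain placement new on the raw pointer, roughly like the sketch below (construct_like is just an illustrative name, not a real library function), so there is nothing in it that could check whether the page behind the pointer is actually backed by physical memory; the first write simply faults.

#include<new>
#include<utility>

// Roughly what std::allocator<T>::construct(p, args...) does (pre-C++17):
// a placement new into the already-allocated raw storage.
template<class T, class... Args>
void construct_like(T* p, Args&&... args){
    ::new(static_cast<void*>(p)) T(std::forward<Args>(args)...);
}

int main(){
    char buffer[1];
    construct_like(buffer, 'z'); // same effect as A.construct(buffer, 'z')
}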

This is the output of the program on Fedora 30 with 32GB of RAM + 64GB of swap:

1
2
3
...
997
998
999
*0
*1
*2
*3
...
*86
*87
*88
*89
*90
Killed

(Linux prints "Killed" and terminates the program.)


Note: I know of other usage patterns (e.g. different block sizes, or interleaving allocation and construction) in which the program does throw a bad_alloc. I am asking specifically about uses like this one, and whether there is a way to recover in this context.

For example, I know that if I ask A.allocate for 1TB in a single call, it throws a bad_alloc right away. The exception is also thrown gracefully if I interleave the allocation and construction of the small blocks:

    for(std::size_t i = 0; i != n_blocks; ++i){
        ps[i] = A.allocate(block_size); // eventually throws bad_alloc, but HERE
        for(long j = 0; j != block_size; ++j){
            A.construct(ps[i] + j, 'z');
        }
    }
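
In that interleaved form it is at least possible to try to recover. Here is a rough sketch of what I mean (the kept vector and the clean-up policy are only illustrative, and it reuses T, A, block_size and n_blocks from the example above), assuming the bad_alloc really does come out of allocate before the OOM killer steps in:

    std::vector<T*> kept;
    kept.reserve(n_blocks); // avoid reallocating the bookkeeping vector while memory is scarce
    try{
        for(std::size_t i = 0; i != n_blocks; ++i){
            T* p = A.allocate(block_size); // may throw std::bad_alloc
            kept.push_back(p);
            for(long j = 0; j != block_size; ++j){
                A.construct(p + j, 'z'); // touching the pages commits them
            }
        }
    }catch(std::bad_alloc const&){
        // back out: release the blocks that were fully constructed so far
        for(T* p : kept){
            for(long j = 0; j != block_size; ++j){
                A.destroy(p + j);
            }
            A.deallocate(p, block_size);
        }
        kept.clear();
    }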
alfC
    If you disable memory overcommit (see /proc/sys/vm/overcommit_memory and https://www.kernel.org/doc/Documentation/vm/overcommit-accounting ), you'll get different results. – Jesper Juhl May 22 '19 at 05:18
  • @JesperJuhl, exactly: `sudo sysctl vm.overcommit_memory=2` restored the predictable behavior. If the default setting is `0` (I don't know), it makes sense that sometimes `bad_alloc` works and sometimes it doesn't. I guess nothing else can be done on the C/C++ side if one cannot change that setting (for whatever reason, for example not being root). https://serverfault.com/questions/141988/avoid-linux-out-of-memory-application-teardown – alfC May 22 '19 at 08:04
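
For completeness, a small sketch of how a program could at least detect the overcommit policy at startup by reading the procfs file mentioned in the comments above (illustrative only; on Linux the documented values are 0 = heuristic overcommit, 1 = always overcommit, 2 = don't overcommit):

#include<fstream>
#include<iostream>

// Reads /proc/sys/vm/overcommit_memory; returns -1 if the file is missing
// (e.g. on a non-Linux system) or unreadable.
int overcommit_policy(){
    std::ifstream f("/proc/sys/vm/overcommit_memory");
    int mode = -1;
    f >> mode;
    return mode;
}

int main(){
    int mode = overcommit_policy();
    if(mode != 2){
        std::cerr << "warning: vm.overcommit_memory=" << mode
                  << "; bad_alloc may not be thrown, the process may be OOM-killed instead\n";
    }
}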

0 Answers