
On a system with virtual memory, it should be possible to allocate lots of address space, more than you have physical RAM, and then only write to as much of it as you need.

On a 32-bit system, of course, there are only four gigabytes of virtual address space, but that limit disappears on a 64-bit system.

Granted, Windows doesn't use the full 64-bit address space; apparently it uses 44 bits. That is still sixteen terabytes, so there should be no problem allocating e.g. one terabyte: Behind Windows x64's 44-bit virtual memory address limit

So I wrote a program to test this, attempting to allocate a terabyte of address space in chunks of ten gigabytes each:

#include <new>
#include <stdio.h>
#include <stdlib.h>

int main() {
  std::set_new_handler([]() {
    perror("new");
    exit(1);
  });
  for (int i = 0; i < 100; i++) {
    auto p = new char[10ULL << 30]; // ten gigabytes per chunk
    printf("%p\n", p);
  }
}

Run on Windows x64 with 32 gigabytes of RAM, it gives this result (specifics differ between runs, but always qualitatively similar):

0000013C881C1040
0000013F081D0040
00000141881E2040
00000144081F1040
0000014688200040
0000014908219040
0000014B88226040
0000014E08232040
0000015088246040
0000015308252040
0000015588260040
new: Not enough space

So it only allocates 110 gigabytes before failing. That is larger than physical RAM, but much smaller than the address space that should be available.

The program is definitely not actually writing to the allocated memory (which would require physical memory to be allocated): when I tried explicitly doing that with memset immediately after each allocation, the program ran much slower, as expected.
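
For reference, that variant amounts to roughly the following (a sketch; the only change from the program above is zeroing each chunk right after it is allocated):

#include <new>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main() {
  std::set_new_handler([]() {
    perror("new");
    exit(1);
  });
  for (int i = 0; i < 100; i++) {
    size_t size = 10ULL << 30;
    auto p = new char[size];
    memset(p, 0, size); // touching every page forces it to be backed by RAM/pagefile
    printf("%p\n", p);
  }
}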

So where is the limit on allocated virtual memory coming from?

rwallace
  • What about your pagefile? What size do you have for it? – Alejandro Aug 18 '21 at 14:16
  • @Alejandro Whatever the default setting is; I never bothered changing it. If I actually wrote to the allocated memory, thereby requiring it to be backed by physical storage, I would expect the page file size to be the limiting factor, but the above program never writes anything to the allocated memory, so I would expect it to consist purely of allocated address space, never needing page file space. What am I missing? – rwallace Aug 18 '21 at 14:18
  • Allocating means at least reserving RAM/pagefile, even if you never touch it. The out-of-memory condition appears as soon as you allocate more than the maximum available, not when you actually use it. That's why everyone tells you to check the return value of `malloc` (or the equivalent in your environment) but never to worry about running out of memory on variable assignment. – Alejandro Aug 18 '21 at 14:36
  • By default, Windows assigns a pagefile of around 2.5 to 3 times the installed RAM (it changes from version to version too), which is around the limit you hit. You can try changing the pagefile size and see how it affects the allocation you can get. Or changing the amount of RAM :D – Alejandro Aug 18 '21 at 14:37
  • @Alejandro Yeah, you're right, looks like Windows is being cautious and assuming it will all end up being used. `VirtualAlloc` can allocate 128 terabytes – the behavior I started off expecting – with `MEM_RESERVE`, but fails when `MEM_COMMIT` is also specified. – rwallace Aug 18 '21 at 14:39
  • Most likely the `new` in your code ends up committing a chunk of memory (because it is expected to be usable right away). `VirtualAlloc` with `MEM_RESERVE` doesn't really allocate anything, so it can go as far as the full address space, until you commit some of it. The difference lies in the OS calls made manually versus what your C++ runtime does internally. – Alejandro Aug 18 '21 at 14:52
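
To make the reserve-versus-commit distinction from the comments concrete, here is a minimal sketch using `VirtualAlloc` directly (the one-terabyte size is just for illustration): reserving address space alone succeeds even for enormous sizes, while committing the same amount fails once it exceeds the commit limit (RAM plus pagefile).

#include <windows.h>
#include <stdio.h>

int main() {
  SIZE_T size = 1ULL << 40; // one terabyte

  // Reserve address space only: no backing store is needed, so this succeeds.
  void* reserved = VirtualAlloc(nullptr, size, MEM_RESERVE, PAGE_READWRITE);
  printf("MEM_RESERVE:            %p\n", reserved);

  // Reserve and commit: the memory must be backed by RAM + pagefile,
  // so this fails once the requested size exceeds the commit limit.
  void* committed = VirtualAlloc(nullptr, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
  printf("MEM_RESERVE|MEM_COMMIT: %p (error %lu)\n", committed, GetLastError());
}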
