On a system with virtual memory, it should be possible to allocate lots of address space, more than you have physical RAM, and then only write to as much of it as you need.
On a 32-bit system of course there is only four gigabytes of virtual address space, but that limit disappears on a 64-bit system.
Granted, Windows doesn't use the full 64-bit address space; apparently it uses 44 bits. That is still sixteen terabytes, so there should be no problem allocating, say, one terabyte: Behind Windows x64's 44-bit virtual memory address limit
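(As a side check on that claim, the user-mode address range can be queried with GetSystemInfo; this is just a minimal sketch, separate from the test program below:)

#include <windows.h>
#include <stdio.h>

int main() {
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    // Lowest and highest addresses accessible to user-mode code.
    printf("min application address: %p\n", si.lpMinimumApplicationAddress);
    printf("max application address: %p\n", si.lpMaximumApplicationAddress);
}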
So I wrote a program to test this, attempting to allocate a terabyte of address space in chunks of ten gigabytes each:
#include <new>
#include <stdio.h>
#include <stdlib.h>

int main() {
    // Stop with an error message as soon as an allocation fails.
    std::set_new_handler([]() {
        perror("new");
        exit(1);
    });
    // 100 chunks of 10 gigabytes each = 1 terabyte of address space.
    for (int i = 0; i < 100; i++) {
        auto p = new char[10ULL << 30];
        printf("%p\n", p);
    }
}
Run on Windows x64 with 32 gigabytes of RAM, it gives this result (specifics differ between runs, but always qualitatively similar):
0000013C881C1040
0000013F081D0040
00000141881E2040
00000144081F1040
0000014688200040
0000014908219040
0000014B88226040
0000014E08232040
0000015088246040
0000015308252040
0000015588260040
new: Not enough space
So it only allocates 110 gigabytes, eleven of the hundred chunks, before failing. That is larger than physical RAM, but much smaller than the address space that should be available.
The program is definitely not actually writing to the allocated memory (that would require physical memory to be allocated); when I tried doing that explicitly with memset immediately after each allocation, the program ran much slower, as expected.
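For reference, the memset variant was essentially the same program with one extra line per allocation; roughly this (a reconstructed sketch, not the exact code I ran):

#include <new>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main() {
    std::set_new_handler([]() {
        perror("new");
        exit(1);
    });
    for (int i = 0; i < 100; i++) {
        auto p = new char[10ULL << 30];
        // Touching every byte forces physical memory to actually be used,
        // which is why this version runs much slower.
        memset(p, 0, 10ULL << 30);
        printf("%p\n", p);
    }
}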
So where is the limit on allocated virtual memory coming from?