
Suppose I'm writing a program for an environment that has a 32-bit virtual address space (4294967296 addresses). What happens if I create more than 4294967296 variables, effectively exceeding the number of possible addresses? Or if all programs in the environment collectively use more than 4294967296 addresses?

Max Koretskyi
  • "Understanding the Linux Kernel bt Bovet and Cesati" says that it is possible to run applications whose memory needs are larger than the available physical memory. It is related to the virtual memory abstraction – avatli Apr 07 '17 at 05:59
  • @Ali Volkan ATLI: The question is not about physical memory. The question is clearly about running out of *virtual address space*, which has nothing to do with physical memory at all. – AnT stands with Russia Apr 07 '17 at 06:02

5 Answers


It depends on precisely how you try to do it. It may crash, it may return an error, or it may throw an exception.
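
As a hedged illustration (the sizes are chosen only to guarantee failure), the two library-level outcomes look like this in C++; the crash case, e.g. a stack overflow, is undefined behavior and not shown:

#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <new>

int main() {
    // malloc reports failure by returning a null pointer; a request for
    // SIZE_MAX bytes cannot possibly fit in any address space.
    void *p = std::malloc(SIZE_MAX);
    if (p == NULL)
        std::puts("malloc returned NULL");

    // new reports failure by throwing std::bad_alloc instead.
    try {
        int *q = new int[SIZE_MAX / sizeof(int)];
        delete[] q;
    } catch (const std::bad_alloc &) {
        std::puts("new threw std::bad_alloc");
    }
    return 0;
}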

David Schwartz

what happens if I create more than 4294967296 variables

I guess that you are very confused. You don't create that many variables. In practice, you use C dynamic memory allocation (malloc & free and friends) or C++ dynamic memory management (e.g. new and delete, built above low-level facilities like ::operator new, which in many implementations uses malloc).

Notice that malloc (in C) and new (in C++) don't create fresh variables. They allocate fresh memory zones whose address can go into a single pointer variable, as in int *ptr = malloc(100000*sizeof(int)); in C, or int *ptr = new int[100000]; in C++.
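
A minimal sketch of that distinction, assuming nothing beyond the standard library: one pointer variable, but a zone of 100000 int-sized locations behind it:

#include <cstdio>
#include <cstdlib>

int main() {
    // One variable (ptr) naming a freshly allocated zone of 100000 ints.
    int *ptr = (int *)std::malloc(100000 * sizeof(int));
    if (ptr == NULL)
        return 1;                  // allocation can fail, so always check
    ptr[0] = 42;                   // every location is reached through ptr
    ptr[99999] = 7;
    std::printf("the zone starts at %p\n", (void *)ptr);
    std::free(ptr);
    return 0;
}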

A variable (in C or C++) is a source-code notion which has a name (e.g. ptr or x000002 or array in this answer) and a scope. During execution, only locations matter (variables no longer exist). Read about memory addresses (which is what locations are, in practice).

So to have many variables, you'll need to have for example a huge source file with:

int x000001;
int x000002;

and so on. You probably can generate (with some other program) such a huge C or C++ source file, e.g. up to:

////etc
int x999998;
int x999999;

But even if you generated a four-billion-line C source file, you wouldn't have the patience to compile it. And if you did, the compilation would surely fail (at least at link time, which I view as part of the overall compilation of your program).
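
Such a generator might look like this hedged sketch (the output file name huge.c and the count are arbitrary):

#include <cstdio>

int main() {
    // Emit a C file declaring 999999 distinct global int variables,
    // x000001 through x999999, matching the pattern above.
    std::FILE *out = std::fopen("huge.c", "w");
    if (out == NULL)
        return 1;
    for (int i = 1; i <= 999999; ++i)
        std::fprintf(out, "int x%06d;\n", i);
    std::fclose(out);
    return 0;
}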

Notice that an array declaration defines only one variable:

/// one single variable, but a huge one
int array[4294967296];

declares one variable, named array. Again, that won't compile & link (and if the variable is a local one inside some function, you'll get at the very least a stack overflow at runtime). Typical call stacks are limited to one or a few megabytes (this is operating system & computer dependent).
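
On POSIX systems you can query that limit yourself; a minimal sketch, assuming a Linux-like environment:

#include <cstdio>
#include <sys/resource.h>

int main() {
    struct rlimit rl;
    // RLIMIT_STACK is the stack size limit in bytes (it may also be
    // RLIM_INFINITY, meaning no explicit limit is set).
    if (getrlimit(RLIMIT_STACK, &rl) == 0)
        std::printf("stack limit: %llu bytes\n",
                    (unsigned long long)rl.rlim_cur);
    return 0;
}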

Look at the picture in virtual address space wikipage and understand what pointer aliasing means and what virtual memory is.

In practice, on a 32-bit computer, the virtual address space is often limited to e.g. 3 gigabytes for a given process (each process runs some executable and has its own virtual address space). Details are operating-system specific. On Linux, use setrlimit(2) - probably through the ulimit builtin of your bash shell - to lower that limit.

On Linux, dynamic memory allocation (malloc or new) is based upon system calls that modify the virtual address space, notably mmap(2). Such calls can fail (and then malloc fails by returning NULL, and new raises an exception), and on a 32-bit system they will fail before 3 gigabytes are in use. You probably want to disable memory overcommitment.
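
A hedged sketch of watching that failure happen from inside the process (the 1 MB chunk size is arbitrary; on a 64-bit machine this loop would run for a very long time):

#include <cstdio>
#include <cstdlib>

int main() {
    // Consume the virtual address space in 1 MB chunks until malloc
    // reports failure; the chunks are deliberately never freed.
    const std::size_t chunk = 1024 * 1024;
    std::size_t total = 0;
    while (std::malloc(chunk) != NULL)
        total += chunk;
    std::printf("allocation failed after %zu MB\n", total / chunk);
    return 0;
}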

If you happen to use a Linux system, read about proc(5) and try

cat /proc/self/maps
cat /proc/$$/maps

then understand their output ($$ expands to the PID of your shell, so the second command shows your shell's address space). You probably should also read Advanced Linux Programming.

I recommend taking several days to read: Operating Systems : Three Easy Pieces (it is freely downloadable).

(On Windows, MacOSX, or Android, both malloc & new also use some operating system primitive to increase the virtual address space. I leave you to find out which ones.)

Basile Starynkevitch
  • just to clarify, _In practice, on a 32-bit computer, the virtual address space is often limited to e.g. 3 gigabytes for a given process_ - this is the address-space limit (the number of addresses), not a limit on the total memory a process can use, right? – Max Koretskyi Apr 07 '17 at 05:56
  • Your question is confusing. Please follow the many links I have given. You need to spend several days or weeks reading. We can't cover all of that in a few paragraphs; you need to read several books. Before that, read all the wiki pages I have referenced. Then spend weeks reading books. – Basile Starynkevitch Apr 07 '17 at 05:58
  • Okay, sure, I will. I'm just under the impression now that I have a memory-address range and then a total memory space, so for example I could have 2 bits for the memory address space but 100 MB for the memory. And I could have four pointers max because there are no more addresses, but a single pointer could reference 20 MB of memory (so the memory itself is not limited by the 2 bits). Maybe that understanding is incorrect – Max Koretskyi Apr 07 '17 at 06:02
  • It looks incorrect, and you seem confused. Sorry, you need to spend weeks reading (or follow several computer science courses). I don't have the patience, time, or space to teach all that here. I would have to write an entire book (and such huge answers don't fit on StackOverflow); and I don't have time for that. – Basile Starynkevitch Apr 07 '17 at 06:03
  • I understand, no problem, appreciate your help. Thanks for the links. Good luck – Max Koretskyi Apr 07 '17 at 06:03

If your specific process attempts to exceed the size of its virtual address space, it will simply run out of memory. What happens then is exactly what normally happens when your process runs out of memory: the memory allocation function will return a null pointer or something like that. Moreover, theoretically, running out of address space is the only way to "run out of memory" in a swap-enabled virtual-memory-based OS (real life is a bit more complicated than that, but in general it is true).

As for all processes on the system... your question is misguided. The OS, even if it is a 32-bit OS, is not in any way restricted to a single 32-bit address space for all processes. For all practical purposes, the OS can maintain a virtually unrestricted number of simultaneous, independent 32-bit address spaces for different processes.
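
A hedged POSIX sketch of that independence: after fork(), parent and child typically see the same virtual address, yet each writes its own value there, because the address belongs to two separate address spaces:

#include <cstdio>
#include <cstdlib>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    int *p = (int *)std::malloc(sizeof(int));
    if (p == NULL)
        return 1;
    *p = 0;
    if (fork() == 0) {             // child: a private copy of the address space
        *p = 111;
        std::printf("child:  %p holds %d\n", (void *)p, *p);
        return 0;
    }
    wait(NULL);                    // parent: same virtual address, its own value
    *p = 222;
    std::printf("parent: %p holds %d\n", (void *)p, *p);
    std::free(p);
    return 0;
}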

AnT stands with Russia
  • _The OS, even if it is a 32-bit OS, is not in any way restricted to a single 32-bit address space for all processes._ - appreciate your answer, very helpful – Max Koretskyi Apr 07 '17 at 06:46

If you create that many variables with static storage duration, or automatic variables within one block, the compiler or the linker will likely fail to create an executable.

If you create many automatic variables in many functions and cause all of them to be active at the same time, the program will crash because of a stack overflow; a sketch of this case appears below.

If you try to allocate that many bytes from dynamic storage, allocation will fail.
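
For the second case, a hedged sketch (the frame size is arbitrary, and the crash itself is undefined behavior, so this is for illustration only):

#include <cstdio>

// Each call keeps a large automatic array alive, so unbounded recursion
// eventually exceeds the stack limit and the process crashes.
static long grow(long n) {
    char frame[1024];
    frame[0] = (char)n;            // keep the array from being optimized out
    return grow(n + 1) + frame[0];
}

int main() {
    std::printf("%ld\n", grow(0)); // never printed: the stack overflows first
    return 0;
}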

n. m. could be an AI

When the limit is reached, virtual allocations that commit memory fail. That means that even a standard 32-bit process may get virtual memory allocation failures. This link may help you: Pushing the Limits of Windows: Virtual Memory
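
A hedged sketch of that failure mode, assuming the Windows API (the 1 GB request size is arbitrary):

#include <cstdio>
#include <windows.h>

int main() {
    // Committing memory counts against the system commit limit;
    // VirtualAlloc returns NULL once the request cannot be satisfied.
    SIZE_T size = (SIZE_T)1024 * 1024 * 1024;
    void *p = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (p == NULL)
        std::printf("VirtualAlloc failed, error %lu\n",
                    (unsigned long)GetLastError());
    else
        VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}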