
I'm trying to write a C program to test how much memory is on my system. I'm planning to run it under various different conditions:

  1. With swap enabled
  2. With swap disabled and overcommit (/proc/sys/vm/overcommit_memory) set to false
  3. With swap disabled and overcommit (/proc/sys/vm/overcommit_memory) set to true
  4. Inside a virtual machine running on the system

I am doing this to learn more about how memory allocation behaves at the limits of the system's real and virtual memory.

I'm running this on a machine with 4 GB RAM and 8 GB swap.

What I have currently is something like this:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *ptr;
    int mb = 1048576;
    long long int i;

    for (i = 0; i < 1000000000000; i++)
    {
        printf("Trying to allocate %lld MBytes\n", i * 10 * sizeof(int) );

        ptr = (int*) calloc(10 * mb, sizeof(int));
        if ( ptr == 0 ) {
            // clean up
            free(ptr);
            printf("Ran out of memory\n");
            return 1;
        }
    }
}

I was hoping that this would continue to allocate blocks of 40 MB (sizeof(int) is 4 on my system), with calloc initializing the memory to zero. When no more memory was available, the program would terminate and free up the memory.

When I run it, it continues to run beyond the limits of my memory. It finally died while printing the line: "Trying to allocate 5707960 MBytes." (Indicating almost 6000 GB of memory.)

Can anybody figure out where I'm going wrong?

Thanks @Blank Xavier for pointing out that the page file size should be considered when allocating this way.

I modified the code as follows:

int main(void)
{
    int *ptr;
    int mb = 1048576;
    int pg_sz = 4096;

    long long int i;

    for (i = 0; i < 1000000000000; i++)
    {
        printf("Trying to allocate %lld MBytes\n", i * pg_sz * sizeof(int) / mb );

        ptr = (int*) calloc(pg_sz, sizeof(int));
        if ( ptr == 0 ) {
            // clean up
            free(ptr);
            printf("Ran out of memory\n");
            return 1;
        }
    }
}

And now it bombs out printing:

"Trying to allocate 11800 MBytes"

which is what I expect with 4 GB RAM and 8 GB swap. By the way, it prints much more slowly after 4 GB since it is swapping to disk.

Steve Walsh
  • So, to me this looks like it works just as written. You should account for the memory overcommit; you would be dead a lot sooner if you were to try and use any of the memory you have been given. – r_ahlskog Aug 23 '12 at 09:56
  • @r_ahlskog Doesn't calloc 'use' the memory by setting it to zero? – Steve Walsh Aug 23 '12 at 10:00
  • @ZincX new blocks of virtual memory are always initialised to zero. Calloc can take advantage of this to avoid expensively touching every single byte in the block. – JeremyP Aug 23 '12 at 11:03
  • @ZincX, ah yes, well from a quick googling I cannot say. But I see you improved on the situation anyways. – r_ahlskog Aug 23 '12 at 11:05

4 Answers


First off, some advice:

Please don't cast the return value from the allocation functions in C - that can mask certain errors that will almost certainly bite you at some point (such as not having the prototype defined in a system where pointers and integers are different widths).

The allocation functions return void *, which is perfectly capable of being implicitly converted to any other object pointer type.

In terms of your actual question, I'd be starting with a massive allocation at the front, ensuring that it fails, at least on a system without certain optimisations (a).

In that case, you then gradually step down until the first one succeeds. That way, you don't have to worry about housekeeping information taking up too much space in the memory arena or the potential for memory fragmentation: you simply find the largest single allocation that succeeds immediately.

In other words, something like the following pseudo-code:

allocsz = 1024 * 1024 * 1024
ptr = allocate_and_zero (allocsz)
if ptr != NULL:
    print "Need to up initial allocsz value from " + allocsz + "."
    exit
while ptr == NULL:
    allocsz = allocsz - 1024
    ptr = allocate_and_zero (allocsz)
print "Managed to allocate " + allocsz + " bytes."

(a): As to why calloc on your system may seem to return more memory than you have in swap, GNU libc will, above a certain threshold size, use mmap rather than the heap, hence your allocation is not restricted to the swap file size (its backing storage is elsewhere). From the malloc documentation under Linux:

Normally, malloc() allocates memory from the heap, and adjusts the size of the heap as required, using sbrk(2). When allocating blocks of memory larger than MMAP_THRESHOLD bytes, the glibc malloc() implementation allocates the memory as a private anonymous mapping using mmap(2).

This is a problem that won't be fixed by my solution above, since its initial massive allocation will also be above the threshold.

In terms of physical memory, since you're already using procfs, you should probably just have a look inside /proc/meminfo. MemTotal should give you the physical memory available (which may not be the full 4G since some of that address space is stolen for other purposes).

For virtual memory allocations, keep the allocation size below the threshold so that they come out of the heap rather than a mmap area. Another snippet from the Linux malloc documentation:

MMAP_THRESHOLD is 128 kB by default, but is adjustable using mallopt(3).

So your choices are to use mallopt to increase the threshold, or drop the 10M allocation size a bit (say 64K).

paxdiablo
  • Have stated the problem now. Sorry about that. – Steve Walsh Aug 23 '12 at 09:38
  • @Zinc, that's _pseudo-code,_ I didn't specifically mean `malloc` - I'll change it. – paxdiablo Aug 23 '12 at 09:45
  • This doesn't work. Massive allocation will only determine the size of the largest contiguous virtual memory block, not the amount of physical memory. – Aug 23 '12 at 09:53
  • @Blank, malloc can't give you the physical memory anyway (unless you have no swap). All it can give you is the largest virtual memory chunk you can get. For Linux, the physical memory size can best be calculated out of `/proc/meminfo`. In a newly started process, the max amount of memory you can allocate is the first, largest one. Any piecemeal allocations before that can lead to housekeeping and fragmentation. – paxdiablo Aug 23 '12 at 09:58
  • @Paxdiablo: if you loop allocating physical page sized blocks, you'll have a one-to-one mapping to physical memory, which will let you have some insight into available physical memory. –  Aug 23 '12 at 10:44
  • @Blank, no, not unless we have a different definition of physical memory. If your physical memory is 1G, you can _still_ allocate more than that in your address space, it'll just get swapped out. That's why it's virtual. – paxdiablo Aug 23 '12 at 12:16
  • @Paxdiablo: yes - "some insight" was all I claimed :-) you at least get away from only finding out the largest contiguous virtual memory block, which is an improvement. – Aug 23 '12 at 16:20

You can't determine physical memory by a huge allocation and then reducing the allocation size until it succeeds.

This will only determine the size of the largest available virtual memory block.

You need to loop, allocating physical-page-sized (which will also be virtual-page-sized) blocks until the allocation fails. With such a simple program, you don't need to worry about keeping track of your allocations - the OS will return them when the programme exits.

  • You are right! I changed the code to use page sized blocks and it bombs out at 12GB which is my RAM + VM. Thanks! – Steve Walsh Aug 23 '12 at 10:04
  • Actually, this is not right. Allocation can continue well beyond your physical memory limits. A machine with 1G of physical RAM can allocate way more than that by virtue of the fact it will swap pages in and out as needed. In fact, it will quite happily give 1G of virtual memory to twenty different processes, provided there's enough backing storage. Physical memory is best obtained from the OS (such as with /proc/meminfo). – paxdiablo Aug 23 '12 at 12:20
  • @Paxdiablo: I may be wrong, but I think that's only going to be true if lazy allocation is enabled (and it normally is). If the OS actually *does* consume a page for every page allocated, then it *will* have to come from RAM or the page file, because it has to be stored somewhere, so you *will* determine the size of the two combined. You can emulate this by writing to the freshly allocated page, which the OPs application does, because it uses calloc, which would be why he sees that result. –  Aug 23 '12 at 16:18

Well, the program sort of does what you intend, but strictly the huge number needs an LL suffix; an older (pre-C99) compiler may otherwise truncate it by treating the number as an int.

However, when you free the pointer, which is a null pointer, it won't actually do anything. It won't free all the memory you've allocated, as you have no pointers to it.

Moreover, if you fall out of the main loop, what gets returned from main is undefined.

Tom Tanner

You must free your allocated memory in the success case in your loop:

for (i = 0; i < 1000000000000; i++)
{
    printf("Trying to allocate %lld MBytes\n", i * 10 * sizeof(int) );

    ptr = (int*) calloc(10 * mb, sizeof(int));
    if ( ptr == 0 ) {
        printf("Ran out of memory\n");
        return 1;
    }
    else 
        free(ptr);
}
Stephane Rouberol
  • If I free the memory, it will continue to allocate the same 40 MB of memory each time. I want to accumulate memory until it dies. – Steve Walsh Aug 23 '12 at 09:38