So, I have this piece of code:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *p;
    long n = 1;
    while (1) {
        p = malloc(n * sizeof(char));
        //p = calloc(n, sizeof(char));
        if (p) {
            printf("[%ld] Memory allocation successful! Address: %p\n", n, p);
            n++;
        } else {
            printf("No more memory! Sorry...\n");
            break;
        }
    }
    free(p);
    getchar(); /* getch() needs the non-standard <conio.h>; getchar() is portable */
    return 0;
}
And I run it on Windows. The interesting thing:
if we use malloc, the program allocates about 430 MB of memory and then stops (photo here => https://i.stack.imgur.com/ZYg75.png)
if we use calloc, the program allocates about 2 GB of memory and then stops (photo here => https://i.stack.imgur.com/vqGQX.png)
(strange test): if we use both of them at the same time, it peaks at roughly (~400 MB + ~2 GB) / 2 => ~1.2 GB
However, if I run the same code on Linux, the allocation goes on and on (after 600k allocations and many GB it still continues, until the process is eventually killed), and both variants use approximately the same amount of memory.
So my question is: shouldn't they allocate the same amount of memory? I thought the only difference was that calloc initializes the memory to zero (malloc returns uninitialized memory). And why does it only happen on Windows? It's strange and interesting at the same time.
Hope you can help me with an explanation for this. Thanks!
Edit:
Code::Blocks 13.12 with GNU GCC Compiler
Windows 10 (x64)
Linux Mint 17.2 "Rafaela" - Cinnamon (64-bit) (for Linux testing)