
The following code causes a segmentation fault (core dumped) when the loop runs 1000000000 times, but reducing the iteration count to 100000 makes it run fine.

So is anything going wrong in the CPU, the hardware, or anywhere else? Is it caused by a watchdog timer?

Can anybody help explain this? What happens when the CPU runs huge loops (finite loops with a very large iteration count)? How does the CPU tell that a computation is infinite? Many thanks.

#include <stdio.h>

int main () {
    int a[1000000000];
    int i = 0;

    for (i = 0; i < 1000000000; i++) {
        if (i % 4 == 0) {
            a[i] = i;
        } else {
            a[i] = 321;
        }
    }

    printf("run over");

    return 0;
}
    Are you sure the issue is the number of iterations and not the size of the array you are allocating? – SJuan76 May 06 '22 at 07:45

3 Answers


What you are seeing is a stack overflow: 1000000000 * sizeof(int) bytes of stack memory would be needed to store this array, far more than the default stack size. In short, the problem comes from the size of the array, not from the number of iterations.

You can either make the array static (so it is placed in static storage rather than on the stack) or allocate the memory dynamically; both options are sketched below.
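A minimal sketch of both options (the variable names are my own, and this assumes a 64-bit system with enough memory for the roughly 4 GB involved):

#include <stdlib.h>

int main(void) {
    /* Option 1: static storage duration -- lives in static storage, not on the stack */
    static int a[1000000000];

    /* Option 2: heap allocation -- always check the result for a request this large */
    int *b = malloc(1000000000 * sizeof *b);
    if (b == NULL)
        return 1;

    a[0] = 1;  /* both arrays are now safe to index */
    b[0] = 1;

    free(b);
    return 0;
}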

– user2736738

Are you perhaps running out of memory? Your 1-billion-int array weighs about 4 GB with 32-bit ints.

– gchapuis

This happens because the stack has a size limit. Your array occupies a total of 1000000000 * sizeof(int) bytes, which is about 3.73 GiB with 4-byte ints, far beyond the typical default stack size of a few megabytes.

You need to store the array dynamically on the heap instead, like this:

int *array = malloc(1000000000 * sizeof(int));
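
Putting it together, a complete fixed version of the program might look like this (a sketch that adds the error check and the free that the one-liner above leaves out):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int *a = malloc(1000000000 * sizeof *a);
    if (a == NULL) {            /* a ~4 GB request can legitimately fail */
        perror("malloc");
        return 1;
    }

    for (int i = 0; i < 1000000000; i++) {
        if (i % 4 == 0)
            a[i] = i;
        else
            a[i] = 321;
    }

    printf("run over\n");
    free(a);
    return 0;
}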

Or, better, break the array into several smaller chunks, process one chunk at a time, and write the results to disk as you go (see the sketch below).
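
One way the chunked approach could look (a sketch under my own assumptions; the chunk size and the output file name results.bin are made up for illustration):

#include <stdio.h>
#include <stdlib.h>

#define TOTAL 1000000000L
#define CHUNK 1000000L  /* one million ints, about 4 MB per pass */

int main(void) {
    int *buf = malloc(CHUNK * sizeof *buf);
    FILE *out = fopen("results.bin", "wb");
    if (buf == NULL || out == NULL) {
        perror("setup");
        return 1;
    }

    for (long base = 0; base < TOTAL; base += CHUNK) {
        for (long j = 0; j < CHUNK; j++) {
            long i = base + j;
            buf[j] = (i % 4 == 0) ? (int)i : 321;
        }
        /* persist this chunk before reusing the buffer */
        fwrite(buf, sizeof *buf, (size_t)CHUNK, out);
    }

    fclose(out);
    free(buf);
    return 0;
}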

Also, you can see the maximum stack size on Linux by using ulimit:

ulimit -s # stack size
ulimit -a # full details
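
You can also query the limit from inside the program with POSIX getrlimit (a sketch for Linux/POSIX systems):

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) == 0) {
        /* rlim_cur is the soft limit the process actually runs under */
        printf("stack soft limit: %llu bytes\n",
               (unsigned long long)rl.rlim_cur);
    }
    return 0;
}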
– Darth-CodeX