I've noticed an unusually long computation time when performing arithmetic operations on very small floating-point numbers. The following simple code exhibits the behavior:
#include <time.h>
#include <stdlib.h>
#include <stdio.h>

const int MAX_ITER = 100000000;

int main(int argc, char *argv[]) {
    double x = 1.0, y;
    int i;
    clock_t t1, t2;

    scanf("%lf", &y);          /* read the multiplier from stdin */

    t1 = clock();
    for (i = 0; i < MAX_ITER; i++)
        x *= y;                /* repeatedly shrink x towards zero */
    t2 = clock();

    printf("x = %lf\n", x);
    printf("Time: %.5lfsecs\n", ((double) (t2 - t1)) / CLOCKS_PER_SEC);
    return 0;
}
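For reference, the program can be built and run like this (gcc assumed, no particular optimization flags; test.c is just a placeholder file name); y is supplied on stdin because of the scanf call:

gcc test.c -o test
echo 0.5 | ./test
echo 0.9 | ./test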
Here are two different runs of the program:
With y = 0.5:
x = 0.000000
Time: 1.32000secs

With y = 0.9:
x = 0.000000
Time: 19.99000secs
I'm using a laptop with the following specs to test the code:
- CPU: Intel® Core™2 Duo CPU T5800 @ 2.00GHz × 2
- RAM: 4 GB
- OS: Ubuntu 12.04 (64-bit)
- Model: Dell Studio 1535
Could someone explain in detail why this happens? I know that with y = 0.9, x approaches 0 more slowly than with y = 0.5, so I suspect the problem is directly related to that.
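In exact real arithmetic, 0.5^n falls below the smallest representable double after roughly a thousand iterations and 0.9^n after roughly seven thousand, so I would naively expect x to become zero very early in both runs and the remaining tens of millions of iterations to cost the same. To see whether x really is exactly zero (rather than just being printed as 0.000000 by %lf), a small standalone check along these lines could help (a sketch only; it assumes C99's fpclassify from <math.h>, and the test value and 20000-iteration cutoff are arbitrary):

#include <math.h>
#include <stdio.h>

int main(void) {
    double x = 1.0, y = 0.9;      /* change to 0.5 to compare the two cases */
    int i;
    for (i = 0; i < 20000; i++)   /* enough iterations for x to underflow */
        x *= y;
    /* %e shows the actual magnitude that %lf rounds to 0.000000 */
    printf("x = %e\n", x);
    printf("x is subnormal: %s\n",
           fpclassify(x) == FP_SUBNORMAL ? "yes" : "no");
    printf("x is exactly zero: %s\n", x == 0.0 ? "yes" : "no");
    return 0;
}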