I want to calculate Rosenbrock's test function, and I have implemented the following C/C++ code:
#include <stdio.h>
#include <stdlib.h>   /* for malloc()/free() */

/********/
/* MAIN */
/********/
int main()
{
    const int N = 900000;

    float *x = (float *)malloc(N * sizeof(float));
    for (int i = 0; i < N; i++) x[i] = 3.f;

    /* Naive left-to-right accumulation of the Rosenbrock terms in float */
    float sum_host = 0.f;
    for (int i = 0; i < N - 1; i++) {
        float temp = (100.f * (x[i+1] - x[i] * x[i]) * (x[i+1] - x[i] * x[i]) + (x[i] - 1.f) * (x[i] - 1.f));
        sum_host = sum_host + temp;
        printf("%i %f %f\n", i, temp, sum_host);
    }
    printf("Result for Rosenbrock's test function calculation = %f\n", sum_host);

    free(x);
    return 0;
}
Since the x array is initialized to 3.f, each summation term should be 3604.f, so the final sum of 899999 terms should be 3243596396. However, the result I get is 3229239296, an absolute error of 14357100. If I look at the difference between two consecutive partial sums, it is 3600.f for the early partial sums and then drops to 3584.f for the last ones, while it should always be 3604.f.
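To see where the drop from 3604.f to 3600.f and then 3584.f comes from, I put together a small check (just a sketch; nextafterf from <math.h> measures the gap between consecutive representable floats, and the three magnitudes are chosen by me to roughly match my partial sums):

#include <stdio.h>
#include <math.h>

int main()
{
    /* The spacing (ulp) between consecutive floats grows with magnitude;
       once the partial sum is large enough, 3604.f can no longer be added
       exactly and is rounded to a multiple of the spacing. */
    float sums[] = { 1.e6f, 1.e8f, 3.2e9f };
    for (int i = 0; i < 3; i++) {
        float s   = sums[i];
        float ulp = nextafterf(s, 2.f * s) - s;  /* gap to the next float */
        printf("sum = %.0f  spacing = %g  3604 is added as %.0f\n",
               s, ulp, (s + 3604.f) - s);
    }
    return 0;
}

This prints 3604, 3600 and 3584 for the three magnitudes, matching the increments I observe in the partial sums.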
If I use the Kahan summation algorithm instead, as

sum_host = 0.f;
float c = 0.f;                 /* running compensation for lost low-order bits */
for (int i = 0; i < N - 1; i++) {
    float temp = (100.f * (x[i+1] - x[i] * x[i]) * (x[i+1] - x[i] * x[i]) + (x[i] - 1.f) * (x[i] - 1.f)) - c;
    float t = sum_host + temp; /* low-order bits of temp are lost here... */
    c = (t - sum_host) - temp; /* ...and recovered here, negated */
    sum_host = t;
}
the result I get is 3243596288, with a much smaller absolute error of 108. (As far as I can tell, 3243596288 is in fact the closest float to the exact result, since consecutive floats are 256 apart at that magnitude.)
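For comparison, here is a quick double-precision sketch of the same loop (every term is exactly 3604 and every partial sum fits exactly in double's 53-bit significand, so the accumulation incurs no rounding at all):

#include <stdio.h>
#include <stdlib.h>

int main()
{
    const int N = 900000;
    float *x = (float *)malloc(N * sizeof(float));
    for (int i = 0; i < N; i++) x[i] = 3.f;

    /* Accumulate in double: all partial sums stay far below 2^53,
       so every addition is exact. */
    double sum_double = 0.;
    for (int i = 0; i < N - 1; i++) {
        double xi = x[i], xn = x[i + 1];
        sum_double += 100. * (xn - xi * xi) * (xn - xi * xi) + (xi - 1.) * (xi - 1.);
    }
    printf("double accumulator = %.0f\n", sum_double);  /* prints 3243596396 */

    free(x);
    return 0;
}

This reproduces the expected 3243596396 exactly.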
I'm pretty sure this effect should be ascribed to the finite precision of floating-point arithmetic. Could someone confirm this and explain the mechanism by which it occurs?