I searched for the answer to this question a lot, and I found that, in my case, it's the opposite of the usual advice.
I am trying to sum single-precision floats in ascending and descending order of magnitude to figure out which order gives the smallest error.
Intuitively I would say ascending order, since by adding the small numbers first you can build up a partial sum large enough to be comparable with the bigger ones:
in single precision, 10^-8 added 10^7 times gives 0.1, which summed with 1 gives 1.1.
The other way around, adding 10^-8 to 1 a total of 10^7 times still gives 1, since in single precision 10^-8 is below half an ulp of 1, so every addition rounds straight back to 1.
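A minimal sketch of that intuition (the variable names here are just illustrative):

```python
import numpy as np

one = np.single(1.0)
tiny = np.single(1e-8)

# 1e-8 is below half an ulp of 1.0 in single precision
# (ulp(1.0) = 2**-23 ≈ 1.19e-7), so the addition rounds back to 1.0:
print(one + tiny == one)  # True

# But if the tiny values are accumulated first, their partial sum
# grows large enough for 1.0 to "see" it:
acc = np.single(0.0)
for _ in range(100):
    acc += tiny          # acc ends up near 1e-6
print(one + acc > one)   # True
```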
However, doing this for a large array of numbers gives me the opposite result:
import numpy as np

def sum_increasing_magnitude(array):
    increasing_array = np.sort(array)
    increasing_sum = np.single(0)
    for i in increasing_array:
        increasing_sum += np.single(i)
    return increasing_sum

def sum_decreasing_magnitude(array):
    decreasing_array = np.sort(array)[::-1]
    decreasing_sum = np.single(0)
    for i in decreasing_array:
        decreasing_sum += np.single(i)
    return decreasing_sum
values = np.logspace(1, 7, 7)
decreasing = []
increasing = []
double = []
for val in values:
    array = np.single(np.random.random(int(val)))
    decreasing.append(sum_decreasing_magnitude(array))
    increasing.append(sum_increasing_magnitude(array))
    double.append(sum(np.double(array)))  # double-precision reference sum

print([abs(decreasing[i] - double[i]) for i in range(len(double))])
print([abs(increasing[i] - double[i]) for i in range(len(double))])
Descending-order errors:
[3.5390257835388184e-07, 4.4330954551696777e-07, 0.00010590511374175549, 0.006269174667977495, 0.12608919621561654, 12.35097292996943, 406.54364286363125]
Ascending-order errors:
[3.5390257835388184e-07, 3.3713877201080322e-06, 7.720035500824451e-05, 0.006757455917977495, 0.02625455378438346, 1.88222292996943, 58020.04364286363]
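As an aside, if a baseline that is itself independent of summation order is wanted, math.fsum computes a correctly rounded sum of its inputs; this is a sketch of how it could replace the plain double-precision running sum used above (not part of my original code):

```python
import math
import numpy as np

# math.fsum tracks exact partial sums internally, so the result does
# not depend on the order of the inputs:
arr = np.double(np.single(np.random.random(1000)))
print(math.fsum(np.sort(arr)) == math.fsum(arr))  # True: order-independent
```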
For 10^7 random numbers between 0 and 1, the ascending sum gave me a far larger error than the descending sum (about 58020 versus 407).
Now the question is: why is this happening in this case?