In my program I keep a running total, declared as a global float (before main, so it's visible everywhere), and on every iteration I add and subtract floats from it.
These floats are always numbers between 0 and 10, to one decimal place. However, the total occasionally deviates from that one-decimal-place accuracy by 0.01 (e.g. I add 2.4 to 15.9 and get 18.31 instead of 18.3). It happens very infrequently, but I'm dealing with billions of iterations.
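For illustration, here is a tiny standalone example of the same mechanism. It is deliberately exaggerated by letting the sum keep growing, and the value and loop count are made up rather than taken from my real code:

```c
#include <stdio.h>

/* Running total declared before main, as in my real program. */
float total = 0.0f;

int main(void)
{
    /* 0.1 looks exact in decimal, but it has no exact binary float
       representation, so each addition can round the result slightly.
       Over enough iterations the accumulated rounding becomes visible. */
    for (int i = 0; i < 1000000; ++i)
        total += 0.1f;

    /* Mathematically this is 100000.0, but the printed value is visibly off. */
    printf("total = %.1f\n", total);
    return 0;
}
```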
This minor deviation can lead to the program crashing, so is there any way to alleviate it?
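Since every value involved is an exact multiple of 0.1, one thing I'm wondering about is keeping the running total as an integer count of tenths and only converting to a floating-point value for output. The sketch below (names and numbers are purely illustrative, not my real code) shows what I mean; is that the usual fix, or is there a better approach?

```c
#include <stdio.h>

/* Sketch only: keep the running total as a whole number of tenths.
   Integer addition and subtraction are exact, so no rounding error
   can accumulate, no matter how many iterations run. */
long total_tenths = 0;

/* Add a value expressed in tenths, e.g. add_tenths(24) for 2.4. */
void add_tenths(long tenths)
{
    total_tenths += tenths;
}

int main(void)
{
    add_tenths(159);   /* +15.9 */
    add_tenths(24);    /* + 2.4 */
    add_tenths(-103);  /* -10.3 */

    /* Convert to a floating-point value only for display. */
    printf("total = %.1f\n", total_tenths / 10.0);   /* prints 8.0 */
    return 0;
}
```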