I am trying to come up with a good tolerance when comparing doubles in unit tests.
If I use a fixed absolute tolerance, as I've seen suggested on this site (e.g. return abs(actual - expected) < 0.00001;), the comparison frequently fails when the numbers are very large, due to the nature of floating-point representation.
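For concreteness, this is roughly what that check looks like in my tests (the function name is just for illustration):

    #include <cmath>

    // Fixed absolute tolerance: fine for values near 1.0, but for large values
    // the gap between adjacent doubles is already bigger than the tolerance.
    bool nearlyEqualAbs(double actual, double expected)
    {
        return std::fabs(actual - expected) < 0.00001;
    }

    // Failure example: at 1e12 adjacent doubles are about 0.000122 apart, so even
    // a result that is off by a single representable value fails the check:
    //   nearlyEqualAbs(1e12, std::nextafter(1e12, 2e12))   // false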
If I use a relative tolerance in terms of the % error allowed (e.g. return abs(actual - expected) < abs(actual * 0.001);), it fails too often for small numbers (and for very small numbers, the computation itself can introduce enough rounding error to exceed the tolerance). It also allows too much slack in certain ranges: comparing 2000 and 2001 would pass, for example.
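Again for concreteness, here is the relative version and the two ways it bites me (the name is made up):

    #include <cmath>

    // Relative tolerance: scales with the magnitude of `actual`,
    // which misbehaves at both ends of the range.
    bool nearlyEqualRel(double actual, double expected)
    {
        return std::fabs(actual - expected) < std::fabs(actual * 0.001);
    }

    // Too strict near zero: with actual == 0.0 the allowed error is 0.0, so
    //   nearlyEqualRel(0.0, 1e-15)       // false, even though both are essentially zero
    // Too loose for mid-sized values: the allowed error at 2000 is 2.0, so
    //   nearlyEqualRel(2000.0, 2001.0)   // true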
I'm wondering if there's a standard algorithm for choosing a tolerance that works for both small and large numbers. Should I try some kind of base-2 logarithmic tolerance to mirror how floating-point values are stored? Should I use a hybrid approach based on the size of the inputs?
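By a base-2 logarithmic tolerance I mean something along the lines of counting how many representable doubles (ULPs) lie between the two values; a rough sketch of the idea, with names I made up and assuming 64-bit IEEE doubles:

    #include <cmath>
    #include <cstdint>
    #include <cstring>
    #include <limits>

    // Sketch of a ULP-based comparison: reinterpret the bit patterns as integers
    // ordered the same way as the doubles, then ask how many representable
    // values separate them.
    bool nearlyEqualUlps(double a, double b, std::uint64_t maxUlps)
    {
        if (!std::isfinite(a) || !std::isfinite(b))
            return false;   // treat NaN/inf as never nearly-equal

        std::int64_t ia, ib;
        std::memcpy(&ia, &a, sizeof a);
        std::memcpy(&ib, &b, sizeof b);

        // Remap negative values so integer order matches double order
        // (this also makes +0.0 and -0.0 compare as identical).
        if (ia < 0) ia = std::numeric_limits<std::int64_t>::min() - ia;
        if (ib < 0) ib = std::numeric_limits<std::int64_t>::min() - ib;

        // Distance in units of least precision.
        std::uint64_t diff = (ia >= ib)
            ? static_cast<std::uint64_t>(ia) - static_cast<std::uint64_t>(ib)
            : static_cast<std::uint64_t>(ib) - static_cast<std::uint64_t>(ia);
        return diff <= maxUlps;
    }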
Since this is in unit test code, performance is not a big factor.
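Since performance doesn't matter, I'd even be happy with a naive hybrid check along these lines, if it's sound (the tolerance values are placeholders, not numbers I've settled on):

    #include <algorithm>
    #include <cmath>

    // Naive hybrid sketch: pass if the difference is within an absolute
    // tolerance (covers values near zero) OR within a relative tolerance
    // scaled by the larger magnitude (covers large values).
    // Both default tolerances are placeholders.
    bool nearlyEqualHybrid(double actual, double expected,
                           double absTol = 1e-12, double relTol = 1e-9)
    {
        double diff = std::fabs(actual - expected);
        if (diff <= absTol)
            return true;
        double scale = std::max(std::fabs(actual), std::fabs(expected));
        return diff <= relTol * scale;
    }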