
Is there a general best practice strategy for dealing with floating point inaccuracy?

The project that I'm working on tried to solve these problems by wrapping everything in a Unit class which holds the floating point value and overloads the operators. Numbers are considered equal if they are "close enough"; comparisons like > or < are done by comparing against a slightly lower or higher value.

I understand the desire to encapsulate the logic of handling such floating point errors. But given that this project has had two different implementations (one based on the ratio of the numbers being compared and one based on the absolute difference), and that I've been asked to look at the code because it's not doing the right thing, the strategy seems to be a bad one.
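
For concreteness, the two flavors of fuzzy comparison described above usually look something like the following sketch. The function names and tolerance values here are illustrative placeholders, not the project's actual code:

#include <algorithm>
#include <cmath>

// Hypothetical reconstruction of the two strategies; the eps values are placeholders.
bool nearlyEqualAbsolute(double a, double b, double eps = 1e-9) {
  return std::fabs(a - b) <= eps;                                        // absolute difference
}

bool nearlyEqualRelative(double a, double b, double relTol = 1e-9) {
  return std::fabs(a - b) <= relTol * std::max(std::fabs(a), std::fabs(b));  // ratio-based
}

bool definitelyLess(double a, double b, double eps = 1e-9) {
  return a < b - eps;   // "<" done by comparing against a slightly lower value
}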

So what is the best strategy for trying to make sure you handle all of the floating point inaccuracy in a program?

Winston Ewert
  • There are entire books written on this topic -- seriously. If it's of concern to you, then you should get one. I imagine you're going to get lots of links to them. – Ernest Friedman-Hill Sep 23 '11 at 16:27
  • I don't understand how you'd do *inequality* tests that allow for small errors. `a` is less than `b`, only it's not? – Kerrek SB Sep 23 '11 at 16:28
  • @Kerrek: if you want to retain the `<`, `==`, `>` trichotomy, and if you also allow a tolerance for `==` then you have to allow the same tolerance for `<`. So if `1.999` is considered equal to `2.000`, then it shouldn't *also* be considered less than it. For that simple strategy, instead of `a < b`, you'd need `a < (b - epsilon)`. Of course, this fuzzy inequality fails to be a strict weak order, just as a fuzzy equality check is not an equivalence relation (a small illustration follows these comments). – Steve Jessop Sep 23 '11 at 16:33
  • This sort of thing isn't *solved* by implementing generic unit classes. How floating point numbers are compared is a high level concern, and pushing that logic into lower level classes just makes it worse. The algorithm using the floating point numbers can write or use generic comparators, but they shouldn't be baked in to the classes as defaults. – Tom Kerr Sep 23 '11 at 16:38
  • @TomKerr, that's certainly what I'm thinking after having to deal with the attempt to solve it that way. I was just wondering if there was a general strategy I could apply or if it just has to be dealt with on a problem by problem basis – Winston Ewert Sep 23 '11 at 16:46
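
As a quick, purely illustrative demonstration of the point Steve Jessop makes above (the values and the tolerance are made up), tolerance-based equality is not transitive:

#include <cassert>
#include <cmath>

// With an absolute tolerance of 0.001, "fuzzy equal" is not transitive,
// so it cannot be an equivalence relation.
bool fuzzyEqual(double a, double b, double eps = 1e-3) {
  return std::fabs(a - b) <= eps;
}

int main() {
  double a = 1.0000, b = 1.0009, c = 1.0018;
  assert(fuzzyEqual(a, b));    // 0.0009 apart: within tolerance
  assert(fuzzyEqual(b, c));    // 0.0009 apart: within tolerance
  assert(!fuzzyEqual(a, c));   // 0.0018 apart: no longer "equal"
}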

4 Answers


Check comparing floating point numbers, this post on deniweb, and this one on SO.

ajwood
Kashyap

You want to keep data as dumb as possible, generally. Behavior and the data are two concerns that should be kept separate.

The best way is to not have unit classes at all, in my opinion. If you have to have them, then avoid overloading operators unless the comparison has to work one way all the time. Usually it doesn't, even if you think it does. As mentioned in the comments, fuzzy comparison breaks strict weak ordering, for instance.

I believe the sane way to handle it is to create some concrete comparators that aren't tied to anything else.

struct RatioCompare {
  bool operator()(float lhs, float rhs) const;
};

struct EpsilonCompare {
  bool operator()(float lhs, float rhs) const;
};

People writing algorithms can then use these in their containers or algorithms. This allows code reuse without demanding that anyone uses a specific strategy.

std::sort(prices.begin(), prices.end(), EpsilonCompare());
std::sort(prices.begin(), prices.end(), RatioCompare());
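
If it helps, here is one possible sketch of what such comparator bodies might contain, repeating the declarations above with bodies filled in. The tolerance values and the "definitely less than" semantics are illustrative assumptions, not part of the answer:

#include <algorithm>
#include <cmath>

struct RatioCompare {
  // "Definitely less than" with a relative (ratio-based) tolerance.
  bool operator()(float lhs, float rhs) const {
    const float relTol = 1e-5f;   // illustrative tolerance
    return (rhs - lhs) > relTol * std::max(std::fabs(lhs), std::fabs(rhs));
  }
};

struct EpsilonCompare {
  // "Definitely less than" with an absolute tolerance.
  bool operator()(float lhs, float rhs) const {
    const float eps = 1e-6f;      // illustrative tolerance
    return (rhs - lhs) > eps;
  }
};

Keep in mind the caveat from the comments: any tolerance-based ordering is not a strict weak ordering in general, so using it with algorithms like std::sort needs care.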

Usually people trying to overload operators to avoid these things will offer complaints about "good defaults", etc. If the compiler tells you immediately that there isn't a default, it's easy to fix. If a customer tells you that something isn't right somewhere in your million lines of price calculations, that is a little harder to track down. This can be especially dangerous if someone changed the default behavior at some point.

Tom Kerr
  • 10,444
  • 2
  • 30
  • 46

Neither technique is good. See this article.

Google Test is a framework for writing C++ tests on a variety of platforms.

gtest.h contains the AlmostEquals function.

  // Returns true iff this number is at most kMaxUlps ULP's away from
  // rhs.  In particular, this function:
  //
  //   - returns false if either number is (or both are) NAN.
  //   - treats really large numbers as almost equal to infinity.
  //   - thinks +0.0 and -0.0 are 0 ULP's apart.
  bool AlmostEquals(const FloatingPoint& rhs) const {
    // The IEEE standard says that any comparison operation involving
    // a NAN must return false.
    if (is_nan() || rhs.is_nan()) return false;

    return DistanceBetweenSignAndMagnitudeNumbers(u_.bits_, rhs.u_.bits_)
        <= kMaxUlps;
  }

Google's implementation is good, fast, and platform-independent.

Some brief documentation is available here.
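
For reference, a self-contained version of the same ULP idea for doubles might look roughly like this. The helper names and the maxUlps default are illustrative assumptions, not gtest's API:

#include <cmath>
#include <cstdint>
#include <cstring>

// Map the sign-and-magnitude bit pattern of a double onto an unsigned scale
// that increases monotonically with the represented value.
static std::uint64_t BiasedBits(double d) {
  std::uint64_t bits;
  std::memcpy(&bits, &d, sizeof bits);            // raw IEEE-754 bits
  const std::uint64_t sign = std::uint64_t{1} << 63;
  return (bits & sign) ? ~bits + 1 : sign | bits;
}

// True if a and b are at most maxUlps representable doubles apart.
bool AlmostEqualUlps(double a, double b, std::uint64_t maxUlps = 4) {
  if (std::isnan(a) || std::isnan(b)) return false;   // IEEE: NaN compares false
  const std::uint64_t ba = BiasedBits(a), bb = BiasedBits(b);
  return (ba >= bb ? ba - bb : bb - ba) <= maxUlps;
}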

Lior Kogan

To me, floating point errors are essentially those which on an x86 would lead to a floating point exception (assuming the coprocessor has that interrupt enabled). A special case is the "inexact" exception, i.e. when the result was not exactly representable in the floating point format (such as when dividing 1 by 3). Newbies not yet at home in the floating-point world will expect exact results and will consider this case an error.

As I see it there are several strategies available.

  • Early data checking such that bad values are identified and handled when they enter the software. This lessens the need for testing during the floating point operations themselves, which should improve performance.
  • Late data checking such that bad values are identified immediately before they are used in actual floating point operations. This will tend to cost some performance.
  • Debugging with floating point exception interrupts enabled. This is probably the fastest way to gain a deeper understanding of floating point issues during the development process (a small sketch of inspecting these conditions follows this list).

to name just a few.
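
As a rough, portable illustration of the "check after the fact" idea, here is a sketch of my own using the standard <cfenv> status flags (not the answer's original code; actually trapping on these conditions is platform-specific):

#include <cfenv>
#include <cstdio>

int main() {
  std::feclearexcept(FE_ALL_EXCEPT);       // start from a clean slate

  volatile double one = 1.0, three = 3.0;  // volatile: keep the division at run time
  double q = one / three;                  // 1/3 is not exactly representable

  if (std::fetestexcept(FE_INEXACT))
    std::printf("inexact result: %.17g\n", q);
  if (std::fetestexcept(FE_INVALID | FE_DIVBYZERO | FE_OVERFLOW | FE_UNDERFLOW))
    std::printf("a more serious floating point condition was raised\n");
  return 0;
}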

When I wrote a proprietary database engine over twenty years ago, using an 80286 with an 80287 coprocessor, I chose a form of late data checking built on x87 primitive operations. Since floating point operations were relatively slow, I wanted to avoid doing floating point comparisons every time I loaded a value (some of which would cause exceptions). To achieve this, my floating point (double precision) values were unions with unsigned integers, such that I would test the floating point values using x86 integer operations before the x87 operations were called upon. This was cumbersome, but the integer operations were fast, and when the floating point operations came into action the floating point value in question would be ready in the cache.
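
In modern terms the bit-testing idea can be sketched roughly like this (an illustration only, not the original 16-bit code; memcpy stands in for the union):

#include <cstdint>
#include <cstring>

enum class FpClass { Unknown, Zero, Negative, Positive };

// Classify a double using integer operations only, so nothing that could
// raise a floating point exception is ever loaded into the FPU.
FpClass classify(double d) {
  std::uint64_t bits;
  std::memcpy(&bits, &d, sizeof bits);                 // stands in for the union
  const std::uint64_t magnitude = bits & ~(std::uint64_t{1} << 63);
  const std::uint64_t exponent  = magnitude >> 52;

  if (exponent == 0x7FF) return FpClass::Unknown;      // NaN or infinity
  if (magnitude == 0)    return FpClass::Zero;         // +0.0 or -0.0
  return (bits >> 63) ? FpClass::Negative : FpClass::Positive;
}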

A typical C sequence (floating point division of two matrices) looked something like this:

// calculate source and destination pointers

type1=npx_load(src1pointer);
if (type1!=UNKNOWN)    /* x87 stack contains negative, zero or positive value */
{
  type2=npx_load(src2pointer);
  if (!(type2==POSITIVE_NOT_0 || type2==NEGATIVE))
  {
    if (type2==ZERO) npx_pop();
    npx_pop();    /* remove src1 value from stack since there won't be a division */
    type1=UNKNOWN;
  }
  else npx_divide();
}
if (type1==UNKNOWN) npx_load_0();   /* x87 stack is empty so load zero */
npx_store(dstpointer);    /* store either zero (from prev statement) or quotient as result */

npx_load would load a value onto the top of the x87 stack provided it was valid. Otherwise the top of the stack would be left empty. npx_pop simply removes the value currently at the top of the x87 stack. BTW "npx" is an abbreviation for "Numeric Processor eXtension," as it was sometimes called.

The method chosen was my way of handling floating-point issues stemming from my own frustrating experiences at trying to get the coprocessor solution to behave in a predictable manner in an application.

This solution certainly introduced overhead, but a pure

*dstpointer = *src1pointer / *src2pointer;

was out of the question since it didn't contain any error handling. The extra cost of this error handling was more than made up for by how the pointers to the values were prepared. Also, the 99% case (both values valid) is quite fast so if the extra handling for the other cases is slower, so what?

Olof Forshell
  • I'm concerned with the inexact "errors", not so much the other ones. I have to admit calling them errors is problematic. I usually don't worry too much about the other errors because they produce nan or inf, which spreads through all the numbers they touch, usually making it obvious something went wrong. Is that insufficient? – Winston Ewert Oct 24 '11 at 12:34
  • +1 for writing a proprietary database engine over twenty years ago - that is awesome – totallyNotLizards Oct 24 '11 at 12:59
  • My application had to be able to handle numerical operations on non-numerical (but valid) data caused by users performing data on entire tables even though they contained both numeric and other data. As to inexacts I really can't see how you should be able to avoid them unless your values are reasonably small and you only use subtraction, addition and division. Other errors are under- and overflows which may take some time before they appear. For example, x=(x+tmpx)/2 where tmpx=0 may require quite a few invocations before an underflow condition occurs. – Olof Forshell Oct 24 '11 at 13:06
  • @jammypeach: I guess I should say something humble but I was pretty proud of the solutions it incorporated to achieve the performance it did. It was basically 16-bit PL/M-86 code with assembly routines to handle numeric processing. When the 80386 arrived these routines were modified such that they used 32-bit data and instruction overrides to handle 40MB data areas in RAM as compared to the 64KB in the pure 16-bit variant. The program was distributed on a diskette (240KB exe) when CDs were long since commonplace. "Awesome", I like that! – Olof Forshell Oct 24 '11 at 13:14
  • In my next-to-previous comment for "performing data" read "performing numeric operations". For "subtraction, addition and division" read "subtraction, addition and multiplication". This should clear up some questions. – Olof Forshell Oct 24 '11 at 14:36