
According to this very elaborate answer, I would estimate the maximum relative error δ_res,max of the following computation like this:

// Pseudo code
double a, b, c; // prefilled with IEEE 754 double-precision values
res = a / b * c;

res = a * (1 + δ_a) / ( b * (1 + δ_b) ) * (1 + δ_(a/b)) * c * (1 + δ_c) * (1 + δ_((a/b)*c))

= a / b * c * (1 + δ_a) / (1 + δ_b) * (1 + δ_(a/b)) * (1 + δ_c) * (1 + δ_((a/b)*c))

= a / b * c * (1 + δ_res)

=> δ_res = (1 + δ_a) / (1 + δ_b) * (1 + δ_(a/b)) * (1 + δ_c) * (1 + δ_((a/b)*c)) - 1

Here δ_a, δ_b, δ_c are the relative errors already carried by the stored inputs, while δ_(a/b) and δ_((a/b)*c) are the rounding errors introduced by the division and the final multiplication, respectively.

All δ's lie within ± ε / 2, where ε = 2^-52 is the machine epsilon for IEEE 754 double precision.

=> δ_res,max = (1 + ε / 2)^4 / (1 - ε / 2) - 1 ≈ 2.5 * ε
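
As a quick sanity check of that last step, here is a minimal C sketch (nothing assumed beyond `DBL_EPSILON` from `<float.h>`, which is 2^-52 for IEEE 754 doubles) that evaluates the bound in extended precision:

```c
#include <stdio.h>
#include <float.h>

int main(void)
{
    long double eps = DBL_EPSILON;   /* 2^-52 */
    long double u   = eps / 2.0L;    /* unit roundoff, 2^-53 */

    /* The bound derived above, evaluated in extended precision
       so that its own rounding error is negligible. */
    long double bound = (1.0L + u) * (1.0L + u) * (1.0L + u) * (1.0L + u)
                      / (1.0L - u) - 1.0L;

    printf("bound / eps = %.20Lf\n", bound / eps);   /* ~2.5 */
    return 0;
}
```

This prints a ratio of essentially 2.5; the exact bound exceeds 2.5 * ε only by terms of order ε².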

Is this a valid approach for error estimation that can be used for every combination of basic floating-point operations?

PS:

Yes, I read "What Every Computer Scientist Should Know About Floating-Point Arithmetic". ;)

Thorsten
    A paper you might find relevant and interesting: Claude-Pierre Jeannerod and Siegfried M. Rump. "On relative errors of floating-point operations: optimal bounds and applications." (2016) [(online)](https://hal.inria.fr/docs/00/93/44/43/PDF/JeannerodRump2014.pdf) – njuffa May 28 '17 at 19:39
  • Thanks for the link, @njuffa! Interesting, indeed. :) For now I'm just interested in safe error bounds. To tighten them would be my next step, if necessary. – Thorsten May 28 '17 at 23:23

1 Answer


Well, it's probably a valid approach. I'm not sure how you've jockeyed that last line, but your conclusion is basically correct (though note that, since the theoretical error can exceed 2.5e, in practice the error bound is 3e).
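
For anyone who wants to poke at this numerically, here is a minimal C sketch. Two loud assumptions: `long double` is taken to have noticeably more precision than `double` (as on x86), so it can serve as the reference, and `rnd()` is a hypothetical helper, not a library function:

```c
#include <stdio.h>
#include <stdlib.h>
#include <float.h>
#include <math.h>

/* Hypothetical helper: a random long double in [0.5, 1.5). */
static long double rnd(void)
{
    return 0.5L + (long double)rand() / RAND_MAX;
}

int main(void)
{
    long double max_rel = 0.0L;
    srand(1);

    for (int i = 0; i < 1000000; ++i) {
        long double xa = rnd(), xb = rnd(), xc = rnd();

        /* Rounding to double picks up the input errors da, db, dc ... */
        double a = (double)xa, b = (double)xb, c = (double)xc;

        /* ... and the two operations pick up d_(a/b) and d_((a/b)*c). */
        double res = a / b * c;

        /* Extended-precision reference; its own rounding is negligible. */
        long double ref = xa / xb * xc;

        long double rel = fabsl(((long double)res - ref) / ref);
        if (rel > max_rel) max_rel = rel;
    }

    printf("max observed error = %.3Lf * eps (derived bound: 2.5 * eps)\n",
           max_rel / DBL_EPSILON);
    return 0;
}
```

The observed maximum will usually sit well below the worst case, since the five individual d's rarely align; the 2.5e (or, safely rounded, 3e) bound is still the right thing to quote, because nothing stops them from aligning.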

And yes, this is a valid approach which will work for any floating-point expression of this form. However, the results won't always be as clean. Once you have addition/subtraction in the mix, rather than just multiplication and division, you usually won't be able to cleanly separate the exact expression from an error multiplier. Instead, you'll see input terms and error terms multiplied directly together, rather than the pleasantly constant relative bound you get here.
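
To make that concrete, here is a worked expansion in the same δ notation (a sketch, assuming a, b and c are exact) for a chained sum:

fl(fl(a + b) + c) = ( (a + b) * (1 + δ_1) + c ) * (1 + δ_2)

= a * (1 + δ_1) * (1 + δ_2) + b * (1 + δ_1) * (1 + δ_2) + c * (1 + δ_2)

There is no common factor (1 + δ_res) to pull out relative to the exact a + b + c: each input term carries its own error factor, and the relative error of the sum depends on how much cancellation happens between the terms.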

As a useful example, try deriving the maximum relative error for (a+b)-a (assuming a and b are exact).
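
As a teaser, here is a tiny C demonstration (the magnitudes are hypothetical, chosen so that |a| dwarfs |b|) of how large that relative error can get:

```c
#include <stdio.h>

int main(void)
{
    double a = 1.0e16;  /* exactly representable, much larger than b */
    double b = 1.0;     /* also exact */

    double res = (a + b) - a;   /* mathematically equal to b */

    /* The rounding of a + b can swallow b completely, so the relative
       error with respect to the exact result b reaches 100%. */
    printf("res = %g, relative error = %g\n", res, (res - b) / b);
    return 0;
}
```

Unlike the pure multiply/divide chain above, a single addition can blow the relative error up from O(e) to O(1).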

Sneftel
• Thanks for your response, @sneftel! :) What do you mean by _almost valid_ & _theoretical error_? Concerning my last equation: I replaced the δ's with their upper or lower bounds, so δ_res takes its biggest possible value. – Thorsten May 28 '17 at 16:33
  • "Almost valid" is because, as I said, the error can exceed 2.5e. BTW, for the division operator, it's usually useful to use the alternative formulation fl[a•b] = a•b / (1+δ), which is also valid and avoids the wacky division stuff going on there. – Sneftel May 29 '17 at 07:06
• Would you mind explaining how δ could exceed 2.5e? And why am I allowed to combine different floating-point models in the same equation? – Thorsten May 30 '17 at 12:03
• Consider the simpler expression `fl(fl(a*b)*c)`. This works out to `a*b*c*(1+d1)*(1+d2)`, or `a*b*c * (1 + d1 + d2 + d1*d2)`. That extra multi-error term pushes the error above simply the sum of the individual d's. – Sneftel May 30 '17 at 14:48
  • As for your second question, I'm not sure what you mean. It's one floating-point model, with two invariants that you can potentially use when proving things. – Sneftel May 30 '17 at 14:49
• **About the extra push:** I'm with you here, but I still don't understand why you recommend 3e as the error bound for my expression. To my understanding, the extra push won't be big enough to increase the total d to 3e. So, wouldn't an approximation of 2.6e instead of 2.5e suffice to play safe? **About the model invariants:** I interpreted your remark this way: use the invariant `fl(x op y) = (x op y) / (1 + d)` _just for one_ operation and the invariant `fl(x op y) = (x op y) * (1 + d)` for the others, which didn't make sense to me. – Thorsten May 30 '17 at 22:32