8

Since we know that 0.1 + 0.2 != 0.3 due to the limited precision of floating-point representation, we need to instead check that abs(0.1 + 0.2 - 0.3) < ε. The question is: what ε value should we generally choose for different types? Is it possible to estimate it depending on the number of bits and on the number and types of operations that are likely to be performed?
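For illustration, here is a minimal snippet (assuming IEEE 754 doubles) showing the mismatch:

#include <cstdio>

int main() {
    // Printing 17 significant digits exposes the rounding.
    std::printf("%.17g\n", 0.1 + 0.2);      // 0.30000000000000004
    std::printf("%.17g\n", 0.3);            // 0.29999999999999999
    std::printf("%d\n", 0.1 + 0.2 == 0.3);  // 0 (false)
}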

comingstorm
Dmitri Nesteruk
  • Well, `0.1 + 0.2 = 0.3` for me. Do you want to narrow things to at least exclude decimal arithmetic? You care about language, chip, operating system, what? Seems a bit broad as it is. – Bill Woodger Feb 02 '16 at 16:34
  • this seems far too broad. – Meirion Hughes Feb 02 '16 at 16:40
  • If you're using floating point decimal, the same answers apply, even if that specific equality holds. A good question can support a fair amount of breadth. – comingstorm Feb 02 '16 at 16:54
  • @comingstorm you meant me? No, utter nonsense. I'm talking about decimal. Decimal arithmetic. Use it for money stuff, none of this tosh. – Bill Woodger Feb 02 '16 at 17:06
  • OK, you have a point: by context (and by tag), the question is really about floating point. – comingstorm Feb 02 '16 at 17:12
  • The answers so far, while as good as might be expected, seem to punt on the details, and OP seems to need the details. I'm starting to agree with @BillWoodger's original comment, that the question is too broad. Even taking decimal arithmetic out of consideration, this is an active research topic. – Erick G. Hagstrom Feb 02 '16 at 17:38

3 Answers

9

A baseline value for epsilon is the difference between 1.0 and the next highest representable value. In C++, this value is available as std::numeric_limits<T>::epsilon().
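As a quick check (the exact values assume IEEE 754 binary formats), you can print the epsilon for each type:

#include <iostream>
#include <limits>

int main() {
    // Gap between 1.0 and the next representable value of each type.
    std::cout << "float:       " << std::numeric_limits<float>::epsilon()       << '\n'   // ~1.19e-07 (2^-23)
              << "double:      " << std::numeric_limits<double>::epsilon()      << '\n'   // ~2.22e-16 (2^-52)
              << "long double: " << std::numeric_limits<long double>::epsilon() << '\n';  // platform-dependent
}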

Note that, at the minimum, you need to scale this value as a proportion of the actual number you're testing. Also, since the precision scales only roughly with the numeric value, you may want to increase your margin by a small factor to prevent spurious errors:

#include <cmath>    // std::fabs
#include <limits>   // std::numeric_limits

double epsilon = std::numeric_limits<double>::epsilon();

// C++ literals are double by default; use std::fabs, since plain
// abs() may resolve to the integer overload and truncate the argument.
bool is_near = std::fabs(0.1 + 0.2 - 0.3) <= 0.3 * (2 * epsilon);

As a more complete example, a function for comparing doubles:

bool is_approximately_equal(double a, double b) {
  double epsilon = std::numeric_limits<double>::epsilon();
  // Scale the tolerance by the larger magnitude of the two inputs;
  // using <= (rather than <) makes is_approximately_equal(0.0, 0.0) true.
  double scale = std::max(std::fabs(a), std::fabs(b));
  return std::fabs(a - b) <= scale * (2 * epsilon);
}

In practice, the actual epsilon value you should use depends on what you're doing, and what kind of tolerance you actually need. Numeric algorithms will typically have precision tolerances (average and maximum) as well as time and space estimates. But the precision estimate typically starts with something like characteristic_value * epsilon.
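For instance, here is a quick sketch applying the function above to the original example:

#include <algorithm>
#include <cmath>
#include <iostream>
#include <limits>

bool is_approximately_equal(double a, double b) {
    double epsilon = std::numeric_limits<double>::epsilon();
    double scale = std::max(std::fabs(a), std::fabs(b));
    return std::fabs(a - b) <= scale * (2 * epsilon);
}

int main() {
    std::cout << std::boolalpha
              << is_approximately_equal(0.1 + 0.2, 0.3) << '\n'   // true
              << is_approximately_equal(0.0, 0.0)       << '\n';  // true, thanks to <=
}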

comingstorm
  • What's the point of 0.3 * (2*epsilon) = 0.6(epsilon)? – Dmitri Nesteruk Feb 02 '16 at 18:29
  • `0.3` is the characteristic value. `2` is the small factor to increase your margin by. In an actual function, the characteristic value would be some kind of norm of the input... I'll edit my answer to include a more complete example. – comingstorm Feb 02 '16 at 19:53
  • @comingstorm Thanks for this answer! An additional remark: it seems the arguments should be checked for `0.0`; otherwise the scaled tolerance is `0.0`, thus `is_approximately_equal(0.0, 0.0)` would return `false`. Followup question: how to check for equality with `0.0` in this case? Using std::numeric_limits::min()? – SebastianK Aug 03 '18 at 11:51
  • I think changing the comparison to `<=` would do the trick. I'll edit the answer to make it so. – comingstorm Aug 03 '18 at 18:30
6

You can estimate the machine epsilon using the algorithm below. You then need to multiply this epsilon by the integer value of 1 + (log(number)/log(2)), which accounts for the magnitude of the number. After you have determined this value for every number in your equation, you can use error analysis to estimate the epsilon value for the calculation as a whole.

// Halve epsilon until 1.0 + epsilon/2 is no longer distinguishable
// from 1.0; what remains is the machine epsilon.
let epsilon = 1.0;
while (1.0 + (epsilon / 2.0) > 1.0) {
  epsilon = epsilon / 2.0;
}

// Error estimate for a + b via error analysis: the two rounding
// errors add in quadrature.
let epsilon_equation = Math.sqrt(2 * epsilon * epsilon);

document.write('Epsilon: ' + epsilon_equation + '<br>');
document.write('Floating point error: ' + Math.abs(0.2 + 0.4 - 0.6) + '<br>');
document.write('Comparison using epsilon: ');
document.write(Math.abs(0.2 + 0.4 - 0.6) < epsilon_equation);

Following your comment, I have tried the same approach in C# and it seems to work:

using System;

namespace ConsoleApplication
{
    public class Program
    {
        public static void Main(string[] args)
        {
            // Determine machine epsilon by successive halving.
            double epsilon = 1.0;
            while (1.0 + (epsilon / 2.0) > 1.0)
            {
                epsilon = epsilon / 2.0;
            }

            // Error estimate for the expression 1.0 + 2.0 - 3.0:
            // the rounding errors of the operands add in quadrature.
            double epsilon_equation = Math.Sqrt(2 * epsilon * epsilon);

            Console.WriteLine(Math.Abs(1.0 + 2.0 - 3.0)
                < Math.Sqrt(3.0 * epsilon_equation * epsilon_equation));
        }
    }
}
Sнаđошƒаӽ
Alex
  • Nope, this gets me the absolute epsilon which CANNOT be used in comparison, e.g., `abs(1.0+2.0 - 3.0) > epsilon` would give a value of `false`, unfortunately :( – Dmitri Nesteruk Feb 02 '16 at 18:57
  • The error in `abs(1.0+2.0-3.0)` is `sqrt(3*epsilon^2)`, so for that equation you should use that value. This is explained in page 3 of the PDF I linked. – Alex Feb 02 '16 at 19:00
  • here's the test I did ` Console.WriteLine(Math.Abs(1.0 + 2.0 - 3.0) < Math.Sqrt(3.0 * double.Epsilon * double.Epsilon));`, this is C# and it prints `False` – Dmitri Nesteruk Feb 02 '16 at 20:19
  • The error in `1.0 + 2.0 - 3.0` is zero. – tmyklebu Feb 03 '16 at 06:27
  • I will check your C# version, however, the imprecision generally only occurs for decimal fractions that, when converted, give infinite binary fractions. 1.0,2.0 and 3.0 can be perfectly represented as binary numbers, so I believe @tmyklebu is correct. – Alex Feb 03 '16 at 08:19
  • Also note that C#'s `Double.Epsilon` is not machine epsilon (2^(-53)), but rather the smallest normal `double` (2^(-1023) or so). So your multiplication underflows. – tmyklebu Feb 03 '16 at 14:39
  • @tmyklebu clearly it's not - if it did, we would have `1.0+2.0==3.0`, which is not the case. – Dmitri Nesteruk Feb 04 '16 at 20:45
  • @DmitriNesteruk: If you don't have `1.0 + 2.0 == 3.0`, then you have very broken floating-point arithmetic. – tmyklebu Feb 05 '16 at 03:52
1

I am aware of the following approach to computing exact floating-point predicates: compute the value using standard floating-point types, and also compute an error bound. Usually, the predicate can be stated as p(x) == 0 or p(x) < 0, etc. If the absolute value of p(x) is greater than the error bound, the computation is considered exact. Otherwise, interval-based or exact rational arithmetic is used.

It is possible to estimate the error bound from the expression being evaluated. I've heard of tools that generate such bounds automatically, but I failed to find any reference.
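As a rough sketch of such a filtered predicate, here is the common 2-D orientation test (the error constant below is an illustrative over-estimate, not a derived bound; real implementations such as Shewchuk's adaptive predicates compute much tighter ones):

#include <cmath>
#include <limits>

// Sign of the 2-D orientation determinant with a floating-point filter:
// if |det| clears a conservative error bound, the sign is certain;
// otherwise an exact (rational or interval) fallback is required.
enum class Orientation { Clockwise, Counterclockwise, Uncertain };

Orientation orient2d_filtered(double ax, double ay, double bx, double by,
                              double cx, double cy) {
    double det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);

    // Scale epsilon by the magnitude of the terms entering the determinant.
    // The factor 8 is an illustrative over-estimate of the rounding error.
    double magnitude = std::fabs((bx - ax) * (cy - ay))
                     + std::fabs((by - ay) * (cx - ax));
    double bound = 8.0 * std::numeric_limits<double>::epsilon() * magnitude;

    if (det >  bound) return Orientation::Counterclockwise;
    if (det < -bound) return Orientation::Clockwise;
    return Orientation::Uncertain;  // fall back to exact arithmetic here
}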

As far as I know, exact computations are mainly used in geometry, and googling for "exact geometric computations" turns up a lot of material on the topic.

Here is an article that explains error estimation to some extent.

lisyarus