5

Some people say that machine epsilon for double-precision floating-point numbers is 2^-53 and others (more commonly) say it's 2^-52. I have experimented with estimating machine precision using integers other than 1 and approaching from above and below (in MATLAB), and have gotten both values as results. Why is it that both values can be observed in practice? I thought that it should always produce an epsilon around 2^-52.

njvb

3 Answers

8

There's an inherent ambiguity in the term "machine epsilon", so to resolve it, it is commonly defined as the difference between 1 and the next larger representable number. (That number is actually (and not by accident) obtained by literally incrementing the binary representation of 1.0 by one.)

The IEEE 754 64-bit float has 52 explicit mantissa bits, so 53 significant bits including the implicit leading 1. The two consecutive numbers are therefore:

1.0000  .....  0000
1.0000  .....  0001
  \-- 52 digits --/

So the difference between the two is 2^-52.
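You can check this directly; here is a minimal C sketch (assuming an IEEE-754 double; nextafter from math.h gives the neighbouring representable value, and DBL_EPSILON is the standard constant for this gap):

#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void) {
    double next = nextafter(1.0, 2.0);              /* next representable double above 1.0 */
    printf("next - 1.0  = %a\n", next - 1.0);       /* 0x1p-52, i.e. 2^-52 */
    printf("DBL_EPSILON = %a\n", DBL_EPSILON);      /* same value */
    printf("2^-52       = %a\n", ldexp(1.0, -52));  /* same value */
    return 0;
}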

Kerrek SB
3

It depends on which way you round.

1 + 2^-53 is exactly halfway between 1 and 1 + 2^-52, which are consecutive in double-precision floating point. So if you round it up, it is different from 1; if you round it down, it is equal to 1.
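Here is a small C sketch of that halfway case (assuming the default round-to-nearest-even mode and that the sums are evaluated in plain double precision, without extended-precision intermediates):

#include <math.h>
#include <stdio.h>

int main(void) {
    double half_gap = ldexp(1.0, -53);  /* 2^-53: exactly halfway between 1 and the next double */

    /* The tie rounds to the neighbour with the even (all-zero) last mantissa bit, i.e. back to 1.0 ... */
    printf("1 + 2^-53       > 1 ? %d\n", (1.0 + half_gap) > 1.0);        /* 0 */

    /* ... but anything strictly past the halfway point rounds up to 1 + 2^-52. */
    printf("1 + 1.5 * 2^-53 > 1 ? %d\n", (1.0 + 1.5 * half_gap) > 1.0);  /* 1 */
    return 0;
}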

Nemo
  • So is it the rounding error that allows the result to sometimes show up as 2^-53 and sometimes as 2^-52? Because that is the part that really confused me. – njvb Nov 03 '11 at 13:28
  • Mathematically, as Kerrek points out, `1` and `1 + 2^-52` definitely have consecutive double-precision representations. Rounding is the only explanation I can imagine for why your experiments would show something else. Have you tried adding `1` and (e.g.) `1.5 * 2^-53`? – Nemo Nov 03 '11 at 15:48
2

There are actually two definitions of "machine precision" which sound identical at first sight, but aren't, as they yield different values for the "machine epsilon":

  1. The machine epsilon is the smallest floating-point number eps1 such that 1.0 + eps1 > 1.0.
  2. The machine epsilon is the difference eps2 = x - 1.0 where x is the smallest representable floating-point number with x > 1.0.

Strictly mathematically speaking, the definitions are equivalent, i.e. eps1 == eps2, but we're not talking about real numbers here, but about floating-point numbers. And that means implicit rounding, which means that, approximately, eps2 == 2 * eps1 (at least on the most common architectures using IEEE-754 floats).

In more detail, if we let some x grow from 0.0 until 1.0 + x > 1.0, that point is first reached at x == eps1 (by definition 1). However, because of rounding up, the result of 1.0 + eps1 is not the real number 1.0 + eps1, but the next representable floating-point value larger than 1.0 -- that is, 1.0 + eps2 (by definition 2). So, in essence,

eps2 == (1.0 + eps1) - 1.0

(Mathematicians will cringe at this.) And due to the rounding behaviour, this means that

eps2 == eps1 * 2 (approximately)

And that is why there are two definitions for "machine epsilon", both legitimate and correct.

Personally speaking, I find eps2 the more "robust" definition, as it does not depend on the actual rounding behaviour, only on the representation, but I wouldn't say it is more correct than the other. As so often, it all depends on the context. Just be clear about which definition you use when talking about "machine epsilon" to prevent confusion and bugs.
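For what it's worth, here is a rough C sketch of the two definitions side by side (assuming the default round-to-nearest-even mode and strict double-precision evaluation; eps1 is probed via nextafter starting from the halfway point 2^-53, eps2 is read straight off the representation):

#include <math.h>
#include <stdio.h>

int main(void) {
    /* eps2: the gap between 1.0 and the next representable double (definition 2). */
    double eps2 = nextafter(1.0, 2.0) - 1.0;          /* 2^-52 */

    /* eps1: the smallest representable x with 1.0 + x > 1.0 (definition 1).
       Under round-to-nearest-even that is the first double just above 2^-53. */
    double eps1 = nextafter(ldexp(1.0, -53), INFINITY);

    printf("eps1 = %a\n", eps1);                      /* roughly 2^-53 */
    printf("eps2 = %a\n", eps2);                      /* 0x1p-52 */
    printf("eps2 / eps1 = %g\n", eps2 / eps1);        /* roughly 2 */
    printf("1 + eps1  > 1 ? %d\n", (1.0 + eps1) > 1.0);             /* 1 */
    printf("1 + 2^-53 > 1 ? %d\n", (1.0 + ldexp(1.0, -53)) > 1.0);  /* 0 */
    return 0;
}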

Franz D.
  • Not sure why mathematicians would cringe at "this". A mathematician understands that floating-point numbers are distinct from real numbers, and in particular that addition of floating-point values is not associative. – Stephen Canon Feb 27 '15 at 19:53
  • That was not meant to be taken 110% seriously, Stephen! I'm perfectly aware that mathematicians aren't dumb. But even I myself, as a non-mathematician, find these formulas quite awkward. I mean, `1 + x - 1 == x * 2`? Nah... – Franz D. Feb 27 '15 at 20:38
  • Not to worry, I've just had to explain floating-point to engineering grad students (as a mathematician) way too many times. – Stephen Canon Feb 27 '15 at 21:28