3

I have the following piece of code

#include <iostream>
#include <iomanip>

int main()
{
    double x = 7033753.49999141693115234375;
    double y = 7033753.499991415999829769134521484375;
    double z = (x + y) / 2.0;

    std::cout << "y is " << std::setprecision(40) << y << "\n";
    std::cout << "x is " << std::setprecision(40) << x << "\n";
    std::cout << "z is " << std::setprecision(40) << z << "\n";

    return 0;
}

When the above code is run, I get:

y is 7033753.499991415999829769134521484375
x is 7033753.49999141693115234375
z is 7033753.49999141693115234375

When I do the same in Wolfram Alpha the value of z is completely different

 z = 7033753.4999914164654910564422607421875 #Wolfram answer

I am familiar with floating-point precision and the fact that large numbers away from zero cannot be exactly represented. Is that what is happening here? Is there any way in C++ where I can get the same answer as Wolfram without any performance penalty?

Morpheus
  • 5
    Double precision has a *maximum* precision of 17 digits (on most machines). Looks like you're way beyond that. – Mark Ransom Oct 15 '19 at 11:58
  • 3
    Wolfram Alpha is probably using arbitrary-precision decimals, which are going to be significantly slower than machine floating points. – walnut Oct 15 '19 at 12:00
  • 3
    You can use a big math library but that would come with a performance penalty. Wolfram Alpha most likely is doing this internally. – drescherjm Oct 15 '19 at 12:00
  • @MarkRansom so anything after the 17 digits is garbage? – Morpheus Oct 15 '19 at 12:02
  • 5
    You have 18 significant digits correct. That's as good as you can hope for when using `double`. If you need precision, don't use floating point numbers. – Yksisarvinen Oct 15 '19 at 12:02
  • 2
    @Morpheus Not necessarily. `0.5` can be represented without any error in floating point standard. But it's simply limited, as much as decimal system is limited when it comes to writing result of `1/3`. We (humans) don't do precise math in decimal fractions, we either use `1/3` notation or special symbols like pi or other things when precision is needed. – Yksisarvinen Oct 15 '19 at 12:18
  • 2
    It's OT, but here: [CppCon 2019: Marshall Clow “std::midpoint? How Hard Could it Be?”](https://www.youtube.com/watch?v=sBtAGxBh-XI) you can find all the excruciating details on how `(x + y)/2` could go wrong. – Bob__ Oct 15 '19 at 12:58

2 Answers

6

large numbers away from zero can not be exactly represented. Is that what is happening here?

Yes.

Note that there are also infinitely many rational numbers near zero that cannot be represented. However, the distance between adjacent representable values grows with magnitude: it doubles at each successive power of two, so the gaps near 7033753.5 are vastly wider than those near zero.

Is there any way in C++ where I can get the same answer as Wolfram ...

You can potentially get the same answer by using long double; my system produces exactly the same result as Wolfram. Note that the precision of long double varies between systems, even among systems that conform to the IEEE 754 standard.

More generally though, if you need results that are accurate to many significant digits, then don't use finite precision math.

... without any performance penalty?

No. Precision comes with a cost.

eerorika
0

Just telling IOStreams to print to 40 significant decimal figures of precision doesn't mean that the value you're outputting actually has that much precision.

A typical double takes you up to 17 significant decimal figures (ish); beyond that, what you see is completely arbitrary.

Per eerorika's answer, it looks like the Wolfram Alpha answer is also falling foul of this, albeit possibly with a different precision limit than yours.

You can try a different approach like a "bignum" library, or limit yourself to the precision afforded by the types that you've chosen.

Lightness Races in Orbit
  • 1
    (a) The value being output has infinite *precision*, because each floating-point number represents one real number exactly. It may not have *accuracy* in terms of being other number the programmer desires. (b) The digits beyond 17 are not arbitrary; they are the defined result of precisely specified mathematics. (c) 17 is wrong. Within its range, the IEEE-754 binary64 format can take and restore **any** number of 15 significant decimal digits. If you say it can do **some** up to 17 digits, then you must also admit it can do **some** to 99 digits or even one of 787 significant digits. – Eric Postpischil Oct 15 '19 at 13:47
  • Further, these are not nits: Getting the details right is essential to reasoning correctly about floating-point arithmetic. For example, [this deleted answer](https://stackoverflow.com/a/58277510/298225) attempted to use the “17 significant decimal figures” as a measure of accuracy. Or one can design code that takes advantage of exactly what the values are—and such code would fail if the digits beyond 17 were completely arbitrary. – Eric Postpischil Oct 15 '19 at 13:49
  • `double` gives 15-17 precise decimal places (I said "ish"!). You'll find this terminology everywhere, and it's well understood. The rest seems like pedantic bikeshedding. Sure, the remaining data is not arbitrary in the sense that the computer pulled it out of /dev/null, but it is arbitrary for the OP's purposes - that is, _the OP may ignore the differences observed in those positions_. I think that's pretty obvious at this level of reasoning. If you want to have an academic debate over more technical intricacies of the technology or of the mathematics, this is not the place. – Lightness Races in Orbit Oct 15 '19 at 14:15
  • 1
    The fact that terminology is everywhere is merely evidence of how widespread the error is. It is certainly not well understood, as it is wrong. The math here is actually not complicated: When any 15-digit decimal numeral within range is converted to IEEE-754 binary64 and back to 15 decimal digits, the result equals the input. The same is not true for 16 or 17 or any greater number. There is no reason to make incorrect statements about 17 digits. – Eric Postpischil Oct 15 '19 at 14:21
  • It is not wrong. You are talking about roundtrips. That is a valid observation. However that does not change the fact that a `double` has sufficient bits of information to reliably store what works out as 15-17 decimal significant figures, and no more than that. Period. Full stop. I'm not arguing about it any further. Write your own answer if everybody else is wrong. – Lightness Races in Orbit Oct 15 '19 at 15:03
  • No, that is wrong. It cannot reliably store 17 decimal significant figures. 15 is the most that is guaranteed, and, if we are measuring the quantity of information stored in units of “decimal digits”, it is about 15.95, which one might, misleadingly, round to 16. 17 is just plain wrong. You know where 17 comes from? That is the number of decimal digits you need to preserve the information in a binary64. It is the size of a container that holds a binary64, rather than the capacity of a binary64 to hold anything. So it is just plain wrong to use it as a measure of the capacity of a binary64. – Eric Postpischil Oct 15 '19 at 15:06