
I'm trying to printf M_PI, but I think I'm not using the correct format specifier. The output should be 3.14159265358979323846, but I get 3.14159265358979300000.

#include <math.h>
#include <stdio.h>

int main(void)
{
    printf("%.20f\n", M_PI);
    return 0;
}

I've tried using %Lf, %Lg, %e, and %g, but none of them work, so I'm not sure whether this error comes from the format specifier or whether it has something to do with the hardware I'm using.

qwerty000
    In MSVC `M_PI` is a macro in `math.h` so there is no way of telling how that will be used. It could be a `long double` which AFAIK is not yet implemented, or some other purpose. – Weather Vane Jul 14 '18 at 23:04
  • Note that when you use the format specifier `%Lf` you will have to pass a `long double` not a `double`, perhaps like `printf("%.20Lf\n", (long double)M_PI);` – Weather Vane Jul 15 '18 at 08:15

4 Answers


Assuming that your compiler maps the double floating-point type to IEEE 754 double precision, double has 53 binary digits of precision. Because floating-point numbers are represented in base 2, the set of numbers representable as double is not the same as the set of numbers with a short decimal representation. In particular, the exact value of the double closest to π has a compact representation in binary, with only 53 significant bits, but its decimal representation is 3.141592653589793115997963468544185161590576171875. This is not an approximation of π to 50-ish decimal digits! It is an approximation of π with 53 binary digits of precision, which means that roughly the first 17 decimal digits are correct; the remaining digits are only there because of the mismatch between the base 2 in which M_PI is stored and the base 10 in which we are discussing its value right now.

This means that you could expect a quality printf implementation to print 3.14159265358979311600. Note that this is not exactly the string you said you expected in your question, but it is the decimal representation of the actual value of M_PI, rounded to 20 digits after the decimal point. In general, you could expect a quality printf implementation to print all the decimal digits that are necessary to show the exact value of a double, although in the worst cases there can be 750 or so.

The C standard does not force all printf implementations to have this property:

For e, E, f, F, g, and G conversions, if the number of significant decimal digits is at most DECIMAL_DIG, then the result should be correctly rounded. If the number of significant decimal digits is more than DECIMAL_DIG but the source value is exactly representable with DECIMAL_DIG digits, then the result should be an exact representation with trailing zeros. Otherwise, the source value is bounded by two adjacent decimal strings L < U, both having DECIMAL_DIG significant digits; the value of the resultant decimal string D should satisfy L <= D <= U, with the extra stipulation that the error should have a correct sign for the current rounding direction.

It is easier to make an implementation that prints at most the first 17 decimal digits of the double's exact value, with zeros afterwards, and the C standard allows such an implementation. This is probably what happened with your compiler. (Compilers that take this shortcut usually do not implement the "current rounding direction" constraint either.)

Pascal Cuoq

There's only so much precision that a double-precision floating-point number has. It's less than 20 digits, so at some point, no matter how many digits of precision you ask for, you'll either get meaningless noise or all zeros.

kshetline
  • The bound the OP is running into is not the precision of the `double` type; the format commonly used for `double` can represent 3.141592653589793115997963468544185161590576171875, which is closer to π than what OP is seeing. Likely, the bound they are running into is a poor implementation of `printf` that does not format the number with correct rounding (as defined by IEEE 754). – Eric Postpischil Jul 15 '18 at 01:22
  • The typical IEEE double precision format I'm familiar with is this: https://en.wikipedia.org/wiki/Double-precision_floating-point_format. As the article says, this is capable of "approximately 16 decimal digits" of accuracy. The value of π you provided was 49 decimal digits, which would take over twenty bytes of binary mantissa to represent -- not a typical floating point representation at all. – kshetline Jul 15 '18 at 01:36
  • @kshetline The very article that you quote actually says that the precision is 53 bits. 3.141592653589793115997963468544185161590576171875 is the decimal representation of a binary floating-point number with 53 binary digits of precision. – Pascal Cuoq Jul 15 '18 at 02:04
  • @kshetline: The IEEE-754 basic 64-bit binary floating-point representation, commonly used for `double`, uses 53 bits for the significand. The value I stated is exactly the decimal representation of that significand. It is, if you wish, representable in hexadecimal floating-point as `0x1.921fb54442d18p+1`. This is **the** closest value to π representable in the format, and it is the value used for `M_PI` by any halfway decent implementation. This and the behavior of Microsoft’s `printf`, as well as some others, are familiar to me, so I am confident this is a `printf` issue. – Eric Postpischil Jul 15 '18 at 02:05
  • Indeed, the closest representable value to OP’s output, “3.14159265358979300000”, is also `0x1.921fb54442d18p+1`, and so either the internal value their implementation has for `M_PI` is the same value and `printf` is implemented with the behavior we describe, or their implementation both has a different value for `M_PI` and has an even more broken `printf`. In any case, it is clear: `double` does provide a better approximation for π than what is printed by the OP’s implementation, and the `printf` of OP’s implementation leaves something to be desired. – Eric Postpischil Jul 15 '18 at 02:08
  • 53 DECIMAL digits of precision requires approximately 53 * 3.322 (`1 / log10(2)`) BINARY digits of precision, or about 176 BINARY digits. Please don't confuse decimal and binary precision. – kshetline Jul 15 '18 at 02:09
  • @kshetline: It requires about 53*3.322 bits to represent **any** decimal number of about 53 decimal digits. But **some** numbers of 53 decimal digits can be represented exactly with 53-bit significands, namely those that are the sums of powers of two spanning at most 53 binades. 3.141592653589793115997963468544185161590576171875 is one such number, and it is exactly represented in `double` per IEEE 754-2008 clause 3.3. – Eric Postpischil Jul 15 '18 at 02:12
  • Only the first 16 digits of `3.141592653589793115997963468544185161590576171875` match the real value of π. The rest is round-off garbage. If you're saying you can spend a lot of decimal digits trying to express exact binary values which aren't particularly precise values of π, yes, that's true. But what's the point? – kshetline Jul 15 '18 at 02:19
  • @kshetline: The point is we use math to understand the behavior of floating-point. We know how floating-point represents values and how numerals in input are elsewhere are converted to floating-point or vice-versa. To understand the steps used in floating-point operations, we do not approximate them with truncated decimal values. When “3.1415926535897932384626433” is properly converted to `double`, the result is **exactly** `0x1.921fb54442d18p+1`, which is **exactly** 3.141592653589793115997963468544185161590576171875. That’s the math. We do not care about the decimal digits humans use… – Eric Postpischil Jul 15 '18 at 02:22
  • … They are only a medium for conveying the value. We know the exact mathematical value, and it is a sum of powers of two, and it is the same number written in decimal above. The form in which we write the number is irrelevant; the value of the number is what matters. So we know what number is in the machine for `M_PI`. The question is then why the OP sees ”3.14159265358979300000”. It is **not** because 3.14159265358979300000 is in the machine for `M_PI`, because we know the machine has a different number for `M_PI`… – Eric Postpischil Jul 15 '18 at 02:24
  • … it is because `printf` takes that number for `M_PI` and converts it to the string “3.14159265358979300000”. Thus, the machine has a better approximation for π in its `double`, but `printf` is printing a worse approximation. Therefore, the bound on the quality of the result the OP is seeing is in `printf`, not in `double`. – Eric Postpischil Jul 15 '18 at 02:26
  • `printf` is not intended to serve as an I/O mechanism for precise storage of IEEE floating point numbers. There are better mechanisms for that. It's perfectly reasonable for `printf` to show no more than the normal number of decimal digits of precision, rather than going on and on with decimal digits until the last bit of internal binary precision has been nailed down. – kshetline Jul 15 '18 at 02:27
  • As an example, if you print the same number, `M_PI` or `0x1.921fb54442d18p+1` on macOS using Apple’s `printf`, it prints “3.14159265358979311600”, which is closer to π than OP’s “3.14159265358979300000” is. Both OP’s implementation and macOS have the same internal value for the `double`, but they have different `printf` implementations. Therefore, OP’s `printf` is the cause of the worse result. – Eric Postpischil Jul 15 '18 at 02:28
  • @kshetline: IEEE 754 recommends correctly rounded conversions, including in `printf` implementations. A good `printf` implementation does serve this purpose. In any case, whether `printf` is intended or designed for such a purpose is irrelevant to OP’s question. They ask why they get the (bad) output they do, when a better result is possible. As demonstrated, the correct answer is that their `printf` is printing a string that does not well represent the value. Whether it was intended to do so or not does not alter the fact that it is not doing so, and that your answer to the question is wrong. – Eric Postpischil Jul 15 '18 at 02:31
  • @kshetline Re: "printf is not intended to serve as an I/O mechanism for precise storage of IEEE floating point numbers. " --> `printf()` and friends are used as I/O between systems that might not share the same FP implementation. – chux - Reinstate Monica Jul 15 '18 at 02:59

I know this has already been answered, but, assuming you are using C++, you can use

#define _USE_MATH_DEFINES
#include <cmath>
#include <iomanip>   // for setprecision
#include <iostream>
using namespace std;

then, inside a function such as main():

double g = M_PI;
cout << setprecision(50) << g << endl;

output:

3.141592653589793115997963468544185161590576171875
AGMPenguin

double, as implemented on most architectures that follow IEEE-754, is only 64 bits: one for the sign, 11 for the exponent, leaving 52 bits for the mantissa (53, because the leading significant bit is always 1 and for this reason is not stored in the format), so you have 53 significant digits in base 2. That means roughly 53 · log10(2) ≈ 15.95 significant digits in base 10. The approximation of M_PI to that precision should be something similar to

3.141592653589793

which is about how many exact digits of π you get. There's a difference of roughly 2 · 10^-16 between the value you got and the actual value of π, probably due to the algorithm printf uses to produce the decimal version of that number.

But you cannot expect a more exact result from the IEEE-754 implementation, however many digits you ask for.

By the way, have you tried the "%.200f" format string? In gcc (with glibc), at least, you get many nonzero digits, completing the full precision of the stored binary value (since, from the last significant bit onwards, the value has all zero digits in base 2). In MSVC, the output is filled with zeros after a certain point (by some internal shortcut), which is what is happening to you. But always keep in mind that, in an IEEE-754 implementation of 64-bit floating-point numbers, you have a maximum of about 16 significant/exact decimal digits in the result (well, a little less than 16, as the true figure is close to, but below, that).

Luis Colorado