`double`, as implemented on most architectures, follows IEEE-754 binary64: 64 bits in total, one for the sign and 11 for the exponent, which leaves 52 bits for the mantissa (53 significant bits in base 2, because the leading bit of a normalized number is always 1 and for that reason is not stored in the format). That means roughly (52+1)*ln(2)/ln(10) ~= 15.95 significant digits in base 10. The approximation of `M_PI` to that precision should be something similar to:
3.141592653589793
which matches the exact digits of PI to that length. There's a difference of about 2 * 10^(-16) between that printed value and the actual value of PI: partly from rounding PI to the nearest representable `double` in the first place (about 1.2 * 10^(-16)), and partly from the algorithm `printf` uses to round the decimal version of that number to the digits you requested. But you cannot expect a more exact result from an IEEE-754 implementation, so the answer you get is limited by the format itself, not by the number of digits you asked for.
By the way, have you tried the `"%.200f"` format string? At least in gcc (with glibc), you get a lot of nonzero digits, because the library keeps emitting the exact decimal expansion of the stored binary value to the full precision you request (in base 2, from the last mantissa bit onwards all further bits are zero, so that expansion terminates). In MSVC, I know it reaches a point where it fills with zeros (using some kind of algorithm), which is what is happening to you. But always keep in mind that, in an IEEE-754 implementation of 64-bit floating point numbers, you have a maximum of 16 significant/exact digits in the result (well, a little less than 16, as 15.95 is close to it, but below).