
In the example below the output is 3.1, so the precision seems to start at the first digit of the value.

double y = 3.14784;
cout << setprecision(2) << y;

In the following example the output is 0.67, so the precision seems to start at the digits after the decimal point:

int x = 2;
double y = 3.0;
cout << setprecision(2) << x/y;

And yet in the following line of code, with the same x and y declared above, the precision does not show up at all. (The only way for the line below to print 6.00 is to use fixed.)

cout << setprecision(2) << x * y; // shows 6. 
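
For reference, here are the three cases in one compilable program (assuming the usual <iostream> and <iomanip> headers), with the output noted in the comments:

#include <iostream>
#include <iomanip>
using namespace std;

int main() {
    double d = 3.14784;
    cout << setprecision(2) << d << '\n';               // prints 3.1

    int x = 2;
    double y = 3.0;
    cout << setprecision(2) << x / y << '\n';            // prints 0.67
    cout << setprecision(2) << x * y << '\n';            // prints 6
    cout << fixed << setprecision(2) << x * y << '\n';   // prints 6.00 only because of fixed
}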

If we aren't using fixed, just setprecision(n), where does that n start? The documentation says setprecision sets the decimal precision, and yet in the first example it looks at the whole double value, not just the digits after the decimal point.

Please advise. Thanks.

YelizavetaYR
2 Answers


From http://www.cplusplus.com/reference/ios/ios_base/precision/

For the default locale:

  • Using the default floating-point notation, the precision field specifies the maximum number of meaningful digits to display in total counting both those before and those after the decimal point. Notice that it is not a minimum, and therefore it does not pad the displayed number with trailing zeros if the number can be displayed with less digits than the precision.
  • In both the fixed and scientific notations, the precision field specifies exactly how many digits to display after the decimal point, even if this includes trailing decimal zeros. The digits before the decimal point are not relevant for the precision in this case.
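
For example, a quick sketch comparing the three notations at the same precision (value chosen arbitrarily for illustration):

#include <iostream>
#include <iomanip>

int main() {
    double v = 3.14784;
    std::cout << std::setprecision(2);
    std::cout << v << '\n';                     // default notation: 2 significant digits -> 3.1
    std::cout << std::fixed << v << '\n';       // fixed: 2 digits after the point -> 3.15
    std::cout << std::scientific << v << '\n';  // scientific: 2 digits after the point -> 3.15e+00
}
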
clcto
  • This is where my confusion lies: by default, precision specifies the maximum number of digits (including those before the decimal point), and yet your second bullet states that in scientific notation it specifies the number of digits after the decimal point. How do I specify/distinguish between default and scientific notation? – YelizavetaYR Sep 16 '14 at 22:10
    @YelizavetaYR the same way you would use `fixed`. `std::cout << std::scientific`. http://www.cplusplus.com/reference/ios/scientific/ – clcto Sep 16 '14 at 22:15
  • Right, that I understand, but let's say the comparison is between code using only setprecision(n) (no fixed, no scientific, etc.). – YelizavetaYR Sep 16 '14 at 22:17
  • Right, and as I understand it now, setprecision starts counting the precision not from the decimal point as I suspected, but from the first actual digit of the number. So if it was 0.67 it would start from the 6, whereas if it was 1.67 it would start from the 1 (see the quick check below). – YelizavetaYR Sep 16 '14 at 22:20
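
A minimal check of that reading (values chosen arbitrarily):

#include <iostream>
#include <iomanip>

int main() {
    std::cout << std::setprecision(2);
    std::cout << 0.6789 << '\n';  // counting starts at the 6 -> prints 0.68
    std::cout << 1.6789 << '\n';  // counting starts at the 1 -> prints 1.7
}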

n starts from the first meaningful (non-zero) digit.
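
A short sketch illustrating this (values chosen arbitrarily): leading zeros are not counted, but digits before the decimal point are.

#include <iostream>
#include <iomanip>

int main() {
    std::cout << std::setprecision(2);
    std::cout << 0.0012345 << '\n';  // leading zeros not counted -> prints 0.0012
    std::cout << 34.6789 << '\n';    // both counted digits sit before the decimal -> prints 35
}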

rick
  • So if it's less than one, e.g. 0.xxxx, it will start at the first non-zero digit after the decimal point? Whereas if the output were 34.xxxx it would start at the 3 and not even reach the decimal with setprecision(2)? – YelizavetaYR Sep 16 '14 at 22:14