
Say I have the following code:

double factor = 1; // set to 1, 10, 100, or 1000 in each run below
double num = 4.35;
BigDecimal y = new BigDecimal(num);
BigDecimal n = new BigDecimal(factor);
BigDecimal asBigDecimal = y.multiply(n);
double asDouble = num * factor;
System.out.println("Double: " + asDouble + "\tBigDecimal: " + asBigDecimal);

This is what happens when I set factor to each of the following:

factor = 1:    Double: 4.35 BigDecimal: 4.3499999999999996447286321199499070644378662109375
factor = 10:   Double: 43.5 BigDecimal: 43.4999999999999964472863211994990706443786621093750
factor = 100:  Double: 434.99999999999994   BigDecimal: 434.9999999999999644728632119949907064437866210937500
factor = 1000: Double: 4350.0   BigDecimal: 4349.9999999999996447286321199499070644378662109375000

Also, when I run System.out.print(4.35 / 10); as a separate program, I get 0.43499999999999994 in the console. Why does multiplying by 1, 10, and 1000 give rounded answers (as doubles) in the console? I understand the basics of floating-point precision and that 4.35 cannot be expressed exactly in binary, so why is 4.35 printed to the console (asDouble)? Why doesn't multiplying by 100 or dividing by 10 automatically round?
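A minimal standalone version of the snippet (with factor fixed at 100 for illustration; the other factors behave as listed above):

```java
import java.math.BigDecimal;

public class Demo {
    public static void main(String[] args) {
        double num = 4.35;
        double factor = 100;                    // also try 1, 10, 1000
        System.out.println(num * factor);       // 434.99999999999994
        // Exact product of the two stored doubles:
        System.out.println(new BigDecimal(num).multiply(new BigDecimal(factor)));
        System.out.println(4.35 / 10);          // 0.43499999999999994
    }
}
```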

null

1 Answer


There are a couple of things going on. First, there is rounding in binary, and then, there is rounding in decimal.

Look at the binary representations of those BigDecimal values (I used my Decimal/Binary Converter):

factor = 1:    100.0101100110011001100110011001100110011001100110011         (52 bits)
factor = 10:   101011.011111111111111111111111111111111111111111111111       (53 bits)
factor = 100:  110110010.11111111111111111111111111111111111111111111011     (56 bits)
factor = 1000: 1000011111101.1111111111111111111111111111111111111111100111  (59 bits)
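The same bit patterns can be inspected directly in Java with Double.doubleToLongBits; this is a sketch of my own, not part of the original answer:

```java
import java.math.BigDecimal;

public class BitDump {
    // Print the raw IEEE-754 fields of a double: sign, unbiased exponent,
    // and the 52 explicitly stored significand bits (the leading 1 bit is
    // implicit and not stored).
    static void dump(String label, double d) {
        long bits = Double.doubleToLongBits(d);
        long sign = bits >>> 63;
        long exp  = ((bits >>> 52) & 0x7FFL) - 1023;   // remove the bias
        long frac = bits & 0x000FFFFFFFFFFFFFL;        // low 52 bits
        String fracBits =
            String.format("%52s", Long.toBinaryString(frac)).replace(' ', '0');
        System.out.println(label + " sign=" + sign + " exp=" + exp
                + " significand=1." + fracBits);
    }

    public static void main(String[] args) {
        dump("4.35      ", 4.35);
        dump("4.35 * 100", 4.35 * 100);
        // The exact decimal value each double actually holds:
        System.out.println(new BigDecimal(4.35));
        System.out.println(new BigDecimal(4.35 * 100));
    }
}
```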

The results for factors 1 and 10 aren't rounded in binary; they fit in 53 or fewer significant bits. When printed, Java's Double.toString emits just enough decimal digits to uniquely identify each double, so they display as 4.35 and 43.5, respectively.
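That shortest-round-trip printing rule is why 4.35 prints as 4.35 even though the stored value is inexact; a quick illustration:

```java
import java.math.BigDecimal;

public class ShortestPrint {
    public static void main(String[] args) {
        double d = 4.35;
        // Shortest decimal string that parses back to the same double:
        System.out.println(Double.toString(d));   // 4.35
        // The value actually stored in d:
        System.out.println(new BigDecimal(d));
        // The round trip is guaranteed by Double.toString's contract:
        System.out.println(Double.parseDouble(Double.toString(d)) == d); // true
    }
}
```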

The results for factors 100 and 1000 ARE rounded. The factor 100 result is rounded down to this value, since bit 54 is 0:

factor = 100:  110110010.11111111111111111111111111111111111111111111        (53 bits)

In decimal, that is 434.99999999999994315658113919198513031005859375. Rounded to 17 digits it's 434.99999999999994.
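You can confirm that rounded-down value directly, since new BigDecimal(double) exposes the exact value a double holds:

```java
import java.math.BigDecimal;

public class RoundedDown {
    public static void main(String[] args) {
        // The double product already holds the 53-bit rounded result exactly.
        System.out.println(new BigDecimal(4.35 * 100));
        // 434.99999999999994315658113919198513031005859375
        System.out.println(4.35 * 100);
        // 434.99999999999994
    }
}
```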

The factor 1000 result is rounded up to this value, since bits 54 and beyond are > 1/2 ULP:

factor = 1000: 1000011111110

That is 4350.
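So rounding up happened to land on a value that is exactly representable, which is why it prints with no error digits at all; a quick check:

```java
import java.math.BigDecimal;

public class RoundedUp {
    public static void main(String[] args) {
        // The rounded-up product is exactly the integer 4350.
        System.out.println(4.35 * 1000 == 4350.0);        // true
        System.out.println(new BigDecimal(4.35 * 1000));  // 4350
    }
}
```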

Rick Regan
  • Marking this as the answer, but one question: what is so special about the 54th bit (and beyond)? I thought only 52 bits were used to store the fractional part of a double. – null Sep 29 '14 at 19:05
  • @Kootling 52 bits are stored explicitly, but there is an implicit leading 1 bit – Rick Regan Sep 29 '14 at 19:28
  • What is that 1 bit used for? Also, does this mean the rounding occurs at the 54th bit? Meaning, if the 54th bit is a 1, the number would round up and if it was a 0, it would round down? – null Sep 29 '14 at 19:30
  • @Kootling The 1 is just part of the number. Numbers are normalized, meaning the first bit is always 1, so no need to store it. (Look up the IEEE format for details.) Regarding bit 54: yes, round down if 0, round up if 1 and there's another 1 bit beyond it (meaning it's more than halfway). There is also a case where bit 54 is 1 and all bits beyond are 0, which represents a halfway case that is rounded up if bit 53 is a 1, or down if bit 53 is 0. – Rick Regan Sep 30 '14 at 12:03
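The round-to-nearest, ties-to-even rule described in the last comment can be demonstrated with hexadecimal floating-point literals; these particular constants are my own illustration, not from the thread:

```java
public class TiesToEven {
    public static void main(String[] args) {
        // Exactly halfway between 1.0 and the next double, and 1.0's last
        // significand bit is 0 (even): round down.
        System.out.println(1.0 + 0x1p-53 == 1.0);                             // true
        // More than halfway (extra 1 bits beyond the halfway point): round up.
        System.out.println(1.0 + 0x1.0000001p-53 == Math.nextUp(1.0));        // true
        // Exactly halfway, but the lower neighbor's last bit is 1 (odd):
        // round up to the even neighbor.
        System.out.println(1.0 + 0x1.8p-52 == Math.nextUp(Math.nextUp(1.0))); // true
    }
}
```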