Say I have the following code:
import java.math.BigDecimal;

double factor = 1; // also set to 10, 100, and 1000 (results below)
double num = 4.35;
BigDecimal y = new BigDecimal(num);     // exact binary value of the double num
BigDecimal n = new BigDecimal(factor);  // exact binary value of the double factor
BigDecimal asBigDecimal = y.multiply(n);
double asDouble = num * factor;
System.out.println("Double: " + asDouble + "\tBigDecimal: " + asBigDecimal);
This is what happens when I set factor to each of the following:
factor = 1: Double: 4.35 BigDecimal: 4.3499999999999996447286321199499070644378662109375
factor = 10: Double: 43.5 BigDecimal: 43.4999999999999964472863211994990706443786621093750
factor = 100: Double: 434.99999999999994 BigDecimal: 434.9999999999999644728632119949907064437866210937500
factor = 1000: Double: 4350.0 BigDecimal: 4349.9999999999996447286321199499070644378662109375000
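
To check whether the long decimal comes from the multiplication itself or from the value stored in the double, I also tried the sketch below. As far as I understand the docs, BigDecimal.valueOf(double) goes through Double.toString, whereas new BigDecimal(double) takes the exact binary value:

import java.math.BigDecimal;

public class ExactValue {
    public static void main(String[] args) {
        double num = 4.35;
        // new BigDecimal(double) shows the exact binary64 value stored for 4.35
        System.out.println(new BigDecimal(num));
        // BigDecimal.valueOf(double) goes through Double.toString, which produces
        // the shortest decimal string that round-trips to the same double
        System.out.println(BigDecimal.valueOf(num));
    }
}

The first line prints the long 4.3499999999999996447286321199499070644378662109375 and the second prints just 4.35, matching the factor = 1 row above.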
Also, when I run System.out.print(4.35 / 10); as a separate program, the console shows 0.43499999999999994.

Why does multiplying by 1, 10, or 1000 print a rounded answer for the double, while multiplying by 100 or dividing by 10 does not? I understand the basics of floating-point precision, and I know that 4.35 cannot be represented exactly in binary, so why is asDouble printed to the console as exactly 4.35?
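
For reference, here is a minimal, self-contained sketch of the comparison I'm asking about. It prints each double result next to the exact binary value that result actually holds; the commented values are the outputs I reported above:

import java.math.BigDecimal;

public class RoundTrip {
    public static void main(String[] args) {
        double num = 4.35;
        // What println shows for each double result...
        System.out.println(num * 100);   // 434.99999999999994
        System.out.println(num * 1000);  // 4350.0
        System.out.println(num / 10);    // 0.43499999999999994
        // ...versus the exact binary value each result actually holds
        System.out.println(new BigDecimal(num * 100));
        System.out.println(new BigDecimal(num * 1000));
        System.out.println(new BigDecimal(num / 10));
    }
}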