Xcode 9.4.1. In the debugger console I see results that seem strange to me:

(lldb) print (double)0.07
(double) $0 = 0.070000000000000007
(lldb) print [(NSDecimalNumber*)[NSDecimalNumber decimalNumberWithString:@"0.07"] doubleValue]
(double) $1 = 0.069999999999999993

I see the same results when executing compiled code. I don't understand why the result is different when converting the literal 0.07 to a double versus converting the decimal 0.07 to a double. Why is precision lost differently?
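A minimal compiled-code sketch that reproduces this (assuming a plain Foundation command-line tool; `%.17g` prints enough digits to distinguish adjacent doubles):

#import <Foundation/Foundation.h>
#include <stdio.h>

int main(void) {
    @autoreleasepool {
        double literal = 0.07; // the compiler parses the literal to the nearest double
        double viaDecimal = [[NSDecimalNumber decimalNumberWithString:@"0.07"] doubleValue];
        printf("literal:    %.17g\n", literal);    // 0.070000000000000007
        printf("viaDecimal: %.17g\n", viaDecimal); // 0.069999999999999993
    }
    return 0;
}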

What am I missing?

jesse
  • On paper you have infinite precision, but computer calculations have limited precision. Go study numerical methods. – Cy-4AH Aug 30 '18 at 14:07
  • You just have double values from different sources, each of which interprets the number 0.07 in its own way. In the first case it's LLVM; in the second it's the implementation of `NSDecimalNumber`. That's all. I don't think it's a big deal. – Cy-4AH Aug 30 '18 at 15:32
  • Why do you use `NSDecimalNumber`, and why do you convert an `NSDecimalNumber` to a double? – Willeke Aug 31 '18 at 12:44

2 Answers


The values are calculated differently:

(lldb) p 7.0 / 100.0
(double) $0 = 0.070000000000000007
(lldb) p 7.0 / 10.0 / 10.0
(double) $1 = 0.069999999999999993
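The two results differ by one ULP (unit in the last place). You can make that visible by printing the exact bit pattern of each double with `%a` (a quick check, compilable as C or Objective-C; the hex values in the comments assume standard IEEE 754 doubles):

#include <stdio.h>

int main(void) {
    double a = 7.0 / 100.0;       // one rounding step
    double b = 7.0 / 10.0 / 10.0; // two rounding steps, one per division
    printf("%.17g  %a\n", a, a);  // 0.070000000000000007  0x1.1eb851eb851ecp-4
    printf("%.17g  %a\n", b, b);  // 0.069999999999999993  0x1.1eb851eb851ebp-4
    return 0;
}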
Willeke
  • Do you have any references explaining this? Why are they calculated this way? – jesse Aug 31 '18 at 08:09
  • I don't have any references, it's an educated guess. The compiler is parsing a string and `NSDecimalNumber` is converting a decimal number. Try `po [NSNumber numberWithDouble:0.07]`, similar effect. – Willeke Aug 31 '18 at 12:43
  • Well, `NSDecimalNumber` also gets a string as input. The compiler converts the string directly to a double, while `NSDecimalNumber` first converts it to a decimal format. But there should not be any precision loss between the string representation and the decimal format. So I'm just curious why the result is different. – jesse Aug 31 '18 at 16:19

NSDecimalNumber is designed to behave exactly as you're seeing. It does "base 10 math" to avoid this very issue: traditional binary floating-point representation can't accurately represent many of the base 10 numbers we're used to writing.
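To make that concrete, here's a sketch comparing repeated addition in the two representations (my own illustration, using NSDecimalNumber's documented `decimalNumberByAdding:` method; the double sum drifts while the decimal sum stays exact):

#import <Foundation/Foundation.h>
#include <stdio.h>

int main(void) {
    @autoreleasepool {
        // Binary floating point: each addition rounds to the nearest double.
        double d = 0.0;
        for (int i = 0; i < 100; i++) { d += 0.07; }

        // Base-10 decimal: 0.07 is stored exactly (mantissa 7, exponent -2).
        NSDecimalNumber *sum = [NSDecimalNumber zero];
        NSDecimalNumber *step = [NSDecimalNumber decimalNumberWithString:@"0.07"];
        for (int i = 0; i < 100; i++) { sum = [sum decimalNumberByAdding:step]; }

        printf("double sum:  %.17g\n", d); // not exactly 7: rounding error accumulates
        NSLog(@"decimal sum: %@", sum);    // exactly 7
    }
    return 0;
}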

The comment instructing you to "go study numerical methods" is a bit brash but it's kind of heading in the right direction. A better use of your time would be to (also) take a look at the documentation for NSDecimalNumber and think about what it does and why it exists.

I had never heard of NSDecimalNumber before a couple minutes ago, so thanks for pointing me at some new knowledge. :-)

Craig
  • Probably I was not clear enough. The question was why in one case I see the value 0.070000000000000007, while in the other case the value is 0.069999999999999993. They were both produced from the same decimal value. It seems like the compiler and the NSDecimalNumber class do the conversion differently. – jesse Aug 30 '18 at 14:36
  • Different rounding, probably. – Sulthan Aug 30 '18 at 14:40
  • In the conversion from the base-10 representation to a base-2 representation, there can sometimes be enough residual floating-point loss of precision to cause differences in representation like this. This can't be helped much, and it causes unexpected string representation differences as in https://bugs.swift.org/browse/SR-7054 – Itai Ferber Aug 30 '18 at 14:50
  • You were perfectly clear. The comments and answers you have received are directly addressing your question. When you convert 0.07 to binary to store it as a double (or when you simply cast it to double), the result is 0.0001000111101 plus a bunch more digits IN BINARY. If you convert back to base 10 from just the first eleven of those binary digits, you don't get exactly .07: you get 1/16+1/256+1/512+1/1024+1/2048, which is about 0.06982421875. It will get closer to .07 as you extend the binary fraction to more digits, but I don't think it will ever get there.... – Craig Aug 31 '18 at 19:13
  • ...If on the other hand you put ".07" into an `NSDecimalNumber` (*which was created for the very purpose of solving the problem you're observing*) it isn't stored as a binary fraction but rather as a large integer containing all the significant digits (in this case, just a 7) and an exponent (in this case -2). Since we're storing 7 and not .07, there's no loss of precision, because 7 can be easily represented in binary (111). You are correct that the "conversion is done differently". Exactly how/why it is done differently is what we have been trying to explain. :-) – Craig Aug 31 '18 at 19:22
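For the curious, the repeated-doubling conversion described above can be sketched in a few lines of C (an illustration I'm adding, not anyone's API; each step doubles the fraction and peels off the integer part as the next bit):

#include <stdio.h>

int main(void) {
    double frac = 0.07;
    printf("0.07 in binary ~ 0.");
    for (int i = 0; i < 20; i++) {
        frac *= 2.0;
        int bit = (int)frac; // integer part is the next binary digit
        printf("%d", bit);
        frac -= bit;         // keep only the fractional part
    }
    printf("...\n");         // prints 0.00010001111010111000...
    return 0;
}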