11

I've stumbled onto an odd NSDecimalNumber behavior: for some values, invocations of integerValue, longValue, longLongValue, etc., return an unexpected value. Example:

let v = NSDecimalNumber(string: "9.821426272392280061")
v                  // evaluates to 9.821426272392278
v.intValue         // evaluates to 9
v.integerValue     // evaluates to -8
v.longValue        // evaluates to -8
v.longLongValue    // evaluates to -8

let v2 = NSDecimalNumber(string: "9.821426272392280060")
v2                  // evaluates to 9.821426272392278
v2.intValue         // evaluates to 9
v2.integerValue     // evaluates to 9
v2.longValue        // evaluates to 9
v2.longLongValue    // evaluates to 9

This is using Xcode 7.3; I haven't tested with earlier versions of the frameworks.

I've seen a bunch of discussion about unexpected rounding behavior with NSDecimalNumber, as well as admonishments not to initialize it with the inherited NSNumber initializers, but I haven't seen anything about this specific behavior. Nevertheless there are some rather detailed discussions about internal representations and rounding which may contain the nugget I seek, so apologies in advance if I missed it.

EDIT: It's buried in the comments, but I've filed this as issue #25465729 with Apple. OpenRadar: http://www.openradar.me/radar?id=5007005597040640.

EDIT 2: Apple has marked this as a dup of #19812966.

Cora Middleton
  • In case it's relevant, the hex representations of these values are `0xFFFFFFF8` and `0x9`. Or in binary, `1111 1111 1111 1111 1111 1111 1111 1000` and `1001`. – nhgrif Mar 31 '16 at 00:57
  • Thanks @nhgrif; I've updated the title / body to reflect this. – Cora Middleton Mar 31 '16 at 01:10
  • [The reference](https://developer.apple.com/library/mac/documentation/Cocoa/Reference/Foundation/Classes/NSNumber_Class/) for `NSNumber` does include a warning: "Because numeric types have different storage capabilities, attempting to initialize with a value of one type and access the value of another type may produce an erroneous result" – Code Different Mar 31 '16 at 01:55
  • Have you tried this in Objective-C? Is this a problem with `NSDecimalNumber` or with Swift? – nhgrif Mar 31 '16 at 12:02
  • Also, that's not the two's complement of the value.... – nhgrif Mar 31 '16 at 12:10
  • Same error in Objective-C ... – Martin R Mar 31 '16 at 16:01
  • @nhgrif Oh, duh. Too many 8s and 9s and 7s. You're correct; edited to fix. – Cora Middleton Mar 31 '16 at 16:29

2 Answers

2

Since you already know the problem is due to "too high precision", you can work around it by rounding the decimal number first:

let b = NSDecimalNumber(string: "9.999999999999999999")
print(b, "->", b.int64Value)
// 9.999999999999999999 -> -8

let truncateBehavior = NSDecimalNumberHandler(roundingMode: .down,
                                              scale: 0,
                                              raiseOnExactness: true,
                                              raiseOnOverflow: true,
                                              raiseOnUnderflow: true,
                                              raiseOnDivideByZero: true)
let c = b.rounding(accordingToBehavior: truncateBehavior)
print(c, "->", c.int64Value)
// 9 -> 9
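
If you need this in more than one place, the same behavior object can be packaged as a small helper. This is a hypothetical extension (the name truncatedInt64Value is mine, not Foundation's), built from the same handler as truncateBehavior above:

import Foundation

extension NSDecimalNumber {
    // Hypothetical helper: round down to scale 0 first, so the value
    // handed to int64Value always fits comfortably in an Int64.
    // Note that .down rounds toward negative infinity, so negative
    // values round away from zero.
    var truncatedInt64Value: Int64 {
        let behavior = NSDecimalNumberHandler(roundingMode: .down,
                                              scale: 0,
                                              raiseOnExactness: true,
                                              raiseOnOverflow: true,
                                              raiseOnUnderflow: true,
                                              raiseOnDivideByZero: true)
        return rounding(accordingToBehavior: behavior).int64Value
    }
}

NSDecimalNumber(string: "9.999999999999999999").truncatedInt64Value  // 9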

If you want to use int64Value (i.e. -longLongValue), avoid using numbers with more than 62 bits of precision, i.e. avoid having more than 18 digits in total. The reasons are explained below.


NSDecimalNumber is internally represented as a Decimal structure:

typedef struct {
      signed int _exponent:8;
      unsigned int _length:4;
      unsigned int _isNegative:1;
      unsigned int _isCompact:1;
      unsigned int _reserved:18;
      unsigned short _mantissa[NSDecimalMaxSize];  // NSDecimalMaxSize = 8
} NSDecimal;

This can be obtained using .decimalValue, e.g.

let v2 = NSDecimalNumber(string: "9.821426272392280061")
let d = v2.decimalValue
print(d._exponent, d._mantissa, d._length)
// -18 (30717, 39329, 46888, 34892, 0, 0, 0, 0) 4

This means 9.821426272392280061 is internally stored as 9821426272392280061 × 10⁻¹⁸; note that 9821426272392280061 = 34892 × 65536³ + 46888 × 65536² + 39329 × 65536 + 30717.
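
You can check that limb arithmetic directly (a quick verification, with the limb values hard-coded from the output above):

var mantissa: UInt64 = 0
for limb in [34892, 46888, 39329, 30717] as [UInt64] {
    // Each element of _mantissa is a 16-bit limb; fold them together
    // most significant first.
    mantissa = (mantissa << 16) + limb
}
print(mantissa)  // 9821426272392280061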

Now compare with 9.821426272392280060:

let v2 = NSDecimalNumber(string: "9.821426272392280060")
let d = v2.decimalValue
print(d._exponent, d._mantissa, d._length)
// -17 (62054, 3932, 17796, 3489, 0, 0, 0, 0) 4

Note that the exponent is reduced to -17, meaning the trailing zero is omitted by Foundation.


Knowing the internal structure, I now make a claim: the bug is because 34892 ≥ 32768. Observe:

let a = NSDecimalNumber(decimal: Decimal(
    _exponent: -18, _length: 4, _isNegative: 0, _isCompact: 1, _reserved: 0,
    _mantissa: (65535, 65535, 65535, 32767, 0, 0, 0, 0)))
let b = NSDecimalNumber(decimal: Decimal(
    _exponent: -18, _length: 4, _isNegative: 0, _isCompact: 1, _reserved: 0,
    _mantissa: (0, 0, 0, 32768, 0, 0, 0, 0)))
print(a, "->", a.int64Value)
print(b, "->", b.int64Value)
// 9.223372036854775807 -> 9
// 9.223372036854775808 -> -9

Note that 32768 × 65536³ = 2⁶³ is exactly the smallest value that overflows a signed 64-bit integer. Therefore, I suspect that the bug is due to Foundation implementing int64Value as (1) convert the mantissa directly into an Int64, and then (2) divide by 10^|exponent|.

In fact, if you disassemble Foundation.framework, you will find that it is basically how int64Value is implemented (this is independent of the platform's pointer width).
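
Here is a minimal sketch of that suspected algorithm (my reconstruction, not Apple's actual source). Folding the 16-bit limbs together with wrapping 64-bit arithmetic is exactly where the sign flips:

import Foundation

func suspectedInt64Value(_ number: NSDecimalNumber) -> Int64 {
    let d = number.decimalValue
    let limbs = [d._mantissa.0, d._mantissa.1, d._mantissa.2, d._mantissa.3,
                 d._mantissa.4, d._mantissa.5, d._mantissa.6, d._mantissa.7]
    // &* and &+ wrap silently on overflow, which is how 9.82… goes negative.
    var result: Int64 = 0
    for limb in limbs.reversed() {
        result = result &* 65536 &+ Int64(limb)
    }
    // Scale down by 10^|exponent| (assuming a fractional value, exponent < 0).
    for _ in 0..<max(0, -Int(d._exponent)) {
        result /= 10
    }
    return d._isNegative == 1 ? -result : result
}

let v = NSDecimalNumber(string: "9.821426272392280061")
print(suspectedInt64Value(v), v.int64Value)  // -8 -8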

But why isn't int32Value affected? Because internally it is simply implemented as Int32(self.doubleValue), so no overflow can occur. Unfortunately, a double has only 53 bits of precision, so Apple has no choice but to implement int64Value (which requires 64 bits of precision) without floating-point arithmetic.
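
You can see the double-based path give the expected answer (a minimal illustration of the reported behavior, not Foundation's actual source):

import Foundation

let v = NSDecimalNumber(string: "9.821426272392280061")
print(Int32(v.doubleValue))  // 9
print(v.int32Value)          // 9, consistent with going through Double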

kennytm
0

I'd file a bug with Apple if I were you. The docs say that NSDecimalNumber can represent any value up to 38 digits long. NSDecimalNumber inherits those properties from NSNumber, and the docs don't explicitly say what conversion is involved at that point, but the only reasonable interpretation is that if the number is roundable to and representable as an Int, then you get the correct answer.

It looks to me like a bug in handling the sign-extension during the conversion somewhere, since intValue is 32-bit and integerValue is 64-bit (in Swift).

Ewan Mellor