Take the following piece of code:
NSError *error;
NSString *myJSONString = @"{ \"foo\" : 0.1}";
NSData *jsonData = [myJSONString dataUsingEncoding:NSUTF8StringEncoding];
NSDictionary *results = [NSJSONSerialization JSONObjectWithData:jsonData options:0 error:&error];
My question is: is results[@"foo"] an NSDecimalNumber, or something with finite binary precision like a double or a float? Basically, I have an application that requires the lossless accuracy that comes with an NSDecimalNumber, and I need to ensure that the JSON deserialization doesn't introduce rounding through doubles/floats etc.
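One runtime check I can do (just a diagnostic sketch; the concrete class that comes back is an implementation detail, and foo here is a variable name I've made up) is to look at the class and objCType of the parsed number:

NSNumber *foo = results[@"foo"];
NSLog(@"class: %@", NSStringFromClass([foo class]));
NSLog(@"objCType: %s", [foo objCType]);
// Note: NSDecimalNumber is documented to report "d" for objCType as well,
// so the class check is the more informative of the two.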
For example, if it were interpreted as a float, I'd run into precision problems like this:
float baz = 0.1;
NSLog(@"baz: %.20f", baz);
// prints baz: 0.10000000149011611938
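The same issue applies to double, which I'd assume a parser is more likely to use internally than float:

double qux = 0.1;
NSLog(@"qux: %.20f", qux);
// prints qux: 0.10000000000000000555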
I've tried interpreting foo as an NSDecimalNumber and printing the result:
NSDecimalNumber *fooAsDecimal = results[@"foo"];
NSLog(@"fooAsDecimal: %@", [fooAsDecimal stringValue]);
// prints fooAsDecimal: 0.1
But then I found that calling stringValue on an NSDecimalNumber doesn't necessarily print back all the significant digits it was created from anyway, e.g.:
NSDecimalNumber *barDecimal = [NSDecimalNumber decimalNumberWithString:@"0.1000000000000000000000000000000000000000000011"];
NSLog(@"barDecimal: %@", barDecimal);
// prints barDecimal: 0.1
...so printing fooAsDecimal doesn't tell me whether results[@"foo"] was at some point rounded to finite precision by the JSON parser or not.
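One way I can think of probing this is to feed the parser a literal with more digits than a double can hold and see whether they survive (a sketch; probeJSON and friends are just names I've made up for the example):

NSError *probeError;
NSString *probeJSON = @"{ \"foo\" : 0.10000000000000000001 }";
NSData *probeData = [probeJSON dataUsingEncoding:NSUTF8StringEncoding];
NSDictionary *probeResults = [NSJSONSerialization JSONObjectWithData:probeData
                                                             options:0
                                                               error:&probeError];
// A double only holds ~15-17 significant decimal digits, so if the value
// goes through a double the trailing 1 can't survive and this prints 0.1;
// if every digit comes back, nothing was rounded on the way in.
NSLog(@"probe: %@", probeResults[@"foo"]);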
To be clear, I realise I could use a string rather than a number in the JSON representation to store the value of foo, i.e. "0.1" instead of 0.1, and then use [NSDecimalNumber decimalNumberWithString:results[@"foo"]]. But what I'm interested in is how the NSJSONSerialization class deserializes JSON numbers, so that I know whether this workaround is really necessary.
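For reference, the string-based workaround I have in mind would look roughly like this (just a sketch reusing the example value from above):

NSError *error;
NSString *jsonWithString = @"{ \"foo\" : \"0.1\" }";
NSData *data = [jsonWithString dataUsingEncoding:NSUTF8StringEncoding];
NSDictionary *parsed = [NSJSONSerialization JSONObjectWithData:data
                                                       options:0
                                                         error:&error];
// The value arrives as an NSString, so the parser never treats it as a
// number; NSDecimalNumber then parses it losslessly (up to NSDecimal's
// documented 38-digit mantissa).
NSDecimalNumber *foo = [NSDecimalNumber decimalNumberWithString:parsed[@"foo"]];
NSLog(@"foo: %@", foo);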