I'm literally just doing a multiplication of two floats. How come these statements produce different results? Should I even be using floats?
500,000.00 * 0.001660 = 830
> How come these statements produce different results?
Because floating-point arithmetic is not exact, and apparently you were not printing the multiplier precisely enough (i.e. with a sufficient number of decimal digits). It wasn't `0.00166`, but something that merely looked like `0.00166` once rounded for display.
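To illustrate the mechanism (the division below is invented for illustration; your actual source expression may differ):

```objc
// Hypothetical illustration: a rate computed by division, not set from a literal.
float rate = 0.019917f / 12.0f;    // exact quotient is 0.00165975
NSLog(@"%f",   rate);              // 0.001660 — looks like 0.00166
NSLog(@"%.8f", rate);              // 0.00165975 — the value actually stored
NSLog(@"%.3f", 500000.00f * rate); // 829.875, not 830
```

The default `%f` rounds to six decimal places, which is exactly enough to disguise the real value.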
> Should I even be using floats?
No. For money, use integers and treat them as fixed-point rational numbers. (The results still aren't always exact, but they are significantly better and less error-prone.)
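A minimal sketch of that idea, assuming amounts are held as integer cents and the rate is scaled by a fixed factor (the factor of 1,000,000 is an arbitrary illustrative choice):

```objc
long long principalCents = 50000000LL; // $500,000.00 held as cents
long long rateMicro      = 1660LL;     // 0.001660 scaled by 1,000,000
// Multiply first, then divide by the scale, rounding to the nearest cent.
long long interestCents  = (principalCents * rateMicro + 500000LL) / 1000000LL;
NSLog(@"%lld.%02lld", interestCents / 100, interestCents % 100); // 830.00
```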
You didn't show how you initialized `periodicInterest`, and presumably you think you set it to `0.00166`, but in fact the error in your output is large enough that you must not have explicitly initialized it with `periodicInterest = 0.00166`. It must be closer to `0.00165975`, and the difference between `0.00166` and `0.00165975` is definitely too large to be a single floating-point rounding error.
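You can sanity-check that deduction by back-solving from the outputs (assuming the mismatched result you saw was 829.875, which is what a multiplier of 0.00165975 produces):

```objc
NSLog(@"%.8f", 830.0   / 500000.0); // 0.00166000 — the multiplier you expected
NSLog(@"%.8f", 829.875 / 500000.0); // 0.00165975 — the multiplier you actually had
```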
Assuming you are working with monetary quantities, you should use `NSDecimalNumber` or `NSDecimal`.
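For example, your original multiplication done with `NSDecimalNumber`:

```objc
NSDecimalNumber *principal = [NSDecimalNumber decimalNumberWithString:@"500000.00"];
NSDecimalNumber *rate      = [NSDecimalNumber decimalNumberWithString:@"0.001660"];
NSDecimalNumber *interest  = [principal decimalNumberByMultiplyingBy:rate];
NSLog(@"%@", interest); // 830 — decimal arithmetic, no binary rounding surprise
```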
One non-obvious benefit of using `NSDecimalNumber` is that it works with `NSNumberFormatter`, so you can let Apple take care of formatting currencies for all sorts of foreign locales.
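For example (the locale is arbitrary):

```objc
NSDecimalNumber *amount = [NSDecimalNumber decimalNumberWithString:@"830"];
NSNumberFormatter *formatter = [[NSNumberFormatter alloc] init];
formatter.numberStyle = NSNumberFormatterCurrencyStyle;
formatter.locale = [NSLocale localeWithLocaleIdentifier:@"fr_FR"]; // example locale
NSLog(@"%@", [formatter stringFromNumber:amount]); // e.g. "830,00 €"
```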
In response to the comments “`periodicInterest` is clearly not a monetary quantity” and “decimal is no more free of error when dividing by 12 than binary is”: for inexact quantities, I can think of two concerns.
One concern is using sufficient precision to give accurate results. `NSDecimalNumber` is a floating-point number with 38 digits of precision and an exponent in the range −128…127. That is more than twice the number of decimal digits an IEEE `double` can store. The exponent range is less than that of a `double`, but that's unlikely to matter in financial computing. So `NSDecimalNumber`s can definitely result in smaller error than `float`s or `double`s, even though none of them can store 1/12 exactly.
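For example, when you do divide by 12, `NSDecimalNumber` lets you say exactly how the inexact quotient should be rounded (the scale of 8 digits and the annual rate here are illustrative choices):

```objc
NSDecimalNumberHandler *behavior =
    [NSDecimalNumberHandler decimalNumberHandlerWithRoundingMode:NSRoundPlain
                                                           scale:8 // keep 8 decimal places
                                                raiseOnExactness:NO
                                                 raiseOnOverflow:YES
                                                raiseOnUnderflow:YES
                                             raiseOnDivideByZero:YES];
NSDecimalNumber *annualRate  = [NSDecimalNumber decimalNumberWithString:@"0.019917"];
NSDecimalNumber *twelve      = [NSDecimalNumber decimalNumberWithString:@"12"];
NSDecimalNumber *monthlyRate = [annualRate decimalNumberByDividingBy:twelve
                                                        withBehavior:behavior];
NSLog(@"%@", monthlyRate); // 0.00165975
```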
The other concern is matching the results computed by some other system, like your bank, your broker, or the NYSE. In that case, you need to figure out how that other system stores numbers and computes with them. If the other system uses a decimal format (which is likely in the financial sector), then `NSDecimalNumber` will probably be useful.
“Wouldn't it be more efficient to use primitive types to do floating-point arithmetic, especially thousands [of operations] in real time?” Arithmetic on primitive types is far faster than arithmetic on `NSDecimalNumber`s. I haven't measured it, but a factor of 100 would not surprise me.
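If the difference matters to you, measure it; here's a crude sketch of such a measurement (the iteration counts are arbitrary, and absolute times will vary by machine and compiler flags):

```objc
NSDate *start = [NSDate date];
double d = 1.0;
for (int i = 0; i < 1000000; i++) {
    d *= 1.000001; // one primitive multiply per iteration
}
NSLog(@"double, 1,000,000 multiplies: %.4f s (result %g)",
      -[start timeIntervalSinceNow], d);

start = [NSDate date];
NSDecimalNumber *n = [NSDecimalNumber one];
NSDecimalNumber *m = [NSDecimalNumber decimalNumberWithString:@"1.000001"];
@autoreleasepool {
    for (int i = 0; i < 100000; i++) { // note: 10× fewer iterations
        n = [n decimalNumberByMultiplyingBy:m];
    }
}
NSLog(@"NSDecimalNumber, 100,000 multiplies: %.4f s",
      -[start timeIntervalSinceNow]);
```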
You have to strike a balance between your requirements. If decimal accuracy is paramount (as it often is in financial programming), you must sacrifice performance for accuracy. If decimal accuracy is not so important, you can consider carefully using a primitive type, but you should be aware of the accuracy you're sacrificing. Even then, the precision of a `float` is so limited (usually only 7 significant decimal digits) that you should probably be using `double` (at least 15, usually 16 significant decimal digits).
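A quick way to see the difference (the value is arbitrary):

```objc
float  f = 123456.789f; // float: about 7 significant decimal digits
double d = 123456.789;  // double: about 15–16 significant decimal digits
NSLog(@"%.4f", f);      // 123456.7891 — the 8th significant digit is already noise
NSLog(@"%.4f", d);      // 123456.7890
```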
If you need to perform millions of arithmetic operations per second with true decimal accuracy, you might be able to do it using `double`s, if you are an IEEE 754 expert capable of analyzing your code to figure out where errors are introduced and how to eliminate them. Few people have this level of expertise. (I don't claim to.) You must also understand how your compiler turns your Objective-C code into machine instructions.
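To give a flavor of what that analysis involves, compensated (Kahan) summation is one classic technique for keeping rounding error from accumulating; a minimal sketch:

```objc
// Summing 0.1 ten million times: the naive running sum drifts visibly,
// while Kahan summation carries the lost low-order bits forward.
double naive = 0.0;
double sum = 0.0, compensation = 0.0;
for (int i = 0; i < 10000000; i++) {
    naive += 0.1;                  // rounding error accumulates silently

    double y = 0.1 - compensation; // correct the incoming term
    double t = sum + y;            // big + small: low bits of y are lost here
    compensation = (t - sum) - y;  // algebraically zero; captures the lost bits
    sum = t;
}
NSLog(@"naive: %.6f", naive); // e.g. 999999.999839 — the drift is visible
NSLog(@"kahan: %.6f", sum);   // 1000000.000000
```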
Anyway, perhaps you are just writing a casual app to compute a rough estimate of net present value or future value. In that case, using `double` would probably suffice, but using `NSDecimalNumber` would probably also be sufficiently fast. Without knowing more about the app you're writing, I can't give you more specific advice.
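For instance, a rough future-value estimate of the kind where `double` is plenty (all the inputs are made up):

```objc
#include <math.h>

double principal   = 500000.00;
double annualRate  = 0.019917;          // hypothetical annual rate
double monthlyRate = annualRate / 12.0; // ≈ 0.00165975
int    months      = 120;
// Future value with monthly compounding: FV = P * (1 + r)^n
double futureValue = principal * pow(1.0 + monthlyRate, months);
NSLog(@"future value ≈ %.2f", futureValue);
```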