It seems that all those posts get close, but don't quite explain the crux of the issue. It's not that `decimal` stores values more precisely or that `double` has more digits or something like that. They each store values differently.
The `decimal` type stores values in decimal form, like `1234.567`. The `double` (and `float`) types store values in binary form, like `1101010.0011001`. (They also have limits on how many digits they can store, but that's not relevant here, or ever: if you feel like you're running out of digits of precision, you're probably doing something wrong.)
Note that there are certain values that cannot be stored precisely in either notation, because they would require an infinite number of digits after the decimal point, like `1/3` or `1/12`. Such values get rounded a bit when stored, which is what you're seeing here.
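A quick illustration (assuming C#, since these type names are C#'s; just a minimal sketch):

```csharp
using System;

// Neither type can represent 1/3 exactly; both store a rounded approximation.
double  dbl = 1.0 / 3.0;   // rounded binary fraction
decimal dec = 1m / 3m;     // rounded decimal fraction

Console.WriteLine(dbl);    // ~0.3333333333333333 (rounded)
Console.WriteLine(dec);    // ~0.3333333333333333333333333333 (rounded)
```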
The advantage of `decimal` in financial calculations is that it can store decimal fractions precisely, whereas `double` can't. For example, `0.1` can be stored precisely in `decimal` but not in `double`. Those are the kinds of values that money amounts usually take: you never need to store 2/3 of a dollar, you need exactly 0.66 dollars. Human currencies are decimal-based, so the `decimal` type can store them well.
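The classic demonstration of this difference, again as a small C# sketch:

```csharp
using System;

// 0.1 and 0.2 have no exact binary representation, so double accumulates error;
// decimal stores them exactly.
double  d = 0.1 + 0.2;
decimal m = 0.1m + 0.2m;

Console.WriteLine(d == 0.3);    // False - d is actually 0.30000000000000004
Console.WriteLine(m == 0.3m);   // True  - decimal holds 0.3 exactly
```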
In addition, adding and subtracting decimal values works flawlessly with the `decimal` type too. And that's the most common operation in financial calculations, so it's easier to program that way.
Multiplying decimal values works pretty well too, although it can increase the number of decimal places needed to keep the result exact.
But dividing is very risky, because most values obtained by division can't be stored precisely, so a rounding error will occur.
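To make this concrete, a hedged C# sketch (the price and tax rate are invented example values):

```csharp
using System;

decimal price = 19.99m;
decimal rate  = 0.0825m;

// Multiplication stays exact, but the result carries more decimal places:
decimal tax = price * rate;      // 1.649175 - exact, 6 decimal places

// Division usually cannot be exact and gets rounded at decimal's precision limit:
decimal third = 100m / 3m;       // 33.3333... (rounded, not exactly 100/3)

Console.WriteLine(tax);
Console.WriteLine(third);
Console.WriteLine(third * 3m);   // something like 99.999999999999999999999999999, not 100
```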
At the end of the day, both `double` and `decimal` can be used to store monetary values, you just need to be very careful about their limitations. For a `double` type you need to round the result after every calculation, even addition and subtraction. And whenever you display values to the user, you need to explicitly format them to have a certain number of decimal digits. In addition, when comparing numbers, take care that you compare only the first X decimal digits (usually 2 or 4).
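Here's what that discipline might look like with `double`, as a rough sketch (the 2-digit rounding and the 0.005 comparison tolerance are assumptions matching a 2-decimal currency):

```csharp
using System;

double balance = 0.0;
for (int i = 0; i < 10; i++)
{
    // Round after every calculation, even a simple addition:
    balance = Math.Round(balance + 0.1, 2);
}

// Compare only the first 2 decimal digits instead of using ==:
bool isOneDollar = Math.Abs(balance - 1.00) < 0.005;

// Explicitly format for display:
Console.WriteLine(balance.ToString("F2"));   // "1.00"
Console.WriteLine(isOneDollar);              // True
```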
For a `decimal` type some of these restrictions can be relaxed, since you know that your monetary value is stored precisely. You can usually skip rounding after addition and subtraction. If you only store X decimal digits in the first place, you don't need to worry about explicit display formatting and comparison. It does make things considerably easier. But you still need to round after multiplication and division.
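The remaining rounding step with `decimal`, sketched in C# (the amounts are made up):

```csharp
using System;

decimal total = 21.10m;

// Splitting a bill three ways: the division produces a long tail of digits.
decimal share = total / 3m;

// So round back to 2 decimal places after dividing (or multiplying):
decimal rounded = Math.Round(share, 2, MidpointRounding.ToEven);

Console.WriteLine(share);     // 7.0333333333333333333333333333...
Console.WriteLine(rounded);   // 7.03
```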
There is one more elegant approach not discussed here: change your monetary units. Instead of storing dollar values, store cent values. Or, if you work with 4 decimal digits, store 1/100ths of a cent. Then you can use `int` or `long` for everything!
This has most of the same advantages as `decimal` (values stored precisely, addition/subtraction works precisely), but the places where you need to round things become even more obvious. A slight drawback, however, is that formatting such values for display becomes a bit more complicated. On the other hand, if you forget to do it, that too will be obvious. This is my preferred approach so far.
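A minimal sketch of the integer-cents approach (the item price and quantity are invented example values):

```csharp
using System;

// Store money as cents in a long; all arithmetic is exact integer math.
long priceCents    = 1999;                     // $19.99
long quantity      = 3;
long subtotalCents = priceCents * quantity;    // 5997 - exact

// Rounding only happens where you divide, and it's hard to miss:
long shareCents = subtotalCents / 3;           // integer division truncates

// The extra work is converting back to dollars for display:
Console.WriteLine($"{subtotalCents / 100m:F2}");   // 59.97
Console.WriteLine($"{shareCents / 100m:F2}");      // 19.99
```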