"it is astonishingly hard to find any example where floating point inaccuracy actually leads to a wrong result."
I would not say it is astonishingly hard. A famous real-world example, albeit not involving money, was that the Patriot missile system code accumulated a floating point rounding error of 0.000000095 seconds per second; if the system was not rebooted every five days, it would be off by a fraction of a second. Since the missiles it intercepts move at several thousand meters per second, it would miss.
At least 28 people died as a result of this floating point error.
We can demonstrate the Patriot error without putting more lives at risk. Here's a little C# program. Suppose we are adding up dimes; how many do we have to add before we get a significant error?
double sum = 0.0;   // running total of dimes, accumulated as binary doubles
long k = 0;         // how many dimes we have added so far
long report = 1;    // next count at which to print a progress line
while (true) {
    k += 1;
    sum += 0.1;
    if (k == report) {
        // k / 10.0 is the correctly rounded exact total; print how far sum has drifted from it.
        Console.WriteLine($"{k} {k / 10.0 - sum}");
        report *= 10;
    }
}
Let it run as long as you like. The output on my machine started:
1 0
10 1.11022302462516E-16
100 1.95399252334028E-14
1000 1.406874616805E-12
10000 -1.58820512297098E-10
100000 -1.88483681995422E-08
1000000 -1.33288267534226E-06
10000000 0.00016102462541312
100000000 0.0188705492764711
1000000000 1.25458218157291
10000000000 -163.12445807457
After only a hundred million additions -- so, $10M worth of dimes -- we are already off by two cents. By ten billion additions we are off by $163.12. Sure, that's a tiny error per transaction, and maybe $163.12 is not a lot of money compared to a billion dollars, but if we cannot correctly compute 100 million times 0.1 then we have no reason to have confidence in any computation that comes out of this system.
The error could be guaranteed to be zero; why would you not want the error to be zero?
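To see the point in code, here is a minimal sketch of the same loop using C#'s decimal type; 0.1 is exactly representable in decimal, so the reported difference stays exactly zero no matter how long you let it run:
decimal sum = 0.0m;
long k = 0;
long report = 1;
while (true) {
    k += 1;
    sum += 0.1m;   // 0.1m is exact in decimal, so the sum never drifts
    if (k == report) {
        Console.WriteLine($"{k} {k / 10.0m - sum}");   // always exactly zero (it may print as 0.0)
        report *= 10;
    }
}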
Exercise: You imply that you know where to insert the rounding steps to ensure that this error is eliminated. So: where do they go?
Some additional thoughts, inspired by your comment:
While I think a decimal data type is certainly required for a money class, I don't think it is enough. I think a money class should also (1) prevent adding non-money numbers, (2) prevent adding two different currencies, and (3) not allow taking powers or roots.
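As a sketch of what I mean, here is a minimal, hypothetical money type in C#; the Currency enum and the member names are my own invention, and a real implementation would also need rounding rules, formatting, and so on:
using System;

enum Currency { USD, EUR, JPY, MXN }

readonly struct Money {
    public decimal Amount { get; }
    public Currency Currency { get; }

    public Money(decimal amount, Currency currency) {
        Amount = amount;
        Currency = currency;
    }

    // (1) There is deliberately no operator for Money + decimal, so adding a
    //     bare number does not compile.
    // (2) Adding two different currencies is rejected; a fancier design could
    //     make each currency its own type and catch this at compile time.
    public static Money operator +(Money a, Money b) {
        if (a.Currency != b.Currency)
            throw new InvalidOperationException("Cannot add different currencies.");
        return new Money(a.Amount + b.Amount, a.Currency);
    }

    // Scaling by a plain number is allowed; that is how exchange rates and
    // quantities work.
    public static Money operator *(Money m, decimal factor) =>
        new Money(m.Amount * factor, m.Currency);

    // (3) No Pow or Sqrt members: "dollars squared" is not money.
}
With something like this, new Money(100m, Currency.USD) + new Money(100m, Currency.JPY) fails loudly instead of silently producing nonsense, and money + 0.1 does not even compile.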
If what you want is real-world examples of money errors involving units of measure not being trapped by the type system, there are many, many such examples.
I used to work at a company that writes software that detects software defects. One of the most magical defect checkers is the "cut and paste error" detector, and it found a defect in real-world code like this:
dollarTot = (euros1 + euros2) * dollarEuroRate;
pesoTot = (euros3 + euros4) * pesoEuroRate;
... dozens more like this...
And then later on in the code
dollarTot = (yen1 + yen2) * yenDollarRate;
pesoTot = (yen3 + yen4) * pesoEuroRate;
...
Oops.
The major international trading house that had that defect called us up and said that the beer was on them next time we were in Switzerland.
Examples like these show why financial houses are so interested in languages like F# that make it super easy to track properties like units of measure in the type system.
I did a series on my blog a few years ago about using the ML type system to find bugs when implementing virtual machines, where an integer could mean an address in any of a dozen different data structures, or an offset into one of those structures. It finds bugs fast, and the runtime overhead is minimal. Units-of-measure types are awesome, even for simple problems like making sure you don't mix up dollars with yen.
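That series used ML-style types, but even in plain C# (the language of the examples above) tiny wrapper structs buy much of the same safety; the names below are hypothetical, just to illustrate the pattern:
// Distinct wrapper types make it a compile-time error to hand an offset to
// code that expects an address, or to add two addresses together.
readonly struct HeapAddress {
    public int Value { get; }
    public HeapAddress(int value) { Value = value; }

    // Address + offset is meaningful; address + address is deliberately not defined.
    public static HeapAddress operator +(HeapAddress a, FieldOffset o) =>
        new HeapAddress(a.Value + o.Value);
}

readonly struct FieldOffset {
    public int Value { get; }
    public FieldOffset(int value) { Value = value; }
}
A method that takes a HeapAddress can then no longer be handed a bare int or a FieldOffset by mistake; the compiler rejects it.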