My unit test failed because of this, so I wonder whether I'm using the right datatype. Reading the specs of double, I'd think it should be fine, but this is what's happening:
I'm reading a string with the value 0,0175 (comma as decimal separator) from a file, then converting it to a double and multiplying it by 10000.
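For context, the conversion works roughly like this (nl-NL is an assumption on my part; any culture that uses a comma as decimal separator behaves the same here):

using System.Globalization;

// Parse "0,0175" with a comma-decimal culture (nl-NL is just an example).
// The result is the double closest to 0.0175, not exactly 0.0175.
double value = double.Parse("0,0175", CultureInfo.GetCultureInfo("nl-NL"));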
The function that does the multiplication is this:
private static double? MultiplyBy10000(double? input)
{
    if (!input.HasValue)
    {
        return null;
    }

    return input.Value * 10000;
}
Next is output from the Immediate Window:
input.Value
0.0175
input.Value*10000
175.00000000000003
And that is where my unit test fails, because I expect 175.
Is a double not accurate enough for this?
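A minimal way to see what's actually stored: the "G17" format prints enough digits to round-trip a double, so it exposes the approximation behind the literal.

// Show the stored value behind 0.0175 in full round-trip precision.
Console.WriteLine(0.0175.ToString("G17"));            // 0.017500000000000002
Console.WriteLine((0.0175 * 10000).ToString("G17"));  // 175.00000000000003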
I checked other values too:
input.Value*1
0.0175
input.Value*10
0.17500000000000002
input.Value*100
1.7500000000000002
input.Value*1000
17.5
input.Value*10000
175.00000000000003
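For comparison, a quick sketch of the same computation with decimal, which stores base-10 fractions like 0,0175 exactly (again assuming a comma-decimal culture such as nl-NL):

using System.Globalization;

// decimal holds 0.0175 exactly, so scaling by 10000 is exact too.
decimal d = decimal.Parse("0,0175", CultureInfo.GetCultureInfo("nl-NL"));
Console.WriteLine(d * 10000);  // 175.0000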
The weird thing is, I have 12 test cases (0,0155 0,0225 0,016 0,0175 0,0095 0,016 0,016 0,0225 0,0235 0,0265) and I assert 4 of these; the other 3 asserted values don't show this behaviour.
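In case the fix ends up being the assert rather than the datatype: a tolerance-based comparison would look roughly like this (a sketch using the delta overload of Assert.AreEqual, which both MSTest and NUnit's classic API provide):

double? result = MultiplyBy10000(0.0175);
// Pass if the result is within 1e-9 of 175 instead of requiring exact equality.
Assert.AreEqual(175d, result.Value, 1e-9);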