Summing these numbers gives different results in .NET Core / C# than on other compilers:
3987908.698692091 + 92933945.11382028 + 208218.11919727124 + 61185833.06829034
.NET Core / C# : 158315904.99999997
Others: 158315905
Clearly the .NET Core / C# result deviates.
Here is the code in C#:
using System;
using System.Linq; // Enumerable.Sum lives here

double[] no = {
    3987908.698692091,
    92933945.11382028,
    208218.11919727124,
    61185833.06829034
};
// Prints 158315904.99999997 on .NET Core
Console.WriteLine("{0}", no.Sum());
Here is the code in C++:
#include <iostream>
#include <iomanip>
#include <vector>
using namespace std;

// Plain left-to-right accumulation, same order as the C# version
double sum(vector<double> &fa)
{
    double sum = 0.0;
    for (double f : fa)
        sum = sum + f;
    return sum;
}

int main()
{
    vector<double> no = {
        3987908.698692091,
        92933945.11382028,
        208218.11919727124,
        61185833.06829034
    };
    cout << setprecision(16);
    // Prints: sum: 158315905
    cout << "sum: " << sum(no) << " \n";
}
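To compare the two outputs on equal footing, here is a small diagnostic sketch (my own; the class name SumDiagnostic is made up) that prints the running total with C#'s round-trip "R" format, then the final sum rounded to 16 significant digits with "G16", mirroring the C++ setprecision(16) call:

using System;

class SumDiagnostic
{
    static void Main()
    {
        double[] no = {
            3987908.698692091,
            92933945.11382028,
            208218.11919727124,
            61185833.06829034
        };
        double sum = 0.0;
        foreach (double f in no)
        {
            sum += f;
            // "R" shows enough digits to reconstruct the double exactly
            Console.WriteLine(sum.ToString("R"));
        }
        // "G16" rounds to 16 significant digits, as setprecision(16) does;
        // I would expect this to display 158315905
        Console.WriteLine(sum.ToString("G16"));
    }
}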
PS: Using decimal also gives the same outcome. I believe the Mono C# compiler might give the same result as the C++ one.
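For reference, the decimal attempt looked roughly like this (a sketch; the class name DecimalSum is made up). Since decimal represents each of these literals exactly, the value it prints, 158315904.99999998224, is the mathematically exact total, which is itself slightly below 158315905:

using System;
using System.Linq; // Sum() over decimal[]

class DecimalSum
{
    static void Main()
    {
        // Each literal has far fewer than decimal's 28-29 significant
        // digits, so every value and the running sum are stored exactly.
        decimal[] no = {
            3987908.698692091m,
            92933945.11382028m,
            208218.11919727124m,
            61185833.06829034m
        };
        // Prints 158315904.99999998224 -- the exact sum
        Console.WriteLine(no.Sum());
    }
}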
Is there a way to fix this deviation, either with compiler options or somehow within C#?
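In case it helps frame answers, here is a compensated (Kahan) summation sketch, assuming reordering/compensation is the kind of fix being suggested (the KahanDemo/KahanSum names are mine). It trims accumulated rounding error, but since even the exact sum above falls short of 158315905, I do not expect it to print 158315905 by itself:

using System;

class KahanDemo
{
    // Kahan compensated summation: carries the low-order bits that a
    // naive running sum would discard.
    static double KahanSum(double[] values)
    {
        double sum = 0.0;
        double c = 0.0;                // running compensation
        foreach (double v in values)
        {
            double y = v - c;          // apply compensation to the next term
            double t = sum + y;        // low-order bits of y may be lost here
            c = (t - sum) - y;         // recover the lost bits
            sum = t;
        }
        return sum;
    }

    static void Main()
    {
        double[] no = {
            3987908.698692091,
            92933945.11382028,
            208218.11919727124,
            61185833.06829034
        };
        Console.WriteLine(KahanSum(no));
    }
}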