The same C# floating-point code can produce different results on different machines.
This question is not about why 0.1 + 0.2 != 0.3 or the inherent imprecision of floating-point machine numbers.
It is rather about the fact that the same C# code, compiled for the same target architecture (x64, for instance), may produce different results depending on the actual machine / processor it runs on.
This question is directly related to this one: "Is floating-point math consistent in C#? Can it be?", in which the C# problem is discussed.
For reference, this paragraph in the C# specification is explicit about that risk:
Floating-point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an "extended" or "long double" floating-point type with greater range and precision than the double type, and implicitly perform all floating-point operations using this higher precision type. Only at excessive cost in performance can such hardware architectures be made to perform floating-point operations with less precision, and rather than require an implementation to forfeit both performance and precision, C# allows a higher precision type to be used for all floating-point operations. Other than delivering more precise results, this rarely has any measurable effects.
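To make the risk concrete, here is a minimal sketch (the values and the helper `MultiplySubtract` are purely illustrative, not taken from our codebase) of the kind of expression whose low-order bits can change depending on whether an intermediate product is kept at extended precision or rounded to a 64-bit double first:

```csharp
using System;

class IntermediatePrecisionDemo
{
    // The last bits of this expression can depend on whether the
    // intermediate product a * b is kept in an extended-precision
    // register or rounded to a 64-bit double before the subtraction.
    static double MultiplySubtract(double a, double b, double c) => a * b - c;

    static void Main()
    {
        double a = 1.0 / 3.0;   // illustrative values only
        double b = 3.0;
        double c = 1.0;

        double r = MultiplySubtract(a, b, c);

        // Printing the raw bit pattern makes even a 1-ulp difference
        // visible when the same binary runs on different hardware / JIT
        // combinations.
        Console.WriteLine(r.ToString("R"));
        Console.WriteLine(BitConverter.DoubleToInt64Bits(r));
    }
}
```

Comparing the raw bit patterns (rather than rounded textual output) is how we would spot even a single-ulp difference between two machines running the same binary.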
Indeed, we actually experienced a difference on the order of ~1e-14 in an algorithm using only double, and we are afraid that this discrepancy will propagate to other iterative algorithms that use this result, and so on, making our results not consistently reproducible for the quality / legal requirements we have in our field (medical imaging research).
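As a toy illustration of that propagation concern (this is not our imaging algorithm, just an arbitrary non-linear iteration), consider two runs of the same loop whose starting values differ by a hypothetical 1e-14 cross-machine discrepancy:

```csharp
using System;

class PerturbationGrowthDemo
{
    static void Main()
    {
        // Toy illustration only: two copies of the same iteration whose
        // inputs differ by a hypothetical 1e-14 cross-machine discrepancy.
        double x = 0.4;
        double y = 0.4 + 1e-14;

        for (int i = 1; i <= 100; i++)
        {
            // A simple non-linear update; the choice of map is arbitrary
            // and only meant to show how an initial gap can be amplified.
            x = 3.9 * x * (1.0 - x);
            y = 3.9 * y * (1.0 - y);

            if (i % 20 == 0)
                Console.WriteLine($"iteration {i,3}: |x - y| = {Math.Abs(x - y):E3}");
        }
    }
}
```

In a sensitive computation like this, the two runs can quickly stop agreeing to anywhere near their initial 1e-14 proximity, which is exactly the reproducibility risk we are worried about.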
C# and F# share the same IL and the same common runtime; however, as far as I understand, this behaviour may be driven more by the compiler, which is different for F# and C#.
I am not savvy enough to tell whether the root of the issue is common to both languages, or whether there is hope that taking the leap into F# would help us solve this.
TL;DR
This inconsistency problem is explicitly described in the C# language specification. We have not found the equivalent in the F# specification (but we may not have searched in the right place).
Is there more consistency in F# in this regard?
i.e., if we switch to F#, are we guaranteed to get more consistent results in floating-point calculations across machines / processors?