
I'm developing OS-portable software whose unit tests must work on Linux, UNIX, and Windows.

Imagine this unit test, which asserts that the IEEE single-precision floating-point value 1.26743237e+015f converts to the string "1.26743E+015":

void DataTypeConvertion_Test::TestToFloatWide()
{
    CDataTypeConversion<wchar_t> dataTypeConvertion;
    float val = 1.26743237e+015f;
    // ToFloat returns the value formatted as a newly allocated wide string.
    wchar_t *valStr = (wchar_t*)dataTypeConvertion.ToFloat(val);
    std::wcout << valStr << std::endl;
    // The test expects exactly this textual representation on every platform.
    int result = wcscmp(L"1.26743E+015", valStr);
    CPPUNIT_ASSERT_EQUAL(0, result);
    delete [] valStr;
}

My question is: will all operating systems and processors convert the float to the string "1.26743E+015", as long as the float is IEEE? I ask because I know that FPUs may not return exact results, and I wonder whether that would yield different results on different processors, since they may have different hardware implementations of IEEE floating-point operations inside the processor architecture.

zastrowm
  • The actual `float` precision that is supported on the current platform can be obtained via [`std::numeric_limits::epsilon()`](http://en.cppreference.com/w/cpp/types/numeric_limits/epsilon). It can be different for different machine architectures/FPUs. – πάντα ῥεῖ May 31 '14 at 08:18 (a minimal check of this is sketched after the comments below)
  • 5
    Floating point calculations can produce inconsistent results on the same machine, there's no reason to assume it gets better across different operating systems. The snippet is *way* too artificial and incomplete to make a guess. Clearly you already invoke trouble by trying to store more digits in a *float* than it is capable of storing. Very unclear how the truncation was performed. It will work, little chance of compiler's optimizer and the floating point processor's rounding mode screwing up the result with this specific value. – Hans Passant May 31 '14 at 08:40
  • For X86 the internal floating point calculations can be performed in 32 bit, 64 bit, or 80 bit formats, depending on the setting of the floating point control word, which can vary from compiler to compiler. – rcgldr May 31 '14 at 09:44
  • @LưuVĩnhPhúc If you just use a non-antiquated Java standard without further making it clear that you want reproducible floating-point, you won't get reproducible floating-point. .NET is even worse: there is basically no way to specify that you want reproducible floating-point. And most other virtual machines are implemented in C and leave their users at the mercy of C's floating-point vagaries. Please explain what you mean in an answer or retract your comment. – Pascal Cuoq May 31 '14 at 17:49
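
The epsilon suggestion in the comments is easy to check. Below is a minimal sketch using only standard headers; the values noted in the comments are what IEEE 754 binary32 typically reports, not guarantees:

#include <iostream>
#include <limits>

int main()
{
    // Report what the current platform claims about float.
    std::cout << "is_iec559: " << std::numeric_limits<float>::is_iec559 << '\n'; // 1 if IEEE 754
    std::cout << "epsilon:   " << std::numeric_limits<float>::epsilon()  << '\n'; // ~1.19209e-07 for binary32
    std::cout << "digits10:  " << std::numeric_limits<float>::digits10   << '\n'; // typically 6
    return 0;
}

If `is_iec559` is true and `digits10` matches on every target platform, the storage format is the same everywhere; what remains is how the conversion to text behaves, which the answer below addresses.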

1 Answer


The answer, sadly, is most likely no. Conversion of a floating-point number to and from a string is not guaranteed to produce identical results across platforms.

In principle at least, all processors you are likely to come across conform to the IEEE 754 standard. The standard is reasonably tight in the areas it defines, particularly floating-point arithmetic: you can add, subtract, multiply, or divide floating-point numbers with a reasonable expectation of getting identical results across platforms at the bit level.

The standard also defines conversion to and from a 'character representation'. In principle that requires conforming implementations to be compatible, but it leaves some 'wiggle room': not every number is required to produce an identical string.

You should also be aware that the default precision and format may vary across platforms.
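
One concrete illustration of that variation, as a hedged sketch (the exact output depends on your C runtime): older Microsoft runtimes printed at least three exponent digits ("E+015") by default, while glibc prints the minimum of two ("E+15"), so even an identical bit pattern can fail a comparison against a hard-coded string:

#include <cstdio>

int main()
{
    float val = 1.26743237e+015f;
    char buf[32];
    // %.5E prints one digit before the decimal point and five after,
    // i.e. six significant digits, matching the string in the question.
    std::snprintf(buf, sizeof buf, "%.5E", val);
    // Typically "1.26743E+15" with glibc, "1.26743E+015" with older MSVC runtimes.
    std::printf("%s\n", buf);
    return 0;
}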

Having said all that, you may be able to achieve your desired results as long as (a) you control the width and precision of the strings rather than leaving them to the defaults, (b) you choose a precision that is well within the maximum available for the particular format, and (c) you avoid NaN, infinities, and similar special values.
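
As a sketch of points (a) and (b), one option is to format with an explicit precision and to compare by round-tripping the text back to a float rather than against a literal string. The helpers below are hypothetical, not part of CDataTypeConversion, and assume C++11:

#include <cmath>
#include <cwchar>
#include <iomanip>
#include <sstream>
#include <string>

// Format with a fixed, explicit precision so the mantissa digits do not
// depend on the library's defaults. The exponent width can still differ
// between C runtimes.
std::wstring FormatFloat(float value)
{
    std::wostringstream out;
    out << std::uppercase << std::scientific << std::setprecision(5) << value;
    return out.str();   // e.g. L"1.26743E+15" (exponent width may vary)
}

// Compare by parsing the text back instead of matching the string exactly,
// so the test does not depend on how many exponent digits are emitted.
bool RoundTripsTo(const std::wstring& text, float expected)
{
    float parsed = std::wcstof(text.c_str(), nullptr);
    return std::fabs(parsed - expected) <= std::fabs(expected) * 1e-5f;
}

A test written this way stays within the six significant digits a float can reliably represent and no longer hinges on the runtime's exponent formatting.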

The article here is quite helpful.

david.pfx
  • The .NET methods which convert strings to `double` do not perform correct rounding in all cases; for some input strings they will round correctly in 32-bit mode when using x87 floating-point math but not in 64-bit mode which uses SSE floating-point math. All platforms should precisely convert every string which is within 1/4ulp of an exactly-representable value, but may differ when given a string which is nearly 1/2ulp from the nearest representable value (nothing can ever be more than 1/2ulp from the nearest representable value, since some other representable value would be closer to it) – supercat Nov 17 '14 at 01:08