
Really weird:

double *data; // uncorrelated
double a,b,c;
double sigma = 1e-309; // denormalized number

try { data = new double[10]; } // uncorrelated
catch(...) { cout << "error"; return 1; }

a = 1/sigma;                    // infinite
b = exp(-1/sigma);              // 0
c = a * b;                      // NaN
cout << c << endl;
c = (1/sigma) * exp(-1/sigma);  // 0
cout << c << endl;

Ok, the second c result could be 0 because of some optimization.

BUT: when I delete the try/catch block, the second c is NaN as well! Why the different behaviour? My compiler is VC++ 2010 Express, OS Windows 7 64-bit. I use only standard headers like iostream and cmath.

Edit: my first observation was with Debug+Win32 default settings for an empty console application. With Release+Win32 the results are: first c is 0, second c is NaN, no matter whether try/catch is present or not! Summary:

                                 // Debug+Win32           // Release+Win32
                                 // with try  // without  // with try  // without
c = a * b;                       //   NaN        NaN           0           0
c = (1/sigma) * exp(-1/sigma);   //   0          NaN          NaN         NaN

Edit 2: When I set the /fp:strict switch under C++/Code Generation, the result is the same with Debug+Win32, but with Release+Win32 it changes to c = a * b; // NaN and c = (1/sigma) * exp(-1/sigma); // 0, no matter whether the try is present or not. I don't get why it stays NaN+NaN with Debug+Win32 and no preceding try. How do you debug a program that has to be floating-point safe, when the Release results differ despite /fp:strict, depending on a preceding try?

Edit 3: Here a full program:

// On VC++ 2010 Express in default Win32-Debug mode for empty console application.
// OS: Windows 7 Pro 64-Bit, CPU: Intel Core i5.
// Even when /fp:strict is set, same behaviour.
//
// Win32-Release mode: first c == 0, second c == NaN (independent of try)
// with /fp:strict: first c == NaN, second c == 0 (also independent of try)

#include <iostream>
#include <cmath>

using namespace std;

int main()
{
    double *data; // uncorrelated
    double a,b,c;
    double sigma = 1e-309; // denormalized number

    try { data = new double[10]; } // uncorrelated
    catch(...) { cout << "error"; return 1; }

    a = 1/sigma;                    // infinite
    b = exp(-1/sigma);              // 0
    c = a * b;                      // NaN
    cout << c << endl;
    c = (1/sigma) * exp(-1/sigma);  // 0 with preceding try or
    cout << c << endl;              // NaN without preceding try

    cin.get();
    return 0;
}
mb84
  • are you using optimizations or just plain debug ? – Raxvan Nov 19 '13 at 10:00
  • No optimizations active (under Project properties/Config/C++/Optimization). Just defaut settings for an empty console application with Debug+Win32. And debugging from within VC++. Even when I cout the first and second c and I run the application as exe in Windows, same effects! – mb84 Nov 19 '13 at 10:11
  • What are your EH flags set to? This may be because you have /EHa on, which causes the compiler to set up to handle structured i.e. asynchronous exceptions -- I'd compare results with /EHa vs /EHsc and post them here. – LThode Nov 13 '14 at 19:57

1 Answer


These sorts of things can happen due to differences in register allocation/usage. For example, with the try-catch block the value of sigma may be saved as a 64-bit double and then reloaded from memory, while without the block it may stay in a higher-precision 80-bit register (see http://en.wikipedia.org/wiki/Extended_precision) without being rounded to 64 bits. I suggest you check the generated assembly if you care.

Tony Delroy
  • Hmm. When `1/sigma` is computed at higher precision, then `1/sigma != infinity`. But `a` is stored at normal precision, so that `a == infinity` and `a*b == NaN`. Makes sense (so you meant it the other way around: _with_ try-catch -> sigma 80-bit). – mb84 Nov 19 '13 at 10:57
  • Well, odds were the try block would trigger saving of the 80-bit value to memory - it's mildly surprising if it's happened the other way around, but the key point is that all sorts of little things might tip the balance between 80-bit and 64-bit precision. I'd hazard that the compiler's ideal should be to use 80-bit on a best-effort basis, rounding as infrequently as possible, on the assumption that that'll tend to provide better results. Can see it may be annoying for you if you want something more deterministic. – Tony Delroy Nov 19 '13 at 11:07
  • Just to elaborate a bit: the rule is that **storing** a floating-point value triggers a conversion to the appropriate type. So, for example, if the compiler generates 80-bit code (which it typically does), the stores into `a`, `b`, and `c` convert the calculated values to 64 bits. **But** most compilers **don't do this** by default; you have to set a compiler switch to tell the compiler to respect floating-point stores. Without that switch, optimizers freely interconvert between 80- and 64-bit representations in unpredictable ways. – Pete Becker Nov 19 '13 at 16:27
  • @PeteBecker: That comment is not clear. It is not clear what you mean by “storing” and “stores” (such as whether you mean store instructions in the machine or assignment operations). The C++ standard requires assignment operations (and casts) to convert to the nominal type; excess precision must be discarded. An implementation might execute store instructions when there is no assignment or might implement assignments without using store instructions. – Eric Postpischil Nov 19 '13 at 16:48
  • @EricPostpischil - we're talking about what the **language** C++ requires. Although, "store" can mean many other things in other contexts, I think it's meaning here is quite clear. – Pete Becker Nov 19 '13 at 16:50
  • @PeteBecker: Please explain your intended meaning. – Eric Postpischil Nov 19 '13 at 16:51
  • @EricPostpischil - no, I'm not going to go down your rathole. – Pete Becker Nov 19 '13 at 16:52
  • @PeteBecker: when you look at my Edit 2: is this the switch you meant? Why still `NaN+NaN` in Debug+Win32 without preceding try? – mb84 Nov 19 '13 at 18:58
  • @mb84 - that looks like the right switch. I haven't dug into the details of what's going on. It's always hard to analyze code snippets. You should change the example to a complete, minimal program that compiles and runs and shows the problem. – Pete Becker Nov 19 '13 at 18:59
  • @mb84 - if I was feeling snarky I'd point out that the new code doesn't compile: it's missing a header and several `std::` qualifiers. But fixing the obvious, with the compilers I have here (g++ and clang++) I get two NaNs regardless of whether the `try...catch` code is there. – Pete Becker Nov 19 '13 at 19:40
  • @PeteBecker: sorry for the forgotten and now fixed `using namespace std`. Here on my compiler it's like written. Can you at least produce a second c == 0 as sign of implicit 80-bit precision on intermediate results? Thank you! – mb84 Nov 19 '13 at 19:57