4
// svec, iter (pointing at the first element), and iter2 are declared elsewhere
vector<double> pvec;

double firstnode = 0.0;

for (iter2 = svec.begin(); iter2 != svec.end(); iter2++)
{
    double price = 0.0;
    string sFiyat = iter2->substr(13);   // price text starts at offset 13
    stringstream(sFiyat) >> price;
    price = log(price);

    if (iter2 == iter)                   // first element: remember its log
    {
        firstnode = price;
    }
    price -= firstnode;                  // first element should become log(p) - log(p) == 0

    pvec.push_back(price);
}

The code above gives a miraculous difference between debug and release modes. The algorithm is meant to make the first element of the vector exactly zero, and then to store, for every other element, the difference between its logarithm and the logarithm of the first element.

In debug mode this gives the result I want, and the first element of the vector is always exactly zero. But when I switch to release mode, the first element of the vector comes out as some small number such as 8.86335e-019.

And that's not all. When I put the line `cout << price << endl;` after the line `price = log(price);`, the release build produces the same result as the debug build. Any explanations?

halilak
  • Just to note: use `++iter2` instead of `iter2++`. The source for `iter2++` usually looks something like this: `iterator operator++ (int i) { iterator temp = (*this); ++(*this); return temp; };` – Naszta Jun 21 '11 at 19:45
  • The problem is resolved by the most ridiculous modification: I added `price -= 0;` after the line `price = log(price);` – halilak Jun 21 '11 at 20:23
  • This is precisely what I meant when I said they're complex and fickle. That line, while seemingly innocuous, probably pushed `price` out of the higher precision internal memory, resulting in a slightly different value when you accessed it next. You can't rely on that always working... a different set of optimization settings, or another innocuous change nearby, could put you right back where you started. If you need the first element to be exactly zero, then set it to be exactly zero. And either way, always count on floating point errors. – Dennis Zickefoose Jun 21 '11 at 21:04

3 Answers

6

In debug builds, floating-point values stay on the x87 FPU stack at its full 80-bit precision. Release builds work more efficiently with results truncated to 64 bits.

Make your floating-point behavior build-independent with the /fp switch: http://msdn.microsoft.com/en-us/library/e7s85ffb%28VS.80%29.aspx

See also http://thetweaker.wordpress.com/2009/08/28/debugrelease-numerical-differences/

Some of the differences you are observing are simply due to display precision. Make sure to set `cout` to full precision before comparing its output to the value displayed by the MSVC debugger.

totowtwo
  • there are no differences between the floating-point models of the two builds, but the results still differ – halilak Jun 21 '11 at 20:08
  • 1
    I don't think your first paragraph is necessarily correct. The x86 FPU always uses 80 bit registers internally, and (AFAIK) by enabling optimizations, release builds can make it _more_ likely that values stay in 80 bit registers, keeping more precision than they would if they were stored back to in-memory 64-bit doubles. (This changes when using SSE instead of FPU, since SSE does truncate to 32-bit or 64-bit floating point.) – Josh Kelley Feb 18 '14 at 15:08
2

Try turning off optimizations in your release build...

therealmitchconnors
  • Make sure the IDE is set to release, then right click on the project, select properties. Under general, set Whole Program Optimization to No. Under C/C++, Optimization, set Optimization to disabled. This is just a shot in the dark, but it may work. – therealmitchconnors Jun 21 '11 at 19:47
  • from what I understand, this would turn off all the optimizations, so that there is no difference between debug and release modes. If there is an alternative solution, I should avoid this: it is quite a large project and I need as much speed as I can get – halilak Jun 21 '11 at 19:52
  • in C# you can just compare to double.MinValue and if they are equal, you basically have zero. Not sure if there is a similar test in C++... Maybe try `double comp = 0; return comp == price ? 0 : price;` – therealmitchconnors Jun 21 '11 at 19:56
  • 1
    @therealmitchconnors in C++ that is `std::numeric_limits<double>::min()`. – totowtwo Jun 21 '11 at 20:05
  • 1
    Wow. I am liking C# more and more. – therealmitchconnors Jun 21 '11 at 20:15
1

When you use floating-point calculations, an error on the order of 8e-19 is about as close to zero as you can get.

You have an error that is less than a billionth of a billionth of the calculated value. That's pretty close!

Bo Persson
  • Well, that's true, but I need to compare the result with something else. Since the data set is quite large, I can't compare the values by hand, and they need to be identical for the comparison algorithm to work – halilak Jun 21 '11 at 19:55
  • @halilak - Floating point calculations are just not exact, like discussed here [how-computer-does-floating-point-arithmetic](http://stackoverflow.com/questions/6033184/how-computer-does-floating-point-arithmetic). You have to settle for some small difference between numbers that should still be considered equal. – Bo Persson Jun 21 '11 at 20:15
  • @halilak: FPUs are complex, fickle devices. Even an identical series of C++ instructions can yield different results depending on the state of the FPU when you started them. You have to design your algorithms to be tolerant of these minor floating point errors, or you'll run into problems. – Dennis Zickefoose Jun 21 '11 at 20:28