
Possible Duplicate:
Why does Visual Studio 2008 tell me .9 - .8999999999999995 = 0.00000000000000055511151231257827?
Why do simple doubles like 1.82 end up being 1.819999999645634565360?

I get really weird behaviour when I compile and run the following program:

#include <iostream>

#define DIV 10000ll

// Euclid's algorithm for the greatest common divisor.
long long gcd(long long a, long long b) {
  if(b==0) return a;
  else return gcd(b, a % b);
}

int main() {
  int t;
  std::cin >> t;
  while(t--) {
    double n1;
    std::cin >> n1;
    // Scale the input by DIV and truncate it toward zero.
    long long inum=(long long)(n1*DIV);
    std::cout << inum << std::endl;
    if(inum==0) { std::cout << 1 << std::endl; }
    else std::cout << DIV/gcd(inum,DIV) << std::endl;
  }
  return 0;
}

When I enter the following input:

1
0.0006

I get the following output:

5
2000

That means (long long)(0.0006 * 10000) is equal to 5 and not to 6. Why is this happening?
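The behaviour can be reproduced with just the conversion, independent of the rest of the program (a minimal sketch using the same literal and scale factor as above):

#include <iostream>

int main() {
  double n1 = 0.0006;
  long long inum = (long long)(n1 * 10000ll);
  std::cout << inum << std::endl;  // prints 5 on a typical IEEE-754 system, not 6
  return 0;
}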

Rontogiannis Aristofanis

3 Answers


0.0006 cannot be represented exactly as a double. Most likely the value actually stored is something like 0.00059999999... When this value is multiplied by 10000 it gives a number slightly smaller than 6. Casting this to an integer type truncates it (rounds it down) to 5.

This is not weird behaviour; it is how floating-point arithmetic works. If you want to get the value 6, you should round to the nearest integer instead of truncating.
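For example, a minimal sketch of the rounding approach (std::llround from <cmath>; the 0.0006 input and the 10000 scale factor are taken from the question):

#include <cmath>
#include <iostream>

int main() {
  double n1 = 0.0006;
  long long truncated = (long long)(n1 * 10000);  // truncation gives 5
  long long rounded = std::llround(n1 * 10000);   // rounding to nearest gives 6
  std::cout << truncated << " " << rounded << std::endl;
  return 0;
}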


Mark Byers

My guess is that it has to do with floating-point precision. The 0.0006 is probably stored as something like 0.0005999999999 in the computer; you then multiply that by 10,000 and cast it to a long long, which drops the fractional part, so 5.999999 becomes 5.
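One quick way to see this (a sketch, not part of the original answer) is to print the stored value with more digits than the defaults show:

#include <iomanip>
#include <iostream>

int main() {
  double n1 = 0.0006;
  // On a typical IEEE-754 system both values come out just below the exact decimals,
  // e.g. 0.0005999999999999999... and 5.999999999999999...
  std::cout << std::setprecision(20) << n1 << std::endl;
  std::cout << std::setprecision(20) << n1 * 10000 << std::endl;
  return 0;
}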

ajon

Try using a long double instead; it's a floating-point precision error, and the easy way to fix it is to use a more precise floating type. Unless it's a truncation error, in which case you will need to round before you cast.

AJMansfield
    Remove the first sentence and this answer is right. Increasing precision only makes a longer sequence of bits that's still incorrect. – Pete Becker Nov 09 '12 at 20:21