
Is there any difference in computation precision between these two cases:
1) x = y / 1000d;
2) x = y * 0.001d;

Edit: I shouldn't have added the C# tag; the question is purely about floating point. I don't want to know which is faster, I need to know which case will give me better precision.

apocalypse
  • Why not simply run it? – RononDex Jan 17 '14 at 14:14
  • 12
    First problem: 0.0001d isn't exactly a 1/1000... – Jon Skeet Jan 17 '14 at 14:16
  • I think @JonSkeet means that * 0.0001d is the same as / 10000... – Zache Jan 17 '14 at 14:32
  • 3
    @Zache: No, I really don't. 0.0001d isn't either the same as /10000 or /1000. – Jon Skeet Jan 17 '14 at 14:34
  • @JonSkeet My bad then, I realize that they aren't exactly the same but thought that the extra 0 was also something that was missed – Zache Jan 17 '14 at 14:34
  • Hah, I didn't spot that either. I took "isn't exactly" literally rather than as an understatement... – Rawling Jan 17 '14 at 14:36
  • @JonSkeet use your intelligence and fix 0.0001d to 0.001d. Sorry, I was tired. – apocalypse Jan 17 '14 at 17:05
  • 2
    @zgnilec: You missed my point. 0.001d isn't exactly 1/1000 either. – Jon Skeet Jan 17 '14 at 17:34
  • @zgnilec what Jon is saying is that a floating-point value is not an exact representation. So really, `/ 1000d` is NOT the same thing as `* .001d` – manuell Jan 17 '14 at 17:34
  • It's pretty trivial to show that this isn't the case though... just try it with `double.MaxValue`, for example... – Jon Skeet Jan 17 '14 at 17:43
  • @EricPostpischil: Um, no - `d` suffix represents `double` (including C#, which this language was originally tagged with), at least in every language I've used. In C#, the `m` suffix represents `decimal`. – Jon Skeet Jan 17 '14 at 17:44
  • @JonSkeet, 0.001 != (math) 1/1000, but 0.001 == 1.0/1000. At least this is the way the C compiler converts "0.001" to a double. – Aleksandr Pakhomov Jan 18 '14 at 11:54
  • 1
    @user3161163: Yes, but it's an approximation in both cases. So when you multiply by "the nearest double to 0.001" that's not necessarily the same as dividing by 1000. I'm sure I demonstrated this yesterday when the question was still closed, but I'm struggling right now. – Jon Skeet Jan 18 '14 at 11:58
  • @user3161163: And I've got an example now - see my answer. – Jon Skeet Jan 18 '14 at 12:15
  • You are programming in base 10 but floating point is base 2; you CAN represent 1000 in base 2, but you cannot represent 0.001 in base 2, so you have chosen bad numbers for your question. On a computer x/1000 != x*0.001; you might get lucky most of the time with rounding and extra precision, but it is not a mathematical identity. – old_timer Jan 26 '14 at 22:00

4 Answers


No, they're not the same - at least not with C#, using the version I have on my machine (just standard .NET 4.5.1) on my processor - there are enough subtleties involved that I wouldn't like to claim it'll do the same on all machines, or with all languages. This may very well be a language-specific question after all.

Using my DoubleConverter class to show the exact value of a double, and after a few bits of trial and error, here's a C# program which at least on my machine shows a difference:

using System;

class Program
{
    static void Main(string[] args)
    {
        double input = 9;
        double x1 = input / 1000d;
        double x2 = input * 0.001d;

        Console.WriteLine(x1 == x2);
        Console.WriteLine(DoubleConverter.ToExactString(x1));
        Console.WriteLine(DoubleConverter.ToExactString(x2));
    }
}

Output:

False
0.00899999999999999931998839741709161899052560329437255859375
0.009000000000000001054711873393898713402450084686279296875

I can reproduce this in C with the Microsoft C compiler - apologies if it's horrendous C style, but I think it at least demonstrates the differences:

#include <stdio.h>

int main(void) {
    double input = 9;
    double x1 = input / 1000;
    double x2 = input * 0.001;
    printf("%s\n", x1 == x2 ? "Same" : "Not same");
    printf("%.18f\n", x1);
    printf("%.18f\n", x2);
    return 0;
}

Output:

Not same
0.008999999999999999
0.009000000000000001

I haven't looked into the exact details, but it makes sense to me that there is a difference, because dividing by 1000 and multiplying by "the nearest double to 0.001" aren't the same logical operation: 0.001 can't be exactly represented as a double. The nearest double to 0.001 is actually:

0.001000000000000000020816681711721685132943093776702880859375

... so that's what you end up multiplying by. You're losing information early, and hoping that it corresponds to the same information you would otherwise lose by dividing by 1000. It looks like in some cases it doesn't.
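
If DoubleConverter isn't to hand, a minimal sketch along the same lines is below (the ExactDoubleDemo and ToExact names are just for illustration, not the real DoubleConverter API). It pulls apart the IEEE 754 bits of a finite double and renders the value exactly with BigInteger; this works because every finite double is mantissa * 2^exponent, so its decimal expansion terminates.

using System;
using System.Numerics;
using System.Text;

class ExactDoubleDemo
{
    // Renders a finite double exactly in decimal. Infinities and NaNs
    // are deliberately not handled; this is a sketch, not production code.
    static string ToExact(double d)
    {
        long bits = BitConverter.DoubleToInt64Bits(d);
        bool negative = bits < 0;
        int exponent = (int)((bits >> 52) & 0x7FF);
        long mantissa = bits & 0xFFFFFFFFFFFFFL;

        if (exponent == 0) exponent = 1;   // subnormal: no implicit leading bit
        else mantissa |= 1L << 52;         // normal: restore the implicit leading 1
        exponent -= 1075;                  // remove the bias (1023) and the 52 fraction bits

        // value = mantissa * 2^exponent, held as an exact fraction
        BigInteger numerator = mantissa;
        BigInteger denominator = BigInteger.One;
        if (exponent >= 0) numerator <<= exponent;
        else denominator <<= -exponent;    // denominator is a power of two

        var sb = new StringBuilder();
        if (negative) sb.Append('-');
        sb.Append(BigInteger.DivRem(numerator, denominator, out BigInteger rem));
        if (!rem.IsZero)
        {
            sb.Append('.');
            while (!rem.IsZero)            // terminates: each step cancels a factor of 2
            {
                rem *= 10;
                sb.Append((char)('0' + (int)BigInteger.DivRem(rem, denominator, out rem)));
            }
        }
        return sb.ToString();
    }

    static void Main()
    {
        Console.WriteLine(ToExact(0.001));      // the nearest double to 0.001, as quoted above
        Console.WriteLine(ToExact(9 / 1000d));  // ends ...9375
        Console.WriteLine(ToExact(9 * 0.001d)); // ends ...6875 - a different double
    }
}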

Jon Skeet

You are programming in base 10, but floating point is base 2. You CAN represent 1000 in base 2, but you cannot represent 0.001 in base 2, so you have chosen bad numbers for your question: on a computer, x/1000 != x*0.001. You might get lucky most of the time with rounding and extra precision, but it is not a mathematical identity.

Now maybe that was your question: maybe you wanted to know why x/1000 != x*0.001. The answer is that this is a binary computer and it uses base 2, not base 10. There are conversion problems with 0.001 when going to base 2; you cannot exactly represent that fraction in an IEEE floating-point number.

In base 10 we know that if a fraction has a factor of 3 in the denominator (and no matching factor in the numerator to cancel it out), we end up with an infinitely repeating pattern; basically, we cannot accurately represent that number with a finite set of digits.

1/3 = 0.33333...

The same problem occurs when you try to represent 1/10 in base 2. Since 10 = 2*5, the factor of 2 is fine (1/2 is representable), but the factor of 5 is the real problem: 1/5 is not.

Here is 1/10 (1/1000 works the same way), by elementary long division:

       0.000110011
     ----------
1010 | 1.000000
         1010
       ------
          1100 
          1010
          ----
            10000
             1010
             ----
              1100
              1010
              ----
                10

We have to keep bringing down zeros until we get 10000: 1010 (ten) goes into 10000 (sixteen) one time, remainder 110 (six); bring down the next zero. 1010 goes into 1100 (twelve) one time, remainder 10 (two). And the pattern repeats, so you end up with 001100110011 repeated forever. Floating point has a fixed number of bits, so we cannot represent an infinite pattern.
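
You can watch the repetition happen in code, too. Here is a tiny sketch (the class name is just for illustration) that performs the same long division: double the remainder each step, and emit a 1 whenever it reaches ten.

using System;

class BinaryExpansionDemo
{
    static void Main()
    {
        // Long-divide 1 by 10 in binary, one fraction bit per step.
        int remainder = 1;
        Console.Write("1/10 in base 2 = 0.");
        for (int bit = 0; bit < 20; bit++)
        {
            remainder *= 2;            // bring down the next zero
            if (remainder >= 10)
            {
                Console.Write('1');    // ten goes in once
                remainder -= 10;
            }
            else
            {
                Console.Write('0');
            }
        }
        Console.WriteLine("...");      // prints 0.00011001100110011001...
    }
}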

Now, if your question has to do with something like whether dividing by 4 is the same as multiplying by 0.25, that is a different question. The answer is that they should be the same: a divide consumes more cycles and/or logic than a multiply, but it works out to the same answer in the end, because 0.25 (unlike 0.001) is exactly representable in base 2. The sketch below illustrates both pairs.
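
Here is that sketch, assuming a plain C# console program (the class name is just for illustration): dividing by 4 and multiplying by 0.25 always agree, while dividing by 1000 and multiplying by 0.001 can disagree.

using System;

class ExactVersusInexactDemo
{
    static void Main()
    {
        for (double y = 1; y <= 10; y++)
        {
            // 0.25 is exactly representable in base 2, so these always match.
            bool quarterSame = (y / 4) == (y * 0.25);
            // 0.001 is not exactly representable, so these can differ (y = 9 is one case).
            bool thousandthSame = (y / 1000) == (y * 0.001);
            Console.WriteLine($"y={y}: /4 vs *0.25: {quarterSame}; /1000 vs *0.001: {thousandthSame}");
        }
    }
}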

old_timer

Probably not. The compiler (or the JIT) is likely to convert the first case to the second anyway, since multiplication is typically faster than division. You would have to check this by compiling the code (with or without optimizations enabled) and then examining the generated IL with a tool like IL Disassembler or .NET Reflector, and/or examining the native code with a debugger at runtime.

TypeIA

No, there is no difference, except if you set a custom rounding mode.

gcc produces ((double)0.001 - (double)1.0/1000) == 0.0e0

When the compiler converts 0.001 to binary, it divides 1 by 1000. It uses a software floating-point simulation compatible with the target architecture to do this.

For higher precision there are long double (80-bit) and software simulation of arbitrary precision.

PS: I used gcc on a 64-bit machine, with both SSE and the x87 FPU.

PPS: With some optimizations, the /1000.0 form could be more precise on x87, since x87 uses an 80-bit internal representation (and 1000 == 1000.0 exactly). That holds if you use the result in further calculations promptly; if you return it or write it to memory, the 80-bit value is rounded down to 64 bits. But SSE is the more common choice for double.

Aleksandr Pakhomov
  • I was wrong; it produces different results for numerators other than 1. 9/1000: error 0.0 for SSE, -7e-16 for x87. 9*0.001: error 1.7e-15 for SSE, 1.05e-15 for x87. – Aleksandr Pakhomov Jan 18 '14 at 12:39