2

I'm debugging some old C code and it has a definition #define PI 3.14... where ... is about 50 other digits.

Why is this? I said I could reduce the number to about 16 decimal places, but my boss snarled at me, saying the other digits are there for platform independence and forward compatibility. But will it slow the program down?

P45 Imminent
  • 3
    How could it possibly slow the program down? Do you understand what it means that C is a compiled language? – Ernest Friedman-Hill Mar 13 '14 at 22:16
  • I don't know. Sorry, it's really late and he said something like that too ;-) – P45 Imminent Mar 13 '14 at 22:17
  • 3
    Why are you asking whether it'd slow the program down when you've been asked to debug, not to optimize? – nhgrif Mar 13 '14 at 22:17
  • 2
    Probably not, and who cares? Unless you have evidence that your program is "Too slow", you shouldn't bother with micro-optimizations. If your program is "Too Slow", get a profiler - find the slowest part, then speed that part up. – Pete Baughman Mar 13 '14 at 22:17
  • He's moaning the code is slow. I think he means debug make it faster. – P45 Imminent Mar 13 '14 at 22:17
  • I'm trying to help and thought a shorter PI might help – P45 Imminent Mar 13 '14 at 22:17
  • 1
    You might want to read http://ericlippert.com/2012/12/17/performance-rant/ if you're thinking about performance. It's quite insightful. – Pete Baughman Mar 13 '14 at 22:18
  • Sorry to cause upset. But I *still* don't understand why my boss has PI to 50 decimal places! Should I delete this question? – P45 Imminent Mar 13 '14 at 22:25
  • 1
    For numerical constants, the standard provides a gaggle of them in the precisions the machine supports in the `math.h` header. Rip out the (probably wrong) values and use the compiler provided ones. – vonbrand Mar 13 '14 at 22:26
  • 1
    Your boss has Pi to 50 decimal places because he thinks it matters. It turns out that it probably doesn't. The compiler will probably convert the 50 decimal place value to something with less precision, depending on the target architecture, but that probably doesn't matter either. – Pete Baughman Mar 13 '14 at 22:32
  • @Yogi: It doesn't make the program slow, so if that's your main concern you're fine :) No need to delete the question, maybe someone will get around to write an explicit answer – Niklas B. Mar 13 '14 at 22:33
  • @PeteBaughman: so it will not affect performance at all? even a long double on solaris? Put as an answer and I'll upvote and accept ;-) – P45 Imminent Mar 13 '14 at 22:34
  • What does the -1 mean on the question? Can I improve it? – P45 Imminent Mar 13 '14 at 22:38
  • @Yogi: The literal itself will not affect anything. However, using `long double` instead of `double` or `float` *can* affect performance – Niklas B. Mar 13 '14 at 22:40

2 Answers

7

No, this will not slow the program down, unless you are running on an incredibly underpowered 1 MHz DSP chip that has to do floating-point arithmetic in software rather than passing it off to a dedicated FPU. In that case, any mathematical operation on floating-point data is much slower than the equivalent integer arithmetic.

In general, greater precision is only going to introduce a slowdown if the most time-consuming part of your program is doing a lot of calculations in rapid succession, and floating point calculations are especially slow. On a modern CPU, this is generally not the case, with the possible exception of certain chips that cause an 80-cycle stall on things like floating point underflow. That kind of issue likely exceeds the domain of this question.

First, it's better to use an existing definition of PI, such as M_PI from the <math.h> header (required by POSIX, though not by the C standard itself), where it is defined as #define M_PI 3.14159265358979323846. If you insist, you can go ahead and define it manually.

Also, the best precision commonly available in C is the equivalent of about 19 decimal digits.

According to Wikipedia, the 80-bit "Intel" IEEE 754 extended-precision long double format (typically padded to 16 bytes in memory) has a 64-bit mantissa with no implicit bit, which gives you about 19.26 decimal digits. This has been the almost universal standard for long double for ages, but recently things have started to change.

The newer 128-bit quad-precision format has 112 mantissa bits plus an implicit bit, which gets you 34 decimal digits. GCC implements this as the __float128 type and there is (if memory serves) a compiler option to set long double to it.

Personally, if I were required to use my own definition of pi, I'd write something like this:

#ifndef M_PI
#define PI 3.14159265358979323846264338327950288419716939937510
#else
#define PI M_PI
#endif

If a future C standard supports an even wider floating-point primitive data type, it's pretty much a guarantee that the constants in the math library would be updated to match.

References

  1. More Precise Floating point Data Types than double?, Accessed 2014-03-13, <https://stackoverflow.com/questions/15659668/more-precise-floating-point-data-types-than-double>
  2. Math constant PI value in C, Accessed 2014-03-13, <https://stackoverflow.com/questions/9912151/math-constant-pi-value-in-c>
Cloud
  • 2
    `M_PI` is not defined by the C standard, so it may not be available. (It is required by POSIX.) And I think you want `#ifndef M_PI` rather than `#ifndef PI`. – Keith Thompson Mar 13 '14 at 22:36
  • thank you so much. And I see exactly the same code as you have. – P45 Imminent Mar 13 '14 at 22:36
  • Are you serious about formally citing Stack Overflow questions? If so, why don't you at least put a hyperlink there either? – Niklas B. Mar 13 '14 at 22:41
  • @NiklasB. I did. Check out the `References` section at the bottom of my answer. I use a GreaseMonkey script I wrote to keep track of the sites I start visiting when writing an answer. It's not just SO answers I quote. Also, the block answer from one of my sources is quoted, so I'm pretty sure I'm adhering to IEEE citation requirements. I don't like automatic hyperlinks because I want the actual URL included in the answer in case this ever gets reposted to another site. – Cloud Mar 13 '14 at 22:58
  • @KeithThompson That's what I meant to do. Thanks for catching that! – Cloud Mar 13 '14 at 22:59
  • 1
    @NiklasB: Oh, yeah I was referring to the references section actually. I would have liked to see clickable links in there ;) You can easily make the URL itself a link. Maybe a simple addition to your script? Pretty weird stuff. – Niklas B. Mar 13 '14 at 23:00
  • @Dogbert: The URLs in your references section are not recognized as hyperlinks, probably because of the surrounding `<` and `>` characters. In any case, I don't think that kind of formality is necessary when linking on the same site. – Keith Thompson Mar 13 '14 at 23:00
  • @NiklasB. Thanks! I could include them, but I want it to be consistent across all entries in my References section. Does SO allow non-SO pages to be formatted as HTML hyperlinks that are clickable? I can't seem to get them working. – Cloud Mar 13 '14 at 23:01
  • 1
    @Dogbert: I fixed that for you. I also think you are missing a reference from the quote to the reference (or whatever that is called) – Niklas B. Mar 13 '14 at 23:02
  • @KeithThompson The inclusion of the chevrons `<>` is intentional, as it is usually used in a lot of technical docs and papers when I write them. I also want the page name and URL to be explicitly rendered simultaneously. – Cloud Mar 13 '14 at 23:02
  • And there I thought SO was fun rather than work. I think you just proved me wrong, but I'm not sure – Niklas B. Mar 13 '14 at 23:03
  • @NiklasB. Thanks. Monospaced font and clickable. Yay! Yeah, SO is fun, I just have a lot of habits that are hard to break after so many years. Also, I like to make sure the sources get as much exposure as possible, as they deserve a few upvotes if deemed valid. – Cloud Mar 13 '14 at 23:05
4

The number of digits in a macro definition almost certainly will have no effect at all on run-time performance.

Macro expansion is textual. That means that if you have:

#define PI 3.14159... /* 50 digits */

then any time you refer to PI in code to which that definition is visible, it will be as if you had written out 3.14159....

C has just three floating-point types: float, double, and long double. Their sizes and precisions are implementation-defined, but they're typically 32 bits, 64 bits, and something wider than 64 bits (the size of long double typically varies more from system to system than the other two do).

If you use PI in an expression, it will be evaluated as a value of some specific type. And in fact, if there's no L suffix on the literal, it will be of type double.

So if you write:

double x = PI / 2.0;

it's as if you had written:

double x = 3.14159... / 2.0;

The compiler will probably evaluate the division at compile time generating a value of type double. Any extra precision in the literal will be discarded.

To see this, you can try writing a small program that uses the PI macro and examining an assembly listing.

For example:

#include <stdio.h>

#define PI 3.141592653589793238462643383279502884198716939937510582097164

int main(void) {
    double x = PI;
    printf("x = %g\n", x);
}

On my x86_64 system, the generated machine code has no reference to the full precision value. The instruction corresponding to the initialization is:

movabsq $4614256656552045848, %rax

where 4614256656552045848 is a 64-bit integer corresponding to the binary IEEE double-precision representation of a number as close as possible to 3.141592653589793238462643383279502884198716939937510582097164.

The actual stored floating-point value on my system happens to be exactly:

3.1415926535897931159979634685441851615905761718750000000000000000

of which only about 16 decimal digits are significant.

Keith Thompson