I recently wrote some code similar to this:
// Calculate a ^ b
unsigned int result = 1;
for (unsigned int i = 0; i < b; i++) {
    result *= a;
}
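For context, in my real code this loop lives in a small helper function. The names here are made up, but it looks roughly like this:

// Hypothetical helper wrapping the loop above (names are made up)
unsigned int uint_pow(unsigned int a, unsigned int b)
{
    unsigned int result = 1;
    for (unsigned int i = 0; i < b; i++) {
        result *= a;
    }
    return result;
}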
I got the comment that I should have used pow from math.h, and I was all ready to point out the floating-point precision issues, because pow returns a double. I even found an existing Stack Overflow question that illustrates the problem. Except when I tried to run the code from that other question, it "worked" just fine (i.e. no off-by-1 issues).
Here's the code I've been testing:
#include <math.h>
#include <stdio.h>
int main()
{
    unsigned int a = 10;
    unsigned int result;
    for (unsigned int b = 1; b <= 9; b++) {
        result = (unsigned int)pow(a, b);
        printf("%u\n", result);
    }
    return 0;
}
And here's how I'm compiling and running it (on Ubuntu 18.04.3 with GCC version 7.4.0):
$ gcc -O3 -std=c99 -Wall -Wextra -Werror -Wpedantic -pedantic-errors -o pow_test pow_test.c -lm
$ ./pow_test
10
100
1000
10000
100000
1000000
10000000
100000000
1000000000
So why does (unsigned int)pow(a, b) work? I'm assuming the compiler is doing some magic to prevent the normal floating-point issues? What is it doing and why? It seems like a bad idea to encourage ignoring these issues if not all compilers do this. And if all modern compilers really do do this, is this something you no longer have to worry about as much?
I've also seen people suggest something like (unsigned int)(pow(a, b) + 0.1) to avoid off-by-1 issues. What are the advantages/disadvantages of doing that vs. my solution with the loop?
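For reference, this is what I understand that suggestion to mean, applied to my test program (just a sketch of the suggested workaround, not code I've shipped):

#include <math.h>
#include <stdio.h>

int main(void)
{
    unsigned int a = 10;
    for (unsigned int b = 1; b <= 9; b++) {
        // Nudge the result upward before truncating, so a value like
        // 999.999... still ends up as 1000 after the cast.
        unsigned int result = (unsigned int)(pow(a, b) + 0.1);
        printf("%u\n", result);
    }
    return 0;
}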