In my program I have a function that takes a byte that is always a power of two (never zero) and returns, as an integer, the position of the single set bit, counted from the most significant bit (so the MSB is position 0 and the LSB is position 7).
e.g. f(0010 0000) -> 2, f(0000 0001) -> 7
My C program has to run this function millions of times, so I need it to be very fast. I've written two implementations of it, and I don't really know which one is faster.
#include <math.h>

/* ln2 is the log base e of two; I'm doing 7 minus log base 2 of the input. */
static const double ln2 = 0.6931471805599453;

int f(unsigned char bit) {
    return (int)(7 - round(log(bit) / ln2));
}
int f(unsigned char bit) {
    if      (bit == 0x01) return 7;
    else if (bit == 0x02) return 6;
    else if (bit == 0x04) return 5;
    else if (bit == 0x08) return 4;
    else if (bit == 0x10) return 3;
    else if (bit == 0x20) return 2;
    else if (bit == 0x40) return 1;
    else if (bit == 0x80) return 0;
    return 0; /* never reached: the input always has exactly one bit set */
}
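In case it's relevant, here's roughly how I've been timing them (simplified for this post; I renamed the two versions f_log and f_chain so they can sit in one file, and the fixed input array is just a stand-in for my real data):

#include <stdio.h>
#include <time.h>

int f_log(unsigned char bit);    /* the log() version above */
int f_chain(unsigned char bit);  /* the if/else chain above */

int main(void) {
    volatile int sink = 0;  /* volatile so the compiler can't delete the loops */
    unsigned char inputs[8] = {0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80};

    clock_t start = clock();
    for (long i = 0; i < 10000000L; i++)
        sink += f_log(inputs[i & 7]);
    printf("log version:   %.3f s\n", (double)(clock() - start) / CLOCKS_PER_SEC);

    start = clock();
    for (long i = 0; i < 10000000L; i++)
        sink += f_chain(inputs[i & 7]);
    printf("chain version: %.3f s\n", (double)(clock() - start) / CLOCKS_PER_SEC);

    (void)sink;
    return 0;
}

(I'm not sure whether cycling through the inputs in a fixed order like this makes the branches artificially predictable, which is part of why I'm asking.)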
I have no computer science education; I'm just a hobbyist, so problems like this are hard for me to figure out on my own. My guess is that the log() function is slow, but I also know that chains of if statements cost cycles and that branch mispredictions can slow things down. I honestly don't know whether any of that is correct, though; I'm just guessing.
Could anyone give me some insight into which is faster and why? Or, if you have an alternative, even better way, I'm open to suggestions! Thanks!