I quickly whipped up this (very crude, pardon) function:
#include <iostream>
#include <ctime>   // std::clock(), CLOCKS_PER_SEC
#include <random>  // std::mt19937

typedef unsigned long long uint64;

uint64 SET_BITMASK[64];

void init_bitmask()
{
    for(int i = 0; i < 64; i++) SET_BITMASK[i] = 1ULL << i;
}
int main()
{
    std::mt19937 gen_rand(42);
    uint64 bb = 0ULL;
    double avg1 = 0.0, avg2 = 0.0; // must be initialized before +=
    init_bitmask();
    for(unsigned int i = 0; i < 10; i++)
    {
        std::clock_t begin = std::clock();
        for(unsigned int j = 0; j < 99999999; j++)
        {
            bb |= 1ULL << (gen_rand() % 64);
        }
        std::clock_t end = std::clock();
        std::cout << "For bitshifts, it took: " << (double) (end - begin) / CLOCKS_PER_SEC << "s." << std::endl;
        avg1 += (double) (end - begin) / CLOCKS_PER_SEC;
        bb = 0ULL;
        begin = std::clock();
        for(unsigned int j = 0; j < 99999999; j++)
        {
            bb |= SET_BITMASK[gen_rand() % 64];
        }
        end = std::clock();
        std::cout << "For lookups, it took: " << (double) (end - begin) / CLOCKS_PER_SEC << "s." << std::endl << std::endl;
        avg2 += (double) (end - begin) / CLOCKS_PER_SEC;
    }
    std::cout << std::endl << std::endl << std::endl;
    std::cout << "For bitshifts, the average is: " << avg1 / 10 << "s." << std::endl;
    std::cout << "For lookups, the average is: " << avg2 / 10 << "s." << std::endl;
    std::cout << "Lookups are faster by " << (((avg1 / 10) - (avg2 / 10)) / (avg2 / 10)) * 100 << "%." << std::endl;
}
Averaged over ten runs of one hundred million bit sets each, it takes 1.61603s for bitshifts and 1.57592s for lookups, consistently (even for different seed values). Lookup tables astonishingly seem consistently faster, by roughly 2.5% (in this particular use case).
Note: I used random numbers to prevent any inconsistencies; the indexing pattern matters, as the cases below show. If I use i % 64 to shift/index, bitshifting is faster by about 6%.
If I use a constant to shift/index, the results vary by about 8 percentage points, from -4% to +4%, which makes me think some funny branch-prediction business is in play. Either that, or they average out to 0% ;)
I cannot draw a firm conclusion, since this is certainly not a realistic scenario: even in a chess engine, these set-bit operations won't follow one another in such rapid succession. All I can say is that the difference is probably negligible. I can also add that lookup tables are less consistent, as you are at the mercy of whether the table is currently cached. I'm personally going to use bitshifts in my engine.