I recently posted an answer in which I suggested using a bitshift instead of multiplication for a performance boost. It was pointed out to me, with the following example, that this isn't the case:
from timeit import repeat
for e in ['x*2 ', 'x<<1'] * 3:
    print(e, min(repeat(e, 'x=5')))
x*2 0.015567475988063961
x<<1 0.024531989998649806
x*2 0.01551242297864519
x<<1 0.024578287004260346
x*2 0.015560572996037081
x<<1 0.02448918900336139
This is the case for values of x up to 1,000,000,000. Note that this threshold decreases as the value by which x is multiplied/bit-shifted increases.
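For reference, a sweep along these lines (the specific values of x below are illustrative, not my exact test) shows how the comparison shifts as x grows:

from timeit import repeat

# Illustrative sweep: compare x*2 and x<<1 for several magnitudes of x.
# (Hypothetical values; swap in whichever setup values you want to test.)
for x in (5, 10**6, 10**9, 10**12):
    mul = min(repeat('x*2', f'x={x}'))
    shl = min(repeat('x<<1', f'x={x}'))
    print(f'x={x}: x*2 {mul:.4f}  x<<1 {shl:.4f}')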
This doesn't make sense to me, as a bitshift is objectively a simpler and faster operation, and we can see this in the way the bitshift gains ground as x grows. So why is it slower for smaller values of x?
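For what it's worth, disassembling both expressions suggests each compiles down to a single binary operation, so the difference presumably comes from how the interpreter handles that operation rather than from extra bytecode (this is just a sanity check, not an explanation):

import dis

# Both expressions should reduce to one binary opcode
# (e.g. BINARY_MULTIPLY vs BINARY_LSHIFT on older CPython versions,
# or BINARY_OP with different arguments on 3.11+).
dis.dis(compile('x*2', '<string>', 'eval'))
dis.dis(compile('x<<1', '<string>', 'eval'))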
Moreover, changing my code up a bit yielded some interesting results:
for e in ['x*2 ', 'x<<1'] * 3:
    print(e, max(repeat(e, 'x=5')))
x*2 0.054458492988487706
x<<1 0.02453691599657759
x*2 0.015550968993920833
x<<1 0.0246038619952742
x*2 0.015542584005743265
x<<1 0.024583352991612628
As we can see from this, multiplication ran much slower than the bitshift on the initial pass, although all subsequent runs had comparable times. It looks like there's some sort of caching going on here, but I can't fathom why that would result in different runtimes.
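If it helps to reproduce, printing the raw list returned by repeat (instead of reducing it with min or max) should make the pattern clearer; something like this ought to show the slow measurement confined to the first x*2 pass:

from timeit import repeat

for e in ['x*2 ', 'x<<1'] * 3:
    # Each call returns a list of timings (5 by default); only the first
    # multiplication pass should contain the outlier.
    print(e, repeat(e, 'x=5'))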