I want to measure the number of clock cycles it takes to do an addition operation in Python 3.
I wrote a program to estimate the average time of an addition operation:
from timeit import timeit

def test(n):
    for i in range(n):
        1 + 1

if __name__ == '__main__':
    times = {}
    for i in [2 ** n for n in range(15)]:
        t = timeit("test(%d)" % i, setup="from __main__ import test", number=100000)
        times[i] = t
        print("%d additions takes %f" % (i, t))
    keys = sorted(times.keys())
    for i in range(len(keys) - 2):
        print("1 addition takes %f" % ((times[keys[i + 1]] - times[keys[i]]) / (keys[i + 1] - keys[i])))
Output:
16 additions takes 0.288647
32 additions takes 0.422229
64 additions takes 0.712617
128 additions takes 1.275438
256 additions takes 2.415222
512 additions takes 5.050155
1024 additions takes 10.381530
2048 additions takes 21.185604
4096 additions takes 43.122559
8192 additions takes 88.323853
16384 additions takes 194.353927
1 addition takes 0.008292
1 addition takes 0.010068
1 addition takes 0.008654
1 addition takes 0.010318
1 addition takes 0.008349
1 addition takes 0.009075
1 addition takes 0.008794
1 addition takes 0.008905
1 addition takes 0.010293
1 addition takes 0.010413
1 addition takes 0.010551
1 addition takes 0.010711
1 addition takes 0.011035
So according to this output, each extra addition adds roughly 0.0095 s per 100,000 repetitions, i.e. about 95 ns (0.095 µs) per addition. Following the instructions on this page, I calculated that one addition takes a couple hundred CPU cycles on my machine. Is this a normal value, and why? After all, the assembly ADD instruction takes only 1-2 CPU cycles.