I wrote a function in Python/NumPy to solve a problem in combinatorial game theory.
```python
import numpy as np
from time import time

def problem(c):
    start = time()
    N = np.array([0, 0])
    U = np.arange(c)
    for _ in U:
        # XOR the prefix N[:-1] with its own reverse
        bits = np.bitwise_xor(N[:-1], N[-2::-1])
        # append the smallest element of U not present in bits
        N = np.append(N, np.setdiff1d(U, bits).min())
    return len(*np.where(N == 0)), time() - start

problem(10000)
```
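To make the two vectorized steps concrete: `N[-2::-1]` is just `N[:-1]` reversed, and `np.setdiff1d(U, bits).min()` picks the smallest element of `U` that does not occur in `bits` (a mex-like value). The toy values below are only an illustration, not part of the actual computation:

```python
import numpy as np

# Toy illustration of the two vectorized steps in problem().
N = np.array([0, 1, 3, 5])
U = np.arange(6)

# N[-2::-1] is N[:-1] reversed, so this XORs the prefix with its reverse
bits = np.bitwise_xor(N[:-1], N[-2::-1])
assert (bits == np.bitwise_xor(N[:-1], N[:-1][::-1])).all()

# the smallest element of U absent from bits: bits is [3, 0, 3],
# so the values of U not in bits are [1, 2, 4, 5] and the minimum is 1
m = np.setdiff1d(U, bits).min()
```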
Then I rewrote it in Julia, expecting it to be faster thanks to Julia's just-in-time compilation.
```julia
function problem(c)
    N = [0]
    U = Vector(0:c)
    for _ in U
        elems = N[1:length(N)-1]
        # XOR the prefix with its own reverse
        bits = elems .⊻ reverse(elems)
        # append the smallest element of U not present in bits
        push!(N, minimum(setdiff(U, bits)))
    end
    return sum(N .== 0)
end

@time problem(10000)
```
But the Julia version was much slower: for c = 10000, the Python version takes 2.5 s on a Core i5 processor, while the Julia version takes 4.5 s. Since NumPy operations are implemented in C, I'm wondering whether Python is genuinely faster here, or whether my implementation wastes work in its time complexity.
The Julia implementation also allocates a lot of memory. How can I reduce the number of allocations to improve its performance?
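For concreteness, here is the kind of rewrite I have in mind, a sketch only, not benchmarked: preallocate scratch buffers and compute the mex in place instead of calling `setdiff` and `reverse` every iteration. The names `mex!` and `problem2` are my own, not from any package:

```julia
# Sketch: same recurrence, but with reused scratch buffers.
# Assumes the answer only depends on values in 0:c, as in the original.
function mex!(seen::Vector{Bool}, values)
    fill!(seen, false)
    for v in values
        v < length(seen) && (seen[v + 1] = true)  # 1-based indexing
    end
    for i in eachindex(seen)
        seen[i] || return i - 1                   # smallest absent value
    end
    return length(seen)
end

function problem2(c)
    N = [0]
    sizehint!(N, c + 2)
    seen = fill(false, c + 1)   # Bool scratch, reused every iteration
    bits = Int[]                # XOR values, reused every iteration
    for _ in 0:c
        elems = @view N[1:end-1]
        resize!(bits, length(elems))
        for i in eachindex(elems)
            bits[i] = elems[i] ⊻ elems[end - i + 1]  # XOR with reversed prefix
        end
        push!(N, mex!(seen, bits))
    end
    return count(==(0), N)
end
```

The idea is that the only growing allocation left is `N` itself; the per-iteration `reverse(elems)` copy and the temporary arrays from `setdiff` are replaced by in-place loops over preallocated buffers.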