Please check the following code. It is part of a sigma_2 function (one of the divisor functions, http://mathworld.wolfram.com/DivisorFunction.html) implemented in Python using a crude sieve:
```python
from time import time
from itertools import count
import numpy

def sig2(N, nump=False):
    init = time()
    # initialize with value=1 since every positive integer is divisible by 1;
    # one slot per integer 1..N-1 (index k-1 holds the sum for k)
    if nump:
        print 'using numpy'
        nums = numpy.ones(N - 1, dtype=numpy.int64)
    else:
        nums = [1] * (N - 1)
    # for each number n < N, add n*n to n's multiples
    for n in xrange(2, N):
        nn = n * n
        for i in count(1):
            if n * i >= N:
                break
            nums[n * i - 1] += nn
    print 'sig2(n) done - {} ms'.format((time() - init) * 1000)
    return nums
```
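For comparison, here is a minimal sketch of the same sieve written with numpy slice assignment instead of per-element writes (the function name `sig2_vec` is my own; this is just the vectorized variant, not a claim about what the timings should be):

```python
import numpy

def sig2_vec(N):
    # sigma_2 for integers 1..N-1; index k-1 holds the sum for k
    nums = numpy.ones(N - 1, dtype=numpy.int64)
    for n in range(2, N):
        # add n*n to every multiple of n below N in one slice assignment
        nums[n - 1::n] += n * n
    return nums
```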
I tried it with varying values of N, and the numpy results are quite disappointing:
```
for 2000:
    sig2(n) done - 4.85897064209 ms
    took : 33.7610244751 ms

    using numpy
    sig2(n) done - 31.5930843353 ms
    took : 55.6900501251 ms

for 200000:
    sig2(n) done - 1113.80600929 ms
    took : 1272.8869915 ms

    using numpy
    sig2(n) done - 4469.48194504 ms
    took : 4705.97100258 ms
```
The gap keeps growing for larger N. I know my code isn't really scalable, since it isn't O(n), but setting that aside, the numpy version is consistently slower than the plain-list version in both of these runs. Shouldn't numpy be faster than Python lists and dicts? That was my impression of numpy.
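To isolate what I suspect is the cost of per-element indexing, here is a minimal sketch that just does bare element-wise increments on both containers (the function name `bench_writes` and the benchmark itself are my own; absolute timings will vary by machine):

```python
from time import time
import numpy

def bench_writes(N):
    # time N per-element increments on a plain list
    lst = [1] * N
    t0 = time()
    for i in range(N):
        lst[i] += 1
    t_list = time() - t0

    # time N per-element increments on a numpy array
    arr = numpy.ones(N, dtype=numpy.int64)
    t0 = time()
    for i in range(N):
        arr[i] += 1
    t_arr = time() - t0
    return t_list, t_arr
```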