I hear all the time that Numpy arrays are quicker for dealing with large amounts of data. But how much data does a Numpy array need to hold before it surpasses the efficiency of a standard Python array (technically a list)?
Thanks.
For instance, if you want to set all elements to 1, numpy is already faster for me at 10 elements (the crossover may be even earlier; I didn't check):
>>> import timeit
>>> timeit.timeit('for i in r: a[i] = 1', setup='a = [0]*10; r=range(len(a))')
0.3777730464935303
>>> timeit.timeit('b[:] = 1', setup='import numpy; b=numpy.array([0]*10)')
0.3234739303588867
With 1000 elements, the list version is about 100 times slower, while the numpy version is only about 2 times slower.
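If you want to probe the crossover yourself, a script along these lines (the size and repeat count here are just my own choices, not anything special) re-runs the same comparison at any length:

import timeit

n = 1000       # try different sizes to see where numpy starts winning on your machine
repeats = 10000

# plain Python list, filled element by element in a for loop
list_time = timeit.timeit(
    'for i in r: a[i] = 1',
    setup=f'a = [0]*{n}; r = range({n})',
    number=repeats,
)

# numpy array, filled with a single sliced assignment
numpy_time = timeit.timeit(
    'b[:] = 1',
    setup=f'import numpy; b = numpy.zeros({n})',
    number=repeats,
)

print(f'list loop : {list_time:.4f} s')
print(f'numpy fill: {numpy_time:.4f} s')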
But it all depends on what you need to do. If you can avoid for loops by using numpy-isms (like assigning to b[:]), then numpy is blazing fast; if you have to use a for loop anyway, it won't help much.
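The same effect shows up for arithmetic, not just filling. As a rough sketch (the array size, operation, and repeat count are arbitrary), compare one vectorized numpy expression against an explicit Python loop over the same numpy array:

import timeit

setup = 'import numpy; b = numpy.zeros(1000)'
repeats = 10000

# one vectorized operation over the whole array
vectorized = timeit.timeit('b += 1', setup=setup, number=repeats)

# the same work done element by element in a Python for loop
looped = timeit.timeit(
    'for i in range(len(b)): b[i] += 1',
    setup=setup,
    number=repeats,
)

print(f'vectorized            : {vectorized:.4f} s')
print(f'python loop over numpy: {looped:.4f} s')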