I'm testing `defaultdict()` vs `dict.setdefault()` efficiency using `timeit`. For some reason, each `timeit` call prints two values:
```python
from collections import defaultdict
from timeit import timeit

dd = defaultdict(list)
ds = {}

def dset():
    for i in xrange(100):
        ds.setdefault(i, []).append(i)

def defdict():
    for y in xrange(100):
        dd[y].append(y)
```
Then I print the execution time of both functions and get four values returned:
```python
print timeit('dset()', setup='from def_dict import dset')
print timeit('defdict()', setup='from def_dict import defdict')
```
```
22.3247003333
23.1741990197
11.7763511529
12.6160995785
```
The `timeit` documentation says:

> Time number executions of the main statement. This executes the setup statement once, and then returns the time it takes to execute the main statement a number of times, measured in seconds as a float. The argument is the number of times through the loop, defaulting to one million. The main statement, the setup statement and the timer function to be used are passed to the constructor.
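For comparison, here is a minimal standalone call I'd expect, based on that description (written in Python 3 print syntax for illustration) — it returns exactly one float:

```python
from timeit import timeit

# Time a simple statement 10,000 times; timeit returns a single float:
# the total elapsed time in seconds for all iterations.
elapsed = timeit('"-".join(str(n) for n in range(100))', number=10000)
print(elapsed)  # one value, e.g. 0.01...
```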
I'm on Python 2.7.
- Shouldn't `timeit` return one value? The examples I've seen online return a single value.
- What is the second value, then?