everyone! I am trying to test the performance of creating lots of objects, but I get some weird results. I used three different methods to measure the time it takes to create one million node objects in Python. The first method uses the time module. I know it is not very accurate. The test file is "node_time.py":
from __future__ import print_function
from time import time

class node(object):
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.right = None
        self.left = None
        self.parent = None
        self.depth = 0
        return

begin = time()
content = [node(i, i) for i in range(1000000)]
print(time() - begin)
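(As an aside, time.time() can have coarse resolution on some platforms; a variant of the same test using timeit.default_timer, which picks the best available wall-clock timer for the platform, would look like this:)

```python
from __future__ import print_function
from timeit import default_timer

class node(object):
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.right = None
        self.left = None
        self.parent = None
        self.depth = 0

# default_timer measures wall-clock time with the best
# resolution available on the current platform
begin = default_timer()
content = [node(i, i) for i in range(1000000)]
elapsed = default_timer() - begin
print(elapsed)
```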
The second method uses the timeit module, which should be a much better choice. The test file is "node_timeit.py":
from __future__ import print_function
from timeit import repeat

class node(object):
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.right = None
        self.left = None
        self.parent = None
        self.depth = 0
        return

cmd = "content = [node(i, i) for i in range(1000000)]"
prepare = "from __main__ import node"
cost = min(repeat(cmd, prepare, repeat=1, number=1))
print(cost)
The third method uses the Linux time command. The test file is "node_sys.py":
from __future__ import print_function

class node(object):
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.right = None
        self.left = None
        self.parent = None
        self.depth = 0
        return

content = [node(i, i) for i in range(1000000)]
Finally, the results are quite different:
-bash-4.2$ python2 node_time.py
5.93654894829
-bash-4.2$ python2 node_timeit.py
2.6723048687
-bash-4.2$ time python2 node_sys.py
real 0m8.587s
user 0m7.344s
sys 0m0.716s
The time module method measures wall-clock time, so its result should be greater than the CPU time, not smaller. But with the Linux time command, the sum of user and sys CPU time is as much as 8.060 s. Which result is the correct one? And why are they so different? Thanks for any comments!
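For reference, the user and sys CPU times that the time command reports can also be read from inside the process with os.times() (a sketch; resource.getrusage would give the same numbers on Linux):

```python
from __future__ import print_function
import os

class node(object):
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.right = None
        self.left = None
        self.parent = None
        self.depth = 0

content = [node(i, i) for i in range(1000000)]

# os.times() returns CPU times consumed so far by this process:
# index 0 is user CPU time, index 1 is system CPU time (seconds)
t = os.times()
print("user %.3f sys %.3f" % (t[0], t[1]))
```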