
I was reviewing code and updating the import statements based on general guidelines, changing “from xxx import *” to “from xxx import m, n, p”. The difference in script execution time, however, was noticeable:

from collections import OrderedDict
from definitions import *

Average script time 22ms

from collections import OrderedDict
from definitions import a, b, c, d, e, f

Average script time 48ms

The topic of import performance has been taken up several times on SE, and these results seem to run counter to some answers. Why would the import statement in this case cause such a significant difference in the script performance?

A follow-up question: this package uses a definitions.py module to store the package's general-purpose (mostly static) classes and functions. What is the best way to import all classes from a module without needing to prefix them with "definitions." every time they are used?
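To make the follow-up concrete, these are the two styles being weighed (an illustrative sketch only; the names a through f are placeholders for whatever definitions.py actually contains, and the __all__ line shows one common way to limit what a star-import exposes):

```
# Option 1: import the module and prefix every use with "definitions."
import definitions
result = definitions.a()   # verbose at each call site

# Option 2: star-import; definitions.py can declare
#   __all__ = ["a", "b", "c", "d", "e", "f"]
# so that only the intended names are exposed to importers.
from definitions import *
result = a()               # no prefix needed
```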

EDIT: More information... curiouser and curiouser.

Script timing is done using time.clock() over >50 iterations
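The harness is roughly of this shape (a reconstruction for reproducibility rather than the exact code; run_script() is a stand-in for the real script body, and time.clock() matches the original setup even though it is deprecated in newer CPython):

```
import time

def run_script():
    # Placeholder for the actual script body being measured.
    pass

N = 50
start = time.clock()
for _ in range(N):
    run_script()
elapsed = time.clock() - start
print("Average script time: %.1f ms" % (elapsed / N * 1000.0))
```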

It turns out that OrderedDict is already imported inside definitions, so when I import it from there, the script time comes back down:

from definitions import a, b, c, d, e, f, OrderedDict

Average script time 22ms

Just to thicken the plot, there is also a "from System import Array" statement, though it has no effect on the script time. The way the script imports OrderedDict appears to be the issue.
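One way to narrow this down (a diagnostic sketch, not part of the original measurements) is to confirm that the modules involved are already cached in sys.modules and to time the import statement itself; a cached module should resolve almost instantly:

```
import sys
import time

# A module that has already been imported anywhere in the process sits in
# sys.modules, so a second import should be a cheap dictionary lookup.
print("collections cached: %s" % ("collections" in sys.modules))
print("definitions cached: %s" % ("definitions" in sys.modules))

start = time.clock()
from collections import OrderedDict
print("import took %.3f ms" % ((time.clock() - start) * 1000.0))
```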

  • How did you time it? Include the code in the question so we can reproduce the results. – Peter Wood Feb 05 '16 at 20:37
  • Quick test: try `import definitions` and then see whether a, b, c, d, e, and f are all in `dir(definitions)`. `dateutil` is an interesting example of a library, as `from dateutil import *` doesn't give any of its features; instead you need to do `from dateutil import parser`. – limasxgoesto0 Feb 05 '16 at 21:14
