Our Python application (a cool web service) has a full test suite (unit tests, integration tests, etc.) that all developers must run before committing code.
I want to add some performance tests to the suite to make sure no one adds code that makes the application run too slowly (for some rather arbitrary definition of "too slow").
Obviously, I can collect some functionality into a test, time it, and compare the result to some predefined threshold.
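In its most naive form that is just a timed test with a hard-coded limit, something like the sketch below (do_expensive_work and the 0.5-second limit are placeholders I made up, not anything from our real suite):

```python
import time
import unittest

# Placeholder standing in for whatever piece of the service we want to guard.
def do_expensive_work():
    total = 0
    for i in range(100_000):
        total += i * i
    return total

class NaivePerformanceTest(unittest.TestCase):
    THRESHOLD_SECONDS = 0.5  # arbitrary, machine-dependent limit

    def test_expensive_work_is_fast_enough(self):
        start = time.perf_counter()
        do_expensive_work()
        elapsed = time.perf_counter() - start
        self.assertLess(elapsed, self.THRESHOLD_SECONDS)

if __name__ == "__main__":
    unittest.main()
```

The problem is precisely that THRESHOLD_SECONDS means something different on every developer's machine, which leads to the requirements below.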
The tricky requirements:
- I want every developer to be able to run the tests on their own machine. Machines vary in CPU power, OS (yes, both Linux and some Windows) and external configuration, although the Python version, libraries and modules are the same everywhere. A test server, while generally a good idea, does not solve this.
- I want the test to be DETERMINISTIC: regardless of what else is happening on the machine running the tests, multiple runs of the test should return the same results.
My preliminary thoughts:
- Use timeit to benchmark the system every time I run the tests, and compare the performance test results to that benchmark (a sketch follows this list).
- Use cProfile to instrument the interpreter and ignore "outside noise". I'm not sure I know how to read the pstats structure yet, but I'm sure it is doable (also sketched below).
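For the timeit idea, what I have in mind is roughly the following: time a fixed CPU-bound reference workload once per run and use it to scale a threshold that was calibrated on one reference machine. The workload and both calibration constants here are made up for the sketch:

```python
import timeit

# Placeholder workload standing in for the functionality under test.
def do_expensive_work():
    total = 0
    for i in range(100_000):
        total += i * i
    return total

def machine_baseline():
    """Time a small, fixed CPU-bound workload to estimate this machine's speed."""
    return timeit.timeit("sum(i * i for i in range(10_000))", number=200)

# Values measured once on a chosen "reference" machine; both numbers are
# invented for the sketch and would need real calibration.
REFERENCE_BASELINE = 0.05   # machine_baseline() result on the reference machine
REFERENCE_THRESHOLD = 0.5   # allowed time for do_expensive_work() on that machine

def test_expensive_work_within_scaled_threshold():
    threshold = REFERENCE_THRESHOLD * (machine_baseline() / REFERENCE_BASELINE)
    elapsed = timeit.timeit(do_expensive_work, number=10)
    assert elapsed < threshold

if __name__ == "__main__":
    test_expensive_work_within_scaled_threshold()
```

This compensates for raw CPU speed, but it still isn't deterministic when the machine is under load, which is why I'm also looking at the profiler route.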
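For the cProfile idea, the closest thing to a deterministic metric I can think of is not time at all but the number of function calls the profiler records; pstats exposes that as the total_calls attribute (the same number print_stats() shows in its header). A sketch, again with a placeholder workload and a made-up budget:

```python
import cProfile
import pstats

# Placeholder workload (same stand-in as in the earlier sketches).
def do_expensive_work():
    total = 0
    for i in range(100_000):
        total += i * i
    return total

def count_function_calls(func):
    """Run func under cProfile and return the total number of function calls.

    Unlike wall-clock time, the call count does not depend on CPU speed or on
    whatever else the machine is doing, so it stays the same between runs as
    long as the code itself is deterministic.
    """
    profiler = cProfile.Profile()
    profiler.enable()
    func()
    profiler.disable()
    return pstats.Stats(profiler).total_calls

def test_call_budget():
    # CALL_BUDGET is invented for the sketch; in practice it would be recorded
    # once and only raised when the implementation legitimately changes.
    CALL_BUDGET = 50
    assert count_function_calls(do_expensive_work) <= CALL_BUDGET

if __name__ == "__main__":
    test_call_budget()
```

The obvious downside is that a call budget catches algorithmic regressions (many more calls) but not a single function simply getting slower.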
Other thoughts?
Thanks!
Tal.