What is the current best practice for testing one's own Hypothesis strategies? There are, e.g., tests of how well examples shrink in HypothesisWorks/hypothesis-python/tests/quality/test_shrink_quality.py. However, I have not yet found examples that test the data-generation behaviour of strategies (correctness in general, performance, etc.).
1 Answer
Hypothesis runs a series of health checks on each strategy you use, including checks on the time taken to generate data and on the proportion of generation attempts that succeed - try e.g. `none().map(lambda x: time.sleep(2)).example()` or `integers().filter(lambda x: x % 17 == 0).example()` to see them in action! (Note the second example uses `.filter`, not `.map` - rejecting 16 of every 17 values is what trips the filtering health check.)
In most cases you do not need to test your own strategies beyond relying on these health checks. Instead, I would check that your tests are sufficient by using a code-coverage library.
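If you still want an explicit sanity check on a custom strategy, one lightweight approach (a sketch, not from the answer itself) is to assert an invariant over everything the strategy generates. The `evens` strategy below is a hypothetical example:

```python
# Sketch: sanity-checking a custom strategy by asserting an invariant
# over its generated values. `evens` is a hypothetical custom strategy.
from hypothesis import given, strategies as st

evens = st.integers().map(lambda x: x * 2)

@given(evens)
def test_strategy_only_yields_evens(n):
    # Every value the strategy produces must satisfy the invariant.
    assert n % 2 == 0

test_strategy_only_yields_evens()
print("strategy check passed")
```

This doubles as documentation of what the strategy is supposed to produce, and the health checks above run for free alongside it.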

Zac Hatfield-Dodds
- Right, the health checks are quite helpful. Is there a way to aggregate the coverage of several subsequent Hypothesis runs? (In case full coverage is not achieved during a single run.) – thinwybk Apr 06 '18 at 06:23
- Thanks to a really cool internal feature, you shouldn't need to! When you run a Hypothesis test, it runs under branch coverage and biases the example generation to cover as many branches as possible. A diverse set of base inputs is also saved in the database, so this shouldn't backslide much at all. If you somehow *still* have uncovered paths, I'd suggest adding an explicit @example or more tests rather than aggregating multiple runs. – Zac Hatfield-Dodds Apr 10 '18 at 13:17
- Wow, that's a really powerful feature! In my current use case I am using Hypothesis not for testing Python code but for remote-system testing, so it won't help me in this case... – thinwybk Apr 10 '18 at 13:28
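The `@example` approach mentioned in the comments above can be sketched as follows. The function and pinned values are illustrative, not taken from the thread; `@example` is a real Hypothesis decorator that guarantees a specific input is tried on every run:

```python
# Sketch: pinning specific inputs with @example so coverage of a
# particular path never depends on random generation.
from hypothesis import example, given, strategies as st

@given(st.integers())
@example(0)          # this exact input is run on every invocation
@example(-(2**31))   # pin a boundary value the generator might miss
def test_abs_is_nonnegative(x):
    assert abs(x) >= 0

test_abs_is_nonnegative()
print("ok")
```

Pinned examples run first, before any generated data, which makes them a deterministic alternative to aggregating coverage across runs.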