Hypothesis allows two different ways to define derived strategies: `@composite` and `flatmap`. As far as I can tell, the former can do anything the latter can do. However, the implementation of the numpy `arrays` strategy speaks of some hidden costs:
# We support passing strategies as arguments for convenience, or at least
# for legacy reasons, but don't want to pay the perf cost of a composite
# strategy (i.e. repeated argument handling and validation) when it's not
# needed. So we get the best of both worlds by recursing with flatmap,
# but only when it's actually needed.
which I assume means worse shrinking behavior, but I am not sure, and I could not find this documented anywhere else. So when should I use `@composite`, when `flatmap`, and when should I take this halfway route as in the implementation linked above?