
I want to use hypothesis to test a tool we've written to create Avro schemas from Django models. Writing tests for a single model is simple enough using the Django extra:

from avro.io import AvroTypeException

from hypothesis import given
from hypothesis.extra.django.models import models as hypothetical

from my_code import models

@given(hypothetical(models.Foo))
def test_amodel_schema(self, amodel):
    """Test a model through avro_utils.AvroSchema"""
    # Get the already-created schema for the current model:
    schema = (s for m, s in SCHEMA if m == amodel.model_name)
    for schemata in schema:
        error = None
        try:
            schemata.add_django_object(amodel)
        except AvroTypeException as exc:
            # Bind to a separate name: Python 3 unbinds "exc" at the end
            # of the except block, so "error" must carry the value out.
            error = exc
        assert error is None

...but if I were to write tests for every model that can be avro-schema-ified, they would be exactly the same except for the argument to the given decorator. I can get all the models I'm interested in testing with ContentTypeCache.list_models(), which returns a dictionary of schema_name: model (yes, I know, it's not a list). But how can I generate code like

for schema_name, model in ContentTypeCache.list_models().items():
    @given(hypothetical(model))
    def test_this_schema(self, amodel):
        # Same logic as above

I've considered basically dynamically generating each test method and directly attaching it to globals, but that sounds awfully hard to understand later. How can I write the same basic parameter tests for different django models with the least confusing dynamic programming possible?
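For comparison, the globals()-attachment approach mentioned above would look roughly like this. This is only a sketch: plain strategies stand in for hypothetical(model), and the STRATEGIES names are illustrative, not part of the real codebase:

```python
import hypothesis.strategies as st
from hypothesis import given

# Stand-ins for ContentTypeCache.list_models().items().
STRATEGIES = {"int_schema": st.integers(), "text_schema": st.text()}

def _make_test(strategy):
    """Build one test function closed over a single strategy."""
    @given(strategy)
    def test(value):
        # Placeholder for the real schema assertions above.
        assert isinstance(value, (int, str))
    return test

# Attach a distinctly named test per schema to the module namespace
# so the test runner collects each one separately.
for name, strategy in STRATEGIES.items():
    globals()["test_%s" % name] = _make_test(strategy)
```

It works, but as the question says, it is the kind of indirection that is hard to follow later.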

kojiro

2 Answers


You could write it as a single test using one_of:

import hypothesis.strategies as st

@given(st.one_of([hypothetical(model) for model in ContentTypeCache.list_models().values()]))
def test_this_schema(self, amodel):
    # Same logic as above

You might want to increase the number of examples run in this case, using something like @settings(max_examples=settings.default.max_examples * len(ContentTypeCache.list_models())), so that the single test runs as many examples as N separate tests would.
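As a self-contained illustration of this pattern (plain strategies stand in for hypothetical(model), and the strategy list is illustrative since ContentTypeCache isn't available here):

```python
import hypothesis.strategies as st
from hypothesis import given, settings

# Stand-ins for [hypothetical(model) for model in ...] from the answer.
MODEL_STRATEGIES = [st.integers(), st.text()]

# Scale the example count so the combined test does as much work as
# one test per strategy would.
@settings(max_examples=settings.default.max_examples * len(MODEL_STRATEGIES))
@given(st.one_of(MODEL_STRATEGIES))
def test_any_schema(value):
    # one_of draws each example from one of the underlying strategies.
    assert isinstance(value, (int, str))
```

Calling test_any_schema() then runs twice the default number of examples, spread across both strategies.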

DRMacIver

I would usually solve this kind of problem by parametrising the test, and drawing from the strategy internally:

import pytest
import hypothesis.strategies as st
from hypothesis import given

@pytest.mark.parametrize('model_type', list(ContentTypeCache.list_models().values()))
@given(data=st.data())
def test_amodel_schema(self, model_type, data):
    amodel = data.draw(hypothetical(model_type))
    ...

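Outside a pytest run, the same st.data() technique can be exercised directly. This sketch substitutes plain strategies for hypothetical(model_type) and a plain loop for the parametrize mark; the names are illustrative:

```python
import hypothesis.strategies as st
from hypothesis import given

# Stand-ins for ContentTypeCache.list_models().values().
MODEL_STRATEGIES = [st.integers(), st.text()]

@given(data=st.data())
def check_schema(strategy, data):
    # Draw from the parametrised strategy inside the test body,
    # as the answer does with hypothetical(model_type).
    value = data.draw(strategy)
    assert isinstance(value, (int, str))

# Under pytest, @pytest.mark.parametrize would supply "strategy" instead,
# giving one reported test per model.
for strategy in MODEL_STRATEGIES:
    check_schema(strategy=strategy)
```

The advantage over one_of is that each parametrised case is reported as a separate test, so a failure immediately names the model involved.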
Zac Hatfield-Dodds