
I'm writing a set of tools to test the behavior of a custom HTTP server: whether it sets appropriate response codes, header fields, etc. I'm using pytest to write the tests.

The goal is to make requests to several resources and then evaluate the responses in multiple tests: each test should check a single aspect of the HTTP response. However, not every response is tested by every test, and vice versa.

To avoid sending the same HTTP request multiple times and to reuse the HTTP response messages, I'm thinking of using pytest's fixtures, and to run the same tests on different HTTP responses I'd like to use pytest's test-generation capabilities:

import pytest
import requests

def pytest_generate_tests(metafunc):
    # Look up the list of parameter sets defined for this test function
    funcarglist = metafunc.cls.params[metafunc.function.__name__]
    argnames = sorted(funcarglist[0])
    # Turn each parameter set into one parametrized test invocation
    metafunc.parametrize(argnames, [[funcargs[name] for name in argnames]
                                    for funcargs in funcarglist])


class TestHTTP(object):
    @pytest.fixture(scope="class")
    def get_root(self, request):
        return requests.get("http://test.com")

    @pytest.fixture(scope="class")
    def get_missing(self, request):
        return requests.get("http://test.com/not-there")

    def test_status_code(self, response, code):
        assert response.status_code == code

    def test_header_value(self, response, field, value):
        assert response.headers[field] == value

    params = {
        'test_status_code': [dict(response=get_root, code=200),
                             dict(response=get_missing, code=404), ],
        'test_header_value': [dict(response=get_root, field="content-type", value="text/html"),
                              dict(response=get_missing, field="content-type", value="text/html"), ],
    }

The problem appears to be in defining params: dict(response=get_root, code=200) and similar definitions do not do what I intend. I'd like to bind to the fixture's value, not to the actual function reference.

When running the tests, I get this kind of error:

________________________________________________ TestHTTP.test_header_value[content-type-response0-text/html] _________________________________________________

self = <ev-question.TestHTTP object at 0x7fec8ce33d30>, response = <function TestHTTP.get_root at 0x7fec8ce8aa60>, field = 'content-type', value = 'text/html'

    def test_header_value(self, response, field, value):
>       assert response.headers[field] == value
E       AttributeError: 'function' object has no attribute 'headers'

test_server.py:32: AttributeError

How can I convince pytest to take the fixture value instead of the function?
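
As a minimal sketch of one way to get that binding (this is not from the original post, and it assumes pytest ≥ 3.0, where `request.getfixturevalue()` is available): parametrize on the fixture's name as a string and resolve it inside the test via the built-in `request` fixture, replacing the corresponding pieces of TestHTTP above:

def test_status_code(self, request, response_name, code):
    # Look up the class-scoped fixture by its name at run time
    response = request.getfixturevalue(response_name)
    assert response.status_code == code

params = {
    'test_status_code': [dict(response_name="get_root", code=200),
                         dict(response_name="get_missing", code=404), ],
}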

David

1 Answer


No need to generate tests from fixtures, just parametrize your fixture and write regular tests for the values it returns:

import pytest
import requests


should_work = [
    {
        "url": "http://test.com",
        "code": 200,
        "fields": {"content-type": "text/html"}
    },
]

should_fail = [
    {
        "url": "http://test.com/not-there",
        "code": 404,
        "fields": {"content-type": "text/html"}
    },
]

should_all = should_work + should_fail


def response(request):
    retval = dict(request.param)  # {"url": ..., "code": ..., "fields": ...}
    retval['response'] = requests.get(request.param['url'])
    return retval  # {"response": ..., "url": ..., "code": ..., "fields": ...}


# One fixture for working requests
response_work = pytest.fixture(scope="module", params=should_work)(response)
# One fixture for failing requests
response_fail = pytest.fixture(scope="module", params=should_fail)(response)
# One fixture for all requests
response_all = pytest.fixture(scope="module", params=should_all)(response)
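# Calling pytest.fixture() directly, instead of using it as a decorator,
# lets the three fixtures above share the same response() implementation
# while each gets its own parameter set.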


# This test only requests failing fixture data
def test_status_code(response_fail):
    assert response_fail['response'].status_code == response_fail['code']


# This test runs against all fixture data
@pytest.mark.parametrize("field", ["content-type"])
def test_header_content_type(response_all, field):
    assert response_all['response'].headers[field] == response_all['fields'][field]
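
Note that with `scope="module"` each fixture instance performs its HTTP request only once per module, and all tests that consume it reuse the cached response; a URL that appears in both `should_fail` and `should_all` is still fetched once per fixture, though.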
Nils Werner
  • Thank you, Nils. However, this code will run all tests on all fixtures, which is not what I desire. I'd like to specify which tests and which fixtures should be combined. For the sake of argument, imagine you do not want to check the return code on a normal request, only on the one that should return 404. Also, I want each test to check only one aspect of the response, that is, to execute only one assertion. So, if one header is missing, I should get an error for that header only, while the tests for the others should pass. – David Apr 07 '17 at 09:42
  • I just realized I could guard each check with `key in response` to see whether the `response` object has a `key`, and `pytest.skip()` it if it doesn't (see the sketch after these comments). – David Apr 07 '17 at 10:20
  • Why not test the return value of every request? The less special treatment of certain conditions, the less likely you are to make mistakes. – Nils Werner Apr 07 '17 at 10:22
  • I have changed the last test function to accept a `field` name parameter, so you have one test per name. – Nils Werner Apr 07 '17 at 10:23
  • I have updated the answer to be able to separate working from failing test data, but I think you will now have double requests. – Nils Werner Apr 07 '17 at 10:43
  • Thanks for all the help, Nils. I'm glad to mark your response as the solution. As a side note, I think I may have misled you with the status code. It was meant only as an example: on some requests I want to test status codes, on others content-length, on another date, etc. It depends on the initial request what I want to test. So I'll go with the approach you suggested earlier by writing a function for each property I want to check. However, I can safeguard these checks with if statements and skip tests for properties that are not relevant for a particular response. – David Apr 07 '17 at 10:56
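
A minimal sketch of the guard described in the comments above, building on the answer's response_all fixture (the extra field names here are hypothetical examples; only "content-type" exists in the data above, so the other two would simply be skipped):

@pytest.mark.parametrize("field", ["content-type", "content-length", "date"])
def test_header_value(response_all, field):
    # Skip properties that are not relevant for this particular response
    if field not in response_all['fields']:
        pytest.skip("%s is not checked for %s" % (field, response_all['url']))
    assert response_all['response'].headers[field] == response_all['fields'][field]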