
I'm trying to pass the result of one test to another in pytest - or more specifically, reuse an object created by the first test in the second test. This is how I currently do it.

@pytest.fixture(scope="module")
def result_holder():
    return []

def test_creation(result_holder):
    object = create_object()
    assert object.status == 'created' # test that creation works as expected
    result_holder.append(object.id) # I need this value for the next test

# ideally this test should only run if the previous test was successful
def test_deletion(result_holder):
    previous_id = result_holder.pop()
    object = get_object(previous_id) # here I retrieve the object created in the first test
    object.delete()
    assert object.status == 'deleted' # test for deletion

(before we go further, I'm aware of py.test passing results of one test to another - but the single answer on that question is off-topic, and the question itself is 2 years old)

Using fixtures like this doesn't feel super clean... And the behavior is not clear if the first test fails (although that can be remedied by testing for the content of the fixture, or using something like the incremental fixture in the pytest doc and the comments below). Is there a better/more canonical way to do this?
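
The "testing for the content of the fixture" guard mentioned above could be sketched like this (a minimal sketch; `guarded_pop` is a hypothetical helper name, not part of pytest):

```python
import pytest


def guarded_pop(result_holder):
    # Hypothetical helper illustrating the guard: skip the dependent
    # test when the creation test recorded nothing (it failed or never ran).
    if not result_holder:
        pytest.skip("no object id recorded by the creation test")
    return result_holder.pop()


# With a populated holder, the guard simply hands back the id.
assert guarded_pop([42]) == 42
```

The deletion test would then call `guarded_pop(result_holder)` instead of `result_holder.pop()`, so it is skipped rather than failing with a confusing error when the creation test fails.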

Drise
pills
  • I'd say that passing stuff around tests is a bad idea, as it defeats the purpose of *unit* testing. Anyhow, you can use something like what's described here https://docs.pytest.org/en/latest/example/simple.html#incremental-testing-test-steps to run a test only if the previous one was successful – Ignacio Vergara Kausel Mar 12 '18 at 15:26
  • Yeah I'm aware of what a unit test is supposed to do. However how would you suggest I test the deletion of an object without creating it first ? (I'm not being flippant - if there's a standard, best practice way to do these kinds of tests I'm interested. At the moment the best I can come up with is to re-use the object created in the creation test as a fixture for the deletion test) – pills Mar 12 '18 at 15:34
  • You have a fixture with the object created, which hopefully will also be useful for other tests (function scope), and then you delete the object. And you test the creation separately. Just an observation (which might have to do with trying to provide an MCVE): keep in mind that your `test_put_stuff` is not really a test and is working as some sort of extended fixture. – Ignacio Vergara Kausel Mar 12 '18 at 15:39
  • I know, the second snippet is just a PoC to see if fixtures were writable. I'll roll the two snippets in one to make it clearer. – pills Mar 12 '18 at 15:45
  • Why does everyone think this is a bad idea. I am testing that some firmware files are as specified by hashing them. I do not know what the name of these files are beforehand so I hash all the files in the firmware directory and get the names of the files that match the control hashes. I then want to use these file names in the next test that tests loading the firmware in an MCU. Why would I go through hashing all the files again rather than just share the file names across tests. Is it better to hash everything in a fixture to get the names, then hash only these files again in the hash test? – jacob Feb 10 '23 at 23:13
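
For reference, the incremental-testing recipe linked in the first comment boils down to two hooks in conftest.py (adapted from the pytest docs; treat it as a sketch, not a drop-in):

```python
# conftest.py -- sketch of the "incremental" marker recipe from the pytest docs
import pytest


def pytest_runtest_makereport(item, call):
    # Remember the first failing test of a class marked @pytest.mark.incremental.
    if "incremental" in item.keywords and call.excinfo is not None:
        item.parent._previousfailed = item


def pytest_runtest_setup(item):
    # Mark later tests in the same class as expected failures.
    if "incremental" in item.keywords:
        previousfailed = getattr(item.parent, "_previousfailed", None)
        if previousfailed is not None:
            pytest.xfail("previous test failed (%s)" % previousfailed.name)
```

Usage is then a marked class, so a failing `test_creation` automatically xfails `test_deletion`:

```python
@pytest.mark.incremental
class TestObjectLifecycle:
    def test_creation(self): ...
    def test_deletion(self): ...
```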

2 Answers


For sharing data between tests, you could use the pytest namespace or cache.

Namespace

Example with sharing data via namespace. Declare the shared variable via hook in conftest.py:

# conftest.py

import pytest


def pytest_namespace():
    return {'shared': None}

Now access and redefine it in tests:

import pytest


def test_creation():
    pytest.shared = 'spam'
    assert True


def test_deletion():
    assert pytest.shared == 'spam'
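
Note that, as pointed out in the comments, pytest_namespace was removed in pytest 4.0. On newer versions, a roughly equivalent pattern (a sketch, assuming you are happy to attach an attribute to the pytest module) is to create the shared slot in a pytest_configure hook:

```python
# conftest.py
import pytest


def pytest_configure(config):
    # Attach the shared slot directly to the pytest module;
    # the tests above then work unchanged on pytest >= 4.0.
    pytest.shared = None
```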

Cache

The cache is a neat feature because it is persisted on disk between test runs, so it usually comes in handy for reusing the results of long-running tasks to save time on repeated runs; but you can also use it for sharing data between tests. The cache object is available via the config object, and you can access it e.g. via the request fixture:

def test_creation(request):
    request.config.cache.set('shared', 'spam')
    assert True


def test_deletion(request):
    assert request.config.cache.get('shared', None) == 'spam'
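
One caveat: the cache persists values as JSON files under .pytest_cache, so only JSON-serializable values survive a set/get round-trip. A quick illustration of that constraint (plain json here stands in for what the cache does under the hood):

```python
import json

# Whatever you cache must survive a JSON round-trip.
value = {'id': 42, 'status': 'created'}
assert json.loads(json.dumps(value)) == value

# Non-JSON types (sets, custom objects) need converting first.
ids = {3, 1, 2}
assert json.loads(json.dumps(sorted(ids))) == [1, 2, 3]
```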

ideally this test should only run if the previous test was successful

There is a plugin for that: pytest-dependency. Example:

import pytest


@pytest.mark.dependency()
def test_creation():
    assert False


@pytest.mark.dependency(depends=['test_creation'])
def test_deletion():
    assert True

will yield:

$ pytest -v
============================= test session starts =============================
...
collected 2 items

test_spam.py::test_creation FAILED                                      [ 50%]
test_spam.py::test_deletion SKIPPED                                     [100%]

================================== FAILURES ===================================
________________________________ test_creation ________________________________

    def test_creation():
>       assert False
E       assert False

test_spam.py:5: AssertionError
===================== 1 failed, 1 skipped in 0.09 seconds =====================
hoefling
    `pytest_namespace` was [removed in Pytest 4.0](https://docs.pytest.org/en/latest/deprecations.html#pytest-namespace) with no replacement. We just added attributes to `__class__`. – Noumenon Oct 23 '19 at 19:19
  • Please note `@pytest.mark.dependency()` is about ordering tests and not sharing objects between tests. – Sławomir Lenart Aug 27 '21 at 11:52
  • `pytest-dependency` does not order test cases, it only skips test cases if the test case they depend on is skipped or failed. `pytest-order` can be used for ordering. – Sai Prannav Krishna Sep 09 '22 at 06:34
  • The `request` name is only defined within a fixture. Your cache example should fail with `NameError: name 'request' is not defined`, as you are not within a `fixture` decorator context. – jacob Feb 10 '23 at 23:07
  • @jacob `request` is a builtin fixture itself and is meant to be used as a test argument. Why do you think it can be defined only within a fixture? – hoefling Feb 10 '23 at 23:13
  • @hoefling Because I am getting that exact `NameError` when trying a similar thing. Where are you defining request? Is it imported somewhere? I was looking at the docs here: https://pytest.org/en/6.2.x/reference.html#request – jacob Feb 10 '23 at 23:17
  • @hoefling Also, [here](https://pytest.org/en/6.2.x/cache.html), describes the cache as only being used for persistence across test runs, not between tests in the same run (which is what I need to do). – jacob Feb 10 '23 at 23:20
  • @hoefling well, totally missed that you had `request` as a fixture parameter there. Reread your comment and it hit me. First time with pytest here :) Works great. Upvoting now. – jacob Feb 10 '23 at 23:54
Use return and then call it later, so it'll look like:

def test_creation():
    object = create_object()
    assert object.status == 'created'
    return object.id # this doesn't show on stdout, but it hands the id to whatever calls it


def test_update():
    id = test_creation() # note: this re-runs the creation test to get the id
    object = get_object(id)
    object.update()
    assert object.status == 'updated' # some more tests

If this is what you're thinking of, there ya go.
nAZklX
    This is pretty close to what I want, except that `test_creation()` ends up being called twice. As these tests have side effects (unfortunate but unavoidable in my context) this is something I'd rather avoid. – pills Mar 12 '18 at 15:53
  • Can you make the id variable be in creation rather than update? – nAZklX Mar 12 '18 at 16:52
  • not sure I understand your question. Please check the original question, which I've updated to be more specific – pills Mar 12 '18 at 17:07