Consider the following example (which is a simple template of my real issue, of course):
```python
import time

import pytest


@pytest.fixture(scope="session", params=[1, 2, 3, 4, 5])
def heavy_computation(request):
    print(f"heavy_computation - param is {request.param}")
    time.sleep(10)
    return request.param


@pytest.mark.parametrize("param", ["A", "B", "C", "D", "E"])
def test_heavy_computation(param, heavy_computation):
    print(f"running {param} with {heavy_computation}")
```
We have parametrized tests (5 test parameters) that depend on a parametrized session-scoped fixture (5 fixture parameters), giving 25 tests in total. As its name suggests, the fixture does some heavy computation that takes a while.
TL;DR - how can I use pytest-xdist such that each worker will run one heavy_computation
and its dependent tests right afterward (without separating this file into 5 files)?
Now the full details:
In order to speed up the testing process, I'm using pytest-xdist. A known limitation of pytest-xdist is that it does not run a session-scoped fixture only once across workers. For example, if 5 workers grab tests 1-A, 1-B, ..., 1-E (to be clear: x-y is a combination of the fixture and test parameters), all 5 workers will run the heavy computation for fixture param 1, each producing the same result - we don't want that.
In the official docs of the package, there's a proposed solution that suggests using a file lock. The problem with this approach, as far as I understand, is that every worker waiting for a fixture to be ready will block until the process that started computing it first finishes, instead of that waiting happening inside a single worker while the others go on to compute the rest of the fixtures.
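For concreteness, here is roughly how I understand the docs' file-lock pattern applied to my example. This is my own sketch: it uses a crude stdlib exclusive-create lock file instead of the `filelock` package the docs use, and `compute_once` and the cache file names are names I made up:

```python
import json
import os
import time
from pathlib import Path

import pytest


def compute_once(param, root: Path, compute):
    """Run compute(param) at most once per param across processes,
    caching the JSON-serialisable result in a file under root."""
    cache = root / f"heavy_{param}.json"
    lock = root / f"heavy_{param}.lock"
    # Crude cross-process lock: only one process can exclusively
    # create the lock file; the others spin until it is released.
    while True:
        try:
            fd = os.open(lock, os.O_CREAT | os.O_EXCL)
            break
        except FileExistsError:
            time.sleep(0.05)
    try:
        if cache.is_file():
            # Another worker already computed this param.
            return json.loads(cache.read_text())
        result = compute(param)
        cache.write_text(json.dumps(result))
        return result
    finally:
        os.close(fd)
        os.unlink(lock)


def _heavy(param):
    time.sleep(10)  # the heavy computation itself
    return param


@pytest.fixture(scope="session", params=[1, 2, 3, 4, 5])
def heavy_computation(request, tmp_path_factory):
    # getbasetemp().parent is shared by all xdist workers in a run.
    root = tmp_path_factory.getbasetemp().parent
    return compute_once(request.param, root, _heavy)
```

Note how this illustrates the problem: every worker that needs param 1 blocks on the same lock while one worker computes it, instead of moving on to params 2-5 in the meantime.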
My goal is to gather a fixture and its dependent tests to run as a group inside a single worker, without blocking other workers. Is there an elegant* way to do this?
* Yes, of course, one solution is to split this file into 5 files with hard-coded fixtures - I don't want to do that if there's a nicer solution.
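For reference, the closest built-in mechanism I've found is pytest-xdist's `xdist_group` mark combined with `--dist loadgroup` (available in pytest-xdist >= 2.5), which sends all tests sharing a group name to one worker. A sketch of tagging each test with its fixture param from `conftest.py` - the hook body and group names are my own guess at how this would be wired up, not something from the docs:

```python
import pytest


# conftest.py sketch; run with e.g.:  pytest -n 5 --dist loadgroup
def pytest_collection_modifyitems(config, items):
    for item in items:
        # callspec holds the resolved parameters of a parametrized item;
        # "heavy_computation" is the fixture name from the example above.
        callspec = getattr(item, "callspec", None)
        if callspec is not None and "heavy_computation" in callspec.params:
            group = f"heavy_{callspec.params['heavy_computation']}"
            item.add_marker(pytest.mark.xdist_group(name=group))
```

I'm not sure whether this actually achieves the "one fixture plus its dependent tests per worker" scheduling I'm after, which is part of what I'm asking.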
Hope that it all makes sense. Thanks!