If you specifically want session scope, you may be out of luck in terms of cooperatively scheduled servers with pytest-asyncio. If you're willing to settle for function scope, I've gotten it to work. Of course, this means your server will be started and stopped for each test; that isn't much overhead for the trivial echo server here, but it may be for your actual server, whatever that may be. Here's an adaptation of your example that works for me:
```python
import asyncio

import pytest

HOST = "localhost"


@pytest.fixture()
def server(event_loop, unused_tcp_port):
    cancel_handle = asyncio.ensure_future(main(unused_tcp_port), loop=event_loop)
    event_loop.run_until_complete(asyncio.sleep(0.01))
    try:
        yield unused_tcp_port
    finally:
        cancel_handle.cancel()


async def handle_echo(reader, writer):
    data = await reader.read(100)
    message = data.decode()
    addr = writer.get_extra_info('peername')
    print(f"SERVER: Received {message!r} from {addr!r}")
    writer.write(data)
    await writer.drain()
    print(f"SERVER: Sent: {message!r}")
    writer.close()
    print("SERVER: Closed the connection")


async def main(port):
    server = await asyncio.start_server(handle_echo, HOST, port)
    addr = server.sockets[0].getsockname()
    print(f'SERVER: Serving on {addr[0:2]}')
    async with server:
        await server.serve_forever()


@pytest.mark.asyncio
async def test_something(server):
    message = "Foobar!"
    reader, writer = await asyncio.open_connection(HOST, server)
    print(f'CLIENT: Sent {message!r}')
    writer.write(message.encode())
    await writer.drain()
    data = await reader.read(100)
    print(f'CLIENT: Received {data.decode()!r}')
    print('CLIENT: Close the connection')
    writer.close()
    await writer.wait_closed()
```
The astute reader will notice the asyncio.sleep(0.01) in the server fixture. I don't know whether the non-determinism is inherent in the asyncio implementation or specific to pytest's use of it, but without that sleep, about 20% of the time (on my machine, naturally) the server will not have started listening before the test tries to connect to it, and the test then fails with ConnectionRefusedError. I played around with it quite a bit: spinning the event loop once (via loop._run_once()) doesn't guarantee the server will be listening, and sleeping for 0.001s still fails about 1% of the time. Sleeping for 0.01s seems to pass 100% over 1,000 runs, but if you want to be really sure, you'd do something like this:
```python
import socket

# Replace `event_loop.run_until_complete(asyncio.sleep(0.01))` with this:
event_loop.run_until_complete(
    asyncio.wait_for(_async_wait_for_server(event_loop, HOST, unused_tcp_port), 5.0)
)


async def _async_wait_for_server(event_loop, addr, port):
    while True:
        a_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        a_socket.setblocking(False)  # loop.sock_connect requires a non-blocking socket
        try:
            await event_loop.sock_connect(a_socket, (addr, port))
            return
        except ConnectionRefusedError:
            await asyncio.sleep(0.001)
        finally:
            a_socket.close()
```
This will keep trying to connect until it succeeds (or, very unlikely, times out after 5 seconds) before running the test. This is how I'm doing it in my "real" tests.
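Another option, if you're free to change the server entry point, is to have main() signal readiness explicitly rather than polling from the outside. This is my own variation, not part of the example above: pass in an asyncio.Event (the `ready` parameter is a name I'm introducing) and set it once start_server() has returned, at which point the socket is already listening.

```python
import asyncio

HOST = "localhost"


async def handle_echo(reader, writer):
    # Trivial stand-in handler; the real one is in the example above.
    writer.write(await reader.read(100))
    await writer.drain()
    writer.close()


async def main(port, ready):
    # `ready` is an asyncio.Event the fixture passes in (my addition).
    # start_server() only returns once the listening socket exists, so
    # setting the event here cannot race with a connecting client.
    server = await asyncio.start_server(handle_echo, HOST, port)
    ready.set()
    async with server:
        await server.serve_forever()

# In the fixture, the sleep/poll is then replaced by:
#
#     ready = asyncio.Event()
#     cancel_handle = asyncio.ensure_future(main(unused_tcp_port, ready), loop=event_loop)
#     event_loop.run_until_complete(asyncio.wait_for(ready.wait(), 5.0))
```

This trades generality for directness: the polling helper works against any server you can't modify, while the event only works if you own the entry point.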
Now, about the scope. From looking at the source, it looks like pytest-asyncio has decided that event_loop is a function-scoped fixture. I tried writing my own module/session-scoped version of it, but pytest-asyncio uses that fixture internally to schedule each test on its own event loop (presumably to prevent tests from somehow stepping on each other). So unless you want to give up on pytest-asyncio and "roll your own" test harness to run tests as async coroutines, I think you're pretty much out of luck on the larger scopes.
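For what it's worth, the "roll your own" route can be as small as the sketch below. This is my own illustration, not code from the example above: each test is a plain synchronous pytest test that spins up a fresh event loop, runs the server plus a client scenario to completion on it, and tears everything down.

```python
import asyncio


async def handle_echo(reader, writer):
    # Same trivial echo handler as above, repeated so this sketch stands alone.
    data = await reader.read(100)
    writer.write(data)
    await writer.drain()
    writer.close()


async def _scenario():
    # Port 0 asks the OS for any free port, so no unused_tcp_port fixture is needed.
    server = await asyncio.start_server(handle_echo, "localhost", 0)
    port = server.sockets[0].getsockname()[1]
    try:
        reader, writer = await asyncio.open_connection("localhost", port)
        writer.write(b"Foobar!")
        await writer.drain()
        assert await reader.read(100) == b"Foobar!"
        writer.close()
        await writer.wait_closed()
    finally:
        server.close()
        await server.wait_closed()


def test_echo_roundtrip():
    # The entire "harness": one fresh event loop per test, run to completion.
    asyncio.run(_scenario())
```

Of course, this gives up pytest-asyncio's fixtures and markers entirely, so it only makes sense if the scoping restriction is a deal-breaker for you.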
FWIW, I tried the "background thread", module-scoped solution before I figured out this cooperative solution, and it was a bit of a pain. First, your server needs a way to do a thread-safe, clean shutdown, trigger-able from your fixture that will, itself, be running on the main thread. Second (and this may not matter to you, but it certainly did to me), debugging was absolutely maddening. It's hard enough to follow the (proverbial) "thread" of coroutine execution on a single event loop running in a single OS thread. Trying to work that out across two threads, each with their own event loop, but only one of which stops at any given time... well, it's difficult.

The basic scenario was like this: I had a file with a hundred tests in it. I run it. ~50 tests fail. That's odd, I only changed one little thing... I can see the backtrace in the console output; something is raising an exception deep inside the server code. No problem, I'll put a breakpoint there. Run again in the debugger. Execution stops at the breakpoint. Great! OK, now, which of the 50 tests is it that triggered this error? Oh! I can't know, because only the background thread is stopped in the debugger. I eventually figure out the bug, fix it, run again, and 100% of tests pass. Huh? Oh... yeah... because the server runs across the whole session, one test was scrambling its internal state, and certain other tests would then fail after that scrambling.
Long story short, the background-thread/broader-scoped solution is possible, but not as nice as this. The second lesson is that you probably want a server-per-test, function-scoped fixture anyway, so that your tests are isolated from one another.
As an aside: being a bit of a testing nerd, I struggled with the idea of even doing this (testing client and server end-to-end in pytest). As I said in my initial comment, it isn't really "unit testing" any more at this point; it's "integration testing", so it's not all that surprising that a unit-testing framework isn't set up to do it very well right out of the box. Fortunately, for all my doubts, doing this has helped me find (and fix) probably a dozen bugs so far that I'm really glad I could find and replicate in a headless test harness, and not by writing a bunch of selenium scripts or, worse, manually clicking around on a web page. And with the server running cooperatively with the tests in a single thread, it's even pretty easy to use the debugger. Have fun!