
I'm trying to attach an event listener to a Python subprocess that is called whenever the process closes/exits. The callback will respawn the subprocess and attach the same event listener to the new one.

Something akin to calling `.on` from `child_process` in Node.js.

I'm aware of this thread already, but I'd ideally like to do this using `asyncio.subprocess` instead of running each event listener in a separate thread.

I was thinking of something like this:

from asyncio import subprocess as asubprocess
from typing import Callable, Awaitable

async def handle_close(proc: asubprocess.Process, spawner: Callable[[], Awaitable[asubprocess.Process]]):
    while True:
        # wait for the current process to exit, then respawn it via the callback
        await proc.wait()
        proc = await spawner()
(This function could also be extended to take a list of processes and their respective spawners and `asyncio.wait` on all of them; once any of them stops, it is respawned and the process is repeated. A rough sketch of that idea follows below.)
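
Roughly, an untested sketch of that multi-process variant (the name `handle_close_all` and the details are just my guess at how it could look):

import asyncio
from asyncio import subprocess as asubprocess
from typing import Awaitable, Callable, List, Tuple

Spawner = Callable[[], Awaitable[asubprocess.Process]]

async def handle_close_all(procs: List[Tuple[asubprocess.Process, Spawner]]):
    # map each wait() task to the spawner that can respawn its process
    tasks = {asyncio.ensure_future(p.wait()): s for p, s in procs}
    while tasks:
        done, _ = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
        for task in done:
            spawner = tasks.pop(task)
            new_proc = await spawner()  # respawn the process that just exited
            tasks[asyncio.ensure_future(new_proc.wait())] = spawner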

The `proc` argument would be the process returned by `asubprocess.create_subprocess_exec`, and `spawner` would be an async function (or any other callback) that respawns that subprocess.
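
For example, a spawner could just be a thin async wrapper around `create_subprocess_exec` (the command here is made up):

from asyncio import subprocess as asubprocess

async def spawn_worker() -> asubprocess.Process:
    # hypothetical command; in practice this is whatever process needs supervising
    return await asubprocess.create_subprocess_exec("my-worker", "--verbose")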

The only issue is that I don't know how to run this in the background without polluting my entire codebase with async. Ideally, there will be multiple subprocesses, each requiring its own `handle_close` in the background, and while all of these handlers are running, the caller code should not be blocked.

In my own use case, I don't need the process handles, so those can be discarded. As long as the processes keep running (and are respawned if they stop) and the caller code that spawns them is free to do other things, that's fine.




> The only issue is, I don't know how to run this in the background without polluting my entire codebase with async.

It's not entirely clear from this sentence what the actual constraint is, i.e. how far you are willing to go with async code. Normally a program that uses asyncio is assumed to run inside an asyncio event loop, and then the whole program is async, i.e. uses callbacks and/or coroutines. However, if you have a large code base using blocking code and threads, you can also introduce asyncio in a separate thread. For example:

import asyncio
import threading

# run a dedicated event loop in a background daemon thread
_loop = asyncio.new_event_loop()
def _run():
    asyncio.set_event_loop(_loop)
    _loop.run_forever()
threading.Thread(target=_run, daemon=True).start()

def async_submit(coro):
    # schedule a coroutine on the background loop from blocking code
    return asyncio.run_coroutine_threadsafe(coro, _loop)

With the event loop running in the background, you can submit tasks to it from your blocking code. For example:

from asyncio import subprocess as asubprocess
from typing import List

async def handle_process(cmd: List[str]):
    # keep the command running: spawn it, wait for it to exit, then respawn
    while True:
        p = await asubprocess.create_subprocess_exec(*cmd)
        await p.wait()
        print('restarting', cmd)

# tell the event loop running in the background to respawn "sleep 1":
async_submit(handle_process(["sleep", "1"]))

Note that all interaction with the event loop must be executed through run_coroutine_threadsafe or its cousin call_soon_threadsafe.
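
If the blocking side ever needs a result back, `run_coroutine_threadsafe` returns a `concurrent.futures.Future` that can be waited on synchronously. A minimal sketch (the `get_output` coroutine is just a made-up example):

async def get_output(cmd):
    p = await asubprocess.create_subprocess_exec(*cmd, stdout=asubprocess.PIPE)
    out, _ = await p.communicate()
    return out

# blocking code: submit the coroutine and block until its result is ready
future = async_submit(get_output(["echo", "hello"]))
print(future.result(timeout=5))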

  • that seems to work great! Is there anything I should note when the main thread eventually decides to shut down the thread running the loop and hence all the process handlers? Would iterating through `asyncio.all_tasks(_loop)` to cancel them, followed by a `_loop.stop()` and `thrd.join()`, be enough? assuming `thrd` is the thread handle – Chase Nov 26 '20 at 17:20
  • @Chase Thanks for the typo fixes. You can call `_loop.call_soon_threadsafe(_loop.stop)` at shutdown to stop the event loop and then join the thread, but to be honest, I wouldn't even bother. While canceling a task you spawned is a well-defined operation in asyncio, canceling _all_ tasks, including those spawned internally, is not something the system is prepared for and might cause additional exceptions. Most importantly, it doesn't buy you anything, since the process is about to exit anyway. – user4815162342 Nov 26 '20 at 18:07
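
For reference, a minimal sketch of the shutdown sequence described in the comments above (assuming `thrd` is the handle of the thread running the loop, as in the question's comment):

# ask the background loop to stop itself, then wait for its thread to finish
_loop.call_soon_threadsafe(_loop.stop)
thrd.join()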