
I have written a simple function to run jobs with dependency tracking. The exact code isn't important, but the approach is to create a job-monitoring function that I fork with multiprocessing.Process, and to send jobs to it and get results back from it with two multiprocessing Queue objects.
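A minimal sketch of that pattern (the names here are illustrative, not the actual code from the gist):

import multiprocessing as mp

def monitor(jobs, results):
    # Pull (function, args) pairs off one queue and push results onto the other.
    while True:
        func, args = jobs.get()
        results.put(func(*args))

jobs, results = mp.Queue(), mp.Queue()
monitor_proc = mp.Process(target=monitor, args=(jobs, results))
monitor_proc.start()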

It works great, but because the monitor uses an infinite loop, the parent process hangs on exit: Python is still waiting for the child. Is there a good way to kill a child process immediately on exit? Maybe by catching a signal?

My actual code is here: https://gist.github.com/MikeDacre/e672969aff980ee950b9dfa8b2552d40 A more complete example is here: http://nbviewer.jupyter.org/github/MikeDacre/python-cluster/blob/cluster/tests/local_queue.ipynb

A toy example is probably better though:

import multiprocessing as mp
from time import sleep

def runner():
    # Loop forever, so the child never exits on its own.
    while True:
        sleep(2)

proc = mp.Process(target=runner)
proc.start()
exit()

That will happily hang until Ctrl-C is entered.

I don't think signal catching will work, as no signals are sent on a normal exit. Is there any way to catch exit()? If not, is there any way to create a Process that will terminate naturally?


1 Answer


Thanks everyone. Shortly after writing this question, I figured out the solution using Python's atexit module:

import atexit
import multiprocessing as mp
from time import sleep

def runner():
    # Loop forever; the atexit handler will terminate the child.
    while True:
        sleep(2)

def kill_runner(proc):
    proc.terminate()

proc = mp.Process(target=runner)
proc.start()

# Register a handler that kills the child when the parent exits.
atexit.register(kill_runner, proc)
exit()

That works as expected.
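Note that terminate() sends SIGTERM on Unix, so the child is killed without running any cleanup code; that is fine here because the runner holds no state worth flushing.

For completeness, multiprocessing can also do this without an explicit handler: a child created with daemon=True is terminated automatically when the parent process exits. A minimal sketch of that variant:

import multiprocessing as mp
from time import sleep

def runner():
    while True:
        sleep(2)

# A daemonic child is terminated automatically when the parent exits.
proc = mp.Process(target=runner, daemon=True)
proc.start()
exit()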
