I have a Python function that calls a wrapper around a C function (which I can't change). Most of the time the C function is very fast, but when it fails, the call just hangs forever. To work around this, I time out the call using multiprocessing:
import multiprocessing

pool = multiprocessing.Pool(processes=4)
try:
    res = pool.apply_async(my_dangerous_cpp_function, args=(bunch, of, vars))
    return res.get(timeout=1.)  # give the call one second to finish
except multiprocessing.TimeoutError:
    terminate_pool(pool)
    pool = multiprocessing.Pool(processes=4)  # replace the dead pool
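To make the snippet self-contained: the real function is a C wrapper I can't show, but for testing purposes it behaves roughly like this hypothetical stand-in, which simply never returns:

import time

def my_dangerous_cpp_function(*args):
    # Hypothetical stand-in for the real C wrapper: on "failure"
    # the call never returns, just like the real one.
    while True:
        time.sleep(60)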
How can I terminate the pool when the function being called doesn't respond to any signal?
If I replace terminate_pool(pool) with pool.terminate(), the call to pool.terminate() hangs as well. Instead, I'm currently sending SIGKILL to all sub-processes:
import os
import signal

def terminate_pool(pool):
    # Forcibly kill every worker; pool.terminate() would hang here
    for p in pool._pool:
        os.kill(p.pid, signal.SIGKILL)
    pool.close()   # ok, doesn't hang
    #pool.join()   # not ok, hangs forever
This way, the hanging sub-processes stop eating 100% CPU. However, I can't call pool.terminate() or pool.join() (they hang), so I just leave the pool object behind and create a new one. Even though they received a SIGKILL, the sub-processes are still listed, so my number of Python processes never stops increasing...
Is there a way to annihilate the pool and all its sub-processes once and for all?