I'm aware that the subprocess module can be used to isolate functions that might segfault. This works:
import subprocess
# Blocking for simplicity
res = subprocess.check_output(["python", "c_library_wrapper.py", arg0, arg1, ...])
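(For reference, c_library_wrapper.py is essentially a thin command-line driver around the library call; a stripped-down stand-in, with the real C call replaced by a placeholder, looks something like this:)

# c_library_wrapper.py -- stripped-down stand-in; the real script calls into the C library
import sys

def f(*args):
    # placeholder for the ctypes/extension call that can actually segfault
    return " ".join(str(a) for a in args)

if __name__ == "__main__":
    # invoked as: python c_library_wrapper.py arg0 arg1 ...
    print(f(*sys.argv[1:]))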
What I'm trying to figure out is why multiprocessing
doesn't have the same effect. This doesn't seem to work:
import multiprocessing
from c_library_wrapper import f
# Assume that f puts the return value into a shared queue
p = multiprocessing.Process(target=f, args=(arg0, arg1, ...))
p.start()
p.join()
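Expanded a little to show how I'd get the result back out (this assumes f takes the queue as its first argument and calls queue.put(result); arg0 and arg1 are the same placeholders as above):

import multiprocessing
from c_library_wrapper import f  # assumed signature: f(queue, arg0, arg1, ...) -> queue.put(result)

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=f, args=(q, arg0, arg1))
    p.start()
    p.join()
    # If the child segfaults, I'd expect only the child to die:
    # p.exitcode should be negative (e.g. -11 for SIGSEGV) and the queue left empty.
    if p.exitcode == 0:
        res = q.get()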
Isn't this also creating an independent process? Is there a core concept I'm missing here?
Background: I'm isolating a large third-party C library to protect against segfaults. I know that the best way to handle segfaults is to fix them, but this library is really big.