I'm trying to emulate subprocess.Popen's stdout / stderr handling for Python method targets, meaning I want to create N parallel jobs, each with its own stdout / stderr handler.
This is my current code:
#!/bin/python
import time
import multiprocessing as mp


class Job(object):
    def __init__(self, target, *a, **kw):
        self.target = target
        self.args = a
        self.kwargs = kw


def parallelize(jobs):
    """
    Args:
        jobs (list): list of the jobs to run, with all their params
    """
    procs = [mp.Process(target=job.target, args=job.args, kwargs=job.kwargs) for job in jobs]
    for p in procs:
        p.start()
    for p in procs:
        p.join()


def dummy(*a, **kw):
    print a
    time.sleep(1)
    print kw


def main():
    parallelize([Job(dummy) for i in range(2)])


main()
which only parallelizes the jobs; the output is still printed to the screen. If I had used subprocess.Popen() I could have passed stdout=PIPE to each process, stored the resulting handle in an object, and later returned it from parallelize(), but with multiprocessing I don't see how.
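For reference, this is roughly the subprocess pattern I have in mind (the echo command is just a placeholder):

    import subprocess

    # each child gets its own pipe, so outputs never mix
    procs = [subprocess.Popen(['echo', 'job %d' % i],
                              stdout=subprocess.PIPE, stderr=subprocess.PIPE)
             for i in range(2)]
    results = [p.communicate() for p in procs]  # one (stdout, stderr) pair per process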
multiprocessing offers some options, such as passing a conn (from multiprocessing.Pipe), but that does not help because I'm running black-box methods, and I can't rewrite them to send to the conn instead of printing.
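The best I could do with a Pipe is a wrapper like this sketch (using the dummy function from above), and it only captures return values, not what the method prints:

    import multiprocessing as mp

    def wrapper(conn, *a, **kw):
        # captures only the return value; whatever dummy() prints
        # still goes to the shared stdout
        conn.send(dummy(*a, **kw))
        conn.close()

    parent_conn, child_conn = mp.Pipe()
    p = mp.Process(target=wrapper, args=(child_conn,))
    p.start()
    result = parent_conn.recv()
    p.join()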
This page suggests capturing stdout, but I don't think it's suitable here, because parallel processes would all write to the same place, with no way to separate them...
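If that page means the usual sys.stdout redirection trick, it looks roughly like this in a single-process setting; with several processes there is still only one buffer and no per-job separation:

    import sys
    from StringIO import StringIO

    buf = StringIO()
    old_stdout = sys.stdout
    sys.stdout = buf           # everything printed from here on lands in buf
    dummy()
    sys.stdout = old_stdout
    captured = buf.getvalue()  # a single buffer, not one per job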
How can I run parallel processes (with Python method targets) while controlling their I/O?
I thought of something like subprocess.Popen('python -c "import my_module; my_module.my_method()"', stdout=subprocess.PIPE), but that feels too hacky.
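Spelled out, that workaround would be roughly this (my_module and my_method stand in for the real black-box code):

    import subprocess

    # spawn a separate interpreter per job so each one gets its own pipes
    p = subprocess.Popen(['python', '-c', 'import my_module; my_module.my_method()'],
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()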