I have a set of python scripts (script<X>.py). All scripts have a run() function, which should be executed in a new Process. The execution is triggered by a web server (server.py), which imports all scripts. I have access to the server code only.
Is there a way to redirect the script's stdout to a separate file?
In a simplified form the setup looks like this:
=== script1.py ===
def run(**kwargs):
    <do stuff>
    print(<stuff>)
=== server.py ===
import multiprocessing
import script1
p = multiprocessing.Process(target=script1.run) # <--- This output should be written to file, e.g. script1.txt
p.start()
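One way to do this from server.py alone, assuming the scripts only write through sys.stdout, is to wrap the imported run() in a small target that swaps stdout inside the child process. A minimal sketch (the wrapper name run_redirected, the stand-in run() and the path "script1.txt" are illustrative):

```python
import contextlib
import multiprocessing

def run(**kwargs):
    # Stand-in for script1.run; assumed to write via print()/sys.stdout.
    print("hello from run")

def run_redirected(target, path):
    """Call `target` with sys.stdout redirected to `path` (runs in the child)."""
    with open(path, "a") as f, contextlib.redirect_stdout(f):
        target()

if __name__ == "__main__":
    # fork context is Unix-only; use the default context on Windows.
    ctx = multiprocessing.get_context("fork")
    p = ctx.Process(target=run_redirected, args=(run, "script1.txt"))
    p.start()
    p.join()
```

The redirection happens inside the child, so the parent's stdout is untouched and several scripts can write to separate files concurrently.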
Update: (ugly) workaround
As suggested in Log output of multiprocessing.Process, it helps to replace sys.stdout with the desired file handle in the target function. Since my target function lives in a different module, which I am not allowed to change, I manipulate the source text during the import step.
=== script1.py ===
def run(**kwargs):
    <do stuff>
    print(<stuff>)
=== server.py ===
import multiprocessing
import importlib.util
def replace_stdout(code):
    """Insert sys.stdout = open(<file>) at the top of run()."""
    replaced_code = []
    code_lines = code.split('\n')
    for i, line in enumerate(code_lines):
        replaced_code.append(line)
        if line.startswith("def run("):
            # Take the indentation from the first body line (tab or spaces)
            next_line = code_lines[i + 1]
            indent = "\t" if next_line.startswith("\t") else " " * (len(next_line) - len(next_line.lstrip()))
            # Inject the redirection as the first statements of run()
            replaced_code.append(f'{indent}import sys')
            replaced_code.append(f'{indent}sys.stdout = open("script1.txt", "a")')
    return "\n".join(replaced_code)
# Read script code from file, manipulate target function code and create module object
with open("script1.py") as f:
    code = replace_stdout(f.read())
spec = importlib.util.spec_from_loader('script1', loader=None)
module_script1 = importlib.util.module_from_spec(spec)
exec(code, module_script1.__dict__)
p = multiprocessing.Process(target=module_script1.run) # <--- Output is now written to script1.txt
p.start()
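As a quick sanity check of the indent-patching idea, here is a self-contained version of the helper applied to a minimal sample source (the sample string and the out_file parameter are illustrative):

```python
def replace_stdout(code, out_file="script1.txt"):
    """Insert a stdout redirection at the top of run(), matching its body indent."""
    replaced = []
    lines = code.split("\n")
    for i, line in enumerate(lines):
        replaced.append(line)
        if line.startswith("def run("):
            nxt = lines[i + 1]
            # Indentation of the function body: tab or leading spaces
            indent = "\t" if nxt.startswith("\t") else " " * (len(nxt) - len(nxt.lstrip()))
            replaced.append(f"{indent}import sys")
            replaced.append(f'{indent}sys.stdout = open("{out_file}", "a")')
    return "\n".join(replaced)

sample = "def run(**kwargs):\n    print('hi')\n"
print(replace_stdout(sample))
# def run(**kwargs):
#     import sys
#     sys.stdout = open("script1.txt", "a")
#     print('hi')
```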