Part of a large project of mine includes a section, in Python, like this:
import subprocess

failcount = 0
done = False
while not done:
    try:
        result = subprocess.check_output(program)
        done = True
    except subprocess.CalledProcessError as e:
        failcount += 1
        # logwrite is a logging helper defined elsewhere in the project
        logwrite('logfile.txt', 'Failed. Counter = {0}\nError message: {1}\n-'.format(failcount, e.returncode))
        if failcount == 20:
            print('It failed 20 times, aborting...')
            quit()
What this is meant to do is run "program" from the command line. "program" is a large computational chemistry package that fails sometimes, so I run it in a loop here; if it fails 20 times, my Python script terminates. This works just fine and does what is intended. My issue, however, is that the chemistry package takes about three hours per attempt, and I want to monitor it as it's going.
If I run it from the command line manually, I can simply do "program > logfile" and then tail -f the logfile to watch it go. However, it seems you can't do something like this in Python:
subprocess.check_output(['program', '>', 'logfile'])
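(As far as I can tell, > is a shell feature, so in the list form above it just gets passed to program as a literal argument. Something like the string form below with shell=True would make the shell do the redirection, but then the output lands in the file rather than coming back to Python, which is the same problem as my workaround at the end:)

subprocess.check_output('program > logfile', shell=True)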
Is there a way to have Python print out the contents of subprocess.check_output as it is being filled? As I understand it, subprocess.check_output just returns whatever was written to stdout. Can I clone it between Python and a pipe somehow?
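To make the question concrete, here is a rough, untested sketch (Python 3) of the kind of thing I'm after: use subprocess.Popen to read stdout line by line and write each line to both the logfile and an in-memory list, like program | tee logfile. The run_and_tee name and the logfile path are just placeholders:

import subprocess

def run_and_tee(cmd, logpath):
    # Stream each stdout line to the logfile as it appears (so
    # `tail -f logfile` works) while also collecting it in memory.
    lines = []
    with open(logpath, 'a') as log:
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT,
                                universal_newlines=True, bufsize=1)
        for line in proc.stdout:
            log.write(line)
            log.flush()
            lines.append(line)
        proc.wait()
    output = ''.join(lines)
    if proc.returncode:
        # Mimic check_output's behaviour on a nonzero exit
        raise subprocess.CalledProcessError(proc.returncode, cmd, output=output)
    return output

If something like this is sound, result = run_and_tee(program, 'logfile') could replace the check_output call in the loop above, since it raises the same CalledProcessError on failure.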
Possible workaround: I made a bash script called run_program.sh which just does program > logfile as above, and then used Python's subprocess to execute run_program.sh. This way I can monitor it as desired, but now the output of program is in a file instead of in Python, so Python would have to read back a large logfile and pick out error messages if needed. I would prefer to avoid something like this.
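(For completeness: the bash wrapper itself isn't needed, since subprocess can redirect to a file directly by passing an open handle as stdout, as below. But the output still ends up in the file rather than in Python, so the underlying problem remains:)

# Equivalent to `program > logfile`, without the wrapper script.
with open('logfile', 'w') as log:
    subprocess.check_call(program, stdout=log, stderr=subprocess.STDOUT)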