I have Python Celery running as a daemon on an Amazon Linux box, with Celery Beat picking up the tasks. When I run my test task, it completes without incident:
@app.task
def test():
    print("Testing 123")
    return 0
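For context, beat hands the tasks to the worker via a schedule entry along these lines (a sketch, not my exact config; the entry name and interval are illustrative, and beat_schedule is the Celery 4 spelling of the setting):

# Sketch of the beat schedule -- entry name and interval are illustrative
app.conf.beat_schedule = {
    "test-every-minute": {
        "task": "tasks.test",
        "schedule": 60.0,  # seconds
    },
}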
However, when I try to fire off a Ruby script using Python's subprocess module, it polls a handful of times and then exits with a return code of 1:
import subprocess
import sys

@app.task
def run_cli():
    try:
        # filepath points at the Ruby script; it's defined elsewhere in the module
        process = subprocess.Popen([filepath, "run"], stdout=subprocess.PIPE)
        # poll() returns None while the subprocess is still running
        while process.poll() is None:
            data = process.stdout.readline()
            if data:
                sys.stdout.write("Polling: " + data.decode("utf-8"))
            else:
                sys.stdout.write("Polling: No Data.")
        return process.wait()
    except Exception as e:
        print(e)
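One thing I notice while writing this up: the call above only captures stdout, so if the Ruby script is printing its error to stderr I wouldn't see it. A variant of the same Popen call that merges stderr into the pipe being read (a sketch, untested here) would be:

# Same call, with the Ruby script's stderr folded into the stdout pipe
process = subprocess.Popen(
    [filepath, "run"],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
)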
I've confirmed that the Ruby script runs cleanly when the task is executed by a Celery worker in a Python shell using tasks.run_cli.apply(). So why isn't the Celery daemon executing this task?
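For reference, this is roughly the shell session I used to confirm that (tasks is my module name):

# Run from a Python shell on the same box as the worker
from tasks import run_cli

result = run_cli.apply()  # executes the task synchronously, in-process
print(result.get())       # prints 0, i.e. the Ruby script exited cleanly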
Forewarning: I'm pretty new to Python and Celery, and my Linux skills are patchy, so I apologize if this is something obvious. Any help is much appreciated.