
I have Python Celery running as a daemon on an Amazon Linux box, with Celery Beat picking up tasks. When I run my test task, it completes without incident.

@app.task
def test():
    print("Testing 123")
    return 0

However, when I try to fire off a Ruby script using Python's subprocess module, it polls a handful of times and then exits with a return code of 1.

import subprocess
import sys

@app.task
def run_cli():
    try:
        process = subprocess.Popen([filepath, "run"], stdout=subprocess.PIPE)

        while not process.poll():
            data = process.stdout.readline()
            if data:
                sys.stdout.write("Polling: " + data.decode("utf-8"))
            else:
                sys.stdout.write("Polling: No Data.")

        return process.wait()
    except Exception as e:
        print(e)

I've confirmed that the Ruby script runs cleanly when the task is executed from a Python shell using tasks.run_cli.apply(). So why won't the Celery daemon execute this task?
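For reference, the check looked roughly like this, from a Python shell on the box (assuming the task is defined in tasks.py):

>>> import tasks
>>> result = tasks.run_cli.apply()  # runs the task eagerly in the current process
>>> result.get()
0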

Forewarning: I'm pretty new to Python and Celery, and my Linux skills are patchy, so I apologize if it's something obvious. Any help is much appreciated.

kellanburket

1 Answer


For polling to end, the Ruby script had to exit with a non-zero status; otherwise the process finishes with a return code of 0, which is falsy, so not process.poll() stays true and Python keeps polling without receiving any data. It's also possible to break out of the loop in the else branch, since a falsy response from process.stdout.readline() indicates (I suspect) that the script has finished running.
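Put together, here's a minimal sketch of the loop with both fixes applied (app and filepath are the same names as in the question):

import subprocess
import sys

@app.task
def run_cli():
    process = subprocess.Popen([filepath, "run"], stdout=subprocess.PIPE)

    # poll() returns None while the child is still running; comparing
    # against None avoids treating a 0 exit code as "keep polling".
    while process.poll() is None:
        data = process.stdout.readline()
        if data:
            sys.stdout.write("Polling: " + data.decode("utf-8"))
        else:
            # An empty read from the pipe means the child closed stdout,
            # which (I suspect) means the script has finished.
            break

    return process.wait()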

kellanburket