
Our deploy process restarts Celery by running kill on every celery process it finds in the process list.

Sometimes I see "worker lost" in the celery logs. If a task is running when this happens, will the task be re-run or will it be lost? We are using redis.

Douglas Ferguson

1 Answer


Basically, you should send SIGTERM to each celery worker so it performs a warm shutdown (it finishes the tasks it is currently executing and then exits): http://docs.celeryproject.org/en/latest/userguide/workers.html#stopping-the-worker.
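
A minimal sketch of one way to do that from a deploy script; the pidfile location is an assumption about how your workers were started, so adjust the glob to wherever they actually write theirs:

    import glob
    import os
    import signal

    # Send SIGTERM (warm shutdown) to every worker we can find a pidfile for.
    # The /var/run/celery/ path is a placeholder, not your actual layout.
    for pidfile in glob.glob("/var/run/celery/*.pid"):
        with open(pidfile) as f:
            pid = int(f.read().strip())
        os.kill(pid, signal.SIGTERM)  # worker finishes current tasks, then exits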

If you want to be able to kill your processes and have the interrupted tasks re-run, you can enable the acks_late setting, which returns the task to the queue if the worker does not successfully finish it before halting. http://docs.celeryproject.org/en/latest/configuration.html?highlight=acks_late#celery-acks-late
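
For example (a sketch, not your exact setup; the app name, broker URL and task body are placeholders):

    from celery import Celery

    app = Celery("myapp", broker="redis://localhost:6379/0")  # placeholder broker URL

    # Global setting (old-style name CELERY_ACKS_LATE as in the docs linked above).
    app.conf.CELERY_ACKS_LATE = True

    # Or enable it per task:
    @app.task(acks_late=True)
    def process(item):
        ...  # if the worker dies here, the message goes back to the queue

With the default (early) acknowledgement the message is acked as soon as the worker receives it, so a kill mid-run loses the task; acks_late flips that trade-off at the cost of the task possibly being executed more than once.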

Rustem
  • Thanks. We are running a super old version of celery, so upgrading might be a good idea. I'm guessing that late acknowledgement is only as reliable as your queue. We are on redis, so switching to RabbitMQ might be a good idea here. Also, I was reading on that page that retry is an option vs. acks_late, but that solution might not be good for us, because if we are restarting celery we are likely also restarting django, and so there would be no running app to perform the retry... – Douglas Ferguson Apr 05 '13 at 11:40