I am using Celery with supervisor running the workers and Redis as the broker, and I'm having an issue where a Celery worker apparently freezes up, making it unable to process any more tasks, so its task queue in Redis fills up to the point of causing memory problems. I tried setting the expires
option when calling the task, thinking that this would take advantage of Redis' support for key expiry:
some_task.apply_async(args=('foo',), expires=60)
but this didn't work: when I inspected the corresponding list in the Redis CLI, it just kept growing. Perhaps that's unsurprising, since per-element expiry for lists is not built-in functionality in Redis. The Celery docs say that the expiration time is measured from when the task is "published", but I couldn't find any explanation of what "publishing" actually means. I had assumed it referred to pushing the task onto the Redis list, so either that assumption is wrong or something else is going on that I don't understand (or both).
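To make my assumption concrete, this is the mental model I had of how expiry would be applied. This is only a sketch of what I imagined happens, not Celery's actual implementation, and the function name is mine:

```python
from datetime import datetime, timedelta, timezone

def is_expired(published_at, expires_seconds, now=None):
    """My assumed rule: a task has expired once more than
    expires_seconds have elapsed since it was published
    (which I took to mean: pushed onto the Redis list)."""
    if now is None:
        now = datetime.now(timezone.utc)
    return now > published_at + timedelta(seconds=expires_seconds)

published = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)

# 30 seconds after publish, still inside the 60-second window
is_expired(published, 60, now=published + timedelta(seconds=30))

# 90 seconds after publish, past the window
is_expired(published, 60, now=published + timedelta(seconds=90))
```

Under that model I expected the message to disappear from the Redis list once the window passed, but the list kept growing instead, which is what makes me think I've misunderstood where the check happens.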
Am I wrong about task expiry time? And if so, is there any way to cause the messages to expire within Redis?