I'm currently building a small Django project and decided to use cookiecutter-django as a base, since it includes everything I need. When setting up the project I asked cookiecutter-django to include the Celery settings, and I can find all of them in the project, so far so good. However, I'm having trouble getting Celery to run as it should: when I start a task from an app, nothing happens.
The Docker containers all start properly: Django and Postgres work, Redis is up, and I was able to bash into its container and query it. From the console output I can see that the celeryworker container is up and running, and that my tasks are recognized by Celery:
celeryworker_1 | [tasks]
celeryworker_1 | . metagrabber.taskapp.celery.debug_task
celeryworker_1 | . scan_and_extract_meta
celeryworker_1 | . start_job
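For context, here is roughly how the two project tasks are declared. This is only a simplified sketch (bodies omitted); the point is that they are registered under the explicit names the worker lists above:
# tasks.py in one of my apps - simplified sketch, bodies omitted
from celery import shared_task


@shared_task(name='start_job')
def start_job_task(job_id):
    # real implementation omitted here
    ...


@shared_task(name='scan_and_extract_meta')
def scan_and_extract_meta(job_id):
    # real implementation omitted here
    ...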
After puzzling over this for a while I decided to add a Docker container for Flower to see what's happening under the hood. Interestingly enough, Flower shows a worker with the correct name (I compared it against the worker ID in the Celery container). However, if I start a task from one of my views like this:
celery_task_id = start_job_task.delay(job.id)
I don't see any tasks coming in on Flower. I can see that celery_task_id gets a UUID, but that's it; all of Flower's counts for active, processed, failed, succeeded and retried stay at 0. If I bash into my Redis container and run redis-cli, I also don't see a queue called celery, which suggests that no Celery task ever reaches the broker. There might (I say might) be a clue in the Flower logs:
flower_1 | [W 180304 10:31:05 control:44] 'stats' inspect method failed
flower_1 | [W 180304 10:31:05 control:44] 'active_queues' inspect method failed
flower_1 | [W 180304 10:31:05 control:44] 'registered' inspect method failed
flower_1 | [W 180304 10:31:05 control:44] 'scheduled' inspect method failed
But to be honest, I don't know how this could help me.
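To dig a bit further I ran a few checks by hand from python manage.py shell inside the django container. This is only a diagnostic sketch; it assumes the redis Python client is importable there and that the broker host is the redis service from docker-compose:
# Run in `python manage.py shell` inside the django container (diagnostic sketch).
# Assumes the `redis` Python client is installed and the broker host is the
# `redis` service from docker-compose.
import redis
from celery import current_app

r = redis.Redis(host='redis', port=6379, db=0)
print(r.keys('*'))        # 'celery' never shows up here, matching what redis-cli showed
print(r.llen('celery'))   # length of the default queue

insp = current_app.control.inspect(timeout=5)
print(insp.ping())        # ping the worker over the broker
print(insp.stats())       # roughly what Flower's 'stats' inspect call asks for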
So I went ahead and added some logging to see what's going on, and it turns out that my Django container actually does the work itself when I execute start_job_task.delay(job.id), instead of handing it over to Celery. I have the feeling that something about the connection to Redis is broken. But what? The relevant parts of the docker-compose file look fine the way cookiecutter set them up:
redis:
  image: redis:3.0

celeryworker:
  <<: *django
  depends_on:
    - redis
    - postgres
  environment:
    - C_FORCE_ROOT=true  # just for local testing
  ports: []
  command: /start-celeryworker.sh
I also tried exposing the port on the Redis container manually, but that didn't get me any further.
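While chasing the Redis theory I also checked which broker URL the web process actually resolves and whether it can connect to it at all. Again just a sketch, run from a Django shell inside the django container:
# Sketch: which broker does the web process use, and can it reach it?
from celery import current_app

print(current_app.conf.broker_url)   # the effective broker URL of the app instance

# ensure_connection() raises after the retries if the broker is unreachable
with current_app.connection() as conn:
    conn.ensure_connection(max_retries=1)
    print('broker reachable:', conn.as_uri())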
For completeness, here are the Celery settings from my config files, as set up by cookiecutter:
# base.py
INSTALLED_APPS += ['mailspidy.taskapp.celery.CeleryConfig']
CELERY_BROKER_URL = env('CELERY_BROKER_URL', default='django://')
if CELERY_BROKER_URL == 'django://':
    CELERY_RESULT_BACKEND = 'redis://'
else:
    CELERY_RESULT_BACKEND = CELERY_BROKER_URL

# local.py
CELERY_ALWAYS_EAGER = True
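To rule out an environment mix-up I also echoed what the web process actually sees (simple sketch from a Django shell):
# Echo the effective values from `python manage.py shell` in the django container
from django.conf import settings

print(settings.CELERY_BROKER_URL)    # whatever the CELERY_BROKER_URL env var resolved to
print(settings.CELERY_ALWAYS_EAGER)  # eager mode executes tasks in-process instead of sending them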
Any ideas why the tasks don't get sent to Celery?