
I followed the official Celery documentation on configuring Celery to work with Django (Python 3) and RabbitMQ. I already have a systemd service that starts my Django application with Gunicorn, and NGINX is used as a reverse proxy.

Now I need to daemonize Celery itself based on the official documentation, but my current settings don't seem to work: my application is not recognized, and I get the error below when starting the Celery systemd service:

# systemctl start celery-my_project
# journalctl -xe

Error: 
Unable to load celery application
The module celery-my_project.celery was not found
Failed to start Celery daemon
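
To narrow this down, the unit's ExecStart can be replayed by hand as the service user, sourcing the same environment file (just a sketch; the paths match the configuration files shown further below):

sudo -u celery /bin/sh -c 'set -a; . /etc/celery/celery-my_project.conf; \
  cd /opt/my_project && ${CELERY_BIN} multi start ${CELERYD_NODES} -A ${CELERY_APP} \
    --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} \
    --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'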

As a test, I got rid of all the systemd/Gunicorn/NGINX pieces and started my virtualenv/Django application and a Celery worker manually; the Celery tasks are then properly detected by the worker:

celery -A my_project worker -l debug 
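
For comparison, the non-activated equivalent of that command, which is what the systemd unit has to reproduce, would presumably be:

cd /opt/my_project
./venv/bin/celery -A my_project worker -l debug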

How do I properly configure the systemd unit so that I can daemonize Celery?

Application service (systemd unit)

[Unit]
Description=My Django Application
After=network.target

[Service]
User=myuser
Group=mygroup
WorkingDirectory=/opt/my_project/
ExecStart=/opt/my_project/venv/bin/gunicorn --workers 3 --log-level debug --bind unix:/opt/my_project/my_project/my_project.sock my_project.wsgi:application

[Install]
WantedBy=multi-user.target
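
For completeness, this unit is installed and started the usual way (the unit file name my_project.service is an assumption here):

sudo systemctl daemon-reload
sudo systemctl enable --now my_project.service
sudo systemctl status my_project.service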

Celery service (systemd unit)

[Unit]
Description=Celery daemon
After=network.target

[Service]
Type=forking
User=celery
Group=mygroup
EnvironmentFile=/etc/celery/celery-my_project.conf
WorkingDirectory=/opt/my_project
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} --pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'

[Install]
WantedBy=multi-user.target
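
When debugging this unit, it helps to scope the journal to the unit instead of reading the full journalctl -xe output, e.g.:

sudo systemctl daemon-reload
sudo systemctl start celery-my_project
sudo systemctl status celery-my_project
journalctl -u celery-my_project -e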

Celery service configuration file (systemd EnvironmentFile)

# Name of nodes to start
# here we have a single node
CELERYD_NODES="w1"

# Absolute or relative path to the 'celery' command:
CELERY_BIN="/opt/my_project/venv/bin/celery"

# App instance to use
CELERY_APP="my_project"

# How to invoke 'celery multi'
CELERYD_MULTI="multi"

# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"

# - %n will be replaced with the first part of the nodename.
# - %I will be replaced with the current child process index
#   and is important when using the prefork pool to avoid race conditions.
CELERYD_PID_FILE="/var/run/celery/%n.pid"
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_LOG_LEVEL="DEBUG"
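
Note that /var/run/celery and /var/log/celery must exist and be writable by the service user, otherwise celery multi fails to start; a sketch assuming the User=celery/Group=mygroup pair from the unit above:

sudo mkdir -p /var/run/celery /var/log/celery
sudo chown celery:mygroup /var/run/celery /var/log/celery

On most distributions /var/run is a tmpfs that is cleared on reboot, so adding RuntimeDirectory=celery to the [Service] section is one way to have systemd recreate the pid directory automatically.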

Django project layout

# Project root: /opt/my_project

my_project
    manage.py
    my_project
        __init__.py
        settings.py
        celery.py
    my_app
        tasks.py
        forms.py
        models.py
        urls.py
        views.py    
    venv
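
Given this tree, the package import path can be sanity-checked directly with the virtualenv's interpreter (a sketch, run from the project root):

cd /opt/my_project
./venv/bin/python -c "from my_project.celery import app; print(app.main)"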

my_project/my_project/__init__.py

from .celery import app as celery_app

__all__ = ('celery_app',)

my_project/my_project/celery.py

import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'my_project.settings')

app = Celery('my_project')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
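
With this layout, whether -A my_project resolves can also be checked from the project root without systemd at all (sketch; celery report just prints version and configuration information if the app loads):

cd /opt/my_project
./venv/bin/celery -A my_project report

If this works while the service fails with "celery-my_project.celery was not found", the app itself is fine and the problem is likely in how the unit expands CELERY_APP.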
Hey! Were you able to solve this problem? I faced a similar one and ended up just using `celery worker` instead of `celery multi`. – categulario Dec 17 '20 at 21:16
