
I have set up Celery with Redis on my Django project. The scheduled tasks run without issues. The problem comes when triggering an async task with delay(): execution stops, as if it were blocked in the loop of kombu.utils.retry_over_time.

I checked and Redis is up and running. I don't really know how to debug this issue.
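For reference, a minimal probe that checks the broker connection directly and fails fast, instead of looping in kombu.utils.retry_over_time the way delay() does by default (the import path is an assumption; match it to the -A argument used in the supervisor commands below):

# Sketch: probe the broker directly instead of calling delay().
# The module path is an assumption based on the
# "-A config.celery.celery_app:app" argument used below.
from config.celery.celery_app import app as celery_app

# Fail fast if the broker is unreachable instead of retrying forever.
with celery_app.connection() as conn:
    conn.ensure_connection(max_retries=3)  # raises if Redis can't be reached

# Broadcast a ping to running workers; an empty list means no worker replied.
print(celery_app.control.ping(timeout=2.0))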

Here are the relevant package versions:

Django==2.1.2
celery==4.2.1
django-celery-beat==1.4.0
django-celery-results==1.0.4
redis==3.2.0
kombu==4.4.0

The settings:

CELERY_REDIS_HOST = 'localhost'
CELERY_REDIS_PORT = 6379
CELERY_REDIS_DB = 1  # Redis DB number; if not provided, the default is 0
CELERY_REDIS_PASSWORD = ''

CELERY_BEAT_SCHEDULER = 'django_celery_beat.schedulers:DatabaseScheduler'

CELERY_BROKER_URL = 'redis://{host}:{port}/{db}'.format(host=CELERY_REDIS_HOST, port=CELERY_REDIS_PORT, db=CELERY_REDIS_DB)
CELERY_RESULT_BACKEND = 'django-db'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json' # Result serialization format
CELERY_TASK_SERIALIZER = 'json' # String identifying the serializer to be used

CELERY_BROKER_TRANSPORT_OPTIONS = {
    'visibility_timeout': 3600, # 1 hour, default Redis visibility timeout
}
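For completeness: these CELERY_-prefixed settings are only picked up if the Celery app is configured with the matching namespace. A minimal sketch of what the app module might look like, assuming the config.celery.celery_app path from the -A argument below (the Django settings module name is also an assumption):

# Sketch of the Celery app module (path assumed from "-A config.celery.celery_app:app").
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery

# Assumed Django settings module; adjust to your project.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'config.settings')

app = Celery('config')

# namespace='CELERY' means all Celery settings must carry the CELERY_ prefix,
# matching the settings above.
app.config_from_object('django.conf:settings', namespace='CELERY')

# Discover tasks.py modules in all installed Django apps.
app.autodiscover_tasks()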

How Celery and Celery Beat are launched

Shell script that adds Celery and Celery Beat to Supervisor:

#!/usr/bin/env bash

# Create required directories
sudo mkdir -p /var/log/celery/
sudo mkdir -p /var/run/celery/

# Create group called 'celery'
sudo groupadd -f celery
# Add the user 'celery' if it doesn't exist and add it to the group with the same name
id -u celery &>/dev/null || sudo useradd -g celery celery
# Give the celery user read/write permissions on the folders just created
sudo chown -R celery:celery /var/log/celery/
sudo chown -R celery:celery /var/run/celery/

# Get the Django environment variables: join ./env_vars into one
# comma-separated line, escape % for supervisor, drop the 'export '
# prefixes, and rewrite literal $PATH references into supervisor's
# %(ENV_PATH)s syntax ($PYTHONPATH and $LD_LIBRARY_PATH references are dropped)
celeryenv=$(cat ./env_vars | tr '\n' ',' \
    | sed 's/%/%%/g' \
    | sed 's/export //g' \
    | sed 's/$PATH/%(ENV_PATH)s/g' \
    | sed 's/$PYTHONPATH//g' \
    | sed 's/$LD_LIBRARY_PATH//g')
# Strip the trailing comma
celeryenv=${celeryenv%?}

# Create CELERY configuration script
celeryconf="[program:celeryd]
directory=/home/ubuntu/splityou/splityou
; Set full path to celery program if using virtualenv
command=/home/ubuntu/splityou/splityou-env/bin/celery worker -A config.celery.celery_app:app --loglevel=INFO --logfile=\"/var/log/celery/%%n%%I.log\" --pidfile=\"/var/run/celery/%%n.pid\"

user=celery
numprocs=1
stdout_logfile=/var/log/celery-worker.log
stderr_logfile=/var/log/celery-worker.log
autostart=true
autorestart=true
startsecs=10

; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 60

; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true

; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998

environment=$celeryenv"


# Create CELERY BEAT configuration script
celerybeatconf="[program:celerybeat]
; Set full path to celery program if using virtualenv
command=/home/ubuntu/splityou/splityou-env/bin/celery beat -A config.celery.celery_app:app --loglevel=INFO --logfile=\"/var/log/celery/celery-beat.log\" --pidfile=\"/var/run/celery/celery-beat.pid\"

directory=/home/ubuntu/splityou/splityou
user=celery
numprocs=1
stdout_logfile=/var/log/celerybeat.log
stderr_logfile=/var/log/celerybeat.log
autostart=true
autorestart=true
startsecs=10

; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 60

; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true

; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=999

environment=$celeryenv"

# Create the celery supervisord conf script
echo "$celeryconf" | tee /etc/supervisor/conf.d/celery.conf
echo "$celerybeatconf" | tee /etc/supervisor/conf.d/celerybeat.conf

# Enable supervisor to listen for HTTP/XML-RPC requests.
# supervisorctl will use XML-RPC to communicate with supervisord over port 9001.
# Source: https://askubuntu.com/questions/911994/supervisorctl-3-3-1-http-localhost9001-refused-connection
if ! grep -Fxq "[inet_http_server]" /etc/supervisor/supervisord.conf
  then
    echo "[inet_http_server]" | tee -a /etc/supervisor/supervisord.conf
    echo "port = 127.0.0.1:9001" | tee -a /etc/supervisor/supervisord.conf
fi

# Reread the supervisord config
sudo supervisorctl reread

# Update supervisord in cache without restarting all services
sudo supervisorctl update

# Sleep for 15 seconds to give the previous supervisor instance enough time to shut down
# Source: https://stackoverflow.com/questions/50135628/celery-django-on-elastic-beanstalk-causing-error-class-xmlrpclib-fault/50154073#50154073
sleep 15

# Start/Restart celeryd through supervisord
sudo supervisorctl restart celeryd
sudo supervisorctl restart celerybeat

1 Answer


As pointed out in the First Steps with Django section of the Celery documentation, we have to import the app object in the proj/__init__.py module. This makes sure the app is always imported when Django starts, so that @shared_task will use it.

I had completely forgotten this, so I solved the problem by putting the following inside __init__.py:

from __future__ import absolute_import, unicode_literals
from .celery import app as celery_app

__all__ = ('celery_app',)
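As a quick sanity check after restarting the worker, delay() should now return an AsyncResult immediately instead of hanging (the task and module names here are hypothetical):

# Hypothetical task, just to illustrate the expected behaviour after the fix.
from myapp.tasks import my_task

result = my_task.delay(1, 2)
print(result.id, result.status)  # returns immediately, e.g. '<uuid> PENDING'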
  • I have also found that there are some (of course undocumented) requirements: the namespace in the worker app, the location of celery.py, and your Django INSTALLED_APPS strings must match. Otherwise, of course, you will get a silent error / hang when trying to call task.delay. Beautiful technology here, love the "guess how it works without documentation" philosophy they have. – Vigrond Jun 12 '19 at 10:44