
I have Django and Celery set up. I am only using one node for the worker.

I want to use it as an asynchronous task queue and as a scheduler.

I can launch the task as follows, with the -B option and it will do both.

celery worker --app=myapp.tasks -B

However, it is unclear how to do this in production, when I want to daemonise the process. Do I need to set up both init scripts?

I have tried adding the -B option to the init.d script, but it doesn't seem to have any effect. The documentation is not very clear.

physicalattraction
wobbily_col

2 Answers


Personally I use Supervisord, which has some nice options and configurability. There are example supervisord config files here
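To make this concrete, here is a minimal supervisord program section that runs a worker with the embedded beat scheduler. The paths, user, and app name are hypothetical; adjust them for your deployment, and note that -B embeds beat in the worker, which is only appropriate when running a single worker node, as in the question.

```ini
; /etc/supervisor/conf.d/celery.conf -- hypothetical paths and names
[program:celeryworker]
; -B embeds the beat scheduler in the worker (single-node setups only)
command=/path/to/virtualenv/bin/celery worker --app=myapp.tasks -B --loglevel=INFO
directory=/path/to/project
user=celeryuser
autostart=true
autorestart=true
stdout_logfile=/var/log/celery/worker.log
stderr_logfile=/var/log/celery/worker.err
```

After placing the file, `supervisorctl reread` and `supervisorctl update` will pick it up.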

reptilicus

There are a couple of ways to achieve this; see http://celery.readthedocs.org/en/latest/tutorials/daemonizing.html

1. The Celery distribution comes with a generic init script located at path-to-celery/celery-3.1.10/extra/generic-init.d/celeryd. This can be placed in /etc/init.d/celeryd-name and configured using a configuration file, also present in the distribution, which would look like the following:

# Names of nodes to start (space-separated)
#CELERYD_NODES="my_application-node_1"
# Where to chdir at start. This could be the root of a virtualenv.
#CELERYD_CHDIR="/path/to/my_application"
# How to call celeryd-multi
#CELERYD_MULTI="$CELERYD_CHDIR/bin/celeryd-multi"
# Extra arguments
#CELERYD_OPTS="--app=my_application.path.to.worker --time-limit=300 --concurrency=8 --loglevel=DEBUG"
# Create log/pid dirs, if they don't already exist
#CELERY_CREATE_DIRS=1

# %n will be replaced with the nodename
#CELERYD_LOG_FILE="/path/to/my_application/log/%n.log"
#CELERYD_PID_FILE="/var/run/celery/%n.pid"

# Workers run as an unprivileged user
#CELERYD_USER=my_user
#CELERYD_GROUP=my_group

For celery beat configuration, you can add the following celerybeat elements to the same file:

# Where to chdir at start.
CELERYBEAT_CHDIR="/opt/Myproject/"
# Extra arguments to celerybeat
CELERYBEAT_OPTS="--schedule=/var/run/celery/celerybeat-schedule"

This config should then be saved in (at least for CentOS) /etc/default/celeryd-config-name; look at the init file for the exact location. Now you can run Celery as a daemon with /etc/init.d/celeryd start/restart/stop.
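Separately, the tasks that beat dispatches come from a schedule in your Django settings. A minimal sketch, assuming Celery 3.x settings names and a hypothetical myapp.tasks.poll task:

```python
# settings.py -- CELERYBEAT_SCHEDULE is the Celery 3.x setting name;
# 'myapp.tasks.poll' is a hypothetical task path, replace with your own.
from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'poll-every-30-seconds': {
        'task': 'myapp.tasks.poll',
        'schedule': timedelta(seconds=30),
    },
}
```

Whichever process runs beat (the celerybeat daemon, or a worker started with -B) will read this schedule and enqueue the task every 30 seconds.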

  2. Using supervisord, as mentioned in the other answer. The supervisord configuration files are also in the distribution at path-to-dist/celery-version/extra/supervisord. Configure using those files and use supervisorctl to run the service as a daemon.
cmidi
  • There seem to be two init scripts, one for celerybeat and one for the celery daemon. My question is: do I need to use both, or can I do the equivalent of -B from the celeryd script? – wobbily_col Apr 23 '15 at 18:06
  • You need one of these init scripts. If you are using celerybeat, use the celerybeat script; if the motive is just to daemonize the worker, use the generic init script called celeryd – cmidi Apr 23 '15 at 18:14
  • I want to use both. I have asynchronous tasks, as well as scheduled tasks. – wobbily_col Apr 23 '15 at 18:15
  • In that case, use the celerybeat script and add the celerybeat configuration elements to the configuration file; these elements are mentioned in the documentation linked in the answer – cmidi Apr 23 '15 at 18:27
  • I don't use celerybeat, but I did a quick test on my system using the init celerybeat script and ps showed both beat and worker instances; try running it and let me know. – cmidi Apr 23 '15 at 18:47
  • Looking at this answer, and with my experimentation today, it seems that I need to run both the celery daemon and celery worker scripts. I can get celery beat working on its own, but it doesn't send the tasks anywhere. – wobbily_col Apr 27 '15 at 09:59