
I am trying to run the celery daemon on CentOS 7, which uses systemd / systemctl, but it is not working.

  • I tried a non-daemon case and it worked.
  • I ran ~mytask; it freezes on the client machine, and on the server where the celery daemon is running absolutely nothing gets logged.
  • I have noticed that actually no celery processes are running (see the quick check below).
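
(For reference, a quick way to double-check that last point — the paths and broker URL are taken from my config below; celery status asks the broker which workers, if any, are alive:)

ps aux | grep "[c]elery"
/tmp/myapp/venv/bin/celery -A pipeline --broker=amqp://192.168.168.111/ status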

Any suggestions on how to fix this?

Here is my daemon default configuration:

CELERYD_NODES="localhost.localdomain"
CELERY_BIN="/tmp/myapp/venv/bin/celery"
CELERY_APP="pipeline"
CELERYD_OPTS="--broker=amqp://192.168.168.111/"
CELERYD_LOG_LEVEL="INFO"
CELERYD_CHDIR="/tmp/myapp"
CELERYD_USER="root"
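
(As far as I understand the init script, this config maps to roughly the following foreground command — this is my own reconstruction, not actual output from the script, and it should be run from /tmp/myapp:)

/tmp/myapp/venv/bin/celery worker -A pipeline --broker=amqp://192.168.168.111/ --loglevel=INFO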

Note: I am starting the daemon with

sudo /etc/init.d/celeryd start

and I got my celery daemon script from: https://raw.githubusercontent.com/celery/celery/3.1/extra/generic-init.d/celeryd

I also tried the one from https://raw.githubusercontent.com/celery/celery/3.1/extra/generic-init.d/celeryd, but that one showed an error when trying to start the daemon:

systemd[1]: Starting LSB: celery task worker daemon...
celeryd[19924]: basename: missing operand
celeryd[19924]: Try 'basename --help' for more information.
celeryd[19924]: Starting : /etc/rc.d/init.d/celeryd: line 193: multi: command not found
celeryd[19924]: [FAILED]
systemd[1]: celeryd.service: control process exited, code=exited status=1
systemd[1]: Failed to start LSB: celery task worker daemon.
systemd[1]: Unit celeryd.service entered failed state.
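
(When the unit fails like this, the journal usually has a bit more context than what is echoed to the console; these are standard systemd commands, nothing specific to celery:)

journalctl -u celeryd.service -n 50 --no-pager
systemctl status celeryd.service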
– max

2 Answers


As @ChillarAnand already answered, don't use celeryd.

But actually getting celery multi to run under systemd is not as simple as his answer makes it look.

Here are my working, non-obvious (I think) examples.

They have been tested on CentOS 7.1.1503 with celery 3.1.23 (Cipater) running in a virtualenv, with the tasks.py example app from the Celery tutorial.
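
Before starting the units below I created the log directory and made it writable by the service user; /var/run/celery doesn't need this treatment because RuntimeDirectory (further down) creates it at service start. The paths and user are just the ones my units use:

sudo mkdir -p /var/log/celery
sudo chown vagrant:vagrant /var/log/celery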

Running a single worker

[Unit]
Description=Celery Service
After=network.target

[Service]
Type=forking
User=vagrant
Group=vagrant

# directory with tasks.py
WorkingDirectory=/home/vagrant/celery_example

# !!! setting PIDFile below is REQUIRED in this case!
# (you will still get a warning "PID file /var/run/celery/single.pid not readable (yet?) after start." from systemd,
# but the service will in fact start, stop and restart properly. I haven't found a way to get rid of this warning.)
PIDFile=/var/run/celery/single.pid

# !!! using --pidfile option here and below is REQUIRED in this case!
# !!! also: don't use "%n" in pidfile or logfile paths - you will get these files named after the systemd service instead of after the worker (?)
ExecStart=/home/vagrant/celery_example/venv/bin/celery multi start single-worker -A tasks --pidfile=/var/run/celery/single.pid --logfile=/var/log/celery/single.log "-c 4 -Q celery -l INFO"

ExecStop=/home/vagrant/celery_example/venv/bin/celery multi stopwait single-worker --pidfile=/var/run/celery/single.pid --logfile=/var/log/celery/single.log

ExecReload=/home/vagrant/celery_example/venv/bin/celery multi restart single-worker --pidfile=/var/run/celery/single.pid --logfile=/var/log/celery/single.log

# Creates /var/run/celery, if it doesn't exist
RuntimeDirectory=celery

[Install]
WantedBy=multi-user.target
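
To use it, save the unit as e.g. /etc/systemd/system/celery-single.service (the filename is my own choice, nothing above mandates it) and drive it with the usual systemctl commands:

sudo systemctl daemon-reload
sudo systemctl start celery-single
sudo systemctl status celery-single   # expect the harmless PID file warning mentioned above
tail -f /var/log/celery/single.log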

Running multiple workers

[Unit]
Description=Celery Service
After=network.target

[Service]
Type=forking
User=vagrant
Group=vagrant

# directory with tasks.py
WorkingDirectory=/home/vagrant/celery_example

# !!! in this case DON'T set PIDFile or use --pidfile or --logfile below or it won't work!
ExecStart=/home/vagrant/celery_example/venv/bin/celery multi start 3 -A tasks "-c 4 -Q celery -l INFO"

ExecStop=/home/vagrant/celery_example/venv/bin/celery multi stopwait 3

ExecReload=/home/vagrant/celery_example/venv/bin/celery multi restart 3

# Creates /var/run/celery, if it doesn't exist
RuntimeDirectory=celery

[Install]
WantedBy=multi-user.target
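
Same drill here, assuming you save it as e.g. /etc/systemd/system/celery-multi.service (again, my own filename). Note that celery multi start 3 names the nodes celery1, celery2 and celery3 by default, which you can verify in the process list:

sudo systemctl daemon-reload
sudo systemctl start celery-multi
ps aux | grep "[c]elery worker"   # should show celery1@..., celery2@... and celery3@...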

(Note that I am running the workers with -c / --concurrency > 1, but it also works with concurrency set to 1 or left at the default. It should also work without a virtualenv, but I strongly recommend using one.)

I don't really get why systemd can't guess the PID of the forked process in the first case, or why putting the pidfiles in a specific place breaks the second case, so I have filed a ticket here: https://github.com/celery/celery/issues/3459. If I get answers or come up with an explanation of my own, I will post it here.

– Greg Dubicki

celeryd is deprecated. If you are able to run it in non-daemon mode, say

celery worker -l info -A my_app -n my_worker

you can simply daemonize it using celery multi:

celery multi start my_worker -A my_app -l info
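
To stop it later, pass the same node name again. And if you want the pid and log files somewhere predictable (see the comments below), %n expands to the node name on the plain command line — inside a systemd unit it would clash with systemd's own %n specifier, as the other answer warns:

celery multi stopwait my_worker
celery multi start my_worker -A my_app -l info --pidfile=/var/run/celery/%n.pid --logfile=/var/log/celery/%n.log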

That being said, if you still want to use celeryd try these steps.

– Chillar Anand
  • Great, I did not know this. Do you know why the documentation shows an example with "celery multi" followed by "celery worker" commands? I'm guessing that they are comparing the two and that they are independent. Correct? – max May 09 '15 at 16:22
  • I tried it out. Celery multi would only work if you specify the pidfile and logfile. – max May 09 '15 at 17:01
  • yes, they are showing the celery multi equivalents of the worker commands. There is no need to specify pidfile or logfile; they are optional – Chillar Anand May 09 '15 at 17:27