
I'm trying to deploy a simple Celery example on my production server. I followed the tutorial on the Celery website about running Celery as a daemon (http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#daemonizing), and I have this config file in /etc/default/celeryd:

# Name of nodes to start
# here we have a single node
CELERYD_NODES="w1"
# or we could have three nodes:
#CELERYD_NODES="w1 w2 w3"

# Where to chdir at start.
CELERYD_CHDIR="/home/audiwime/cidec_sw"

# Python interpreter from environment.
ENV_PYTHON="/usr/bin/python26"

# How to call "manage.py celeryd_multi"
CELERYD_MULTI="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryd_multi"

# How to call "manage.py celeryctl"
CELERYCTL="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryctl"

# Extra arguments to celeryd
CELERYD_OPTS="--time-limit=300 --concurrency=8"

# Name of the celery config module.
CELERY_CONFIG_MODULE="celeryconfig"

# %n will be replaced with the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"

# Workers should run as an unprivileged user.
CELERYD_USER="audiwime"
CELERYD_GROUP="audiwime"

export DJANGO_SETTINGS_MODULE="cidec_sw.settings"
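(As an aside on this config: the log and PID directories it references must exist and be writable by CELERYD_USER, or the worker can die silently at startup. A minimal setup sketch, assuming the paths and user from the config above:)

```shell
# Create the directories named in CELERYD_LOG_FILE / CELERYD_PID_FILE
# and hand them to the unprivileged worker user from the config.
sudo mkdir -p /var/log/celery /var/run/celery
sudo chown audiwime:audiwime /var/log/celery /var/run/celery
```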

but if I run

celery status

in the terminal, I get this response:

Error: No nodes replied within time constraint

I can restart Celery via the celeryd script provided at https://github.com/celery/celery/tree/3.0/extra/generic-init.d/:

/etc/init.d/celeryd restart
celeryd-multi v3.0.12 (Chiastic Slide)
> w1.one.cloudwime.com: DOWN
> Restarting node w1.one.cloudwime.com: OK

I can run `python26 manage.py celeryd -l info` and my tasks in Django run fine, but if I let the daemon do its work I don't get any results, and there aren't even errors in /var/log/celery/w1.log.
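For comparison, these are the two modes side by side (commands and paths assumed from the setup above); the second group is how one can inspect the daemonized worker when nothing reaches the log:

```shell
# Foreground: output goes straight to the terminal, so errors are visible
python26 manage.py celeryd -l info

# Daemonized: confirm the worker process actually exists, then watch its log
cat /var/run/celery/w1.pid
ps aux | grep '[c]eleryd'
tail -f /var/log/celery/w1.log
```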

I know that my task has been registered because I did this:

from celery import current_app
from django.http import HttpResponse
from tareas.tasks import run

def call_celery_delay(request):
    # Print the registry so we can confirm our task is among the known tasks
    print current_app.tasks
    run.delay(request.GET['age'])
    return HttpResponse(content="celery task set", content_type="text/html")

and I get a dictionary in which my task appears:

{'celery.chain': <@task: celery.chain>, 'celery.chunks': <@task: celery.chunks>, 'celery.chord': <@task: celery.chord>, 'tasks.add2': <@task: tasks.add2>, 'celery.chord_unlock': <@task: celery.chord_unlock>, 'tareas.tasks.run': <@task: tareas.tasks.run>, 'tareas.tasks.add': <@task: tareas.tasks.add>, 'tareas.tasks.test_two_minute': <@task: tareas.tasks.test_two_minute>, 'celery.backend_cleanup': <@task: celery.backend_cleanup>, 'celery.map': <@task: celery.map>, 'celery.group': <@task: celery.group>, 'tareas.tasks.test_one_minute': <@task: tareas.tasks.test_one_minute>, 'celery.starmap': <@task: celery.starmap>}

But besides that I get nothing else: no result from my task, no errors in the logs, nothing. What can be wrong?

Jason Aller

3 Answers


Use the following command to find the problem:

C_FAKEFORK=1 sh -x /etc/init.d/celeryd start

This usually happens because there are problems in your source project (permission issues, syntax errors, etc.).

As mentioned in the Celery docs:

If the worker starts with "OK" but exits almost immediately afterwards and there is nothing in the log file, then there is probably an error, but as the daemon's standard outputs are already closed you won't be able to see them anywhere. For this situation you can use the C_FAKEFORK environment variable to skip the daemonization step.

Good luck!

Source: Celery Docs

rohan

It's possible the Celery daemon simply hasn't started; that is one common reason. Try restarting it in the foreground with `python manage.py celeryd --loglevel=INFO`.
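A quick sanity check for this (a sketch; the PID file path is assumed from the question's config) is to test whether the daemonized worker's process is actually alive:

```shell
# kill -0 sends no signal; it only tests whether the PID in the
# worker's PID file refers to a running process.
if [ -f /var/run/celery/w1.pid ] && kill -0 "$(cat /var/run/celery/w1.pid)" 2>/dev/null; then
    echo "worker running"
else
    echo "worker not running"
fi
```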

Bastin Robin

I solved my problem. It was a very simple solution, but also a weird one. What I did was:

$ /etc/init.d/celerybeat restart
$ /etc/init.d/celeryd restart
$ service celeryd restart

I had to do this in that order; otherwise I'd get the ugly Error: No nodes replied within time constraint.

  • I don't think you need both `celerybeat` and `celeryd`. You can run `celeryd -B`, which is the same. I am still quite curious how you got a `service celeryd` :) (P.S. I still get this error) – Houman Jan 29 '13 at 00:39
  • @kave this error primarily comes when the file permissions are not correct – Akash Deshpande Jan 29 '13 at 10:15
  • @AkashDeshpande Thanks for your response. Would you please have a look at this paste: http://pastebin.com/e3GK4eax This is how I have set up the permissions; do you see anything obviously out of order? – Houman Jan 29 '13 at 14:52
  • @Kave the chmod 770 command gives 0 permissions to the other users who access the computer, i.e. your user celery has no access to the files. I think you should try chmod 777. I know this can be a security vulnerability, but it is most probably your issue. You can change the permissions back once you have found the issue. – Akash Deshpande Jan 30 '13 at 05:46
  • @AkashDeshpande I have now tried `chmod -R 777` on both directories without any difference. I don't think that is the problem anyway: chmod 770 gives full access to user and group, which in this case are celery and celerygroup, both defined in the Celery config. Hence the Celery daemon should use that user in that group and access the directory without any trouble. Very strange... – Houman Feb 01 '13 at 22:03
  • I know it's a little bit late, but have you tried running `python manage.py celeryd -l INFO`? It could be an error in your Python files – Hector Armando Vela Santos Feb 14 '13 at 17:57
  • Oh my god, I've spent hours chasing this one. Chmod on my working directory fixed it. – DBrowne Aug 21 '15 at 05:40