
I'm following Django/Celery Quickstart... or, how I learned to stop using cron and love celery, and it seems the jobs are getting queued, but they never run.

tasks.py:

from celery.task.schedules import crontab
from celery.decorators import periodic_task

# this will run every minute, see http://celeryproject.org/docs/reference/celery.task.schedules.html#celery.task.schedules.crontab
@periodic_task(run_every=crontab(hour="*", minute="*", day_of_week="*"))
def test():
    print "firing test task"

So I run celery:

bash-3.2$ sudo manage.py celeryd -v 2 -B -s celery -E -l INFO  

/scratch/software/python/lib/celery/apps/worker.py:166: RuntimeWarning: Running celeryd with superuser privileges is discouraged!
  'Running celeryd with superuser privileges is discouraged!'))

 -------------- celery@myserver v3.0.12 (Chiastic Slide)
---- **** ----- 
--- * ***  * -- [Configuration]
-- * - **** --- . broker:      django://localhost//
- ** ---------- . app:         default:0x12120290 (djcelery.loaders.DjangoLoader)
- ** ---------- . concurrency: 2 (processes)
- ** ---------- . events:      ON
- ** ---------- 
- *** --- * --- [Queues]
-- ******* ---- . celery:      exchange:celery(direct) binding:celery
--- ***** ----- 

[Tasks]
  . GotPatch.tasks.test

[2012-12-12 11:58:37,118: INFO/Beat] Celerybeat: Starting...
[2012-12-12 11:58:37,163: INFO/Beat] Scheduler: Sending due task GotPatch.tasks.test (GotPatch.tasks.test)
[2012-12-12 11:58:37,249: WARNING/MainProcess] /scratch/software/python/lib/djcelery/loaders.py:132: UserWarning: Using settings.DEBUG leads to a memory leak, never use this setting in production environments!
  warnings.warn("Using settings.DEBUG leads to a memory leak, never "
[2012-12-12 11:58:37,348: WARNING/MainProcess] celery@myserver ready.
[2012-12-12 11:58:37,352: INFO/MainProcess] consumer: Connected to django://localhost//.
[2012-12-12 11:58:37,700: INFO/MainProcess] child process calling self.run()
[2012-12-12 11:58:37,857: INFO/MainProcess] child process calling self.run()
[2012-12-12 11:59:00,229: INFO/Beat] Scheduler: Sending due task GotPatch.tasks.test (GotPatch.tasks.test)
[2012-12-12 12:00:00,017: INFO/Beat] Scheduler: Sending due task GotPatch.tasks.test (GotPatch.tasks.test)
[2012-12-12 12:01:00,020: INFO/Beat] Scheduler: Sending due task GotPatch.tasks.test (GotPatch.tasks.test)
[2012-12-12 12:02:00,024: INFO/Beat] Scheduler: Sending due task GotPatch.tasks.test (GotPatch.tasks.test)

The tasks are indeed getting queued:

python manage.py shell
>>> from kombu.transport.django.models import Message
>>> Message.objects.count()
234

And the count increases over time:

>>> Message.objects.count()
477

There are no lines in the log file that seem to indicate the task is being executed. I'm expecting something like:

[... INFO/MainProcess] Task myapp.tasks.test[39d57f82-fdd2-406a-ad5f-50b0e30a6492] succeeded in 0.00423407554626s: None

Any suggestions how to diagnose / debug this?

dsm2005

3 Answers


I'm new to celery as well, but from the comments on the link you provided, it looks like there was an error in the tutorial. One of the comments points out:

At this command

sudo ./manage.py celeryd -v 2 -B -s celery -E -l INFO

You must add "-I tasks" to load tasks.py file ...

Did you try that?

pvans

You should check that you specify the BROKER_URL parameter inside Django's settings.py:

BROKER_URL = 'django://'

You should also check that the timezones in Django, MySQL, and celery are all the same. That is what fixed it for me.
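For the django (database) transport, the relevant settings.py entries might look like this — a minimal sketch; the app list and the UTC timezone values are assumptions, adjust them to your project:

```python
# settings.py (fragment) -- hypothetical values for illustration;
# assumes django-celery (djcelery) and kombu's django transport are installed.

BROKER_URL = 'django://'  # route celery messages through the Django database

INSTALLED_APPS = (
    'djcelery',                # django-celery integration
    'kombu.transport.django',  # database-backed message transport
)

# keep Django, the database, and celery on the same timezone
TIME_ZONE = 'UTC'
CELERY_TIMEZONE = 'UTC'
```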

P.s.:

[... INFO/MainProcess] Task myapp.tasks.test[39d57f82-fdd2-406a-ad5f-50b0e30a6492] succeeded in 0.00423407554626s: None

This line means that your task was scheduled (not that it was executed!).

Please check your config, and I hope that helps.

Denti

I hope someone can learn from my experience hacking on this.

After setting everything up according to the tutorial I noticed that when I call

add.delay(4,5)

nothing happens: the worker never receives the task (nothing is printed on stderr).

The problem was with the rabbitmq installation. It turns out the default free-disk-space requirement is 1GB, which was far more than my VM had.

What put me on track was reading the rabbitmq log file. To find its location I had to stop and restart the rabbitmq server:

sudo rabbitmqctl stop
sudo rabbitmq-server

rabbitmq prints the log file location to the screen. In that file I noticed this:

=WARNING REPORT==== 14-Mar-2017::13:57:41 ===
disk resource limit alarm set on node rabbit@supporttip.

**********************************************************
*** Publishers will be blocked until this alarm clears ***
**********************************************************

I then followed the instructions in Rabbitmq ignores configuration on Ubuntu 12 in order to reduce the free disk limit.

As a baseline I used the example config file from the git repository: https://github.com/rabbitmq/rabbitmq-server/blob/stable/docs/rabbitmq.config.example

The change itself:

{disk_free_limit, "50MB"}
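Note that in the classic rabbitmq.config format this entry has to sit inside the rabbit application's settings list; a file containing only this change would look roughly like the following (a sketch, assuming no other options are set):

```
[
  {rabbit, [
    {disk_free_limit, "50MB"}
  ]}
].
```

After restarting the server, `sudo rabbitmqctl status` should report the new limit, and the disk alarm should clear.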
elewinso