
I am using Celery in a Django project. I have tried the RabbitMQ and Redis backends, but neither works reliably. The Celery version used is 3.1.26.post2. I have to call task.delay() 2, 3, sometimes 5 times before I see the task run. And sometimes, usually after calling the same task frequently, its "execution rate" increases and it executes the task 70-80% of the time. For example, it drops 1 or 2 of 5 task.delay() calls, but executes 3-4 of them. Have you experienced something like this? What could be the reason?
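For reference, this is roughly the pattern I am using (a stripped-down sketch with placeholder names, not my real code):

from celery import shared_task

@shared_task
def my_task(x):
    # trivial body, just to illustrate the call pattern
    return x * 2

# from a view or the Django shell:
my_task.delay(21)  # sometimes runs, sometimes the call seems to get dropped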

Alihaydar Gubatov
  • I was going to use Celery & RabbitMQ in a Django project once. Then I came to my senses. I instead wrote a custom management command that made use of Threads and that gets run from cron. For "messaging" I used a Django model. – Red Cricket Dec 27 '18 at 02:28
  • But due to current circumstances I have to use Celery. – Alihaydar Gubatov Dec 27 '18 at 02:31
  • When you call this task what does `rabbitmqctl list_queues -p your_vhost_name` say? (run as root) – Greg0ry Dec 27 '18 at 20:45
  • rabbitmqctl list_queues shows something like this `Listing queues ... 13936400801f4fbf863ac1065d172041 1 1e12015a2dfe4c3cae3da0106de45de3 1 celery 361 celery@Debian-84-jessie-64-minimal.celery.pidbox 0 celeryev.195a0af6-3058-4e14-bfbd-79d013fead8b 0 celeryev.2a18f5f9-c83e-4c6e-958e-2970d67ab9f0 0 celeryev.39795174-31bb-43d0-a83f-d43615feca08 0 e8be18ad864048ed82909ba711ee22be 0 eeca2b77716846d089806204115c4adb 1 topaz 0 topaz-worker@Debian-84-jessie-64-minimal.celery.pidbox 0 w1@Debian-84-jessie-64-minimal.celery.pidbox 0 w2@Debian-84-jessie-64-minimal.celery.pidbox 0 ...done.` – Alihaydar Gubatov Dec 27 '18 at 21:37
  • But when I add the vhost name it only says "listing queues, ...done". I also noticed that the value of the celery queue increases every time I call a task. – Alihaydar Gubatov Dec 27 '18 at 21:38

1 Answer


OK, based on your description there are a few bits I don't know (and knowing them would help):

  • how do you start your workers (e.g. celery worker -A your_package_name)?
  • are you sure you subscribe to the same broker that you later check with rabbitmqctl?

Based on your feedback, I guess your tasks either take very long to complete or somehow hang and never finish. They definitely land in the default queue that the celery worker creates on start (called celery).
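If they are hanging, you can ask the running workers what they are actually doing from a python shell (a quick sketch; your_package_name is a placeholder for your own project):

from your_package_name.celery import app

# Query all running workers over the broker.
insp = app.control.inspect()
print(insp.active())      # tasks currently executing
print(insp.reserved())    # tasks prefetched by a worker but not yet started
print(insp.registered())  # task names each worker knows about

If your task's name never shows up in registered(), the workers do not know about it at all and will report an error when they receive it.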

Posting the code of a sample task you try to insert into the queue, and a sample of how you insert it, would help too.

I would normally define my task like this (in the package that defines what the tasks are supposed to do; this code will be executed by the celery worker):

from your_package_name.celery import app

@app.task
def my_task_name(my_param):
    # do something here!
    return True

I would insert my task into the queue like this (e.g. from a python shell, or from the other package that is supposed to insert tasks into the queue):

my_task_name.apply_async(
    args=(my_param,),
    queue='my_queue_name',
)
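Note that task.delay(...) is just shorthand for apply_async(args) with no routing options, so it always goes to the task's default queue (normally celery). A quick sketch of the difference (names are placeholders):

# Shorthand: the message goes to the task's default queue (usually 'celery').
my_task_name.delay(my_param)

# The same call with explicit routing to a named queue.
my_task_name.apply_async(args=(my_param,), queue='my_queue_name')

If your worker is started with -Q my_queue_name it only consumes from that queue, so anything sent with delay() piles up unconsumed in celery, which would fit the ever-growing celery counter you mentioned seeing in rabbitmqctl list_queues.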

Somewhere in your_package_name there is a bit of code where you define your broker (in my case I keep it in celeryconfig.py, but it's up to you):

BROKER_URL = 'amqp://your_user_name:very_secret_pwd@localhost:5672/your_vhost'

Do not confuse vhost with your host name.
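For completeness, the app imported in the task module above (from your_package_name.celery import app) would be defined roughly like this (a sketch; adjust the module paths to your project):

# your_package_name/celery.py
from celery import Celery

app = Celery('your_package_name')
# Pull BROKER_URL and the other settings from celeryconfig.py.
app.config_from_object('your_package_name.celeryconfig')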

If, like me, you use rabbitmq, then you need to create the vhost, user and password before attempting to use the broker (run the commands below in bash as root):

sudo -u rabbitmq -n rabbitmqctl add_user your_user_name very_secret_pwd
sudo -u rabbitmq -n rabbitmqctl add_vhost your_vhost
sudo -u rabbitmq -n rabbitmqctl set_user_tags your_user_name your_example_tag
sudo -u rabbitmq -n rabbitmqctl set_permissions -p your_vhost your_user_name ".*" ".*" ".*"

I would start my worker like this:

python -m celery worker -A your_package_name -Q my_queue_name -c 1 -f /tmp/celery.log --loglevel="INFO"

And then I would look at the celery logs in /tmp/celery.log and also list my queues like this (in bash, as root):

rabbitmqctl list_queues -p your_vhost

Hope this helps you get on the right track.

Greg0ry