
I would like to leverage Celery (with RabbitMQ as the message broker) to execute tasks of varying flavors via different queues. One requirement is that consumption from a particular queue by the workers can be paused and resumed.

Celery seems to have this capability via add_consumer and cancel_consumer. While I was able to cancel consumption of tasks from a queue for a particular worker, I cannot get the worker to resume consumption by calling add_consumer. The code to reproduce this issue is provided here. My guess is that I'm missing some sort of parameter, either in the celeryconfig or in the arguments used when starting the workers?

It would be great to get some fresh pairs of eyes on this. There is not much discussion of add_consumer on Stack Overflow or on GitHub, so I'm hoping there are some experts here willing to share their thoughts/experience.

--

I am running the below:

Windows OS, RabbitMQ 3.5.6, Erlang 18.1, Python 3.3.5, celery 3.1.15

pangyuteng

2 Answers


To resume consumption from a queue, you need to specify the queue name as well as the target workers. Here is how to do it:

app.control.add_consumer(queue='high', destination=['celery@asus'])
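The mix-up is easy to reproduce with plain Python argument binding. A minimal sketch (the function below is a hypothetical stand-in that only mirrors the handler signature discussed in this answer, not actual Celery code; `queue` is given a default here solely so the mismatch is observable):

```python
# Hypothetical stand-in mirroring the worker-side add_consumer handler
# signature; not real Celery code.
def add_consumer(state, queue=None, exchange=None, exchange_type=None,
                 routing_key=None, **options):
    return {'state': state, 'queue': queue}

# Passing 'high' positionally binds it to the first parameter, `state`,
# so `queue` stays unset:
print(add_consumer('high'))              # {'state': 'high', 'queue': None}

# Passing it as a keyword binds it where intended:
print(add_consumer(None, queue='high'))  # {'state': None, 'queue': 'high'}
```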

Here is the `add_consumer` signature:

def add_consumer(state, queue, exchange=None, exchange_type=None,
                 routing_key=None, **options):

In your case, you are calling it with

app.control.add_consumer('high', destination=['celery@high1woka'])

So `high` is being passed as `state` and `queue` is left empty, which is why the worker is not able to resume.

Chillar Anand
  • Thanks for the attempt. The multi-file gist I provided for reproducing this behaviour uses add_consumer. While the worker acknowledges execution of add_consumer, it does not resume consumption of tasks from the queue. I suspect my celeryconfig is not properly set, or there is a potential bug in Celery. – pangyuteng Aug 27 '17 at 18:41
  • @teng Did you try with `queue=high` while adding consumer? I used your gist and reproduced the behavior. After passing proper params, it is working correctly. – Chillar Anand Aug 28 '17 at 07:18
  • I have added `queue=high` as a kwarg to add_consumer, as opposed to just the positional arg, and after execution of resume.py the worker is still not consuming tasks. Would you mind letting me know what your OS is, and the RabbitMQ and Celery versions you are using? – pangyuteng Aug 29 '17 at 16:19
  • Ubuntu 14.04, Celery 4, rabbitmq 3.4 – Chillar Anand Aug 30 '17 at 09:32
  • I have upgraded to Celery 4.1.0, which doesn't work straight 'out of the box' with Windows; however, I was able to get the worker to resume consuming tasks from the specified queue via add_consumer. Likely there are updates made between Celery v3.1.15 and v4.1 that resolved this issue. Accepting this as the answer. Note to viewers: try upgrading Celery to v4 if you encounter this issue. – pangyuteng Aug 31 '17 at 00:20

To get the Celery worker to resume working on Windows, my workaround is listed below.

  • update Celery: pip install celery==4.1.0
  • update billiard/spawn.py: wrap lines 338 to 339 in a try/except block (try: ... except: pass)
  • (optional) install eventlet: pip install eventlet==0.22.1
  • add --pool=eventlet or --pool=solo when starting workers, per the comment in https://github.com/celery/celery/issues/4178
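For completeness, the pause/resume cycle discussed in this thread can also be driven from the command line via Celery's control subcommands. A sketch, assuming an app module named `tasks`, the queue `high`, and the worker node name `celery@high1woka` from this thread (all placeholders; requires a running broker and worker, so it is not runnable standalone):

```shell
# Pause: stop the worker consuming from the "high" queue
celery -A tasks control cancel_consumer high --destination celery@high1woka

# Resume: re-add the "high" queue to the worker's consumer set
celery -A tasks control add_consumer high --destination celery@high1woka
```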
pangyuteng