
I'm a bit confused on what my configuration should look like to set up a topic exchange.

http://www.rabbitmq.com/tutorials/tutorial-five-python.html

This is what I'd like to accomplish:

Task1 -> send to QueueOne and QueueFirehose
Task2 -> send to QueueTwo and QueueFirehose

then:

Task1 -> consume from QueueOne
Task2 -> consume from QueueTwo
TaskFirehose -> consume from QueueFirehose

I only want Task1 to consume from QueueOne and Task2 to consume from QueueTwo.

The problem now is that when Task1 and Task2 run, they also drain QueueFirehose, and the TaskFirehose task never executes.

Is there something wrong with my config, or am I misunderstanding something?

CELERY_QUEUES = { 
    "QueueOne": {
        "exchange_type": "topic",
        "binding_key": "pipeline.one",
    },  
    "QueueTwo": {
        "exchange_type": "topic",
        "binding_key": "pipeline.two",
    },  
    "QueueFirehose": {
        "exchange_type": "topic",
        "binding_key": "pipeline.#",
    },  
}

CELERY_ROUTES = {
        "tasks.task1": {
            "queue": 'QueueOne',
            "routing_key": 'pipeline.one',
        },
        "tasks.task2": {
            "queue": 'QueueTwo',
            "routing_key": 'pipeline.two',
        },
        "tasks.firehose": {
            'queue': 'QueueFirehose',
            "routing_key": 'pipeline.#',
        },
}
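(Side note: as posted, none of the queues names an exchange, so each one may end up bound to its own default exchange rather than a single shared topic exchange. A hedged sketch of the same config with an explicit shared exchange; the exchange name "pipeline" is an assumption, not from the original post:)

```python
# Sketch only: same queues, but all explicitly bound to one shared
# topic exchange. The exchange name "pipeline" is an assumption.
CELERY_QUEUES = {
    "QueueOne": {
        "exchange": "pipeline",
        "exchange_type": "topic",
        "binding_key": "pipeline.one",
    },
    "QueueTwo": {
        "exchange": "pipeline",
        "exchange_type": "topic",
        "binding_key": "pipeline.two",
    },
    "QueueFirehose": {
        "exchange": "pipeline",
        "exchange_type": "topic",
        "binding_key": "pipeline.#",  # '#' matches pipeline.one and pipeline.two
    },
}
```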
Matteo
brianz
  • Maybe this is just terminology to clarify, but your description sounds like you're conflating tasks and workers. For example, you say "Task2 sent to Queue2" then later say "Task2 to consume from Queue2". Tasks don't consume; they are consumed (by workers). You also say "TaskFirehose task never executes" but in your description, there is no TaskFirehose being sent to any queue. The basic concept is: tasks are sent to queues; and workers execute tasks from queues they are assigned. Tasks != the workers that execute them. – Chris Johnson Aug 25 '13 at 17:26

1 Answer


Assuming that you actually meant something like this:

Task1 -> send to QueueOne
Task2 -> send to QueueTwo
TaskFirehose -> send to QueueFirehose

then:

Worker1 -> consume from QueueOne, QueueFirehose
Worker2 -> consume from QueueTwo, QueueFirehose
WorkerFirehose -> consume from QueueFirehose

This might not be exactly what you meant, but I think it should cover many scenarios, hopefully yours too. Something like this should work:

# Advanced example starting 10 workers in the background:
#   * Three of the workers process the images and video queues
#   * Two of the workers process the data queue with loglevel DEBUG
#   * The rest process the default queue.

$ celery multi start 10 -l INFO -Q:1-3 images,video -Q:4,5 data \
    -Q default -L:4,5 DEBUG

For more options and reference: http://celery.readthedocs.org/en/latest/reference/celery.bin.multi.html

This was straight from the documentation.
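Adapted to the queue names in your question, the same idea might look like this (untested sketch; the `-A proj` app name is an assumption):

```shell
# Three workers: node 1 drains QueueOne plus the firehose, node 2 drains
# QueueTwo plus the firehose, and node 3 consumes only QueueFirehose.
celery multi start 3 -A proj -l INFO \
    -Q:1 QueueOne,QueueFirehose \
    -Q:2 QueueTwo,QueueFirehose \
    -Q:3 QueueFirehose
```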

I had a similar situation, and I tackled it in a slightly different way: I couldn't use celery multi with supervisord, so instead I created multiple programs in supervisord, one for each worker. The workers will be in separate processes anyway, so just let supervisord take care of everything for you. The config file looks something like this:

; ==================================
; celery worker supervisor example
; ==================================

[program:Worker1]
; Set full path to celery program if using virtualenv
command=celery worker -A proj --loglevel=INFO -Q QueueOne,QueueFirehose

directory=/path/to/project
user=nobody
numprocs=1
stdout_logfile=/var/log/celery/worker1.log
stderr_logfile=/var/log/celery/worker1.log
autostart=true
autorestart=true
startsecs=10

; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600

; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true

; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998

Similarly, for Worker2 and WorkerFirehose, edit the corresponding lines to make:

[program:Worker2]
; Set full path to celery program if using virtualenv
command=celery worker -A proj --loglevel=INFO -Q QueueTwo,QueueFirehose

and

[program:WorkerFirehose]
; Set full path to celery program if using virtualenv
command=celery worker -A proj --loglevel=INFO -Q QueueFirehose

Include them all in the supervisord.conf file and that should do it.
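For reference, one way to pull separate per-worker files into supervisord.conf is an [include] section (the path is an assumption; adjust to wherever your program files live):

```ini
; Sketch only: loads every per-worker program file from conf.d.
[include]
files = /etc/supervisor/conf.d/*.conf
```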

rohan