
I am going to break a project down into small microservices.

All microservices are cron-based. I am thinking of Celery for task distribution, as well as a mechanism to run periodic tasks (celerybeat).

I don't want to build a separate Celery app per microservice, as that would add the overhead of running multiple brokers and multiple Flower instances for monitoring.

I tried a single app on multiple servers, but I failed. My needs with Celery are:

  1. I need to have independent servers for each microservice.
  2. Tasks belonging to a certain microservice should execute only on that microservice's servers; no sharing of tasks among other servers.
  3. In case a microservice is down, I don't want celerybeat to clog the broker with thousands of pending tasks, halting service at the other microservices (see the sketch below for one way to expire stale periodic tasks).
  4. I do not have any need for communication between microservices.

I tried separating queues per worker, which doesn't seem to be possible. I tried one worker per server, but I need more than one worker per microservice.
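
One way to keep a downed microservice's queue from piling up is to give each periodic task an expiry when celerybeat schedules it. A minimal sketch, assuming a Redis broker and a task named tasks.add (the broker URL, task name, and interval are all assumptions):

from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')  # broker URL is an assumption

# Route each periodic task to its own queue and let unconsumed
# messages expire, so a downed microservice's queue cannot pile up.
app.conf.beat_schedule = {
    'microservice1-add-every-15-min': {
        'task': 'tasks.add',                             # task name is an assumption
        'schedule': 900.0,                               # run every 15 minutes
        'args': (12, 1),
        'options': {'queue': 'queue1', 'expires': 900},  # drop messages older than 15 minutes
    },
}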

srj
Rakesh Bhatt

1 Answer


For your use case, simple queue-based routing from a single broker should suffice.

Keep only one broker running, either on one of the servers or on a separate server.

Now while queuing up the tasks, add them to separate queues.
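
A minimal shared tasks module for the examples below might look like this (the broker URL and task bodies are assumptions):

# tasks.py
from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')  # broker URL is an assumption

@app.task
def add(x, y):
    return x + y

@app.task
def sub(x, y):
    return x - y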

From microservice 1:

In [2]: add.apply_async(args=(12, 1), queue='queue1')
Out[2]: <AsyncResult: 2fa5ca61-47bc-4c2c-be04-e44cbce7680a>

Start a worker that consumes only this queue:

celery worker -A tasks -l info -Q queue1

From microservice 2:

In [2]: sub.apply_async(args=(12, 1), queue='queue2')
Out[2]: <AsyncResult: 4d42861c-737e-4b73-bfa8-6d1e86241d57>

Start a worker that consumes only this queue:

celery worker -A tasks -l info -Q queue2
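
If a microservice needs more than one worker, as mentioned in the question, you can start several workers on the same queue; each just needs a unique node name (the node names here are assumptions):

celery worker -A tasks -l info -Q queue2 -n worker1@%h
celery worker -A tasks -l info -Q queue2 -n worker2@%h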

This will make sure that tasks from a microservice get executed only by the workers for that microservice.
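
If you'd rather not pass queue= at every call site, one option is to declare the routing once in the app configuration; a minimal sketch, assuming the task names above:

from celery import Celery

app = Celery('tasks', broker='redis://localhost:6379/0')  # broker URL is an assumption

# Declare routing once; plain add.delay(12, 1) then goes to queue1
# and sub.delay(12, 1) to queue2, without queue= at the call site.
app.conf.task_routes = {
    'tasks.add': {'queue': 'queue1'},
    'tasks.sub': {'queue': 'queue2'},
}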

Chillar Anand
  • Thanks for the answer. But this approach creates issues while scaling: when you are adding up to 5,000 or more tasks per 15 minutes and the tasks are intensive, the queues get blocked. Monitoring at the queue level is also not easy. – Rakesh Bhatt May 26 '17 at 03:29
  • Can you explain the problem in detail? If tasks are time-consuming and too many tasks are getting queued up, scaling the workers should be fine? What things are you monitoring for? – Chillar Anand May 26 '17 at 04:37
  • @RakeshBhatt You can use the autoscale option to scale workers when there are too many tasks to process and you need to complete them at a faster rate: `celery worker -l info -A t --autoscale=8,1`. You can also horizontally scale your Celery worker servers to consume at a faster rate. – Chillar Anand Jun 04 '17 at 13:14