
I have a Celery worker running on a Linux machine with 16GB of physical memory. The worker can run tasks of type A, which consume a maximum of 8GB of RAM each, and tasks of type B, which consume a maximum of 4GB each. Is it possible to configure the worker so that it only pulls a task off the queue if the total maximum memory usage of all running tasks, plus the new task, is less than or equal to the physical memory limit? I understand that virtual memory will keep the machine running if the physical limit is breached, but I'd like to keep the tasks within physical memory if possible, to prevent the slowdown caused by data being repeatedly swapped in and out of RAM.

For example, if the worker is running 3 instances of task type B (12GB total max memory) and the next task on the queue is of type A, I'd like the worker to wait for one of the type B instances to complete before starting the type A instance.
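
For illustration, the closest thing I've been able to sketch myself is a shared "memory budget" counter in Redis that each task reserves from before doing any real work, retrying later if the budget is exhausted. The key name, connection details, and retry delay below are just placeholders, and note that this doesn't strictly stop the worker from pulling the task: it pulls it and then re-queues it via retry.

```python
import redis
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0")

# Shared counter tracking how many GB are currently reserved by running
# tasks. The key name and connection details here are placeholders.
r = redis.Redis(host="localhost", port=6379, db=1)
MEM_BUDGET_KEY = "worker:mem_reserved_gb"
PHYSICAL_LIMIT_GB = 16

def try_reserve(gb):
    """Atomically reserve `gb` from the budget; roll back on overshoot."""
    if r.incrby(MEM_BUDGET_KEY, gb) > PHYSICAL_LIMIT_GB:
        r.decrby(MEM_BUDGET_KEY, gb)  # undo the failed reservation
        return False
    return True

@app.task(bind=True, max_retries=None)
def task_a(self):
    if not try_reserve(8):  # type A needs up to 8GB
        raise self.retry(countdown=30)  # re-queue and try again later
    try:
        ...  # actual work goes here
    finally:
        r.decrby(MEM_BUDGET_KEY, 8)  # release the reservation
```

This at least keeps the combined reservation under 16GB, but the repeated retries feel wasteful, which is why I'm asking whether Celery can make this decision before dequeuing.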

Looking at similar questions such as "How to limit the maximum number of running Celery tasks by name", one option seems to be to set up separate queues for the different task types and then start a separate worker instance for each queue with the appropriate concurrency (a sketch of that setup is below). I would like to avoid this solution if possible because, as I understand it, I would need to run the workers on separate machines, and I'd like to avoid that expense.
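
For concreteness, my understanding is that the queue-based setup would look roughly like this, statically splitting the 16GB between the two task types (the queue names and concurrency values are illustrative):

```python
# Route each task type to its own queue (queue names are illustrative).
app.conf.task_routes = {
    "tasks.task_a": {"queue": "queue_a"},
    "tasks.task_b": {"queue": "queue_b"},
}

# Then start one worker per queue, with concurrency chosen so that the
# worst case stays within 16GB (1 x 8GB for A + 2 x 4GB for B = 16GB):
#
#   celery -A tasks worker -Q queue_a --concurrency=1 -n workerA@%h
#   celery -A tasks worker -Q queue_b --concurrency=2 -n workerB@%h
```

The other downside I see is that the split is static: the type B worker can never borrow the type A worker's unused 8GB, which is exactly the flexibility I'm after.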

Any advice, or possible alternative approaches for handling this situation, would be much appreciated.

    Does this answer your question? [Celery: dynamically allocate concurrency based on worker memory](https://stackoverflow.com/questions/63235685/celery-dynamically-allocate-concurrency-based-on-worker-memory) – 2ps Sep 14 '21 at 17:59
  • Thank you for the pointer. I think it might get me some of the way there. Say the worker has concurrency 4 and then 3 tasks of type A arrive. The autoscaler could detect this and scale concurrency down to 2. I don't know whether this will kill the excess type A task and allow it to be automatically retried later on, though. I'll give it a go. – abroun Sep 14 '21 at 18:58
  • Autoscaler does not "kill" any running worker-processes. – DejanLekic Sep 18 '21 at 16:31

0 Answers