
I have long-running tasks on my live server. The tasks involve fetching data from Facebook and generating PDFs with the reportlab package.

For these I have 3 workers with a concurrency level of 5, so that I can execute 30 PDF tasks in parallel.

But when 10 tasks are running at a time, a single long-running task causes the other tasks to hit their hard time limit (12 hours) and expire.

On my server a single PDF task takes at most 3 hours, or 4 hours in the worst case. But when I run all workers at concurrency level 5, some of the tasks succeed and some exceed the 12-hour time limit. My target is to complete all 10 of these tasks within 4 or 5 hours.

Is there a good way to handle long-running tasks like this?

I am also using the django-celery package.
My Celery configuration:

CELERYD_OPTS="--time-limit=43200 --concurrency=10"
CELERYD_CONCURRENCY = 10
CELERYD_NODES = "worker1 worker2 worker3"

Running the workers: python manage.py celeryd_multi restart n1 n2 n3 -l info -f celery.log -c 10 --purge -Q:n1,n2,n3 backend
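
A simplified sketch of the kind of task I mean is below; the task name, arguments, and the 4-hour soft limit are placeholders for illustration, not my actual code:

    from celery import shared_task
    from reportlab.pdfgen import canvas

    # Hard limit matches the 12-hour config above; the soft limit is an assumed example value.
    @shared_task(time_limit=43200, soft_time_limit=14400)
    def generate_pdf(output_path, page_texts):
        """Render one string per page into a PDF with reportlab."""
        pdf = canvas.Canvas(output_path)
        for text in page_texts:
            pdf.drawString(72, 720, text)  # stand-in for the real page layout
            pdf.showPage()
        pdf.save()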

  • Usually the best way to handle long running tasks is to have them in their own queue and a worker dedicated to that queue. – user2097159 Oct 05 '15 at 19:53
  • Do you mean having a single queue and a single worker that will process all the long-running tasks? If so, all my tasks are long-running; will splitting the PDF tasks by page count (100-page, 250-page and 500-page tasks) and routing each category to its own queue solve my problem? – venkatesh python Oct 06 '15 at 05:16
  • Yes, that should solve your issue; a routing sketch along those lines is shown below. – user2097159 Oct 06 '15 at 11:46
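
Following the suggestion in the comments, a minimal sketch of what per-category routing could look like in the Django settings; the queue names, task module paths, and per-node options are illustrative assumptions, not settings from my project:

    # settings.py -- one queue per PDF size category (names are examples)
    from kombu import Exchange, Queue

    CELERY_QUEUES = (
        Queue('pdf_100', Exchange('pdf_100'), routing_key='pdf_100'),
        Queue('pdf_250', Exchange('pdf_250'), routing_key='pdf_250'),
        Queue('pdf_500', Exchange('pdf_500'), routing_key='pdf_500'),
    )

    CELERY_ROUTES = {
        'myapp.tasks.generate_pdf_100': {'queue': 'pdf_100'},
        'myapp.tasks.generate_pdf_250': {'queue': 'pdf_250'},
        'myapp.tasks.generate_pdf_500': {'queue': 'pdf_500'},
    }

Each node can then be pinned to one of those queues with its own concurrency, for example: python manage.py celeryd_multi restart n1 n2 n3 -l info -f celery.log -Q:n1 pdf_100 -Q:n2 pdf_250 -Q:n3 pdf_500 -c 5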
