
I have configured celery to run 2 workers, each with a concurrency of 1. My /etc/default/celeryd file contains (amongst other settings):

CELERYD_NODES="worker1 worker2"
CELERYD_OPTS="-Q:worker1 central -c:worker1 1 -Q:worker2 RetailSpider -c:worker2 1"

In other words, I expect 2 workers, and since concurrency is 1, one process per worker: one worker consumes from the queue 'central' and the other from the queue 'RetailSpider'.
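For reference, the generic celeryd init script passes these settings to celery multi, so the two nodes should be started roughly like this (a sketch; the exact command depends on the init script version and on the other CELERYD_* settings, and --app comes from elsewhere in the config):

```shell
# Approximate expansion of CELERYD_NODES/CELERYD_OPTS by the init script.
# The -Q:worker1 / -c:worker1 forms are celery multi's per-node option syntax.
celery multi start worker1 worker2 \
    --app=evofrontend --loglevel=INFO \
    -Q:worker1 central -c:worker1 1 \
    -Q:worker2 RetailSpider -c:worker2 1
```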

Also sudo service celeryd status shows:

celery init v10.1.
Using config script: /etc/default/celeryd
celeryd (node worker1) (pid 46610) is up...
celeryd (node worker2) (pid 46621) is up...

However, what is puzzling me is the output of ps aux | grep 'celery worker', which is:

scraper  34384  0.0  1.0 348780 77780 ?        S    13:07   0:00 /opt/scraper/evo-scrape/venv/bin/python -m celery worker --app=evofrontend --loglevel=INFO -Q central -c 1 --logfile=/opt/scraper/evo-scrape/evofrontend/logs/celery/worker1.log --pidfile=/opt/scraper/evo-scrape/evofrontend/run/celery/worker1.pid --hostname=worker1@scraping0-evo
scraper  34388  0.0  1.0 348828 77884 ?        S    13:07   0:00 /opt/scraper/evo-scrape/venv/bin/python -m celery worker --app=evofrontend --loglevel=INFO -Q RetailSpider -c 1 --logfile=/opt/scraper/evo-scrape/evofrontend/logs/celery/worker2.log --pidfile=/opt/scraper/evo-scrape/evofrontend/run/celery/worker2.pid --hostname=worker2@scraping0-evo
scraper  46610  0.1  1.2 348780 87552 ?        Sl   Apr26   1:55 /opt/scraper/evo-scrape/venv/bin/python -m celery worker --app=evofrontend --loglevel=INFO -Q central -c 1 --logfile=/opt/scraper/evo-scrape/evofrontend/logs/celery/worker1.log --pidfile=/opt/scraper/evo-scrape/evofrontend/run/celery/worker1.pid --hostname=worker1@scraping0-evo
scraper  46621  0.1  1.2 348828 87920 ?        Sl   Apr26   1:53 /opt/scraper/evo-scrape/venv/bin/python -m celery worker --app=evofrontend --loglevel=INFO -Q RetailSpider -c 1 --logfile=/opt/scraper/evo-scrape/evofrontend/logs/celery/worker2.log --pidfile=/opt/scraper/evo-scrape/evofrontend/run/celery/worker2.pid --hostname=worker2@scraping0-evo

What are the additional 2 processes - the ones with PIDs 34384 and 34388?

(This is a Django project)
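One way to investigate would be to look at the parent PIDs: if the two extra processes turn out to be children of the two that celeryd status reports, they would be the pool's child processes rather than independent workers - with the default prefork pool, each node typically runs one parent (consumer) process plus one child per concurrency slot. A diagnostic sketch (output depends on the running system):

```shell
# List celery worker processes with PID, parent PID, and command line.
# The [c]elery pattern keeps grep from matching its own process.
ps -eo pid,ppid,cmd | grep '[c]elery worker'
```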

EDIT:

I wonder if this is somehow related to the fact that celery by default launches as many concurrent worker processes as there are CPUs/cores available. This machine has 2 cores, hence 2 per worker. However, I would have expected the -c:worker1 1 and -c:worker2 1 options to override that.

I added --concurrency=1 to CELERYD_OPTS and also CELERYD_CONCURRENCY = 1 to settings.py. I then killed all processes and restarted celeryd, yet I still saw 4 processes (2 per worker).
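The concurrency actually in effect could also be checked from Celery itself rather than from ps (a sketch; it requires the broker to be reachable and assumes the app name shown in the ps output above):

```shell
# Ask each running node for its stats; the "pool" section of the output
# includes "max-concurrency", i.e. the pool size the node is using.
celery --app=evofrontend inspect stats
```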

fpghost
  • what if you try one process? what if you try three? what if you kill the service? what if you reboot? A lot can be investigated without knowing what's what, or without even knowing what you're talking about (as is my case). – Dan Rosenstark Apr 27 '17 at 18:09
  • Did you upgrade to a newer version of Celery while you still had workers daemonized under a previous version? – John Moutafis Apr 27 '17 at 18:33
  • @DanRosenstark if I kill the 2 processes that seem additional (i.e. not the 2 reported by 'celeryd status') then they are immediately respawned. If I kill the principal 2 then all 4 are dead. – fpghost Apr 27 '17 at 18:35
  • @JohnMoutafis It is possible. However, I've tried killing all celery processes and then stopping/restarting – fpghost Apr 27 '17 at 18:35
  • Try concurrency 0? – Dan Rosenstark Apr 27 '17 at 19:07
  • @DanRosenstark I don't think zero is valid? – fpghost Apr 27 '17 at 20:22

0 Answers