All, I'm attempting to 'force' RQ workers to run concurrently using supervisord. My supervisord setup seems to work fine: rq-dashboard shows 3 workers, 3 PIDs, and 3 queues (one per worker/PID). The supervisord config is as follows (showing only worker 1; two more workers are defined below it):
[program:rqworker1]
command = rqworker 1
process_name = rqworker1-%(process_num)s
numprocs = 1
user = username
autostart = True
stdout_logfile=/tmp/rqworker1.log
stdout_logfile_maxbytes=50MB
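For completeness, the other two workers are assumed to follow the same pattern, with only the program name, queue number, and log file changing:

```ini
[program:rqworker2]
command = rqworker 2
process_name = rqworker2-%(process_num)s
numprocs = 1
user = username
autostart = True
stdout_logfile=/tmp/rqworker2.log
stdout_logfile_maxbytes=50MB

; [program:rqworker3] is defined identically, listening on queue 3
```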
The issue is that when I send 3 jobs concurrently, the total runtime is 3x that of a single task (i.e., total time is linear in the number of tasks; this scales to 4x, 5x, etc.). It seems no concurrency is happening. I also implemented primitive load balancing by sending each new job to the queue with the fewest started + queued jobs, and that works fine (jobs are observed to be spread evenly among the queues).
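The load balancing described above boils down to a min-by-load selection. Here is a minimal sketch of that logic; the queue names and the load figures are assumptions for illustration (with RQ, the queued count comes from `Queue.count` and the started count from `rq.registry.StartedJobRegistry(...).count`):

```python
def pick_queue(loads):
    """Given a mapping of queue name -> (started, queued) job counts,
    return the name of the queue with the smallest total load."""
    return min(loads, key=lambda name: sum(loads[name]))

# Hypothetical snapshot: queue "2" has the fewest started + queued jobs,
# so the next job would be enqueued there.
target = pick_queue({"1": (1, 2), "2": (0, 1), "3": (1, 1)})
print(target)  # -> "2"
```

In the real setup, the chosen name would then be used as `Queue(target, connection=redis_conn).enqueue(...)`.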
Why would this setup not allow concurrency?
Are there any considerations regarding the setup that I'm missing?
Note that the rq-gevent-worker package (which earlier worked great w.r.t. concurrency with RQ) is no longer an option, as I migrated to PY3 and gevent itself does not yet support PY3. But this gives me a clue that concurrency is possible.