I currently have multiple python-rq workers executing jobs from a queue in parallel, and each job also uses the Python multiprocessing module.
The code that enqueues a job is simply this:
from redis import Redis
from rq import Queue

q = Queue('calculate', connection=Redis())
job = q.enqueue(calculateJob, someArgs)  # someArgs is defined elsewhere
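For context, each rq worker is a separate OS process that picks up and runs one job at a time, started with rq worker calculate or its programmatic equivalent, sketched below:

from redis import Redis
from rq import Worker

# each worker process pulls one job at a time from the 'calculate' queue
worker = Worker(['calculate'], connection=Redis())
worker.work()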
And calculateJob is defined as follows:
import multiprocessing as mp
from functools import partial

def calculateJob(someArgs):
    # Pool() with no argument starts one worker process per CPU core
    with mp.Pool() as pool:
        func = partial(someFunc, someArgs=someArgs)
        # workItems is a placeholder for whatever iterable the job maps over
        result = pool.map(func, workItems)
    return result

def someFunc(workItem, someArgs):
    # do something
    return output
So presumably, when a job is being processed, mp.Pool() automatically utilizes all cores for that job. How does another worker processing a second job in parallel execute its job if the first job is already utilizing all the cores?
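For reference, mp.Pool() with no argument starts mp.cpu_count() worker processes, so two jobs running at once would together spawn twice as many processes as there are cores. Below is a minimal sketch of capping the pool size per job instead; NUM_RQ_WORKERS is a hypothetical count of my parallel workers, not something rq provides:

import multiprocessing as mp

NUM_RQ_WORKERS = 4  # hypothetical: how many rq workers run on this machine

def boundedPool():
    # split the cores evenly instead of letting every job claim all of them
    return mp.Pool(processes=max(1, mp.cpu_count() // NUM_RQ_WORKERS))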