
I expected that an apscheduler.executors.pool.ProcessPoolExecutor with the max_workers argument set to 1 would not execute more than one job in parallel.

import subprocess

from apscheduler.executors.pool import ProcessPoolExecutor
from apscheduler.schedulers.blocking import BlockingScheduler


def run_job():
    subprocess.check_call('echo start; sleep 3; echo done', shell=True)

scheduler = BlockingScheduler(
        executors={'processpool': ProcessPoolExecutor(max_workers=1)})

for i in range(20):
    scheduler.add_job(run_job)
scheduler.start()

However, up to ten jobs are actually executed in parallel.

Do I misunderstand the concept or is this a bug?

celerimo

1 Answer


The reason this isn't working as expected is that you're not specifying which executor you want to run the job in, so the jobs go to the scheduler's default executor, which is a thread pool that allows up to ten concurrent jobs.

Try this instead:

for i in range(20):
    scheduler.add_job(run_job, executor='processpool')
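
The underlying behavior, that a pool with max_workers=1 serializes its jobs, can be demonstrated without APScheduler using the standard library's concurrent.futures.ProcessPoolExecutor (a sketch, not APScheduler itself; the job function and timings below are made up for illustration):

    import time
    from concurrent.futures import ProcessPoolExecutor

    def job(i):
        # Record when the job starts and finishes so overlap can be checked.
        start = time.monotonic()
        time.sleep(0.2)
        return (start, time.monotonic())

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=1) as pool:
            spans = list(pool.map(job, range(5)))
        # With a single worker, no job starts before the previous one ends.
        overlaps = sum(1 for a, b in zip(spans, spans[1:]) if b[0] < a[1])
        print(overlaps)

This prints 0: the single worker runs the five jobs strictly one after another. Raising max_workers would allow overlapping spans.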
Alex Grönholm
  • Thank you very much for your help! Is it possible to prevent the Scheduler from adding the default executor? – celerimo Dec 21 '15 at 18:41
  • 2
    No, nor should it be. But you could define your own executor as "default". – Alex Grönholm Dec 21 '15 at 18:43
  • Would you care to elaborate on this a little bit? Obviously I'm misunderstanding something about the concept of apscheduler. – celerimo Dec 23 '15 at 10:43
  • 2
    You named your executor "processpool" but if you name it "default" it will be used by default without having to explicitly specify the name in add_job(). – Alex Grönholm Dec 24 '15 at 11:36
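
Putting the comments together, a sketch of the question's script with the process pool registered under the name "default" (assuming the same run_job function as in the question), so add_job() needs no executor argument:

    from apscheduler.executors.pool import ProcessPoolExecutor
    from apscheduler.schedulers.blocking import BlockingScheduler

    # Registering the pool as 'default' replaces the built-in default
    # thread pool, so every job runs in this single-worker process pool.
    scheduler = BlockingScheduler(
            executors={'default': ProcessPoolExecutor(max_workers=1)})

    for i in range(20):
        scheduler.add_job(run_job)  # uses the 'default' executor
    scheduler.start()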