
I am using the scipy.optimize.differential_evolution function inside a Docker container, with the parameter workers=-1 passed to it. This enables parallel evaluation on all available CPUs using Python's multiprocessing module (it uses a Pool). Looking at the SciPy source code, when workers=-1 is supplied, multiprocessing.Pool() is called without any arguments behind the scenes, and thus the maximum number of CPU cores available is used.

A message is printed every iteration. When running the container, the first few iterations appear to work as normal. However, at some point the Python script appears to get stuck: no new output is printed (and the fans on my computer turn off...). This happens without any errors being printed.

Note that when I remove the workers parameter or change it to, say, 4, everything works as expected and the execution doesn't get stuck. (My computer has 8 cores.)

I have tried increasing the memory allocated to Docker, but this hasn't helped unfortunately.
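One thing I checked while debugging: inside a container, os.cpu_count() (which multiprocessing.Pool() uses to size the pool) reports the host's core count, even when the container is pinned to fewer CPUs (e.g. via --cpuset-cpus), so workers=-1 may start more workers than the container can actually run. A quick way to compare the two numbers (os.sched_getaffinity is Linux-only):

```python
import os

# Pool() sizes itself from os.cpu_count(), which reports host cores
print("os.cpu_count():", os.cpu_count())

# CPUs this process may actually run on (respects cpuset pinning)
print("sched_getaffinity:", len(os.sched_getaffinity(0)))
```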

Please let me know if there is something I should expand upon. Thanks

Marnix.hoh
  • for me it was a memory issue, and yet, I was expecting to see some error: https://stackoverflow.com/questions/75784038/python-multiprocessing-application-is-getting-stuck-in-docker-container – ItayB Mar 20 '23 at 08:04

0 Answers