
If I create a Pool with an unacceptably high number of processes in the Python interpreter, it obviously errors out; however, the forked processes don't seem to be cleaned up before the error is raised, leaving the environment dirty and the rest of the system unable to fork processes.

>>> from multiprocessing import Pool
>>> p = Pool(1000)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/__init__.py", line 232, in Pool
    return Pool(processes, initializer, initargs, maxtasksperchild)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/pool.py", line 159, in __init__
    self._repopulate_pool()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/pool.py", line 222, in _repopulate_pool
    w.start()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/process.py", line 130, in start
    self._popen = Popen(self)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/forking.py", line 121, in __init__
    self.pid = os.fork()
OSError: [Errno 35] Resource temporarily unavailable

>>> p = Pool(1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/__init__.py", line 232, in Pool
    return Pool(processes, initializer, initargs, maxtasksperchild)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/pool.py", line 159, in __init__
    self._repopulate_pool()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/pool.py", line 222, in _repopulate_pool
    w.start()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/process.py", line 130, in start
    self._popen = Popen(self)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/forking.py", line 121, in __init__
    self.pid = os.fork()
OSError: [Errno 35] Resource temporarily unavailable

Is there some way to avoid/remedy this, or is it considered a bug?

Dustin Oprea
  • p.close() ? (SO say that I have to add more characters, but I got nothing more to say...) – Guy Gavriely Nov 19 '13 at 16:05
  • your os? (me too, has nothing more to ask :) – alko Nov 19 '13 at 16:07
  • Since the call results in an exception, p obviously doesn't exist. It's OS X, but that shouldn't matter. Unfortunately, I can't test on any other system because their process limits are so high that I'll starve other resources before the process limit hits. – Dustin Oprea Nov 19 '13 at 17:12
  • You need to kill the previous sessions. check [here](https://stackoverflow.com/questions/18428750/kill-python-interpeter-in-linux-from-the-terminal#18428853). – Reihan_amn Jul 09 '18 at 01:30

3 Answers

2

Is there some way to avoid/remedy this,

Don't do that.

or is it considered a bug?

Yes, in the sense that all resources allocated should be de-allocated if the initializer fails. Check the specific build of 2.7 that you are using and see whether any multiprocessing-specific bugs were fixed in later releases (2.7.6 release notes: http://hg.python.org/cpython/raw-file/99d03261c1ba/Misc/NEWS).

I'm assuming that your platform is OS X, based on the paths in the stack trace. Here is a post on errno 35 (which appears to be EAGAIN on OS X) when forking: I can't run more than 100 processes

Whatever it is that you're trying to accomplish, it seems that you need to incorporate a limit on resource usage at the application level. That means you might need to rethink your solution. With your present solution and with the bug fixed, you'll still likely see the resource limit hit system-wide in other contexts.
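As a rough sketch of such an application-level cap (the helper name and the cpu_count() default are illustrative, not from the original post):

```python
import multiprocessing

def make_pool(requested, hard_cap=None):
    """Create a Pool, but never with more workers than hard_cap.

    cpu_count() is used here as an illustrative default cap; pick a
    limit that suits your application and system.
    """
    if hard_cap is None:
        hard_cap = multiprocessing.cpu_count()
    return multiprocessing.Pool(min(requested, hard_cap))
```

A request for 1000 workers then quietly becomes a pool of, say, 8, instead of exhausting the system's process table.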

Jeremy Brown
  • Thanks. I'm not in a solution. I'm just creating a pool at the console, and having to restart the interpreter when I get such an error. – Dustin Oprea Nov 19 '13 at 16:38
  • It sounds like you have a perfectly fine grasp on the problem, its cause, and the remedy then. The multiprocessing pool class is pure Python. You could always try patching it with cleanup code and even submit it upstream. – Jeremy Brown Nov 19 '13 at 19:14
  • I agree. Thanks, Jeremy. – Dustin Oprea Nov 19 '13 at 19:29
2

I faced the same issue and was able to fix it as per Dustin's comment.

Ticket : http://bugs.python.org/issue19675

I'm using Python 2.7.8 on OS X Mavericks.

Mit Mehta
  • Thanks. Think about commenting on the actual bug, to give it momentum. – Dustin Oprea Nov 21 '14 at 15:39
  • I'm new to Python and a little confused here. If I update my pool.py file according to this patch file http://bugs.python.org/file32742/pool.py.patch_2.7.6_20131120-1959 it solves the problem. So doesn't that mean the bug is solved and future iterations of Python will get the updated pool.py file? Also, why does the ticket say open when a patch file exists? – Mit Mehta Nov 21 '14 at 16:21
  • Hmm, any follow up on this? Interesting to see that a ticket opened up 2.5 years ago is still open :/ – Brent Hronik Mar 30 '16 at 00:40
0

In my case, setting "ulimit -n 2048" in the terminal you are going to run the function from solved the problem. The number 2048 could be higher.
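For reference, the same limits can be inspected (and the soft limit raised, up to the hard limit) from Python via the Unix-only resource module. Note that ulimit -n maps to RLIMIT_NOFILE (open file descriptors, which Pool workers consume via their pipes), while the EAGAIN from fork() is often governed by the process limit, ulimit -u / RLIMIT_NPROC:

```python
import resource

# ulimit -n == RLIMIT_NOFILE (open file descriptors);
# ulimit -u == RLIMIT_NPROC (max processes) -- the limit fork() usually
# hits with EAGAIN. Each Pool worker costs both a process and some pipes.
soft_nproc, hard_nproc = resource.getrlimit(resource.RLIMIT_NPROC)
print("processes:  soft=%s hard=%s" % (soft_nproc, hard_nproc))

soft_nofile, hard_nofile = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open files: soft=%s hard=%s" % (soft_nofile, hard_nofile))

# Raise the soft file-descriptor limit, staying within the hard limit
# (only root may raise the hard limit itself).
target = 2048
if hard_nofile != resource.RLIM_INFINITY:
    target = min(target, hard_nofile)
resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard_nofile))
```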