
There are cases where it seems the Dask cluster hangs upon restart.

To simulate this I have written this stupid code:

import contextlib2
from distributed import Client, LocalCluster

for i in xrange(100):
    print i
    with contextlib2.ExitStack() as es:
        # start a fresh cluster of 4 worker processes on every iteration
        cluster = LocalCluster(processes=True, n_workers=4)
        client = Client(cluster)
        # callbacks run in reverse order on exit: cluster.close, then client.close
        es.callback(client.close)
        es.callback(cluster.close)

This code never completes the loop; I get this error:

 raise_exc_info(self._exc_info)
  File "//anaconda/lib/python2.7/site-packages/tornado/gen.py", line 1141, in run
    yielded = self.gen.throw(*exc_info)
  File "//anaconda/lib/python2.7/site-packages/distributed/deploy/local.py", line 191, in _start
    yield [self._start_worker(**self.worker_kwargs) for i in range(n_workers)]
  File "//anaconda/lib/python2.7/site-packages/tornado/gen.py", line 1133, in run
    value = future.result()
  File "//anaconda/lib/python2.7/site-packages/tornado/concurrent.py", line 269, in result
    raise_exc_info(self._exc_info)
  File "//anaconda/lib/python2.7/site-packages/tornado/gen.py", line 883, in callback
    result_list.append(f.result())
  File "//anaconda/lib/python2.7/site-packages/tornado/concurrent.py", line 269, in result
    raise_exc_info(self._exc_info)
  File "//anaconda/lib/python2.7/site-packages/tornado/gen.py", line 1147, in run
    yielded = self.gen.send(value)
  File "//anaconda/lib/python2.7/site-packages/distributed/deploy/local.py", line 217, in _start_worker
    raise gen.TimeoutError("Worker failed to start")

I'm using dask distributed 1.25.1 and Python 2.7 running on macOS.


1 Answer


This is a problem in Dask: when using Python 2.7 on Unix (Linux or macOS), the only method multiprocessing has for starting a new worker process is fork.

fork, in turn, may cause a deadlock; for details see the open Dask ticket: https://github.com/dask/distributed/issues/2446
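One way to sidestep the fork path entirely (a minimal sketch, not something taken from the ticket, and assuming thread-based workers are acceptable for your workload) is to run the workers in-process with processes=False:

import contextlib2
from distributed import Client, LocalCluster

for i in xrange(100):
    print i
    with contextlib2.ExitStack() as es:
        # processes=False keeps workers as threads in this process, so no fork occurs
        cluster = LocalCluster(processes=False, n_workers=4)
        client = Client(cluster)
        es.callback(client.close)
        es.callback(cluster.close)

With processes=False the restart loop should not deadlock, since no child processes are forked; the trade-off is that CPU-bound Python code will be limited by the GIL.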
