I am trying to run Dask on a research cluster managed by SLURM.
Launching a job with a classical sbatch script works.
But when I run:
from dask_jobqueue import SLURMCluster
cluster = SLURMCluster(cores=12, memory='24 GB', processes=1, interface='ib0')
cluster.scale(1)
the last step prints:
No handlers could be found for logger "dask_jobqueue.core"
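As I understand it, this message is Python 2's way of saying a log record was emitted but no handler is configured, so whatever dask_jobqueue.core is actually trying to report gets swallowed. Configuring a handler before creating the cluster should surface it; a minimal sketch (the DEBUG level is my choice, not something the library requires):

import logging

# Attach a root handler so records from dask_jobqueue.core are printed
# instead of triggering the "No handlers could be found" message
logging.basicConfig(level=logging.DEBUG)

from dask_jobqueue import SLURMCluster
cluster = SLURMCluster(cores=12, memory='24 GB', processes=1, interface='ib0')
cluster.scale(1)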
When I run squeue, no jobs appear.
All of the dask-jobqueue tests pass, and LocalCluster() works on the login node.
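To check what dask-jobqueue would actually submit, I can also print the generated submission script and compare it against my working sbatch script; a quick sketch, assuming the job_script() method is available in this version:

# Show the sbatch script that SLURMCluster generates for a worker,
# to compare against the classical script that is known to work
print(cluster.job_script())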
These are the package versions, with Python 2.7:
dask 0.18.2 py_0 conda-forge
dask-core 0.18.2 py_0 conda-forge
dask-jobqueue 0.3.0 py_0 conda-forge
distributed 1.22.0 py27_0 conda-forge
Any clue where to look?