System info: CentOS, Python 3.5.2, 64 cores, 96 GB RAM
I'm trying to load a large array (50 GB) from an HDF5 file into RAM (96 GB). Each chunk is around 1.5 GB, less than the worker memory limit. It never completes: workers sometimes crash or restart, and I don't see memory usage increasing on the web dashboard or any tasks being executed.
Should this work or am I missing something obvious here?
import dask.array as da
import h5py
from dask.distributed import LocalCluster, Client
from matplotlib import pyplot as plt

# Start a local cluster with one worker per core and connect a client
lc = LocalCluster(n_workers=64)
c = Client(lc)

# Open the HDF5 dataset (reads lazily)
f = h5py.File('50GB.h5', 'r')
data = f['data']  # data.shape = (2000000, 1000)

# Wrap the dataset in a dask array, chunked along the second axis
x = da.from_array(data, chunks=(2000000, 100))
x = c.persist(x)  # load all chunks into distributed memory
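
For reference, here is the rough arithmetic behind the "around 1.5 GB per chunk" figure, as a minimal sketch; the per-worker number assumes the default memory_limit simply splits the 96 GB evenly across the 64 workers (compatible with Python 3.5, so no f-strings):

import h5py

# Sanity check: compare one chunk against a worker's memory budget.
# The 96 GB and 64 workers come from the setup above.
f = h5py.File('50GB.h5', 'r')
data = f['data']

chunk_bytes = data.dtype.itemsize * 2000000 * 100  # one (2000000, 100) chunk
per_worker_bytes = 96e9 / 64  # assuming RAM is divided evenly across workers

print("chunk size: {:.2f} GB".format(chunk_bytes / 1e9))
print("per worker: {:.2f} GB".format(per_worker_bytes / 1e9))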