
I'm new to using Dask but have experienced painfully slow performance when attempting to re-write native sklearn functions in Dask. I've simplified the use-case as much as possible in the hope of getting some help.

Using standard sklearn/numpy/pandas etc I have the following:

import pandas as pd
from sklearn import linear_model

df = pd.read_csv(location, index_col=False)  # A ~75MB CSV
# Build feature list (features) and dependent variable (dependent); code omitted as irrelevant

model = linear_model.Lasso(alpha=0.1, normalize=False, max_iter=100, tol=Tol)
model.fit(features.values, dependent)
print(model.coef_)
print(model.intercept_)

This takes a few seconds to compute. I then have the following in Dask:

# Read in the CSV and prepare the parameters as before, but with Dask arrays/dataframes instead

import joblib
from dask_glm.estimators import LinearRegression

with joblib.parallel_backend('dask'):
    # Coerce data
    X = self.features.to_dask_array(lengths=True)
    y = self.dependents

    # Build regression
    lr = LinearRegression(fit_intercept=True, solver='admm', tol=self.tolerance,
                          regularizer='l1', max_iter=100, lamduh=0.1)
    lr.fit(X, y)

    print(lr.coef_)
    print(lr.intercept_)

This takes ages to compute (about 30 minutes). I only have one Dask worker in my development cluster, but it has 16 GB of RAM and unbounded CPU.

Has anyone any idea why this is so slow?

Hopefully my code omissions aren't significant!

NB: Before anyone asks why even use Dask for something this small - this is the simplest use-case, intended as a proof-of-concept exercise to check that things would function as expected.

Sykomaniac
    You are comparing two completely different algorithms (hint: Coordinate descent/first-order vs. Newton/second-order=hessian-opt). – sascha Nov 15 '18 at 13:52
  • @sascha Sorry, that was supposed to read admm - although what you said may still be true! It's left over from me trying to figure out the speed. – Sykomaniac Nov 15 '18 at 14:03
  • In addition to the above (different algorithms), are you getting burned on IPC overhead? – shadowtalker Nov 15 '18 at 14:09

1 Answer


A quote from the documentation you may want to consider:

For large arguments that are used by multiple tasks, it may be more efficient to pre-scatter the data to every worker, rather than serializing it once for every task. This can be done using the scatter keyword argument, which takes an iterable of objects to send to each worker.
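For example, with the joblib Dask backend the large inputs could be scattered up front (a minimal sketch; `X`, `y` and `lr` stand in for the objects from the question):

import joblib
from dask.distributed import Client

client = Client()  # connect to the scheduler of your development cluster

# Pre-scatter the large arrays so each worker receives them once,
# rather than having them serialized into every individual task.
with joblib.parallel_backend('dask', scatter=[X, y]):
    lr.fit(X, y)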

But in general, Dask has a lot of diagnostics available to you, especially the scheduler's dashboard, to help figure out what your workers are doing and how time is being spent - you would do well to investigate it. Other system-wide factors are also very important, as with any computation: how close are you coming to your memory capacity, for instance?
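Getting at the dashboard is straightforward once you have a distributed client (a minimal sketch; substitute your own scheduler address):

from dask.distributed import Client

client = Client()             # or Client('tcp://scheduler-address:8786') for an existing cluster
print(client.dashboard_link)  # open this URL to watch task streams, worker memory and CPU usage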

In general, though, Dask is not magic, and when the data fits comfortably into memory anyway, there will certainly be cases where Dask adds significant overhead. Read the documentation carefully on the intended use of the method you are considering - is it supposed to speed things up, or merely allow you to process more data than would normally fit on your system?
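As a quick sanity check, it can help to time the two fits side by side on the same data (a rough sketch; `model`, `features`, `dependent`, `lr`, `X` and `y` are the objects from the question, and `timed` is a hypothetical helper):

import time

def timed(label, fit):
    # Hypothetical helper: report wall-clock time for a single fit call.
    start = time.perf_counter()
    fit()
    print(f"{label}: {time.perf_counter() - start:.1f}s")

timed("sklearn Lasso (in-memory)", lambda: model.fit(features.values, dependent))
timed("dask_glm ADMM (distributed)", lambda: lr.fit(X, y))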

mdurant