
I am trying to dummy-encode a dask dataframe, train_final[categorical_var]. However, when I run the code I get a memory error. Why does this happen, given that dask is supposed to process the data chunk by chunk?

The code is below:


from dask_ml.preprocessing import DummyEncoder
de = DummyEncoder()
train_final_cat = de.fit_transform(train_final[categorical_var])

The error:

---------------------------------------------------------------------------
MemoryError                               Traceback (most recent call last)
<ipython-input-84-e21592c13279> in <module>
      1 from dask_ml.preprocessing import DummyEncoder
      2 de = DummyEncoder()
----> 3 train_final_cat = de.fit_transform(train_final[categorical_var])

~/env/lib/python3.5/site-packages/sklearn/base.py in fit_transform(self, X, y, **fit_params)
    460         if y is None:
    461             # fit method of arity 1 (unsupervised transformation)
--> 462             return self.fit(X, **fit_params).transform(X)
    463         else:
    464             # fit method of arity 2 (supervised transformation)

~/env/lib/python3.5/site-packages/dask_ml/preprocessing/data.py in fit(self, X, y)
    602 
    603         self.transformed_columns_ = pd.get_dummies(
--> 604             sample, drop_first=self.drop_first
    605         ).columns
    606         return self

~/env/lib/python3.5/site-packages/pandas/core/reshape/reshape.py in get_dummies(data, prefix, prefix_sep, dummy_na, columns, sparse, drop_first, dtype)
    890             dummy = _get_dummies_1d(col[1], prefix=pre, prefix_sep=sep,
    891                                     dummy_na=dummy_na, sparse=sparse,
--> 892                                     drop_first=drop_first, dtype=dtype)
    893             with_dummies.append(dummy)
    894         result = concat(with_dummies, axis=1)

~/env/lib/python3.5/site-packages/pandas/core/reshape/reshape.py in _get_dummies_1d(data, prefix, prefix_sep, dummy_na, sparse, drop_first, dtype)
    978 
    979     else:
--> 980         dummy_mat = np.eye(number_of_cols, dtype=dtype).take(codes, axis=0)
    981 
    982         if not dummy_na:

~/env/lib/python3.5/site-packages/numpy/lib/twodim_base.py in eye(N, M, k, dtype, order)
    184     if M is None:
    185         M = N
--> 186     m = zeros((N, M), dtype=dtype, order=order)
    187     if k >= M:
    188         return m

MemoryError: 

Would anyone be able to give me some direction in this regard?

Thanks

Michael

1 Answer


Encoding dummy variables is a very memory-intensive task, because a new column is created for each unique value of categorical_var. If categorical_var has high cardinality, then even a single chunk can explode in size. Also, creating dummies is not "embarrassingly parallel", so the workers can't simply process each chunk independently: they need to communicate and replicate some data during the computation.
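To see how this scales, here is a small sketch (the 100,000-category figure is an illustrative assumption, not taken from the question). Pandas creates one dummy column per unique value, and the `np.eye(number_of_cols, ...)` call visible in the traceback allocates a square n_categories × n_categories matrix before selecting rows, which is where the MemoryError is raised:

```python
import numpy as np
import pandas as pd

# One dummy column is created per unique category value.
s = pd.Series(["a", "b", "c", "a"], dtype="category")
dummies = pd.get_dummies(s)
print(dummies.shape[1])  # 3 columns, one per unique value

# The traceback shows pandas building np.eye(number_of_cols) and then
# taking rows from it. That identity matrix is n_categories x n_categories,
# so its size grows quadratically with cardinality.
n_categories = 100_000  # hypothetical high-cardinality column
eye_bytes = n_categories ** 2 * np.dtype(np.uint8).itemsize
print(f"np.eye({n_categories}, dtype=uint8) alone needs ~{eye_bytes / 1e9:.0f} GB")
```

So a column with around 100,000 unique values would need roughly 10 GB just for that intermediate identity matrix, before any of the actual dummy output is materialized.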

Aaron Elliot