
I am attempting to take a Dask DataFrame, group by column 'A', and remove the groups that have fewer than MIN_SAMPLE_COUNT rows.

For example, the following code works in pandas:

import pandas as pd

MIN_SAMPLE_COUNT = 1

x = pd.DataFrame([[1,2,3], [1,5,6], [2,8,9], [1,3,5]])
x.columns = ['A', 'B', 'C']

grouped = x.groupby('A')
x = grouped.filter(lambda x: x['A'].count().astype(int) > MIN_SAMPLE_COUNT)

However, in Dask if I try something analogous:

import pandas as pd
import dask.dataframe as dd

MIN_SAMPLE_COUNT = 1

x = pd.DataFrame([[1,2,3], [1,5,6], [2,8,9], [1,3,5]])
x.columns = ['A', 'B', 'C']

x = dd.from_pandas(x, npartitions=2)

grouped = x.groupby('A')
x = grouped.filter(lambda x: x['A'].count().astype(int) > MIN_SAMPLE_COUNT)

I get the following error message:

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\dask\dataframe\groupby.py in __getattr__(self, key)
   1162         try:
-> 1163             return self[key]
   1164         except KeyError as e:

~\AppData\Local\Continuum\anaconda3\lib\site-packages\dask\dataframe\groupby.py in __getitem__(self, key)
   1153         # error is raised from pandas
-> 1154         g._meta = g._meta[key]
   1155         return g

~\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\base.py in __getitem__(self, key)
    274             if key not in self.obj:
--> 275                 raise KeyError("Column not found: {key}".format(key=key))
    276             return self._gotitem(key, ndim=1)

KeyError: 'Column not found: filter'

During handling of the above exception, another exception occurred:

AttributeError                            Traceback (most recent call last)
<ipython-input-55-d8a969cc041b> in <module>()
      1 # Remove sixty second blocks that have fewer than MIN_SAMPLE_COUNT samples.
      2 grouped = dat.groupby('KPI_60_seconds')
----> 3 dat = grouped.filter(lambda x: x['KPI_60_seconds'].count().astype(int) > MIN_SAMPLE_COUNT)

~\AppData\Local\Continuum\anaconda3\lib\site-packages\dask\dataframe\groupby.py in __getattr__(self, key)
   1163             return self[key]
   1164         except KeyError as e:
-> 1165             raise AttributeError(e)
   1166 
   1167     @derived_from(pd.core.groupby.DataFrameGroupBy)

AttributeError: 'Column not found: filter'

The error message suggests that the filter method available on pandas groupby objects has not been implemented in Dask (nor did I find it after searching).

Is there Dask functionality that captures what I am trying to do? I have gone through the Dask API and nothing stood out to me. I am currently using Dask 1.1.1.

Thank you for your help.

user1549

1 Answer


Fairly new to Dask myself. One way to achieve what you are trying to do is as follows:

Dask version: 0.17.3

import pandas as pd
import dask.dataframe as dd

MIN_SAMPLE_COUNT = 1

x = pd.DataFrame([[1,2,3], [1,5,6], [2,8,9], [1,3,5]])
x.columns = ['A', 'B', 'C']
print("x (before):")
print(x)  # still pandas
x = dd.from_pandas(x, npartitions=2)

# Count rows per group and bring the group key back as a column
grouped = x.groupby('A').B.count().reset_index()
grouped = grouped.rename(columns={'B': 'Count'})

# Attach the per-group count to every row, keep the large groups, drop the helper column
y = dd.merge(x, grouped, on=['A'])
y = y[y.Count > MIN_SAMPLE_COUNT]
x = y[['A', 'B', 'C']]
print("x (after):")
print(x.compute())  # needs compute for conversion to pandas df

Output:

x (before):
   A  B  C
0  1  2  3
1  1  5  6
2  2  8  9
3  1  3  5
x (after):
   A  B  C
0  1  2  3
1  1  5  6
1  1  3  5
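
Note that, as the output above shows, the row index after the merge is not the original one. If the number of distinct values in 'A' is small, another option along the same lines is to compute the per-group counts to a small pandas Series with value_counts and then filter with isin. This is only a sketch of that idea; the names ddf, counts, keep and result are just placeholders for this example.

import pandas as pd
import dask.dataframe as dd

MIN_SAMPLE_COUNT = 1

ddf = dd.from_pandas(
    pd.DataFrame([[1, 2, 3], [1, 5, 6], [2, 8, 9], [1, 3, 5]], columns=['A', 'B', 'C']),
    npartitions=2)

counts = ddf['A'].value_counts().compute()      # small pandas Series: rows per value of 'A'
keep = counts[counts > MIN_SAMPLE_COUNT].index  # group keys that pass the threshold
result = ddf[ddf['A'].isin(list(keep))]         # boolean mask, evaluated lazily
print(result.compute())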
dgumo