
What is the fastest way (within the limits of sane pythonicity) to count distinct values, across columns of the same dtype, for each row in a DataFrame?

Details: I have a DataFrame of categorical outcomes by subject (in rows) by day (in columns), similar to something generated by the following.

import numpy as np
import pandas as pd

def genSampleData(custCount, dayCount, discreteChoices):
    """generate example dataset"""
    np.random.seed(123)     
    return pd.concat([
               pd.DataFrame({'custId':np.array(range(1,int(custCount)+1))}),
               pd.DataFrame(
                columns = np.array(['day%d' % x for x in range(1,int(dayCount)+1)]),
                data = np.random.choice(a=np.array(discreteChoices), 
                                        size=(int(custCount), int(dayCount)))    
               )], axis=1)

For example, if the dataset tells us which drink each customer ordered on each visit to a store, I would like to know the count of distinct drinks per customer.

# notional discrete choice outcome          
drinkOptions, drinkIndex = np.unique(['coffee','tea','juice','soda','water'], 
                                     return_inverse=True) 

# integer-coded discrete choice outcomes
d = genSampleData(2,3, drinkIndex)
d
#   custId  day1  day2  day3
#0       1     1     4     1
#1       2     3     2     1

# Count distinct choices per subject -- this is what I want to do efficiently on larger DF
d.iloc[:,1:].apply(lambda x: len(np.unique(x)), axis=1)
#0    2
#1    3

# Note: I have coded the choices as `int` rather than `str` to speed up comparisons.
# To reconstruct the choice names, we could do:
# d.iloc[:,1:] = drinkOptions[d.iloc[:,1:]]

What I have tried: The datasets in this use case will have many more subjects than days (example testDf below), so I have tried to find the most efficient row-wise operation:

testDf = genSampleData(100000,3, drinkIndex)

#---- Original attempts ----
%timeit -n20 testDf.iloc[:,1:].apply(lambda x: x.nunique(), axis=1)
# I didn't wait for this to finish -- something more than 5 seconds per loop
%timeit -n20 testDf.iloc[:,1:].apply(lambda x: len(x.unique()), axis=1)
# Also too slow
%timeit -n20 testDf.iloc[:,1:].apply(lambda x: len(np.unique(x)), axis=1)
#20 loops, best of 3: 2.07 s per loop

To improve on my original attempt, we note that pandas.DataFrame.apply() accepts the argument:

If raw=True the passed function will receive ndarray objects instead. If you are just applying a NumPy reduction function this will achieve much better performance

This did cut the runtime by more than half:

%timeit -n20 testDf.iloc[:,1:].apply(lambda x: len(np.unique(x)), axis=1, raw=True)
#20 loops, best of 3: 721 ms per loop *best so far*

I was surprised that a pure numpy solution, which would seem to be equivalent to the above with raw=True, was actually a bit slower:

%timeit -n20 np.apply_along_axis(lambda x: len(np.unique(x)), axis=1, arr = testDf.iloc[:,1:].values)
#20 loops, best of 3: 1.04 s per loop

Finally, I also tried transposing the data in order to do a column-wise count distinct, which I thought might be more efficient (at least for DataFrame.apply()), but there didn't seem to be a meaningful difference.

%timeit -n20 testDf.iloc[:,1:].T.apply(lambda x: len(np.unique(x)), raw=True)
#20 loops, best of 3: 712 ms per loop *best so far*
%timeit -n20 np.apply_along_axis(lambda x: len(np.unique(x)), axis=0, arr = testDf.iloc[:,1:].values.T)
# 20 loops, best of 3: 1.13 s per loop

So far my best solution is an odd combination of df.apply with raw=True and len(np.unique()), but what else should I try?

C8H10N4O2
  • Is the day count representative? It seems to affect the differences in performances greatly. – ayhan Aug 04 '16 at 15:32
  • @ayhan interesting... the day count is representative for my particular use case but if something else works better for wider datasets that would be worth noting for other users – C8H10N4O2 Aug 04 '16 at 15:42
  • The opposite actually. It seems that comparing each column against the others is much faster when you have a small number of columns. I posted the results as an answer. – ayhan Aug 04 '16 at 15:51

3 Answers


My understanding is that nunique is optimized for large series. Here, you have only 3 days. Comparing each column against the others seems to be faster:

testDf = genSampleData(100000,3, drinkIndex)
days = testDf.columns[1:]

%timeit testDf.iloc[:, 1:].stack().groupby(level=0).nunique()
10 loops, best of 3: 46.8 ms per loop

%timeit pd.melt(testDf, id_vars ='custId').groupby('custId').value.nunique()
10 loops, best of 3: 47.6 ms per loop

%%timeit
testDf['nunique'] = 1
for col1, col2 in zip(days, days[1:]):
    testDf['nunique'] += ~(testDf[[col2]].values == testDf.loc[:, 'day1':col1].values).any(axis=1)
100 loops, best of 3: 3.83 ms per loop

It loses its edge when you add more columns, of course. For different numbers of columns (timings in the same order: stack().groupby(), pd.melt().groupby(), and the loop):

10 columns: 143ms, 161ms, 30.9ms
50 columns: 749ms, 968ms, 635ms
100 columns: 1.52s, 2.11s, 2.33s
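
A minimal sketch of how timings like these could be reproduced across column counts (illustrative only, not the harness actually used; it assumes the question's genSampleData and drinkIndex are in scope, and count_distinct_loop is a made-up helper name):

import timeit

def count_distinct_loop(df, days):
    """Per-row distinct counts via the pairwise column-comparison loop above."""
    out = pd.Series(1, index=df.index)
    for col1, col2 in zip(days, days[1:]):
        out += ~(df[[col2]].values == df.loc[:, days[0]:col1].values).any(axis=1)
    return out

for dayCount in (3, 10, 50, 100):
    df = genSampleData(100000, dayCount, drinkIndex)
    days = df.columns[1:]
    t_stack = timeit.timeit(lambda: df.iloc[:, 1:].stack().groupby(level=0).nunique(), number=3)
    t_melt = timeit.timeit(lambda: pd.melt(df, id_vars='custId').groupby('custId').value.nunique(), number=3)
    t_loop = timeit.timeit(lambda: count_distinct_loop(df, days), number=3)
    print('%3d columns: %.3fs  %.3fs  %.3fs' % (dayCount, t_stack / 3, t_melt / 3, t_loop / 3))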
ayhan
  • Yes, since that loop runs only a few times. I added the timings for higher number of columns too. – ayhan Aug 04 '16 at 16:01
  • great answer! +1, I'm still spinning my wheels insisting on a better solution. – piRSquared Aug 04 '16 at 16:17
  • @piRSquared I also feel like I can get rid of the loop with some kind of broadcasting but I'm not sure about the memory. But it definitely seems like a job that should take less time to complete. – ayhan Aug 04 '16 at 16:27 (a rough sketch of what that could look like follows these comments)
  • This solution counts `nan` values as distinct, since `np.nan==np.nan` is `False`. If we did not want to count `nan`, i.e., to count distinct non-missing values, we could tack on a `- np.sum(np.isnan(df),axis=1)` to the returned value. – C8H10N4O2 Aug 26 '16 at 15:31

pandas.melt with DataFrame.groupby and groupby.SeriesGroupBy.nunique seems to blow the other solutions away:

%timeit -n20 pd.melt(testDf, id_vars ='custId').groupby('custId').value.nunique()
#20 loops, best of 3: 67.3 ms per loop
C8H10N4O2
  • Maybe a caveat here that this is grouping by a column value rather than the row number. If you didn't have a unique row id variable, you might want to factor in the (small) cost of creating one for a fair benchmark. – C8H10N4O2 Aug 04 '16 at 14:26
  • In all your other solutions, the `apply` loop was happening in python - here `Groupby.nunique` uses some tricks (see [here](https://github.com/pydata/pandas/blob/master/pandas/core/groupby.py#L2896)) to do it all with vectorized operations. – chrisb Aug 04 '16 at 14:28 (an illustrative sketch of that idea follows these comments)
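
A rough sketch of that kind of vectorized trick, as an assumption about the general idea rather than pandas' actual implementation: sort each row's values, then count the positions where either the row or the value changes.

vals = testDf.iloc[:, 1:].values        # (n_rows, n_days) integer codes
n, d = vals.shape
ids = np.repeat(np.arange(n), d)        # row label for every cell
flat = vals.ravel()
order = np.lexsort((flat, ids))         # sort by row, then by value within each row
ids_s, flat_s = ids[order], flat[order]
is_new = np.ones(ids_s.size, dtype=bool)
is_new[1:] = (ids_s[1:] != ids_s[:-1]) | (flat_s[1:] != flat_s[:-1])
nunique = np.bincount(ids_s[is_new], minlength=n)   # distinct values per row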

You don't need custId. I'd stack, then groupby:

testDf.iloc[:, 1:].stack().groupby(level=0).nunique()

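As an illustration on the question's small frame d (grouping on row index level 0), this returns the same per-row counts as the question's apply:

d.iloc[:, 1:].stack().groupby(level=0).nunique()
#0    2
#1    3
#dtype: int64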

piRSquared