
I have groups of values in the data, and within each group I would like to check whether any value is below 8. If this condition is met, the entire group should be removed from the data set.

Please note that the value I'm referring to lies in a different column from the groupings column.

Example Input:

Groups Count
  1      7
  1      11
  1      9 
  2      12
  2      15
  2      21 

Output:

Groups Count
  2      12
  2      15
  2      21 
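
For reference, a minimal snippet (assuming pandas and that the columns are literally named Groups and Count) that reproduces the example input:

import pandas as pd

# Example input from the question
df = pd.DataFrame({'Groups': [1, 1, 1, 2, 2, 2],
                   'Count': [7, 11, 9, 12, 15, 21]})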

3 Answers

Based on what you described in the question, as long as there is at least one value below 8 within a group, that group should be dropped. Equivalently, as long as the minimum value within a group is below 8, that group should be dropped.

By using the filter feature, the actual code can be reduced to a single line; see Filtration in the pandas groupby documentation. You may use the following code:

# Keep only groups whose minimum Count is at least 8,
# i.e. drop any group that contains a value below 8
dfnew = df.groupby('Groups').filter(lambda x: x['Count'].min() >= 8)
dfnew.reset_index(drop=True, inplace=True)  # reset index
dfnew = dfnew[['Groups', 'Count']]          # rearrange the column sequence
print(dfnew)

Output:
   Groups  Count
0       2     12
1       2     15
2       2     21
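
Equivalently (a small variation, not part of the original answer), the keep condition can be written with any() so that it mirrors the wording of the question directly:

# Keep a group only if it contains no Count below 8
dfnew = df.groupby('Groups').filter(lambda x: not (x['Count'] < 8).any())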

You can use isin, loc and unique to select a subset with an inverted mask, and finally reset_index:

print(df)

  Groups  Count
0       1      7
1       1     11
2       1      9
3       2     12
4       2     15
5       2     21

print(df.loc[df['Count'] < 8, 'Groups'].unique())
[1]

print(~df['Groups'].isin(df.loc[df['Count'] < 8, 'Groups'].unique()))

0    False
1    False
2    False
3     True
4     True
5     True
Name: Groups, dtype: bool

df1 = df[~df['Groups'].isin(df.loc[df['Count'] < 8, 'Groups'].unique())]
print(df1.reset_index(drop=True))

   Groups  Count
0       2     12
1       2     15
2       2     21
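
As a side note (my addition, not from the answer itself), the .unique() call is not strictly required for correctness, since isin also accepts a Series with duplicated values; it mostly just keeps the intermediate result small:

bad = df.loc[df['Count'] < 8, 'Groups']   # group labels with a Count below 8 (may repeat)
df1 = df[~df['Groups'].isin(bad)].reset_index(drop=True)
print(df1)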

Create a Boolean Series with your condition, then use groupby + transform('any') to form a mask aligned with the original DataFrame. This allows you to simply slice the original DataFrame.

df[~df.Count.lt(8).groupby(df.Groups).transform('any')]
#   Groups  Count
#3       2     12
#4       2     15
#5       2     21
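
To see the intermediate step (a small illustration, not in the original answer), the transformed mask on the example data should look roughly like this:

mask = df['Count'].lt(8).groupby(df['Groups']).transform('any')
print(mask)
#0     True
#1     True
#2     True
#3    False
#4    False
#5    False
#Name: Count, dtype: bool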

While the syntax of groupby + filter is more straightforward, it performs much worse when there are many groups, so creating the Boolean mask with transform is preferred; in this example there is more than a 1000x improvement. The .isin method is extremely fast for a single key column, but it would require switching to a merge if grouping on multiple columns (a sketch of that variant follows the timings below).

import pandas as pd
import numpy as np

np.random.seed(123)
N = 50000
df = pd.DataFrame({'Groups': [*range(N//2)]*2,
                   'Count': np.random.randint(0, 1000, N)})

# Double check both are equivalent
(df.groupby('Groups').filter(lambda x: x['Count'].min() >= 8)
  == df[~df.Count.lt(8).groupby(df.Groups).transform('any')]).all().all()
#True

%timeit df.groupby('Groups').filter(lambda x: x['Count'].min() >= 8)
#8.15 s ± 80.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%timeit df[~df.Count.lt(8).groupby(df.Groups).transform('any')]
#6.54 ms ± 143 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

%timeit df[~df['Groups'].isin(df.loc[df['Count'] < 8, 'Groups'].unique())]
#2.88 ms ± 24 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
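
As mentioned above, grouping on several key columns would need a merge instead of isin. A rough sketch of that variant, assuming a hypothetical second key column named Subgroup:

# Keys of groups that contain at least one Count below 8
bad_keys = df.loc[df['Count'] < 8, ['Groups', 'Subgroup']].drop_duplicates()

# Anti-join: keep only rows whose (Groups, Subgroup) key is not among the bad keys
out = (df.merge(bad_keys, on=['Groups', 'Subgroup'], how='left', indicator=True)
         .query("_merge == 'left_only'")
         .drop(columns='_merge'))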