17

Is there any way of doing this without writing a for loop?

Suppose we have the following data:

import pandas as pd

d = {'A': {-1: 0.19052041339798062,
      0: -0.0052531481871952871,
      1: -0.0022017467720961644,
      2: -0.051109629013311737,
      3: 0.18569441222621336},
     'B': {-1: 0.029181417300734112,
      0: -0.0031021862533310743,
      1: -0.014358516787430284,
      2: 0.0046386615308068877,
      3: 0.056676322314857898},
     'C': {-1: 0.071883343375205785,
      0: -0.011930096520251999,
      1: -0.011836365865654104,
      2: -0.0033930358388315237,
      3: 0.11812543193496111},
     'D': {-1: 0.17670604006475121,
      0: -0.088756293654161142,
      1: -0.093383245649534194,
      2: 0.095649943383654359,
      3: 0.51030339029516592},
     'E': {-1: 0.30273513342295627,
      0: -0.30640233455497284,
      1: -0.32698263145105921,
      2: 0.60257484810641992,
      3: 0.36859978928328413},
     'F': {-1: 0.25328469046380131,
      0: -0.063890702001567143,
      1: -0.10007720832198815,
      2: 0.08153164759036724,
      3: 0.36606175240021183},
     'G': {-1: 0.28764606940509913,
      0: -0.11022209861109525,
      1: -0.1264164305949009,
      2: 0.17030074112227081,
      3: 0.30100292424380881}}
df = pd.DataFrame(d)

I know I can get the std values with std_vals = df.std(), which gives the result below, and I could use those values to drop the columns one by one.

In[]:
        pd.DataFrame(d).std()
Out[]:
        A    0.115374
        B    0.028435
        C    0.059394
        D    0.247617
        E    0.421117
        F    0.200776
        G    0.209710
        dtype: float64

However, I don't know how to use pandas indexing to drop the columns with low std values directly.

Is there a way to do this, or do I need to loop over each column?

Ashkan

3 Answers

21

You can use the loc method of a DataFrame to select certain columns based on a Boolean indexer. Create the indexer like this (comparing the Series returned by std() against a scalar broadcasts the threshold across every column's standard deviation):

df.std() > 0.3

Out[84]: 
A    False
B    False
C    False
D    False
E     True
F    False
G    False
dtype: bool

Then call loc with : in the first position to indicate that you want to return all rows:

df.loc[:, df.std() > .3]
Out[85]: 
           E
-1  0.302735
 0 -0.306402
 1 -0.326983
 2  0.602575
 3  0.368600
maxymoo
  • I get the error 'IndexingError: Unalignable boolean Series provided as indexer (index of the boolean Series and of the indexed object do not match)' when I try to use this suggestion. – benso8 Oct 26 '21 at 22:40
  • The std() method first tries to include non-numeric columns, but drops any whose std it cannot compute (see the [documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.std.html)). The resulting Series then has fewer entries than the df has columns, so you cannot mask the original df with the boolean array - the lengths do not match. What does work is dropping the columns that are below your threshold from the original index as well, then filtering with the mask: df[(df.std() < 0.3).where((df.std() < 0.3)).dropna().index] – Johannes Schöck Jan 25 '22 at 14:49
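
A minimal sketch of the situation described in the second comment, assuming an added non-numeric column 'H' (not part of the original data) and using a reindexed mask rather than the comment's exact one-liner:

df2 = df.copy()
df2['H'] = list('abcde')   # non-numeric column

# Older pandas silently dropped 'H' from df2.std(), so the mask had fewer
# entries than df2 has columns and .loc raised the IndexingError above;
# numeric_only=True makes that behaviour explicit on recent versions.
mask = df2.std(numeric_only=True) > 0.3

# Align the mask with the full set of columns, treating the
# non-numeric column as "do not keep", then select with .loc.
df2.loc[:, mask.reindex(df2.columns, fill_value=False)]
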
10

To drop columns, you need their names.

threshold = 0.2

df.drop(df.std()[df.std() < threshold].index.values, axis=1)

         D       E       F       G
-1  0.1767  0.3027  0.2533  0.2876
 0 -0.0888 -0.3064 -0.0639 -0.1102
 1 -0.0934 -0.3270 -0.1001 -0.1264
 2  0.0956  0.6026  0.0815  0.1703
 3  0.5103  0.3686  0.3661  0.3010
Jianxun Li
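
On recent pandas versions the same drop can also be written with the columns= keyword instead of a label list plus axis=1; a minimal equivalent sketch, reusing the threshold above:

threshold = 0.2

# df.std()[df.std() < threshold].index gives the labels of the low-std columns
low_std_cols = df.std()[df.std() < threshold].index
df.drop(columns=low_std_cols)
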
-2

To drop columns where the value in the column is constant:

t_df = t_df.drop(t_df.columns[t_df.nunique() == 1], axis=1)
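
A minimal sketch of that one-liner on a throwaway frame (t_df here is a hypothetical example, not data from the question); a constant column is just the zero-std special case of what the question asks about:

t_df = pd.DataFrame({'a': [1, 2, 3], 'b': [7, 7, 7]})   # 'b' is constant
t_df = t_df.drop(t_df.columns[t_df.nunique() == 1], axis=1)
# t_df now contains only column 'a'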