
How do I get the mean for all of the values (except for NaN) in a pandas dataframe?

pd.DataFrame.mean() only gives the means for each column (or row, when setting axis=1), but I want the mean over the whole thing. And df.mean().mean() isn't really the wisest option (see below).

Note that in my specific real case, the dataframe has a large multiindex, which additionally complicates things. For situations where this does not matter, @EdChum's answer is the more straightforward one and might be preferable to a faster solution.

Example code

import numpy as np
import pandas as pd

data1 = np.arange(16).reshape(4, 4)
df = pd.DataFrame(data=data1)

df.mean()
0    6.0
1    7.0
2    8.0
3    9.0
dtype: float64

df.mean().mean()
7.5

np.arange(16).mean()
7.5

works, but if I mask parts of the df (which in reality is a correlation matrix with hundreds of rows/columns that by its nature has half of itself filled with redundant data), it gets funny:

triang = np.triu_indices(4)
data2 = np.arange(4., 20.).reshape(4, 4)
data2[triang] = np.nan
df2 = pd.DataFrame(data=data2)

df2.mean().mean()
15.0

But (8. + 12. + 13. + 16. + 17. + 18.)/6 is 14.

How can I best get the "real" mean, except writing some kind of loop that does the above by hand?

JC_CL

2 Answers


You can use numpy.nanmean:

triang = np.triu_indices(4)
data2 = np.arange(4., 20.).reshape(4, 4)
data2[triang] = np.nan
df2 = pd.DataFrame(data=data2)

res = np.nanmean(df2)  # 14.0

Also possible via stack, as described by @EdChum, but slower:

df2 = pd.concat([df2]*100000)

%timeit np.nanmean(df2)              # 14.0ms
%timeit df2.stack().dropna().mean()  # 55.7ms

If your data is numeric only, you can also remove the Pandas overhead altogether.
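That last point can be sketched as follows, using the same toy data; passing the underlying ndarray directly avoids the pandas dispatch layer (`to_numpy()` is the modern accessor; on older pandas versions `.values` does the same):

```python
import numpy as np
import pandas as pd

# Same toy data as above: upper triangle (incl. diagonal) masked out
data2 = np.arange(4., 20.).reshape(4, 4)
data2[np.triu_indices(4)] = np.nan
df2 = pd.DataFrame(data=data2)

# Hand numpy the raw ndarray to bypass the pandas overhead
res = np.nanmean(df2.to_numpy())  # 14.0
```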

jpp
  • I guess when the doc says "The average is taken over the flattened array by default" it probably really does just use the values and ignores my index. It does result in a single number, which also seems realistic for my real data. I'll play around with it for a bit until I understand it, though, before I accept that answer. – JC_CL Jun 22 '18 at 10:51
  • @JC_CL, `numpy` will not use the index here, it'll only use dataframe values. But do test for yourself. – jpp Jun 22 '18 at 10:52

You can stack, then dropna and then call mean:

In[201]:
df2.stack().dropna().mean()

Out[201]: 14.0

So this converts the df into a Series; you can then call dropna to remove the NaN rows, and mean will now calculate the mean correctly.
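For the MultiIndex case mentioned in the question, the same idea works; you just stack once per column level. A minimal sketch (the two-level index below is a hypothetical stand-in for the asker's real one):

```python
import numpy as np
import pandas as pd

data2 = np.arange(4., 20.).reshape(4, 4)
data2[np.triu_indices(4)] = np.nan

# Hypothetical two-level MultiIndex on both axes
idx = pd.MultiIndex.from_product([['a', 'b'], [1, 2]])
df2 = pd.DataFrame(data2, index=idx, columns=idx)

# Each stack() moves one column level into the row index; with a
# two-level column MultiIndex, two stacks flatten the frame to a Series
res = df2.stack().stack().dropna().mean()  # 14.0
```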

EdChum
  • This works with the minimal example, but it appears to fail when the df has a large (multi)index. Probably would have to remove that index first. – JC_CL Jun 22 '18 at 10:33
  • @JC_CL well you never stated this in your example so I can only answer what is stated – EdChum Jun 22 '18 at 11:04
  • You are right. I just wasn't expecting that the indices play a role when I'm just interested in the values. However, your method still works, I just have stack stack upon stack… But I'll take the `np.nanmean(df)` approach, since it's just `62.9 µs per loop`, whereas `df.stack().stack().stack().dropna().mean()` is quite a bit slower at `19.9 ms per loop`. – JC_CL Jun 22 '18 at 11:16
  • @JC_CL I forgot about `np.nanmean`; that would be my preferred method here also. Not surprising that my method is slower, as there are a lot of intermediate structures being generated. The only difference is that you keep everything in `pandas` land – EdChum Jun 22 '18 at 11:17
  • Still, could come in handy at some point. I added a remark to the original question. – JC_CL Jun 22 '18 at 11:23