
I want to take the log of each cell in a very sparse pandas DataFrame and must avoid the 0s. At first I was checking for 0s with a lambda function, then I thought it might be faster to replace the many 0s with NaNs. I got some inspiration from this closely related question, and tried using a "mask." Is there a better way?

import math

import numpy as np
import pandas as pd

# first approach
# 7.61 s ± 1.46 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
def get_log_1(df):
    return df.applymap(
        lambda x: math.log(x) if x != 0 else 0)

# second approach (faster!)
# 5.36 s ± 968 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
def get_log_2(df):
    return (df
            .replace(0, np.nan)
            .applymap(math.log)
            .replace(np.nan, 0))

# third approach (even faster!!)
# 4.76 s ± 941 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
def get_log_3(df):
    return (df
            .mask(df <= 0)
            .applymap(math.log)
            .fillna(0))
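For reference, a minimal sanity check on a toy frame (my own example, not from the question; note that `applymap` is deprecated in newer pandas in favor of `DataFrame.map`) confirming the replace-based and mask-based approaches agree:

```python
import math

import numpy as np
import pandas as pd

# Toy sparse frame: zeros should stay 0, e should map to 1.
df = pd.DataFrame([[0.0, 1.0], [math.e, 0.0]])

# third approach: mask nonpositive cells, log, then refill with 0
out_mask = df.mask(df <= 0).applymap(math.log).fillna(0)

# second approach: swap 0 -> NaN, log, then swap NaN -> 0
out_repl = df.replace(0, np.nan).applymap(math.log).replace(np.nan, 0)

print(out_mask.equals(out_repl))  # → True
```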
Dustin Michels
  • Is possible add `df` for test? – jezrael Mar 10 '18 at 10:10
  • The `df` I'm using has `shape` (31064, 323) and is ~90% 0s. I think this generates something similar? `a = np.zeros((30000, 300)); np.put(a, range(0, 3000), 1); df = pd.DataFrame(a).sample(frac=1)` – Dustin Michels Mar 10 '18 at 10:37

1 Answer


One possible solution is to use `numpy.log`:

print(np.log(df.mask(df <= 0)).fillna(0))

Or pure numpy:

df1 = pd.DataFrame(np.ma.log(df.values).filled(0),
                   index=df.index, columns=df.columns)
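As a quick illustration on toy data (my own example): `np.ma.log` masks entries outside the log domain (x ≤ 0) instead of producing `-inf`, and `.filled(0)` writes 0 back into the masked slots, which matches the `fillna(0)` result of the question's third approach:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([[0.0, np.e], [1.0, 0.0]])

# np.ma.log masks the cells where log is undefined (x <= 0);
# .filled(0) replaces those masked slots with 0.
res = np.ma.log(df.values).filled(0)
df1 = pd.DataFrame(res, index=df.index, columns=df.columns)
print(df1)
```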
jezrael