
I have a pandas Series with the following value_counts() output:

NaN     2741
 197    1891
 127     188
 194      42
 195      24
 122      21

When I perform describe() on this series, I get:

df[col_name].describe()
count    2738.000000
mean      172.182250
std        47.387496
min         0.000000
25%       171.250000
50%       197.000000
75%       197.000000
max       197.000000
Name: SS_D_1, dtype: float64

However, if I try to find the minimum and maximum, I get nan as the answer:

numpy.min(df[col_name].values)
nan

Also, when I try to convert it to a numpy array, I seem to get an array containing only NaNs:

numpy.array(df[col_name])

Any suggestion on how to convert from a pandas Series to a numpy array successfully?

user308827
  • `df[col_name].values` will return the numpy array. If the data contains a NaN, it propagates through `numpy.min`, so `np.min` will always yield NaN as the answer. Try `nanmin`: http://docs.scipy.org/doc/numpy/reference/generated/numpy.nanmin.html#numpy.nanmin – Brian Pendleton Sep 04 '15 at 20:45
    The `min` of any array containing `nan` is also `nan`. To ignore `nan` values, try `np.nanmin(df[col_name].values)` (or just `df[col_name].min()`). – ali_m Sep 04 '15 at 20:47
  • Thanks, but I also get a nan for this: `numpy.array(df[col_name]).min()` – user308827 Sep 04 '15 at 20:50
    The problem is that you're casting it to a numpy array before calling the `min()` method. `pandas.Series.min()` does the equivalent of `np.nanmin` and ignores nan values, whereas `numpy.ndarray.min` does the equivalent of `np.min` and will return `nan` for an array that contains one or more `nan`s. – ali_m Sep 04 '15 at 21:01
  • great, ty. if you can write this as an answer, i will be happy to accept – user308827 Sep 04 '15 at 21:02
    @user308827 - as of pandas 0.24.0, you can access the backing array of a pandas Series with `.array` and `.to_numpy` - please find an updated answer below. [pandas 0.24.x release notes](https://pandas.pydata.org/pandas-docs/version/0.24/whatsnew/v0.24.0.html#accessing-the-values-in-a-series-or-index) – mork Jan 25 '19 at 19:14

2 Answers


Both the function np.min and the method np.ndarray.min will always return NaN for any array that contains one or more NaN values (this is standard IEEE 754 floating point behaviour).

You could use np.nanmin, which ignores NaN values when computing the min, e.g.:

np.nanmin(df[col_name].values)

An even simpler option is just to use the pd.Series.min() method, which already ignores NaN values, i.e.:

df[col_name].min()

I have no idea why numpy.array(df[col_name]) would return an array containing only NaNs, unless df[col_name] already contained only NaNs to begin with. I assume this must be due to some other mistake in your code.
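To illustrate the difference, here's a minimal sketch using a small made-up Series containing a NaN (not the asker's actual data):

```python
import numpy as np
import pandas as pd

# Small example Series with one NaN value
s = pd.Series([197.0, 127.0, np.nan])

print(np.min(s.values))     # nan   -- NaN propagates through np.min
print(np.nanmin(s.values))  # 127.0 -- NaN-ignoring NumPy variant
print(s.min())              # 127.0 -- pandas skips NaN by default
```

Note that `s.values.min()` behaves like `np.min`, not like `s.min()`, which is exactly the trap described above.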

ali_m

As of pandas v0.24.0, you can access the backing array of a pandas Series with .array and .to_numpy().

From the pandas 0.24.x release notes: "Series.array and Index.array have been added for extracting the array backing a Series or Index... We haven't removed or deprecated Series.values or DataFrame.values, but we highly recommend using .array or .to_numpy() instead

... We recommend using Series.array when you need the array of data stored in the Series, and Series.to_numpy() when you know you need a NumPy array."
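As a minimal sketch of the two accessors (assuming pandas >= 0.24, with a made-up example Series):

```python
import numpy as np
import pandas as pd

s = pd.Series([197.0, 127.0, np.nan])

arr = s.to_numpy()  # a plain NumPy ndarray copy/view of the data
backing = s.array   # the extension array backing the Series

print(type(arr))    # <class 'numpy.ndarray'>
print(arr)          # [197. 127.  nan]
```

For ordinary float columns like this one, `to_numpy()` gives the same ndarray you'd get from `.values`; the distinction matters most for extension dtypes (categoricals, nullable integers, etc.), where `.values` can behave surprisingly.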

mork