
I want to perform a groupby.first() on a pandas time series whose datetime index is almost consecutive, where "almost" means less than 5 minutes of difference between rows. I have seen a lot of material on grouping, but never for the case where the datetime index is not consecutive, as in my example:

ind=['2019-02-28 01:20:00', '2019-02-28 01:21:00','2019-02-28 01:22:00', '2019-02-28 01:23:00',
     '2019-02-28 01:24:00', '2019-02-28 01:25:00','2019-02-28 01:26:00', '2019-02-28 01:27:00',
     '2019-02-28 01:28:00', '2019-02-28 04:05:00','2019-02-28 04:06:00', '2019-02-28 04:07:00',
     '2019-02-28 04:08:00', '2019-02-28 04:09:00','2019-02-28 06:55:00', '2019-02-28 06:56:00',
     '2019-02-28 06:57:00', '2019-02-28 06:58:00','2019-02-28 09:50:00', '2019-02-28 09:51:00',
     '2019-02-28 09:52:00', '2019-02-28 09:53:00','2019-02-28 09:54:00', '2019-02-28 09:55:00',          
     '2019-02-28 09:56:00', '2019-02-28 09:57:00','2019-02-28 09:58:00', '2019-02-28 09:59:00',
     '2019-02-28 10:00:00']

val=[2.11, 2.24, 2.37, 2.42, 2.58, 2.71, 2.76, 3.06, 3.29, 2.04, 2.26,2.55, 2.89, 3.26, 2.2 , 2.54,
     2.85, 3.24, 2.2 , 2.12, 2.11, 2.07,2.1 , 2.16, 2.28, 2.35, 2.44, 2.5 , 2.57]

s = pd.Series(val,index=pd.to_datetime(ind))

My desired output would be:

Datetime               Value
2019-02-28 01:20:00    2.11
2019-02-28 04:05:00    2.04
2019-02-28 06:55:00    2.20
2019-02-28 09:50:00    2.20

Can anyone help me?

mat
3 Answers


Let us group the dataframe on blocks of consecutive rows where the time difference is less than 5 minutes:

df = s.reset_index(name='Value')                      # Series -> DataFrame with columns 'index' and 'Value'
b  = df['index'].diff().dt.seconds.gt(300).cumsum()   # start a new block whenever the gap exceeds 300s (5 min)
df = df.groupby(b, as_index=False).first()            # take the first row of each block

Explanations

Reset the index of the given time series s, then calculate the difference of each datetime index value from the previous one, and use dt.seconds to express that difference in seconds.

>>> df['index'].diff().dt.seconds

0         NaN
1        60.0
2        60.0
3        60.0
4        60.0
5        60.0
6        60.0
7        60.0
8        60.0
9      9420.0
....
25       60.0
26       60.0
27       60.0
28       60.0
Name: index, dtype: float64
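
Note that dt.seconds only returns the seconds component of each timedelta (it wraps around after 24 hours). If the gaps in your data could exceed a day, dt.total_seconds() is the safer comparison; a minimal sketch of that variant:

# assumption: gaps might exceed 24 hours; total_seconds() measures the whole difference
b = df['index'].diff().dt.total_seconds().gt(300).cumsum()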

Now compare the difference in seconds with 300 to create a boolean mask, then take the cumulative sum to assign a block number to each run of rows whose consecutive timestamps differ by less than 5 minutes:

>>> df['index'].diff().dt.seconds.gt(300).cumsum()

0     0
1     0
2     0
3     0
4     0
5     0
6     0
7     0
8     0
9     1
...
25    3
26    3
27    3
28    3
Name: index, dtype: int64
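
If it helps to see which timestamps landed in which block, a quick check (using df and b as they are before the final groupby call; the name check is just illustrative) might look like:

check = pd.DataFrame({'Datetime': df['index'], 'block': b})   # pair each timestamp with its block label
print(check.groupby('block')['Datetime'].agg(['first', 'last', 'size']))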

Finally, group the dataframe on the above blocks and aggregate using first:

>>> df
                index  Value
0 2019-02-28 01:20:00   2.11
1 2019-02-28 04:05:00   2.04
2 2019-02-28 06:55:00   2.20
3 2019-02-28 09:50:00   2.20
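
If you would rather stay with the Series and keep the datetime index, a minimal sketch of the same idea (not part of the answer above, just a variant) could be:

starts = s.index.to_series().diff().gt(pd.Timedelta(minutes=5))   # True where a gap of more than 5 min starts a new block
starts.iloc[0] = True                                             # the very first timestamp always starts a block
out = s[starts]                                                    # first value of each block, datetime index preserved
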
Shubham Sharma

Using a numpy-based solution:

from numpy import diff, where, split

data = ((s.index.hour*60) + s.index.minute + (s.index.second/60)).astype(int)    # minutes since midnight
data = {k: v for k, v in enumerate(data)}                                         # row position -> minutes
result = split(list(data.keys()), where(diff(list(data.values())) > 5)[0] + 1)    # split the positions wherever the gap exceeds 5 minutes
res = s.iloc[[i[0] for i in result]]                                              # first position of each split

res:

2019-02-28 01:20:00    2.11
2019-02-28 04:05:00    2.04
2019-02-28 06:55:00    2.20
2019-02-28 09:50:00    2.20
dtype: float64
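
Since this converts each timestamp to minutes since midnight, it assumes all the data falls on the same day. A sketch of the same idea (not from the answer) that diffs the raw int64 nanosecond timestamps instead, so it also works across days:

import numpy as np

ns = s.index.values.astype('int64')                        # nanoseconds since epoch
gaps = np.flatnonzero(np.diff(ns) > 5 * 60 * 10**9) + 1    # positions where a new block starts
res = s.iloc[np.r_[0, gaps]]                               # first row of each block
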
Pygirl
  • Do you mind if I change the name of `df` to `s` in the OP's question? It's confusing, as `df` is usually used to denote a DataFrame but the OP used it for a Series. – Shubham Sharma Mar 23 '21 at 18:31
  • @ShubhamSharma: Go ahead :) I will update it in my answer as well then – Pygirl Mar 23 '21 at 18:31
  • Your solution seems interesting, I will run some benchmarks – mat Mar 23 '21 at 20:48

It appears you have missed some values in your desired output. This filters the rows whose timestamps fall exactly on 5-minute boundaries: the underlying index values are in nanoseconds, so the modulus is 10**9 nanoseconds per second * 60 seconds per minute * 5 minutes.

s.loc[s.index.values.astype(int) % (10**9*60*5) == 0]

output

2019-02-28 01:20:00    2.11
2019-02-28 01:25:00    2.71
2019-02-28 04:05:00    2.04
2019-02-28 06:55:00    2.20
2019-02-28 09:50:00    2.20
2019-02-28 09:55:00    2.16
2019-02-28 10:00:00    2.57
Rob Raymond