
I want to get the last observation at the end of each minute for each stock. My high-frequency dataframe looks like:

+-----+--------+-------+----------+----------+----------+
|stock| date   | hour  |  minute  |  second  |  price   |
+-----+--------+-------+----------+----------+----------+
| VOD | 01-02  |  10   |   13     |   11     |  85.35   |
| VOD | 01-02  |  10   |   13     |   12     |  85.75   |
| VOD | 01-02  |  10   |   14     |   09     |  84.35   |
| VOD | 01-02  |  10   |   14     |   16     |  82.85   |
| VOD | 01-02  |  10   |   14     |   26     |  85.65   |
| VOD | 01-02  |  10   |   15     |   07     |  84.35   |
| ... |  ...   |  ...  |   ...    |   ...    |   ...    |
| ABC | 01-02  |  11   |   13     |   11     |  25.35   |
| ABC | 01-02  |  11   |   13     |   15     |  25.39   |
| ABC | 01-02  |  11   |   13     |   19     |  25.26   |
+-----+--------+-------+----------+----------+----------+

The desired output should look like:

+-----+--------+-------+-------+-------+
|stock| date   | hour  | minute| price |
+-----+--------+-------+-------+-------+
| VOD | 01-02  |  10   |  13   | 85.75 |
| VOD | 01-02  |  10   |  14   | 85.65 |
| VOD | 01-02  |  10   |  15   | 84.35 |
| VOD | 01-02  |  10   |  16   | 85.75 |
| ... |  ...   |  ...  |  ...  |  ...  |
| ABC | 01-02  |  11   |  13   | 25.26 |
+-----+--------+-------+-------+-------+

I know I probably have to use the partitionBy and orderBy syntax to get this result, but I am confused about those two. I am familiar with the groupBy function in SQL, and I wonder which of them is more similar to groupBy. Can someone help?

FlyUFalcon

2 Answers


We can use a window function partitioned on 'stock', 'date', 'hour', 'minute' to create the frame.

  • For this case we can order by the **second** column in descending order.

  • Then we select only the first row from each window frame.

Example:

df.show()
#+-----+-----+----+------+------+-----+
#|stock| date|hour|minute|second|price|
#+-----+-----+----+------+------+-----+
#|  VOD|01-02|  10|    13|    11|85.35|
#|  VOD|01-02|  10|    13|    12|85.75|
#|  VOD|01-02|  10|    14|    09|84.35|
#|  VOD|01-02|  10|    14|    16|82.85|
#|  VOD|01-02|  10|    14|    26|85.65|
#+-----+-----+----+------+------+-----+

from pyspark.sql.window import Window
from pyspark.sql.functions import *

w = Window.partitionBy('stock', 'date', 'hour', 'minute').orderBy(desc('second'))

#adding rownumber to the data
df.withColumn("rn",row_number().over(w)).show()

#+-----+-----+----+------+------+-----+---+
#|stock| date|hour|minute|second|price| rn|
#+-----+-----+----+------+------+-----+---+
#|  VOD|01-02|  10|    13|    12|85.75|  1|
#|  VOD|01-02|  10|    13|    11|85.35|  2|
#|  VOD|01-02|  10|    14|    26|85.65|  1|
#|  VOD|01-02|  10|    14|    16|82.85|  2|
#|  VOD|01-02|  10|    14|    09|84.35|  3|
#+-----+-----+----+------+------+-----+---+

#then select only the first row as we are ordering descending.
df.withColumn("rn",row_number().over(w)).filter(col("rn") == 1).drop("second","rn").show()
#+-----+-----+----+------+-----+
#|stock| date|hour|minute|price|
#+-----+-----+----+------+-----+
#|  VOD|01-02|  10|    13|85.75|
#|  VOD|01-02|  10|    14|85.65|
#+-----+-----+----+------+-----+
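
Since the question mentions SQL's groupBy: roughly the same result can also be expressed with groupBy plus an aggregate over a struct. This is only a sketch, assuming the same column names as above and that second is zero-padded so it compares correctly as a string:

from pyspark.sql import functions as F

#max over a struct keeps the row with the largest 'second' in each minute
#and carries its price along
df.groupBy('stock', 'date', 'hour', 'minute') \
  .agg(F.max(F.struct('second', 'price')).alias('last')) \
  .select('stock', 'date', 'hour', 'minute', F.col('last.price').alias('price')) \
  .show()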
notNull

After some trial and error, it seems I got a solution: create a column with the cumulative value of price, and then pick out the row with the largest cumulative value within each minute.

import pyspark.sql.functions as psf
from pyspark.sql.window import Window

w1 = Window.partitionBy('stock', 'date', 'hour', 'minute').orderBy('second')

# create a column named subgroup with the cumulative value of price within each minute
df1 = df[['stock', 'date', 'hour', 'minute', 'second', 'price']].withColumn('subgroup', psf.sum('price').over(w1))
df1.orderBy(['stock', 'date', 'hour', 'minute', 'second']).show()

# the last row of each minute has the largest cumulative price (prices are positive),
# so keep the row whose subgroup equals the per-minute maximum
w = Window.partitionBy('stock', 'date', 'hour', 'minute')
df3 = df1.withColumn('max', psf.max('subgroup').over(w)).where(psf.col('subgroup') == psf.col('max')).drop('max')

df3 = df3.orderBy(['stock', 'date', 'hour', 'minute', 'second'], ascending=[True, True, True, True, True]).drop('subgroup')

df3 = df3.withColumnRenamed('price', 'lastprice')   # rename
df3.show()
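
A simpler variant of the same per-minute window, sketched here under the same assumptions (df with these column names, zero-padded second strings), skips the cumulative sum and flags the latest second directly:

import pyspark.sql.functions as psf
from pyspark.sql.window import Window

#keep only the rows whose 'second' equals the latest second of their minute
w = Window.partitionBy('stock', 'date', 'hour', 'minute')
df_last = df.withColumn('max_second', psf.max('second').over(w)) \
            .where(psf.col('second') == psf.col('max_second')) \
            .drop('max_second')
df_last.show()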
FlyUFalcon