I have a DataFrame that looks like this:

+-------+---------+
|email  |timestamp|
+-------+---------+
|x@y.com|        1|
|y@m.net|        2|
|z@c.org|        3|
|x@y.com|        4|
|y@m.net|        5|
|    .. |       ..|
+-------+---------+

For each email I want to keep only the latest record, so the result would be:

+-------+---------+
|email  |timestamp|
+-------+---------+
|x@y.com|        4|
|y@m.net|        5|
|z@c.org|        3|
|    .. |       ..|
+-------+---------+

How can I do that? I'm new to Spark and DataFrames.

user468587

1 Answer

Here is a general ANSI SQL query which should work with Spark SQL:

SELECT email, timestamp
FROM
(
    SELECT t.*, ROW_NUMBER() OVER (PARTITION BY email ORDER BY timestamp DESC) rn
    FROM yourTable t
) t
WHERE rn = 1;
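If window functions are new to you, the `ROW_NUMBER() OVER (PARTITION BY email ORDER BY timestamp DESC)` step can be sketched in plain Python: the rows are grouped by email, ordered newest-first within each group, and only the first row of each group (row number 1) is kept. A minimal sketch using the sample data from the question (taking the max timestamp per group, which is equivalent to "sort descending and keep the first row"):

```python
from itertools import groupby
from operator import itemgetter

rows = [("x@y.com", 1), ("y@m.net", 2), ("z@c.org", 3),
        ("x@y.com", 4), ("y@m.net", 5)]

# PARTITION BY email: sort so rows with the same email are adjacent, then group
rows.sort(key=itemgetter(0))
latest = []
for email, group in groupby(rows, key=itemgetter(0)):
    # ORDER BY timestamp DESC ... rn = 1: keep the row with the largest timestamp
    latest.append(max(group, key=itemgetter(1)))

print(latest)  # [('x@y.com', 4), ('y@m.net', 5), ('z@c.org', 3)]
```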

For the equivalent PySpark DataFrame code, try the following:

from pyspark.sql import functions as F
from pyspark.sql.window import Window

df = (yourDF
    .withColumn("rn", F.row_number()
        .over(Window.partitionBy("email")
              .orderBy(F.col("timestamp").desc()))))

df = df.filter(F.col("rn") == 1).drop("rn")
df.show()
Matt Lavin
Tim Biegeleisen