
I have this dataframe:

+---+----+---+
|  A|   B|  C|
+---+----+---+
|  0|null|  1|
|  1| 3.0|  0|
|  2| 7.0|  0|
|  3|null|  1|
|  4| 4.0|  0|
|  5| 3.0|  0|
|  6|null|  1|
|  7|null|  1|
|  8|null|  1|
|  9| 5.0|  0|
| 10| 2.0|  0|
| 11|null|  1|
+---+----+---+

What I need to do is a cumulative sum of the values in column C, restarting the count every time a zero appears.

Expected output:

+---+----+---+----+
|  A|   B|  C|   D|
+---+----+---+----+
|  0|null|  1|   1|
|  1| 3.0|  0|   0|
|  2| 7.0|  0|   0|
|  3|null|  1|   1|
|  4| 4.0|  0|   0|
|  5| 3.0|  0|   0|
|  6|null|  1|   1|
|  7|null|  1|   2|
|  8|null|  1|   3|
|  9| 5.0|  0|   0|
| 10| 2.0|  0|   0|
| 11|null|  1|   1|
+---+----+---+----+

To reproduce the dataframe:

from pyspark.shell import sc
from pyspark.sql.functions import when

x = sc.parallelize([
    [0, None], [1, 3.], [2, 7.], [3, None], [4, 4.],
    [5, 3.], [6, None], [7, None], [8, None], [9, 5.], [10, 2.], [11, None]])
x = x.toDF(['A', 'B'])

# Create column C: 1 where B is null, 0 otherwise
x = x.withColumn('C', when(x.B.isNull(), 1).otherwise(0))
Kafels

1 Answer


Create a temporary column (grp) that increments a counter each time column C is equal to 0 (the reset condition) and use this as a partitioning column for your cumulative sum.

import pyspark.sql.functions as f
from pyspark.sql import Window

x.withColumn(
    "grp", 
    f.sum((f.col("C") == 0).cast("int")).over(Window.orderBy("A"))
).withColumn(
    "D",
    f.sum(f.col("C")).over(Window.partitionBy("grp").orderBy("A"))
).drop("grp").show()
#+---+----+---+---+
#|  A|   B|  C|  D|
#+---+----+---+---+
#|  0|null|  1|  1|
#|  1| 3.0|  0|  0|
#|  2| 7.0|  0|  0|
#|  3|null|  1|  1|
#|  4| 4.0|  0|  0|
#|  5| 3.0|  0|  0|
#|  6|null|  1|  1|
#|  7|null|  1|  2|
#|  8|null|  1|  3|
#|  9| 5.0|  0|  0|
#| 10| 2.0|  0|  0|
#| 11|null|  1|  1|
#+---+----+---+---+
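
For anyone wondering what the intermediate grp column looks like, here is a sketch (reusing x, f and Window exactly as above) that keeps it instead of dropping it; the running count of zeros stays constant across each run of consecutive 1s in C, which is why it works as a partition key:

x.withColumn(
    "grp",
    f.sum((f.col("C") == 0).cast("int")).over(Window.orderBy("A"))
).show()
#+---+----+---+---+
#|  A|   B|  C|grp|
#+---+----+---+---+
#|  0|null|  1|  0|
#|  1| 3.0|  0|  1|
#|  2| 7.0|  0|  2|
#|  3|null|  1|  2|
#|  4| 4.0|  0|  3|
#|  5| 3.0|  0|  4|
#|  6|null|  1|  4|
#|  7|null|  1|  4|
#|  8|null|  1|  4|
#|  9| 5.0|  0|  5|
#| 10| 2.0|  0|  6|
#| 11|null|  1|  6|
#+---+----+---+---+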
pault
  • Could you please explain the 'grp' part? It's fascinating but I am unable to understand how it works. – Bala Apr 23 '20 at 14:31
  • Wow: `(f.col("C") == 0).cast("int")` - making a boolean and then casting it to `1`, so that it can be summed over a partition. Is this actually necessary for some sort of performance, or is it just "clever"? – Stephen Nov 24 '20 at 07:54
  • @stephen I don't remember to be honest but I think it was necessary because `sum` needs numeric types and won't make the implicit conversion. Maybe the latest version of spark handles it differently. If you try it out, feel free to make a clarifying update to this answer. – pault Nov 25 '20 at 16:40
  • @pault best answer ever. saved my life +1 – Chinivar Basu Sep 08 '21 at 06:52
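
To follow up on the cast discussion in the comments, here is a minimal sketch of what `(f.col("C") == 0).cast("int")` evaluates to, run against the question's dataframe (whether the explicit cast is strictly required may depend on your Spark version):

# The comparison yields a BooleanType column; casting to "int" turns
# True/False into 1/0 so the window sum can add it up.
x.select("A", "C", (f.col("C") == 0).cast("int").alias("is_zero")).show(4)
#+---+---+-------+
#|  A|  C|is_zero|
#+---+---+-------+
#|  0|  1|      0|
#|  1|  0|      1|
#|  2|  0|      1|
#|  3|  1|      0|
#+---+---+-------+
#only showing top 4 rows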