
I have a PySpark dataframe with information about customer transactions per day:

id,date,value
1,2016-01-03,10
1,2016-01-05,20
1,2016-01-08,30
1,2016-01-09,20
2,2016-01-02,10
2,2016-01-04,10
2,2016-01-06,20
2,2016-01-07,20
2,2016-01-09,20

I would like to add rows for the missing dates within each id's date range and fill their values with 0, like this:

id,date,value
1,2016-01-03,10
1,2016-01-04,0
1,2016-01-05,20
1,2016-01-06,0
1,2016-01-07,0
1,2016-01-08,30
1,2016-01-09,20
2,2016-01-02,10
2,2016-01-03,0
2,2016-01-04,10
2,2016-01-05,0
2,2016-01-06,20
2,2016-01-07,20
2,2016-01-08,0
2,2016-01-09,20

Previously I did this in Python with pandas, but I need to do it in PySpark, and I'm still learning PySpark.

import pandas as pd

df = (df.groupby('id')['date'].apply(lambda d:
                               pd.date_range(start=d.min(),end=d.max()).to_list())
            .explode().reset_index()
            .merge(df, on=['id','date'],how='left'))

df['value'] = df['value'].fillna(0).astype(int)

I also searched related questions but was not successful in implementing them.

2 Answers

So, you'd like a dataframe with one row per day for each id. It can be done in 2 steps: first create a dataframe with all dates, and then join the original values onto it.
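
For reference, here's a minimal sketch of how the sample `data_sdf` used below could be built (the answer assumes `dt` is already a date column, which `sequence()` requires):

import pyspark.sql.functions as func

# illustrative setup of the input dataframe; dt is cast to a date because
# sequence() below needs date (or timestamp) arguments
data_sdf = spark.createDataFrame(
    [(1, '2016-01-03', 10), (1, '2016-01-05', 20), (1, '2016-01-08', 30),
     (1, '2016-01-09', 20), (2, '2016-01-02', 10), (2, '2016-01-04', 10),
     (2, '2016-01-06', 20), (2, '2016-01-07', 20), (2, '2016-01-09', 20)],
    ['id', 'dt', 'val']
).withColumn('dt', func.to_date('dt'))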

data_sdf.show()

# +---+----------+---+
# | id|        dt|val|
# +---+----------+---+
# |  1|2016-01-03| 10|
# |  1|2016-01-05| 20|
# |  1|2016-01-08| 30|
# |  1|2016-01-09| 20|
# |  2|2016-01-02| 10|
# |  2|2016-01-04| 10|
# |  2|2016-01-06| 20|
# |  2|2016-01-07| 20|
# |  2|2016-01-09| 20|
# +---+----------+---+

We calculate the min and max dates within each id group. The min and max are passed to sequence() to generate an array of dates at a 1-day interval, which is then exploded into rows.

import pyspark.sql.functions as func

explode_all_dt_sdf = data_sdf. \
    groupBy('id'). \
    agg(func.min('dt').alias('min_dt'),
        func.max('dt').alias('max_dt')
        ). \
    withColumn('dt_arr', func.expr('sequence(min_dt, max_dt, interval 1 day)')). \
    withColumn('exploded_dt_arr', func.explode('dt_arr')). \
    select('id', func.col('exploded_dt_arr').alias('dt'))

explode_all_dt_sdf.orderBy('id', 'dt').show(5)

# +---+----------+
# | id|        dt|
# +---+----------+
# |  1|2016-01-03|
# |  1|2016-01-04|
# |  1|2016-01-05|
# |  1|2016-01-06|
# |  1|2016-01-07|
# +---+----------+

Then join the original dataframe to the above to get all values and fill missing values with 0.

data_w_all_dt_sdf = explode_all_dt_sdf. \
    join(data_sdf, ['id', 'dt'], 'left'). \
    fillna(0, subset=['val'])

data_w_all_dt_sdf.orderBy('id', 'dt').show()

# +---+----------+---+
# | id|        dt|val|
# +---+----------+---+
# |  1|2016-01-03| 10|
# |  1|2016-01-04|  0|
# |  1|2016-01-05| 20|
# |  1|2016-01-06|  0|
# |  1|2016-01-07|  0|
# |  1|2016-01-08| 30|
# |  1|2016-01-09| 20|
# |  2|2016-01-02| 10|
# |  2|2016-01-03|  0|
# |  2|2016-01-04| 10|
# |  2|2016-01-05|  0|
# |  2|2016-01-06| 20|
# |  2|2016-01-07| 20|
# |  2|2016-01-08|  0|
# |  2|2016-01-09| 20|
# +---+----------+---+
samkart
  • This is good! I'm having a problem with the date column, which is in the format dd-mm-yyyy HH:mm:ss; how can I group only by dd-mm-yyyy? The mismatch causes issues when joining the two dfs – Ana Beatriz Jul 06 '22 at 16:25
  • Your example data doesn't show that data type. However, you can use the `unix_timestamp()` and `from_unixtime()` functions together to read that format. – samkart Jul 06 '22 at 17:06
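
A minimal sketch of that suggestion, assuming the raw column is a string named `dt` in the dd-MM-yyyy HH:mm:ss format (the column name is illustrative):

import pyspark.sql.functions as func

# parse the string timestamp, then keep only the date part so the join keys match
data_sdf = data_sdf. \
    withColumn('dt', func.to_date(
        func.from_unixtime(func.unix_timestamp('dt', 'dd-MM-yyyy HH:mm:ss'))
    ))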

In addition to @samkart's answer, if you don't want to use a join:

import pyspark.sql.functions as f
from pyspark.sql.types import *

schema = StructType([
    StructField('id', IntegerType(), True),
    StructField('date', StringType(), True),
    StructField('value', IntegerType(), True)
])

df = spark.createDataFrame([
    (1, '2016-01-03', 10),
    (1, '2016-01-05', 20),
    (1, '2016-01-08', 30),
    (1, '2016-01-09', 20),
    (2, '2016-01-02', 10),
    (2, '2016-01-04', 10),
    (2, '2016-01-06', 20),
    (2, '2016-01-07', 20),
    (2, '2016-01-09', 20)
], schema = schema)

df = (
    df
    .withColumn('date', f.col('date').cast(DateType()))
    .groupBy('id')
    .agg(
        f.collect_list('date').alias('dates'),
        f.collect_list('value').alias('values')
    )
    # map of existing date -> value for each id
    .withColumn('zipped', f.map_from_arrays(f.col('dates'), f.col('values')))
    # full date range from each id's earliest to latest date
    .withColumn('dates_completed', f.expr('sequence(array_min(dates), array_max(dates), interval 1 day)'))
    # look up each date in the map, defaulting to 0 where it's missing
    .withColumn('values_completed', f.expr('transform(dates_completed, element -> coalesce(element_at(zipped, element), 0))'))
    # explode the completed map into one row per (id, date)
    .select('id', f.explode(f.map_from_arrays(f.col('dates_completed'), f.col('values_completed'))).alias('date', 'value'))
)

And the output:

+---+----------+-----+                                                          
|id |date      |value|
+---+----------+-----+
|1  |2016-01-03|10   |
|1  |2016-01-04|0    |
|1  |2016-01-05|20   |
|1  |2016-01-06|0    |
|1  |2016-01-07|0    |
|1  |2016-01-08|30   |
|1  |2016-01-09|20   |
|2  |2016-01-02|10   |
|2  |2016-01-03|0    |
|2  |2016-01-04|10   |
|2  |2016-01-05|0    |
|2  |2016-01-06|20   |
|2  |2016-01-07|20   |
|2  |2016-01-08|0    |
|2  |2016-01-09|20   |
+---+----------+-----+
ARCrow