I have a PySpark dataframe with information about customer transactions per day:
id,date,value
1,2016-01-03,10
1,2016-01-05,20
1,2016-01-08,30
1,2016-01-09,20
2,2016-01-02,10
2,2016-01-04,10
2,2016-01-06,20
2,2016-01-07,20
2,2016-01-09,20
I would like to fill in the missing dates for each id, creating new rows with value 0, like this:
id,date,value
1,2016-01-03,10
1,2016-01-04,0
1,2016-01-05,20
1,2016-01-06,0
1,2016-01-07,0
1,2016-01-08,30
1,2016-01-09,20
2,2016-01-02,10
2,2016-01-03,0
2,2016-01-04,10
2,2016-01-05,0
2,2016-01-06,20
2,2016-01-07,20
2,2016-01-08,0
2,2016-01-09,20
Previously I wrote this code in pandas, but I need to do the same thing in PySpark, and I'm still learning PySpark.
import pandas as pd

# build the full date range per id, explode it to one row per date,
# then left-merge the original values back in
df = (df.groupby('id')['date']
        .apply(lambda d: pd.date_range(start=d.min(), end=d.max()).to_list())
        .explode().reset_index()
        .merge(df, on=['id', 'date'], how='left'))
df['value'] = df['value'].fillna(0).astype(int)
I also searched related questions, but I was not able to get any of the suggested approaches working.
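To show the direction I have been trying, here is a minimal sketch of what I think the PySpark equivalent could look like. It assumes Spark 2.4+ (where F.sequence is available) and that the date column is already DateType (a string column would need to be cast with F.col('date').cast('date') first); I am not sure this is the right approach:

import pyspark.sql.functions as F

# per id, generate every date between its min and max date, one row per date
dates = (df.groupBy('id')
           .agg(F.min('date').alias('min_date'), F.max('date').alias('max_date'))
           .select('id', F.explode(F.sequence('min_date', 'max_date')).alias('date')))

# left-join the original values back in and replace the gaps with 0
result = (dates.join(df, on=['id', 'date'], how='left')
               .fillna(0, subset=['value'])
               .orderBy('id', 'date'))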