I have a PySpark DataFrame df:
+-------------------+
| timestamplast|
+-------------------+
|2019-08-01 00:00:00|
|2019-08-01 00:01:09|
|2019-08-01 01:00:20|
|2019-08-03 00:00:27|
+-------------------+
I want to add the columns 'year', 'month', 'day', and 'hour' to the existing dataframe using a list comprehension.
In Pandas this would be done as follows:
import pandas as pd

L = ['year', 'month', 'day', 'hour']
date_gen = (getattr(df['timestamplast'].dt, i).rename(i) for i in L)
df = df.join(pd.concat(date_gen, axis=1))  # concatenate results and join to original dataframe
How would this be done in pyspark?