
How do I expand a dataframe based on column values? I intend to go from this dataframe:

+---------+----------+----------+
|DEVICE_ID|  MIN_DATE|  MAX_DATE|
+---------+----------+----------+
|        1|2019-08-29|2019-08-31|
|        2|2019-08-27|2019-09-02|
+---------+----------+----------+

To one that looks like this:

+---------+----------+
|DEVICE_ID|      DATE|
+---------+----------+
|        1|2019-08-29|
|        1|2019-08-30|
|        1|2019-08-31|
|        2|2019-08-27|
|        2|2019-08-28|
|        2|2019-08-29|
|        2|2019-08-30|
|        2|2019-08-31|
|        2|2019-09-01|
|        2|2019-09-02|
+---------+----------+

Any help would be much appreciated.

MGJ-123
  • Did you try any approach so far? – abiratsis Jan 16 '20 at 10:58
  • I tried an approach where I create a new column that consists of a list of dates (using a UDF) and then use the explode command. I am new to PySpark and don't really know whether this is the best approach. Any thoughts? Many thanks – MGJ-123 Jan 16 '20 at 11:04

1 Answer

from datetime import timedelta
from pyspark.sql import functions as F
from pyspark.sql.functions import udf

# Create a sample data row.
df = spark.sql("""
select 'dev1' as device_id,
to_date('2020-01-06') as start,
to_date('2020-01-09') as end""")

# Define a UDF that returns a comma-separated string of all dates
# from start to end, inclusive.
@udf
def datelist(start, end):
    return ",".join(str(start + timedelta(days=x))
                    for x in range(1 + (end - start).days))

# Split the string into an array and explode it into one row per date.
df.select("device_id",
          F.explode(
              F.split(datelist(df["start"], df["end"]), ","))
          .alias("date")).show(10, False)

Sunny Shukla