For the input dataframe:
+-------+-----+-------+------+--------------+-------+----+
|Closing| Flow|Opening|Period| RowKey|RowKey2|Year|
+-------+-----+-------+------+--------------+-------+----+
| -2.11|-2.11| 0.0| 01|2016-01-1200-A| 1200-A|2016|
| -1.11|-1.11| 0.0| 01|2016-01-1200-B| 1200-B|2016|
| -1.0| -1.0| 0.0| 04| 2016-04-2200| 2200|2016|
| -1.0| -1.0| 0.0| 04| 2016-04-3000| 3000|2016|
| -1.0| -1.0| 0.0| 05|2016-05-1200-C| 1200-C|2016|
| 0.0| 1.0| -1.0| 05| 2016-05-3000| 3000|2016|
| 0.0| 1.0| -1.0| 08| 2016-08-2200| 2200|2016|
| 1.0| 1.0| 0.0| 09| 2016-09-2200| 2200|2016|
| -2.0| -1.0| -1.0| 12|2016-12-1200-C| 1200-C|2016|
| 100.0|100.0| 0.0| 12| 2016-12-4000| 4000|2016|
+-------+-----+-------+------+--------------+-------+----+
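For reproducibility, the dataframe can be rebuilt roughly like this (a sketch; I am assuming doubles for the balance columns and strings for Period, RowKey, RowKey2 and Year):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [
        (-2.11, -2.11, 0.0, "01", "2016-01-1200-A", "1200-A", "2016"),
        (-1.11, -1.11, 0.0, "01", "2016-01-1200-B", "1200-B", "2016"),
        (-1.0, -1.0, 0.0, "04", "2016-04-2200", "2200", "2016"),
        (-1.0, -1.0, 0.0, "04", "2016-04-3000", "3000", "2016"),
        (-1.0, -1.0, 0.0, "05", "2016-05-1200-C", "1200-C", "2016"),
        (0.0, 1.0, -1.0, "05", "2016-05-3000", "3000", "2016"),
        (0.0, 1.0, -1.0, "08", "2016-08-2200", "2200", "2016"),
        (1.0, 1.0, 0.0, "09", "2016-09-2200", "2200", "2016"),
        (-2.0, -1.0, -1.0, "12", "2016-12-1200-C", "1200-C", "2016"),
        (100.0, 100.0, 0.0, "12", "2016-12-4000", "4000", "2016"),
    ],
    ["Closing", "Flow", "Opening", "Period", "RowKey", "RowKey2", "Year"],
)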
I would like to apply the functions below (where periods is the list of Period values to produce, one per month; how I build it is sketched right after the code):
from pyspark.sql import Row

def insertPeriod(row, period):
    # Return a copy of the row with its Period replaced.
    row = row.asDict()
    row["Period"] = period
    return Row(**row)

def filterOutFromPartion(aggdata, periods):
    # Materialize the input first: mapPartitions hands over a one-shot
    # iterator, and we scan the data once per period.
    aggdata = list(aggdata)
    output = []
    for period in periods:
        iterator = (item for item in aggdata if item["Period"] == period)
        try:
            # Use the row observed for this period if there is one ...
            found = next(iterator)
            output.append(insertPeriod(found, period))
        except StopIteration:
            # ... otherwise carry the last emitted row forward (if any).
            if len(output) == 0:
                continue
            else:
                temp = output[-1]
                output.append(insertPeriod(temp, period))
    return iter(output)
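Here periodsList spans all twelve periods, since the desired output below includes periods that never occur in the input (02, 03, 06, ...). A sketch of how I build it (zero-padding assumed to match the Period column):

periodsList = ["%02d" % m for m in range(1, 13)]  # ["01", "02", ..., "12"]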
So that the result would be:
+--------------+----+------+-------------+-----+--------------+--------------+
| RowKey|Year|Period|AccountNumber| Flow|OpeningBalance|ClosingBalance|
+--------------+----+------+-------------+-----+--------------+--------------+
|2016-01-1200-A|2016| 1| 1200|-2.11| 0| -2.11|
|2016-01-1200-B|2016| 1| 1200|-1.11| 0| -1.11|
|2016-02-1200-A|2016| 2| 1200| 0| -2.11| -2.11|
|2016-02-1200-B|2016| 2| 1200| 0| -1.11| -1.11|
|2016-03-1200-A|2016| 3| 1200| 0| -2.11| -2.11|
|2016-03-1200-B|2016| 3| 1200| 0| -1.11| -1.11|
|2016-04-1200-A|2016| 4| 1200| 0| -2.11| -2.11|
|2016-04-1200-B|2016| 4| 1200| 0| -1.11| -1.11|
| 2016-04-2200-|2016| 4| 2200| -1| 0| -1|
| 2016-04-3000-|2016| 4| 3000| -1| 0| -1|
|2016-05-1200-A|2016| 5| 1200| 0| -2.11| -2.11|
|2016-05-1200-B|2016| 5| 1200| 0| -1.11| -1.11|
|2016-05-1200-C|2016| 5| 1200| -1| 0| -1|
| 2016-05-2200-|2016| 5| 2200| 0| -1| -1|
| 2016-05-3000-|2016| 5| 3000| 1| -1| 0|
|2016-06-1200-A|2016| 6| 1200| 0| -2.11| -2.11|
|2016-06-1200-B|2016| 6| 1200| 0| -1.11| -1.11|
|2016-06-1200-C|2016| 6| 1200| 0| -1| -1|
| 2016-06-2200-|2016| 6| 2200| 0| -1| -1|
| 2016-06-3000-|2016| 6| 3000| 0| 0| 0|
+--------------+----+------+-------------+-----+--------------+--------------+
only showing top 20 rows
Basically I want a map operation over the RDD grouped by RowKey2, and where information for a period is missing, reuse the info from the last available period (if one exists); roughly the shape sketched below.
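As a sketch of the intent (I am assuming keyBy plus groupByKey is the right grouping idiom here, which is part of what I am unsure about):

grouped = df.rdd.keyBy(lambda row: row["RowKey2"]).groupByKey()
filled = grouped.flatMap(lambda kv: filterOutFromPartion(kv[1], periodsList))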
What I actually tried is:
df.rdd.partitionBy("RowKey2")\
.mapPartitions(lambda x: filterOutFromPartion(x, periodsList))\
.collect()
which raises:
Py4JError: An error occurred while calling None.org.apache.spark.api.python.PythonPartitioner. Trace: py4j.Py4JException: Constructor org.apache.spark.api.python.PythonPartitioner([class java.lang.String, class java.lang.Long]) does not exist
If I skip the partitioning by key (the call shown below), I get results only for the first two RowKeys, each with all 12 periods as expected. Can someone help me out here?
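That is, the same call with the partitionBy step removed:

df.rdd.mapPartitions(lambda x: filterOutFromPartion(x, periodsList)).collect()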
Regards, Mike