I have the following DataFrame with some columns that contain arrays. (We are using Spark 1.6.)
+--------------------+--------------+------------------+--------------+--------------------+-------------+
|            UserName|          col1|              col2|          col3|                col4|         col5|
+--------------------+--------------+------------------+--------------+--------------------+-------------+
|                 foo|[Main, Indi...|[1777203, 1777203]|    [GBP, GBP]|            [CR, CR]|   [143, 143]|
+--------------------+--------------+------------------+--------------+--------------------+-------------+
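For reference, here is a minimal sketch that reproduces the frame in a spark-shell (the real data comes from elsewhere; the values are the ones shown above, with col1 holding [Main, Individual]):

import sqlContext.implicits._

val sourceDF = Seq(
  ("foo", Seq("Main", "Individual"), Seq(1777203, 1777203),
   Seq("GBP", "GBP"), Seq("CR", "CR"), Seq(143, 143))
).toDF("UserName", "col1", "col2", "col3", "col4", "col5")

// Register it so it can be queried with sqlContext.sql below
sourceDF.registerTempTable("sourceDF")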
And I expect the following result:
+--------------------+--------------+------------------+--------------+--------------------+-------------+
|            UserName|        explod|           explod2|       explod3|             explod4|      explod5|
+--------------------+--------------+------------------+--------------+--------------------+-------------+
|                 foo|          Main|           1777203|           GBP|                  CR|          143|
|                 foo|    Individual|           1777203|           GBP|                  CR|          143|
+--------------------+--------------+------------------+--------------+--------------------+-------------+
I have tried a LATERAL VIEW:

sqlContext.sql("""
  SELECT `UserName`, explod, explod2, explod3, explod4, explod5
  FROM sourceDF
  LATERAL VIEW explode(`col1`) sourceDF AS explod
  LATERAL VIEW explode(`col2`) explod AS explod2
  LATERAL VIEW explode(`col3`) explod2 AS explod3
  LATERAL VIEW explode(`col4`) explod3 AS explod4
  LATERAL VIEW explode(`col5`) explod4 AS explod5
""")
But I get a Cartesian product with a lot of duplicates: each LATERAL VIEW is applied to every row produced by the previous one, so my five length-2 arrays yield 32 rows instead of the 2 I expect. I have tried the same thing, exploding all the columns with a withColumn approach, but I still get the duplicates:
.withColumn("col1", explode($"col1"))...
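Spelled out, the chain looks like this (naming the result exploded so I can refer to it below; the $ syntax uses the implicits imported above):

import org.apache.spark.sql.functions.explode

// Each explode multiplies the row count by the length of the array
// it unpacks, which is exactly where the duplicates come from.
val exploded = sourceDF
  .withColumn("col1", explode($"col1"))
  .withColumn("col2", explode($"col2"))
  .withColumn("col3", explode($"col3"))
  .withColumn("col4", explode($"col4"))
  .withColumn("col5", explode($"col5"))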
Of course I can apply distinct to the final DataFrame, but that is not an elegant solution. Is there any way to explode the columns without getting all these duplicates?
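For completeness, the workaround I am using right now, on the exploded frame from the snippet above:

// Works, but only because the duplicate rows are exact copies of each other
val deduped = exploded.distinct()
deduped.show()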
Thanks!