I want to pivot a Spark DataFrame. I referred to the PySpark documentation, and based on the pivot
function, the clue is .groupBy('name').pivot('name', values=None)
. Here's my dataset:
In[75]: spDF.show()
Out[75]:
+-----------+-----------+
|customer_id| name|
+-----------+-----------+
| 25620| MCDonnalds|
| 25620| STARBUCKS|
| 25620| nan|
| 25620| nan|
| 25620| MCDonnalds|
| 25620| nan|
| 25620| MCDonnalds|
| 25620|DUNKINDONUT|
| 25620| LOTTERIA|
| 25620| nan|
| 25620| MCDonnalds|
| 25620|DUNKINDONUT|
| 25620|DUNKINDONUT|
| 25620| nan|
| 25620| nan|
| 25620| nan|
| 25620| nan|
| 25620| LOTTERIA|
| 25620| LOTTERIA|
| 25620| STARBUCKS|
+-----------+-----------+
only showing top 20 rows
And then I try to pivot the table on name:
In [96]:
spDF.groupBy('name').pivot('name', values=None)
Out[96]:
<pyspark.sql.group.GroupedData at 0x7f0ad03750f0>
And when I try to show the result:
In [98]:
spDF.groupBy('name').pivot('name', values=None).show()
Out[98]:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-98-94354082e956> in <module>()
----> 1 spDF.groupBy('name').pivot('name', values=None).show()
AttributeError: 'GroupedData' object has no attribute 'show'
I don't know why a 'GroupedData'
object can't be shown. What should I do to solve the issue?