
I have a grouped pyspark.pandas DataFrame, 'groups', and I'm trying to iterate over the groups the same way you can in pandas:

import pyspark.pandas as ps

dataframe = ps.read_excel("data.xlsx")
groups = dataframe.groupby(['col1', 'col2'])
for name, group in groups:
    print(name)
    ...

I get the following error:

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[29], line 1
----> 1 for name, group in groups:
      2     print(name)

File /opt/spark/python/pyspark/pandas/groupby.py:2806, in DataFrameGroupBy.__getitem__(self, item)
   2803 def __getitem__(self, item: Any) -> GroupBy:
   2804     if self._as_index and is_name_like_value(item):
   2805         return SeriesGroupBy(
-> 2806             self._psdf._psser_for(item if is_name_like_tuple(item) else (item,)),
   2807             self._groupkeys,
   2808             dropna=self._dropna,
   2809         )
   2810     else:
   2811         if is_name_like_tuple(item):

File /opt/spark/python/pyspark/pandas/frame.py:699, in DataFrame._psser_for(self, label)
    672 def _psser_for(self, label: Label) -> "Series":
    673     """
    674     Create Series with a proper column label.
    675 
   (...)
    697     Name: id, dtype: int64
    698     """
--> 699     return self._pssers[label]

KeyError: (0,)

Is there any way to do this, or a workaround?

elj96
    Please provide a small, reproducible example alongside your desired output. Perhaps there's another way in pyspark of doing what you want to do instead of using a for loop. – Ric S Apr 03 '23 at 15:19

1 Answer


GroupBy doesn't work the same way in pyspark.pandas as it does in pandas: pyspark.pandas groups don't support iteration, so the for loop falls back to calling __getitem__ with integer indices, which is why it fails with KeyError: (0,). One workaround is to convert to pandas, iterate over the groups there, and convert back to pyspark.pandas afterwards. It's not ideal if you're working with a large dataset, since the whole DataFrame is collected to the driver, but it works.

import pyspark.pandas as ps

dataframe = ps.read_excel("data.xlsx")
pdf = dataframe.to_pandas()  # collect to the driver as a pandas DataFrame
groups = pdf.groupby(['col1', 'col2'])
for name, group in groups:
    print(name)
    ps_group = ps.from_pandas(group)  # convert a group back to pyspark.pandas if needed
    ...
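
If your per-group logic can be written as a function, a more scalable option (in the spirit of the comment above) is GroupBy.apply, which pyspark.pandas does support: the function is called once per group with that group as a plain pandas DataFrame, and the data doesn't have to be collected to the driver first. A minimal sketch, where process_group is a hypothetical placeholder for whatever you do per group:

import pyspark.pandas as ps

dataframe = ps.read_excel("data.xlsx")

def process_group(pdf):
    # pdf is one group as a plain pandas DataFrame; return a pandas DataFrame
    pdf = pdf.copy()
    pdf['group_size'] = len(pdf)  # example per-group computation
    return pdf

result = dataframe.groupby(['col1', 'col2']).apply(process_group)

Without a return-type annotation on the function, pyspark.pandas infers the output schema by first running the function against a small sample of the data, so annotating the return type can avoid that extra pass on large inputs.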
tamarajqawasmeh