
I have data like the following:

year    name    percent     sex
1880    John    0.081541    boy
1881    William 0.080511    boy
1881    John    0.050057    boy

I need to group by and count using different columns:

df_year = df.groupby('year').count()
df_name = df.groupby('name').count()
df_sex = df.groupby('sex').count()

Then I have to create a Window to get the top-3 rows for each column:

from pyspark.sql import Window, functions as func
from pyspark.sql.functions import col

window = Window.partitionBy('year').orderBy(col("count").desc())
top4_res = (df_year.withColumn('topn', func.row_number().over(window))
                   .filter(col('topn') <= 4).repartition(1))

Suppose I have hundreds of columns to run this group-by, count, and top-3 operation on.

Can I do it all at once?

Or is there a better way to do it?

RenJie

2 Answers


I am not sure if this will meet your requirement, but if you are okay with a single dataframe, I think it can give you a start (let me know if otherwise). You can stack these 3 columns (or more) and then group by and take the count:

from pyspark.sql import functions as F

cols = ['year','name','sex']
e = f"""stack({len(cols)},{','.join(map(','.join,
             (zip([f'"{i}"' for i in cols],cols))))}) as (col,val)"""
# e is now: stack(3,"year",year,"name",name,"sex",sex) as (col,val)

(df.select(*[F.col(i).cast('string') for i in cols]).selectExpr(e)
 .groupBy('col', 'val').agg(F.count("col").alias("Counts")).orderBy('col')).show()

+----+-------+------+
| col|    val|Counts|
+----+-------+------+
|name|   John|     2|
|name|William|     1|
| sex|    boy|     3|
|year|   1881|     2|
|year|   1880|     1|
+----+-------+------+

If you want a wide form you can also pivot, but I think the long form would be more helpful:

(df.select(*[F.col(i).cast('string') for i in cols]).selectExpr(e)
 .groupBy('col').pivot('val').agg(F.count('val')).show())

+----+----+----+----+-------+----+
| col|1880|1881|John|William| boy|
+----+----+----+----+-------+----+
|name|null|null|   2|      1|null|
|year|   1|   2|null|   null|null|
| sex|null|null|null|   null|   3|
+----+----+----+----+-------+----+
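
Tying this back to the top-3 requirement in the question: once everything is in this long form, a single window partitioned by col ranks the values of all the columns in one pass. Below is a minimal sketch building on the cols and e defined above; the names long_counts, w, and top3 are just placeholders I introduced:

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Long-form counts as above: one row per (column, value) pair with its count.
long_counts = (df.select(*[F.col(i).cast('string') for i in cols])
                 .selectExpr(e)
                 .groupBy('col', 'val')
                 .agg(F.count('col').alias('Counts')))

# One window partitioned by the original column name and ordered by the count
# ranks the values inside each column; keeping rank <= 3 gives the top-3 per column.
w = Window.partitionBy('col').orderBy(F.col('Counts').desc())
top3 = (long_counts
        .withColumn('topn', F.row_number().over(w))
        .filter(F.col('topn') <= 3)
        .drop('topn'))

top3.show()

Note that row_number() breaks ties arbitrarily; switch to rank() if you want to keep all tied values.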

anky

If you want the top-n values (those with the biggest counts) for each column, this should work:

from pyspark.sql.functions import *

columns_to_check = [ 'year', 'name' ]
n = 4

for c in columns_to_check:
  # returns a dataframe
  x = df.groupBy(c).count().sort(col("count").desc()).limit(n)
  x.show()

  # returns a list of rows
  x = df.groupBy(c).count().sort(col("count").desc()).take(n)
  print(x)
matkurek
  • Thanks, but can this be done in one pass without the 'for' loop? Running groupBy() n times seems very slow. – RenJie May 29 '20 at 09:22
  • If you'd like to do the grouping/partitioning part in a single statement, you could generate a long SQL query with a count for each column partition and then write n queries for the top rows against the distinct result for each column (see the sketch below). Not sure if it would be faster. – matkurek May 29 '20 at 09:57
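
For what it's worth, here is a rough sketch of how I read the single-statement idea from the comment above. The names cols_to_check, counted, top_rows, and n are placeholders of mine, and I have not checked whether this is actually faster than the per-column groupBy loop:

from pyspark.sql import functions as F
from pyspark.sql.window import Window

cols_to_check = ['year', 'name', 'sex']
n = 3  # how many top values to keep per column

# One statement attaches a window count for every column of interest.
counted = df.select(
    *df.columns,
    *[F.count('*').over(Window.partitionBy(c)).alias(f'{c}_count') for c in cols_to_check]
)

# Then one small query per column against the distinct (value, count) pairs.
top_rows = {
    c: (counted.select(c, f'{c}_count').distinct()
               .orderBy(F.col(f'{c}_count').desc())
               .limit(n))
    for c in cols_to_check
}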