
I have a PySpark DataFrame with two columns:

+---+----+
| Id|Rank|
+---+----+
|  a|   5|
|  b|   7|
|  c|   8|
|  d|   1|
+---+----+

For each row, I'm looking to replace the value in the Id column with "other" if the Rank column is larger than 5.

If I use pseudocode to explain:

For row in df:
  if row.Rank > 5:
     then replace(row.Id, "other")

The result should look like this:

+-----+----+
|   Id|Rank|
+-----+----+
|    a|   5|
|other|   7|
|other|   8|
|    d|   1|
+-----+----+

Any clue how to achieve this? Thanks!!!


To create this DataFrame:

df = spark.createDataFrame([('a', 5), ('b', 7), ('c', 8), ('d', 1)], ['Id', 'Rank'])

2 Answers


You can use when and otherwise like this -

from pyspark.sql.functions import col, when

df \
    .withColumn('Id_New', when(df.Rank <= 5, df.Id).otherwise('other')) \
    .drop(df.Id) \
    .select(col('Id_New').alias('Id'), col('Rank')) \
    .show()

this gives output as -

+-----+----+
|   Id|Rank|
+-----+----+
|    a|   5|
|other|   7|
|other|   8|
|    d|   1|
+-----+----+
  • nice one @Pushkr! – titipata May 16 '17 at 02:13
  • @titiro89 Yours is a clear solution to explain the usage of RDD and map! Thanks! It works on this exemplar, but on my real data set the "a = df.rdd" operation incurred a bunch of tasks and failed at last. Not sure if it's expensive to change from df to RDD. – Yuehan Lyu May 16 '17 at 09:41

Starting with @Pushkr's solution, couldn't you just use the following?

from pyspark.sql.functions import when

df.withColumn('Id', when(df.Rank <= 5, df.Id).otherwise('other')).show()