Using Spark 1.6.1, I need to fetch the distinct values of a column and then perform a specific transformation for each of them. The column contains more than 50 million records and can grow larger.
I understand that distinct.collect() brings all the values back to the driver program. Currently I am performing this task as below; is there a better approach?

 import org.apache.spark.storage.StorageLevel
 import sqlContext.implicits._
 preProcessedData.persist(StorageLevel.MEMORY_AND_DISK_2)

 preProcessedData.select(ApplicationId).distinct.collect().foreach(x => {
   val applicationId = x.getAs[String](ApplicationId)
   val selectedApplicationData = preProcessedData.filter($"$ApplicationId" === applicationId)
   // DO SOME TASK PER applicationId
 })

 preProcessedData.unpersist()  

4 Answers

Well, to obtain all the different values in a DataFrame you can use distinct. As you can see in the documentation, that method returns another DataFrame. After that, you can create a UDF to transform each record.

For example:

import org.apache.spark.sql.functions.{col, udf}

val df = sc.parallelize(Array((1, 2), (3, 4), (1, 6))).toDF("age", "salary")

// Obtain all the different values. If you show it you should see only {1, 3}
val distinctValuesDF = df.select(df("age")).distinct

// Define your UDF. In this case it is a simple function, but it can get more complicated.
val myTransformationUDF = udf((value: Int) => value / 10)

// Run that transformation "over" your DataFrame
val afterTransformationDF = distinctValuesDF.select(myTransformationUDF(col("age")))
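
To inspect the result you can alias the UDF output and call show (a minimal sketch; the column name transformed_age is hypothetical). Note that with Int inputs the division truncates, so both distinct ages {1, 3} print as 0; define the UDF over Double if you need fractional results.

// Give the UDF output a readable column name and print it
distinctValuesDF
  .select(myTransformationUDF(col("age")).alias("transformed_age"))
  .show()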

In PySpark, try this:

df.select('col_name').distinct().show()

This answer demonstrates how to transform data with Spark native functions, which are generally preferable to UDFs. It also demonstrates how dropDuplicates is more suitable than distinct for certain queries.

Suppose you have this DataFrame:

+-------+-------------+
|country|    continent|
+-------+-------------+
|  china|         asia|
| brazil|south america|
| france|       europe|
|  china|         asia|
+-------+-------------+

Here's how to take all the distinct countries and run a transformation:

df
  .select("country")
  .distinct
  .withColumn("country", concat(col("country"), lit(" is fun!")))
  .show()
+--------------+
|       country|
+--------------+
|brazil is fun!|
|france is fun!|
| china is fun!|
+--------------+

You can use dropDuplicates instead of distinct if you don't want to lose the continent information:

df
  .dropDuplicates("country")
  .withColumn("description", concat(col("country"), lit(" is a country in "), col("continent")))
  .show(false)
+-------+-------------+------------------------------------+
|country|continent    |description                         |
+-------+-------------+------------------------------------+
|brazil |south america|brazil is a country in south america|
|france |europe       |france is a country in europe       |
|china  |asia         |china is a country in asia          |
+-------+-------------+------------------------------------+

See here for more information about filtering DataFrames and here for more information on dropping duplicates.

Ultimately, you'll want to wrap your transformation logic in custom transformations that can be chained with the Dataset#transform method.
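
For example (a minimal sketch; the helper names extractDistinctCountries and withFunDescription are hypothetical), a custom transformation is simply a function from DataFrame to DataFrame, which transform lets you chain without nesting calls:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, concat, lit}

// Hypothetical custom transformations: each takes a DataFrame and returns a DataFrame
def extractDistinctCountries(df: DataFrame): DataFrame =
  df.select("country").distinct

def withFunDescription(df: DataFrame): DataFrame =
  df.withColumn("country", concat(col("country"), lit(" is fun!")))

// Chain the transformations with Dataset#transform
df
  .transform(extractDistinctCountries)
  .transform(withFunDescription)
  .show()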

  • dropDuplicates allows you to keep all the column information in the DataFrame while performing the distinct only on the column(s) specified in the dropDuplicates call. – Nikunj Kakadiya Jan 19 '21 at 09:08

df = df.select("column1", "column2", ..., "columnN").distinct().collect()

Before the collect() you can insert a call such as toJSON() if you want the result in JSON format.
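
In Scala, for instance, that could look like the following (a minimal sketch; the column names are placeholders):

// Collect the distinct rows serialized as JSON strings
val jsonRows = df
  .select("column1", "column2")
  .distinct
  .toJSON
  .collect()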
