
I have a DataFrame from a SQL source which looks like:

User(id: Long, fname: String, lname: String, country: String)

[1, Fname1, Lname1, Belarus]
[2, Fname2, Lname2, Belgium]
[3, Fname3, Lname3, Austria]
[4, Fname4, Lname4, Australia]

I want to partition this data and write it to CSV files, where each partition is based on the initial letter of the country. So Belarus and Belgium should end up in one output file, and Austria and Australia in another.

jdk2588

2 Answers


Here is what you can do

import org.apache.spark.sql.functions._
import spark.implicits._ // needed for toDF and the $ column syntax

// create a DataFrame with demo data
val df = spark.sparkContext.parallelize(Seq(
  (1, "Fname1", "Lname1", "Belarus"),
  (2, "Fname2", "Lname2", "Belgium"),
  (3, "Fname3", "Lname3", "Austria"),
  (4, "Fname4", "Lname4", "Australia")
)).toDF("id", "fname", "lname", "country")

// create a new column holding the first letter of the country
val result = df.withColumn("countryFirst", split($"country", "")(0))

// save the data partitioned by the first letter of the country
result.write.partitionBy("countryFirst").format("com.databricks.spark.csv").save("outputpath")

Edited: You can also use substring, which can improve performance, as suggested by Raphael:

substring(Column str, int pos, int len): the substring starts at pos and is of length len when str is of String type, or it returns the slice of the byte array that starts at pos and is of length len when str is of Binary type.

val result = df.withColumn("firstCountry", substring($"country",1,1))

and then use partitionBy with write.
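Putting the two steps together, the substring variant would look roughly like this (the column name firstCountry and the output path are placeholders, not from the question):

```scala
import org.apache.spark.sql.functions._

// derive the partitioning key with substring instead of split
val result = df.withColumn("firstCountry", substring($"country", 1, 1))

// writes one directory per first letter, e.g. firstCountry=A/, firstCountry=B/
result.write
  .partitionBy("firstCountry")
  .format("com.databricks.spark.csv")
  .save("outputpath")
```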

Hope this solves your problem!

koiralo
  • Adding to the question, does df.withColumn have performance penalties, or could it be done in a more effective manner? – jdk2588 Jul 07 '17 at 10:25
  • 1
    You could also use spark's `substring` function instead of `split`, I think thats more readable – Raphael Roth Jul 07 '17 at 19:20
  • 1
    can we do this with multiple columns? – user482963 Oct 17 '17 at 13:25
  • 2
    Doesn't this add an extra column called "countryFirst" to the output data? Is there a way to not have that column in the output data but still partition data by the "countryFirst column"? A naive approach is to iterate over distinct values of "countryFirst" and write filtered data per distinct value of "countryFirst". This way, you could avoid writing the extra column in the output. I was hoping to do better than this. – Omkar Neogi Jan 27 '20 at 22:46

One way to solve this would be to first create a column containing only the first letter of each country. Having done that, you can use partitionBy to save each partition to separate files.

dataFrame.write.partitionBy("column").format("com.databricks.spark.csv").save("/path/to/dir/")
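A minimal sketch of both steps for this answer (the helper column name countryFirst and the path are illustrative assumptions):

```scala
import org.apache.spark.sql.functions._

// step 1: add a helper column with each country's first letter
val withKey = dataFrame.withColumn("countryFirst", substring($"country", 1, 1))

// step 2: partition the output by that helper column
withKey.write
  .partitionBy("countryFirst")
  .format("com.databricks.spark.csv")
  .save("/path/to/dir/")
```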
Shaido
  • This will create partitions on the column values, so we will have separate files for Belarus and Belgium, not one file. – jdk2588 Jul 07 '17 at 07:47
  • 1
    Yes, as I mentioned you need to first create a separate column containing the countries first letter. Then use `partitionBy` on that column – Shaido Jul 07 '17 at 07:53