
I need to read an iso-8859-1 encoded file, do some operations, then save it (with iso-8859-1 encoding). To test this, I'm loosely mimicking a test case I found in the Databricks CSV package: https://github.com/databricks/spark-csv/blob/master/src/test/scala/com/databricks/spark/csv/CsvSuite.scala

-- specifically: test("DSL test for iso-8859-1 encoded file")

val fileDF = spark.read.format("com.databricks.spark.csv")
  .option("header", "false")
  .option("charset", "iso-8859-1")
  .option("delimiter", "~")         // bogus - hopefully something not in the file, just want 1 record per line
  .load("s3://.../cars_iso-8859-1.csv")

fileDF.collect                            // I see the non-ASCII characters correctly
val selectedData = fileDF.select("_c0")   // just to show an operation
selectedData.write
  .format("com.databricks.spark.csv")
  .option("header", "false")
  .option("delimiter", "~")
  .option("charset", "iso-8859-1")
  .save("s3://.../carOutput8859")

This code runs without error, but it doesn't seem to honor the iso-8859-1 option on output. At a Linux prompt (after copying from S3 to local Linux):

file -i cars_iso-8859-1.csv 
cars_iso-8859-1.csv: text/plain; charset=iso-8859-1

file -i carOutput8859.csv 
carOutput8859.csv: text/plain; charset=utf-8

I'm just looking for some good examples of reading and writing non-UTF-8 files with Spark. At this point I have plenty of flexibility in the approach (it doesn't have to be a CSV reader). Any recommendations/examples?
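
If moving off the external spark-csv package is an option, the built-in CSV source in more recent Spark releases documents an `encoding` option for both read and write. A rough, untested sketch of that approach, reusing the same placeholder S3 paths as above:

val fileDF = spark.read
  .option("header", "false")
  .option("encoding", "iso-8859-1")   // assumption: honored on read by the built-in CSV source
  .option("sep", "~")
  .csv("s3://.../cars_iso-8859-1.csv")

fileDF.select("_c0").write
  .option("header", "false")
  .option("encoding", "iso-8859-1")   // assumption: honored on write in recent Spark versions
  .option("sep", "~")
  .csv("s3://.../carOutput8859")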

jduff1075
    I have run into this same issue and have found that support for this was never added. If it is any help, I use a workaround at the unix level: `iconv -c -futf8 -tl1 -o ` which converts a file from utf-8 to iso-8859-1. Of note: the '-c' parameter drops characters that have no iso-8859-1 equivalent, and there can be quite a few of them. – daniel May 30 '17 at 20:00
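
For reference, a JVM-side equivalent of that iconv step, as a rough sketch (the part-file names below are made up, and CodingErrorAction.IGNORE mirrors iconv's -c by dropping characters that iso-8859-1 cannot represent):

import java.nio.CharBuffer
import java.nio.charset.{CodingErrorAction, StandardCharsets}
import java.nio.file.{Files, Paths}

// Read the UTF-8 part file that Spark produced (hypothetical file name).
val text = new String(Files.readAllBytes(Paths.get("part-00000-utf8.csv")), StandardCharsets.UTF_8)

// ISO-8859-1 encoder that silently drops unmappable characters, like iconv -c.
val encoder = StandardCharsets.ISO_8859_1.newEncoder()
  .onUnmappableCharacter(CodingErrorAction.IGNORE)
  .onMalformedInput(CodingErrorAction.IGNORE)

val buf   = encoder.encode(CharBuffer.wrap(text))
val bytes = new Array[Byte](buf.remaining())
buf.get(bytes)

// Write the transcoded bytes back out (hypothetical file name).
Files.write(Paths.get("part-00000-latin1.csv"), bytes)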

0 Answers