I am trying to write a parquet file out as CSV using df.write.csv,
but the output CSV file gets an auto-generated name (part-0000-...). How can I rename it?
I searched and found that it can be done in Scala with the following code:
import org.apache.hadoop.fs._
val fs = FileSystem.get(sc.hadoopConfiguration)
fs.rename(new Path("csvDirectory/data.csv/part-0000"), new Path("csvDirectory/newData.csv"))
How can the same thing be done in PySpark?
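A sketch of one possible approach, in case it helps frame answers: since Spark writes the CSV into a directory containing a single `part-0000...` file (assuming the DataFrame was coalesced to one partition and written to a local path), plain Python can locate and rename it. The function name `rename_spark_output` and the paths are my own, not a Spark API; for HDFS paths the Scala snippet above would instead be translated through PySpark's Py4J gateway, shown in the trailing comment (note `_jvm` and `_jsc` are internal attributes).

```python
import glob
import os
import shutil

# Hedged sketch: assumes something like
#   df.coalesce(1).write.csv("csvDirectory/data.csv")
# produced a directory holding exactly one part-0000... file,
# and that the path is on the local filesystem.
def rename_spark_output(output_dir, target_path):
    # locate the single part file Spark produced
    part_files = glob.glob(os.path.join(output_dir, "part-*"))
    if len(part_files) != 1:
        raise ValueError(f"expected one part file, found {len(part_files)}")
    # move it out of the directory under the desired name
    shutil.move(part_files[0], target_path)

# For HDFS (or any Hadoop-backed path), the Scala code translates almost
# line for line via the JVM gateway; this relies on internal attributes:
#   Path = spark._jvm.org.apache.hadoop.fs.Path
#   fs = spark._jvm.org.apache.hadoop.fs.FileSystem.get(
#       spark._jsc.hadoopConfiguration())
#   fs.rename(Path("csvDirectory/data.csv/part-0000"),
#             Path("csvDirectory/newData.csv"))
```

The local-filesystem helper sidesteps Hadoop entirely, which is fine for single-machine runs but not for files that live on HDFS.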