Hi, I have files in a directory: Folder/1.csv, Folder/2.csv, Folder/3.csv.
I want to read all these files into a PySpark DataFrame/RDD, change some column values, and write the result back to the same files. I have tried it, but Spark creates new files in the folder (part-00000 and so on), whereas I want the modified data written back into 1.csv, 2.csv, and 3.csv themselves.
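Roughly what I tried so far (the column name price and the doubling are just examples; my real transformation is different):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.read.csv("Folder/1.csv", header=True, inferSchema=True)
df = df.withColumn("price", F.col("price") * 2)  # example column change

# This produces Folder/1_out/part-00000-....csv
# instead of overwriting Folder/1.csv itself.
df.write.mode("overwrite").csv("Folder/1_out", header=True)
```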
How can I achieve this? For example, by looping over the files and loading each one into its own DataFrame, or with an array of paths, or some other logic?
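Is a loop like the following the right direction? This is only my guess at a workaround: write each DataFrame to a temporary directory with a single partition, then move the part file back over the original file name (again, the price column is just a placeholder for my real change):

```python
import glob
import os
import shutil
import tempfile

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

for path in glob.glob("Folder/*.csv"):
    df = spark.read.csv(path, header=True, inferSchema=True)
    df = df.withColumn("price", F.col("price") * 2)  # example modification

    # Write to a temp directory as a single part file, then rename
    # that part file back to the original path (e.g. Folder/1.csv).
    tmp_dir = tempfile.mkdtemp()
    df.coalesce(1).write.mode("overwrite").csv(tmp_dir, header=True)
    part_file = glob.glob(os.path.join(tmp_dir, "part-*.csv"))[0]
    shutil.move(part_file, path)
    shutil.rmtree(tmp_dir)
```

Is this rename-the-part-file approach the usual way, or is there a cleaner way to make Spark write to a plain file name?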