EDIT 2: Changed my Go script to read the checkpoint sources instead (new script).
EDIT: I'm trying this out currently, and it might be deleting files before they are processed. I'm still looking for a better solution and investigating this method.
I solved this temporarily with a Go script. Every 10 seconds it scans the checkpoint folder I configured in Spark, parses the files there to figure out which input files Spark has already finished processing, and deletes those files if they still exist.
However, this relies on Spark's checkpoint file structure and JSON representation, which are not documented and could change at any point. I also have not gone through the Spark source code to confirm that the files I am reading (checkpoint/sources/0/...) are the real source of truth for processed files. It seems to be working at the moment, though, and it beats doing this manually.
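For reference, here is a minimal sketch of the cleanup loop in Go. It assumes (as noted above, without confirmation from the Spark source) that each batch file under the source log starts with a version header line followed by one JSON object per line, each containing a `path` field; the checkpoint directory and the 10-second interval are from my setup and would need to match your query's `checkpointLocation`.

```go
package main

import (
	"encoding/json"
	"log"
	"net/url"
	"os"
	"path/filepath"
	"strings"
	"time"
)

// fileEntry mirrors the fields I've observed in Spark's source log files.
// The format is undocumented and may change between Spark versions.
type fileEntry struct {
	Path string `json:"path"`
}

// processedPaths parses one batch file from the source log. The files
// appear to start with a version header line ("v1") followed by one
// JSON object per line, each describing an input file Spark has read.
func processedPaths(data []byte) []string {
	var paths []string
	for i, line := range strings.Split(string(data), "\n") {
		if i == 0 || strings.TrimSpace(line) == "" {
			continue // skip the version header and blank lines
		}
		var e fileEntry
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			continue // skip lines that don't parse as a JSON entry
		}
		// Paths are recorded as file:// URIs; strip the scheme so
		// os.Remove gets a plain filesystem path.
		if u, err := url.Parse(e.Path); err == nil && u.Path != "" {
			paths = append(paths, u.Path)
		}
	}
	return paths
}

// sweep reads every batch file under the source log directory and
// deletes the input files they list, if those files still exist.
func sweep(srcLogDir string) {
	batches, err := filepath.Glob(filepath.Join(srcLogDir, "*"))
	if err != nil {
		log.Println(err)
		return
	}
	for _, b := range batches {
		data, err := os.ReadFile(b)
		if err != nil {
			continue
		}
		for _, p := range processedPaths(data) {
			os.Remove(p) // already-deleted files are fine to ignore
		}
	}
}

func main() {
	// Hypothetical checkpoint location; match it to the
	// checkpointLocation option set on the Spark query.
	for {
		sweep("checkpoint/sources/0")
		time.Sleep(10 * time.Second)
	}
}
```

Since this races against Spark's own writes, it is only a stopgap; a safer long-term option is the file source's built-in cleanup (the `cleanSource` option in newer Spark versions), which avoids depending on the checkpoint layout at all.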