I'm having an issue with checkpointing in production: Spark fails because it can't find a file in the _spark_metadata folder:
18/05/04 16:59:55 INFO FileStreamSinkLog: Set the compact interval to 10 [defaultCompactInterval: 10]
18/05/04 16:59:55 INFO DelegatingS3FileSystem: Getting file status for 's3u://data-bucket-prod/data/internal/_spark_metadata/19.compact'
18/05/04 16:59:55 ERROR FileFormatWriter: Aborting job null.
java.lang.IllegalStateException: s3u://data-bucket-prod/data/internal/_spark_metadata/19.compact doesn't exist when compacting batch 29 (compactInterval: 10)
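For context, here is a sketch (my reading of the error message, not Spark's actual source) of why batch 29 needs 19.compact: with compactInterval 10, every 10th batch (9, 19, 29, ...) writes a .compact file that folds in the previous compact file plus the delta files written since it, so compacting batch 29 reads 19.compact and deltas 20 through 28:

```python
def files_read_when_compacting(batch_id, compact_interval=10):
    """Sketch: metadata files the sink would need to produce
    <batch_id>.compact -- the previous compact file plus every
    per-batch delta written since it. (Assumed behavior, for
    illustration only.)"""
    prev_compact = batch_id - compact_interval  # e.g. 29 - 10 = 19
    files = []
    if prev_compact >= 0:
        files.append(f"{prev_compact}.compact")
    files.extend(str(b) for b in range(prev_compact + 1, batch_id))
    return files

# Batch 29 with compactInterval 10 depends on 19.compact:
print(files_read_when_compacting(29))
# ['19.compact', '20', '21', '22', '23', '24', '25', '26', '27', '28']
```

This is why a single missing 19.compact blocks every subsequent compaction, not just batch 29.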
A similar question has already been asked, but there is no solution so far.
In the checkpoint folder I can see that batch 29 is not committed yet. Can I remove something from the checkpoint's sources, state, and/or offsets directories to prevent Spark from failing because of the missing _spark_metadata/19.compact file?