There will be an S3 source containing zip files, and each zip file will contain multiple PDF and XML files (e.g. 100 PDFs and 100 XML files), where each XML file holds data about its PDF. Each zip file also contains a summary.xml listing the XML+PDF file names. The batch needs to read each PDF file and its associated XML file and push them to a REST service/DB.
As stated there, I have an uncompress tasklet, then implemented a MultiResourceItemReader/StaxEventItemReader and made the file names the items for further processing. Now I have questions about error handling for this setup.
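For reference, my current setup looks roughly like this (bean names, the `FileEntry`/`Payload` types, and the chunk size are simplified placeholders, not my real config):

```java
// Sketch of the two-step job: unzip, then read/process/write the extracted files.
@Bean
public Job importJob(JobBuilderFactory jobs, StepBuilderFactory steps,
                     Tasklet uncompressTasklet,
                     MultiResourceItemReader<FileEntry> reader,   // wraps a StaxEventItemReader delegate
                     ItemProcessor<FileEntry, Payload> processor,
                     ItemWriter<Payload> writer) {
    Step unzip = steps.get("unzip")
            .tasklet(uncompressTasklet)        // downloads the zip from S3 and extracts it
            .build();
    Step process = steps.get("process")
            .<FileEntry, Payload>chunk(10)
            .reader(reader)                    // iterates over the extracted XML resources
            .processor(processor)
            .writer(writer)
            .build();
    return jobs.get("importJob").start(unzip).next(process).build();
}
```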
If there is an error in one of the processors, I plan to move the files to an /error directory (in the skip listener) so they can be processed the next day. How can I reuse the existing processor, but with a different reader that reads only from the error directory?
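Something like this skip listener is what I have in mind; the error-directory path and the `FileEntry` accessors are simplified assumptions about my model:

```java
// Sketch: when an item is skipped in the processor, move its PDF+XML pair
// aside so a next-day run can pick them up from the error directory.
public class MoveToErrorDirSkipListener implements SkipListener<FileEntry, Payload> {

    private static final Path ERROR_DIR = Paths.get("/data/error"); // assumed location

    @Override
    public void onSkipInProcess(FileEntry item, Throwable t) {
        try {
            Files.createDirectories(ERROR_DIR);
            Files.move(item.getXmlPath(),
                       ERROR_DIR.resolve(item.getXmlPath().getFileName()),
                       StandardCopyOption.REPLACE_EXISTING);
            Files.move(item.getPdfPath(),
                       ERROR_DIR.resolve(item.getPdfPath().getFileName()),
                       StandardCopyOption.REPLACE_EXISTING);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    @Override public void onSkipInRead(Throwable t) { }
    @Override public void onSkipInWrite(Payload item, Throwable t) { }
}
```

My thought is that the next-day job could then reuse the same processor bean, with a second `MultiResourceItemReader` whose resources are resolved from `file:/data/error/*.xml` instead of the extraction directory.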
Also, if all the processor steps succeed but the writer fails, how can I retry only the item writer for that failure? (I don't want to run the processor again, since it makes a REST API call.) How should I store the processor's output (can the JobRepository hold the POJO, or would that be too much load?), and how can I retry the item-writer step alone the next day?
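One idea I'm considering, instead of putting the POJOs into the JobRepository (whose ExecutionContext seems intended for small restart metadata rather than bulk data), is staging the processed payloads in a table and having a follow-up step push the pending rows to the real writer. Table/column names and the `Payload` properties here are made up:

```java
// Sketch: persist processor output to a staging table so a later run can
// retry the final write without re-invoking the REST-calling processor.
@Bean
public JdbcBatchItemWriter<Payload> stagingWriter(DataSource dataSource) {
    return new JdbcBatchItemWriterBuilder<Payload>()
            .dataSource(dataSource)
            .sql("INSERT INTO staging_payload (file_name, body, status) "
               + "VALUES (:fileName, :body, 'PENDING')")
            .beanMapped()   // maps :fileName/:body from Payload getters
            .build();
}
```

The next-day job would then read rows with status `PENDING` and hand them straight to the existing item writer, skipping the processor entirely.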
I have a composite processor whose last delegate is the REST call, and the REST response is what gets sent to the writer. If the writer fails for some reason, how do I configure a retry of the writer alone?
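I came across `processorNonTransactional()` on the fault-tolerant step builder, which as I understand it caches the processor output for the chunk so that a writer rollback/retry does not re-invoke the processor. Would a configuration like this do what I want? (The exception type and retry limit are just examples.)

```java
// Sketch: fault-tolerant step where a retryable writer failure replays only
// the write; processorNonTransactional() keeps the cached processor output,
// so the composite processor's REST call is not repeated on retry.
@Bean
public Step pushStep(StepBuilderFactory steps,
                     ItemReader<FileEntry> reader,
                     ItemProcessor<FileEntry, Payload> compositeProcessor,
                     ItemWriter<Payload> writer) {
    return steps.get("pushStep")
            .<FileEntry, Payload>chunk(10)
            .reader(reader)
            .processor(compositeProcessor)
            .writer(writer)
            .faultTolerant()
            .retryLimit(3)
            .retry(DeadlockLoserDataAccessException.class) // transient DB error, example only
            .processorNonTransactional()
            .build();
}
```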