I am trying to collect a huge amount of data, which may take around 5 days to complete, using the JBeret implementation. We are running the extraction on WildFly 10.1.0 Application Server with the jberet subsystem configured with an in-memory job repository.
The job consists of two steps under a single job id: a chunk step that collects the data from the database, followed by a batchlet step that zips the results.
We also run the extraction multi-threaded, collecting the data in parallel across 10 threads.
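For reference, a job of this shape could be defined roughly as follows (a sketch only; the artifact names such as dataReader, zipBatchlet and the item-count value are placeholders, not taken from my actual job):

```xml
<job id="simple-batchlet-job" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
  <!-- step1: chunk step reading from the database, partitioned across 10 threads -->
  <step id="step1" next="step2">
    <chunk item-count="100">
      <reader ref="dataReader"/>
      <writer ref="dataWriter"/>
    </chunk>
    <partition>
      <plan partitions="10" threads="10"/>
    </partition>
  </step>
  <!-- step2: batchlet that zips the collected output -->
  <step id="step2">
    <batchlet ref="zipBatchlet"/>
  </step>
</job>
```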
Due to database slowness/memory issues, the job failed with the following exception:
2018-01-07 00:49:24,999 ERROR [org.jberet] (Batch Thread - 8) JBERET000007: Failed to run job simple-batchlet-job, step1, org.jberet.job.model.Chunk@3f63b3dd: javax.transaction.RollbackException: ARJUNA016102: The transaction is not active! Uid is 0:ffffac1d2026:37db6cf6:5a4f7329:49355
at com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.commitAndDisassociate(TransactionImple.java:1190)
at com.arjuna.ats.internal.jta.transaction.arjunacore.BaseTransaction.commit(BaseTransaction.java:126)
Is there any possibility to pause all the threads when an abnormal database issue occurs, and resume them afterwards, so that we can flush out the garbage from the database?
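One direction I have been looking at (a sketch under my assumptions, not verified against this setup): JSR-352 does not seem to offer a direct pause/resume of partition threads, but the chunk element supports declaring retryable exceptions, so a transient database failure would roll back the current chunk and retry it instead of failing the whole job. The exception class below is illustrative; the real failure would need to be mapped to whatever exception the reader/writer actually throws:

```xml
<chunk item-count="100" retry-limit="3">
  <reader ref="dataReader"/>
  <writer ref="dataWriter"/>
  <!-- retry the chunk when a transient DB error occurs (class name is an example) -->
  <retryable-exception-classes>
    <include class="java.sql.SQLTransientException"/>
  </retryable-exception-classes>
</chunk>
```

The alternative would be stopping the execution via JobOperator.stop(executionId) and later JobOperator.restart(executionId), but I am unsure how well restart works with an in-memory job repository, since its state does not survive a server restart.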
Thanks.