Let me explain how my app is set up. I have a standalone, command-line-started app whose main calls start on a JobOperator, passing the appropriate parameters. I understand that start is an asynchronous call, so unless I block somehow in my main, the process exits right after the call.
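A minimal sketch of what the main looks like (job name and parameter handling simplified here; the argument wiring is just illustrative):

import java.util.Properties;

import javax.batch.operations.JobOperator;
import javax.batch.runtime.BatchRuntime;

public class Main {
    public static void main(String[] args) {
        JobOperator jobOperator = BatchRuntime.getJobOperator();

        Properties jobParams = new Properties();
        jobParams.setProperty("inputPath", args[0]);
        jobParams.setProperty("outputPath", args[1]);

        // start() is asynchronous: it returns an execution id immediately,
        // so unless something blocks here the main thread simply exits.
        long executionId = jobOperator.start("partitionTest", jobParams);
        System.out.println("Started job execution " + executionId);
    }
}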
The problem I have run into is that when I run a partitioned job, it appears to leave a few threads alive, which prevents the process from ever ending. When I run a non-partitioned job, the process ends normally once the job has completed.
Is this normal and/or expected behavior? Is there a way to tell the partition threads to die? It seems that the partition threads are left blocked, waiting on something, after the job has completed, when they should not be.
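(For what it's worth, a quick diagnostic I can drop at the end of main to see which non-daemon threads are still hanging around; this is just an illustration, not part of the job itself:)

// Print the name, daemon flag and state of every live thread.
Thread.getAllStackTraces().keySet().forEach(t ->
        System.out.println(t.getName() + " daemon=" + t.isDaemon() + " state=" + t.getState()));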
I know that I could monitor the batch status in main and end the process myself, but as I stated in another question, that adds a ton of chatter to the database and is not ideal.
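For reference, this is the kind of polling I mean; every getJobExecution() call goes back to the job repository, which is where the chatter comes from. The helper class and method name here are just for illustration:

import javax.batch.operations.JobOperator;
import javax.batch.runtime.BatchStatus;
import javax.batch.runtime.JobExecution;

public class JobWaiter {
    // Poll the job repository until the execution leaves STARTING/STARTED.
    // Each getJobExecution() call hits the database.
    public static void waitForCompletion(JobOperator jobOperator, long executionId)
            throws InterruptedException {
        JobExecution execution = jobOperator.getJobExecution(executionId);
        while (execution.getBatchStatus() == BatchStatus.STARTING
                || execution.getBatchStatus() == BatchStatus.STARTED) {
            Thread.sleep(1000L);
            execution = jobOperator.getJobExecution(executionId);
        }
    }
}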
An example of my job spec:
<job id="partitionTest" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
<step id="onlyStep">
<partition>
<plan partitions="2">
<properties partition="0">
<property name="partitionNumber" value="1"></property>
</properties>
<properties partition="1">
<property name="partitionNumber" value="2"></property>
</properties>
</plan>
</partition>
<chunk item-count="2">
<reader id="reader" ref="DelimitedFlatFileReader">
<properties>
<!-- Reads in from file Test.csv -->
<property name="fileNameAndPath" value="#{jobParameters['inputPath']}/CSVInput#{partitionPlan['partitionNumber']}.csv" />
<property name="fieldNames" value="firstName, lastName, city" />
<property name="fullyQualifiedTargetClass" value="com.test.transactionaltest.Member" />
</properties>
</reader>
<processor ref="com.test.partitiontest.Processor" />
<writer ref="FlatFileWriter" >
<properties>
<property name="appendOn" value="true"/>
<property name="fileNameAndPath" value="#{jobParameters['outputPath']}/PartitionOutput.txt" />
<property name="fullyQualifiedTargetClass" value="com.test.transactionaltest.Member" />
</properties>
</writer>
</chunk>
</step>
</job>
Edit:
OK, after reading a bit more about this issue and looking into the Spring Batch code, it appears there is a bug (at least in my opinion) in JsrPartitionHandler. Specifically, the handle method creates a ThreadPoolTaskExecutor locally, but that thread pool is never cleaned up properly. A shutdown/destroy should be called before the method returns; otherwise the pool's threads are left alive after the executor goes out of scope.
Please correct me if I am wrong here, but that certainly seems to be the problem.
I am going to try making a change along those lines and see how it plays out. I'll update after I have done some testing.
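To illustrate the change I have in mind, this is the cleanup pattern (a sketch only, not the actual Spring Batch source; the partition launching itself is elided):

import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

int partitions = 2; // partition count from the plan

ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
taskExecutor.setCorePoolSize(partitions);
taskExecutor.setMaxPoolSize(partitions);
taskExecutor.initialize();
try {
    // ... submit the partition steps and wait for them all to finish ...
} finally {
    // Without this, the pool's non-daemon worker threads stay alive after
    // handle() returns, which keeps the JVM from exiting.
    taskExecutor.shutdown();
}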