
We are using Spring Batch chunk-oriented processing to read messages from a JMS destination and write them to a flat file. In this regard, we have the following observations:

  1. If the message broker goes down while the reader is reading messages and the commit count has not yet been reached, whatever number of messages has been read so far is passed to the writer, and then the batch goes into a FAILED state. Is this the default behaviour of chunk-oriented processing?

  2. If the answer to point 1 is yes, how do we make sure that this partial chunk is not sent to the writer? (For more background: we have the JMS session transacted in the JmsTemplate, so when the chunk fails to read the full number of messages equal to the commit count, all the messages read in the partial chunk are rolled back to the JMS destination, whereas the same partial chunk is still written to the file. This causes duplicates in the file when we restart the batch job.)
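To make point 1 concrete, here is a minimal, hypothetical simulation of chunk-oriented processing in plain Java (not Spring Batch's actual implementation). It assumes the reader signals end-of-input by returning null, as a Spring Batch `ItemReader` does: the step buffers items until the commit interval is reached or input ends, and the buffer, full or partial, is then handed to the writer.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Hypothetical sketch of chunk-oriented processing semantics.
// The step keeps calling the reader until commit-interval items are
// buffered or the reader returns null; the buffer, full or partial,
// is then handed to the writer as one chunk.
public class ChunkSimulation {

    public static List<List<String>> run(Supplier<String> reader, int commitInterval) {
        List<List<String>> chunksWritten = new ArrayList<>();
        while (true) {
            List<String> chunk = new ArrayList<>();
            for (int i = 0; i < commitInterval; i++) {
                String item = reader.get();
                if (item == null) {
                    break; // input ended mid-chunk
                }
                chunk.add(item);
            }
            if (chunk.isEmpty()) {
                break; // nothing left to write
            }
            chunksWritten.add(chunk); // note: a partial chunk is still written
            if (chunk.size() < commitInterval) {
                break; // last (partial) chunk processed
            }
        }
        return chunksWritten;
    }
}
```

With five items and a commit interval of 3, this produces two chunks of sizes 3 and 2: the final, partial chunk still reaches the writer, which matches the behaviour described in point 1.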

Any help would be greatly appreciated.

EDIT

The configuration is shown below.

Chunk:

<batch:step id="step-1" next="step-2">
    <batch:tasklet allow-start-if-complete="false">
        <batch:chunk reader="jms-reader-1-1" writer="file-writer-1-1" commit-interval="1000"/>
    </batch:tasklet>
</batch:step>
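One thing worth checking in this configuration is which transaction manager drives the tasklet. A common approach for JMS-backed chunk steps is to use a `JmsTransactionManager` bound to the same connection factory, so that the session acknowledgement commits or rolls back together with the chunk. The sketch below is an assumption, not the poster's actual setup: the `connectionFactory` bean name is a placeholder, and this alone does not make a flat file transactional, so the writer side still needs idempotent restart handling.

```xml
<!-- Sketch only: "connectionFactory" is a placeholder bean name -->
<bean id="jmsTransactionManager"
      class="org.springframework.jms.connection.JmsTransactionManager">
    <property name="connectionFactory" ref="connectionFactory"/>
</bean>

<batch:step id="step-1" next="step-2">
    <batch:tasklet allow-start-if-complete="false"
                   transaction-manager="jmsTransactionManager">
        <batch:chunk reader="jms-reader-1-1" writer="file-writer-1-1"
                     commit-interval="1000"/>
    </batch:tasklet>
</batch:step>
```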

Writer (Flat File) :

<bean scope="step" class="o.s.b.i.f.FlatFileItemWriter" id="file-writer-1-1">
    <property name="resource" value="file:#{T(com.test.core.BatchConfiguration).BATCH_VFS_LOCAL_TEMP_LOCATION}/#{T(com.test.utils.ThreadContextUtils).getJobInstanceIdAsString()}/AssetMesage"/>
    <property name="lineAggregator">
        <bean class="o.s.b.i.f.t.DelimitedLineAggregator">
            <property name="delimiter" value=","/>
            <property name="fieldExtractor">
                <bean class="o.s.b.i.f.t.BeanWrapperFieldExtractor">
                    <property name="names" value="assetId,assetName,assetDesc"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>

Reader (JMS):

<bean scope="step" class="com.test.runtime.impl.item.readers.JMSItemReader" id="jms-reader-1-1">
    <property name="adapter">
        <bean class="com.test.adapter.impl.JMSAdapter">
            <property name="resource" ref="JMS.vmsmartbatch02_Regression"/>
            <property name="retryerId" value="JMS.vmsmartbatch02_Regression-retryer"/>
        </bean>
    </property>
    <property name="destination" value="#{jobParameters[source1jmsdestination] != null ? jobParameters[source1jmsdestination] : &quot;sourceTopic&quot;}"/><property name="durableSubscriberName" value="sourceTopicDS"/><property name="destinationType" value="Topic"/>
    <property name="ackMode" value="#{T(javax.jms.Session).CLIENT_ACKNOWLEDGE}"/>
    <property name="maxMessageCount" value="2000"/>
</bean>

EDIT 2

Below is the core reader logic I am using.

Reader

    public Object read() throws Exception, UnexpectedInputException,
            ParseException, NonTransientResourceException {
        Object item = null;
        // Valid JMS acknowledge modes are 1..3 (AUTO, CLIENT, DUPS_OK).
        if (ackMode >= 1 && ackMode <= 3) {
            adapter.getResource().setSessionAcknowledgeMode(ackMode);
        }

        if (maxMessageCount > 0) {
            ThreadContextUtils.addToExecutionContext("maxMessageCount", maxMessageCount);
            // Restore the running count on restart, if present.
            if (ThreadContextUtils.getExecutionContext().containsKey("readMessageCount")) {
                readMessageCount = ThreadContextUtils.getExecutionContext().getInt("readMessageCount");
            }
        }

        if (TOPIC_KEY.equalsIgnoreCase(destinationType) && durableSubscriberName != null) {
            // Durable subscription on a topic
            item = adapter.invoke(REC_DS_AND_CONVERT_SELECTED,
                    OBJECT_CLASS, destination, durableSubscriberName,
                    receiveTimeout, filter == null ? "" : filter);
        } else {
            item = adapter.invoke(REC_AND_CONVERT_SELECTED,
                    OBJECT_CLASS, destination,
                    receiveTimeout <= 0 ? adapter.getResource().getReceiveTimeout()
                            : receiveTimeout,
                    filter == null ? "" : filter);
        }

        if (maxMessageCount > 0 && item != null) {
            readMessageCount++;
            ThreadContextUtils.addToExecutionContext("readMessageCount", readMessageCount);
        }
        return item;
    }
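One observation about this method: it stores `maxMessageCount` in the execution context but never appears to compare `readMessageCount` against it to stop reading. If the limit is meant to end the step, returning null once the count is reached is how an `ItemReader` signals end-of-input, which lets the step finish the current chunk cleanly. A hypothetical guard, with class and method names of my own invention rather than from the posted code:

```java
// Hypothetical helper (not in the posted reader): encapsulates the
// "have we read enough messages?" check. A Spring Batch ItemReader
// would consult limitReached() at the top of read() and return null
// when it is true, signalling end-of-input for the step.
public class MaxCountGuard {
    private final int maxMessageCount; // 0 or negative means "no limit"
    private int readMessageCount;

    public MaxCountGuard(int maxMessageCount) {
        this.maxMessageCount = maxMessageCount;
    }

    /** True once the configured limit has been reached. */
    public boolean limitReached() {
        return maxMessageCount > 0 && readMessageCount >= maxMessageCount;
    }

    /** Call after each successful (non-null) receive. */
    public void recordRead() {
        readMessageCount++;
    }
}
```

In the reader, `read()` would then start with `if (guard.limitReached()) return null;` and call `guard.recordRead()` after each non-null receive, persisting the count to the execution context as the existing code already does.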
  • My hunch is that the reason the job is ending in a failed state is not due to the current chunk (the chunk that didn't read enough records) but the next one. What is causing the records to be passed? Sharing your configuration would help. – Michael Minella Aug 12 '15 at 14:14
  • @Michael If the partial chunk is successful as you mentioned, I am wondering why the JMS messages that were read in the partial chunk are being rolled back. I also observed from our logging that the consumer receive method returned successfully from the last operation before the broker went down. – anand206 Aug 12 '15 at 17:24
  • Are you seeing any exceptions in your logs? Also, since you aren't using an OOTB reader for this, can you post that code as well? – Michael Minella Aug 12 '15 at 17:30
  • @MichaelMinella: I have posted the read method logic as requested. It's an implementation of the ItemReader interface provided by Spring Batch. – anand206 Sep 04 '15 at 14:25

0 Answers