
When trying to consume from a topic in my Java application, I get the following exception:

    org.springframework.integration.kafka.support.ConsumerConfiguration.executeTasks(ConsumerConfiguration.java:135)
        ... 32 more
Caused by: java.lang.IllegalStateException: Iterator is in failed state
        at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:54)
        at kafka.utils.IteratorTemplate.next(IteratorTemplate.scala:38)
        at kafka.consumer.ConsumerIterator.next(ConsumerIterator.scala:46)
        at org.springframework.integration.kafka.support.ConsumerConfiguration$1.call(ConsumerConfiguration.java:104)
        at org.springframework.integration.kafka.support.ConsumerConfiguration$1.call(ConsumerConfiguration.java:98)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        ... 3 more

This exception happens after a while when processing a lot of messages, and it always happens on the same topic.
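
My reading of the "Iterator is in failed state" message (from looking at the 0.8.x client sources, so this is my own paraphrase and not the actual code) is that kafka.utils.IteratorTemplate flips itself into a FAILED state before computing the next element and only clears that state when the computation returns normally; a ConsumerTimeoutException resets the state before it is thrown, but any other exception leaves the iterator permanently failed, so every later hasNext()/next() call throws the IllegalStateException shown above. Roughly:

    // Paraphrase of the failed-state behaviour I believe I am hitting
    // (my own sketch, not Kafka's source).
    abstract class FailFastIteratorSketch<T> {
        private enum State { READY, NOT_READY, DONE, FAILED }

        private State state = State.NOT_READY;
        private T nextItem;

        /** Subclasses fetch the next message, or throw on timeout/error. */
        protected abstract T makeNext();

        public boolean hasNext() {
            if (state == State.FAILED) {
                throw new IllegalStateException("Iterator is in failed state");
            }
            if (state == State.DONE) {
                return false;
            }
            if (state == State.READY) {
                return true;
            }
            state = State.FAILED;   // assume failure until makeNext() succeeds
            nextItem = makeNext();  // if this throws, the iterator stays FAILED
            state = State.READY;
            return true;
        }

        public T next() {
            if (!hasNext()) {
                throw new java.util.NoSuchElementException();
            }
            state = State.NOT_READY;
            return nextItem;
        }
    }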

The configuration of the Kafka consumer is:

<int-kafka:zookeeper-connect id="zookeeperConnect"
    zk-connect="${kafkaZookeeperUrl}" zk-connection-timeout="${kafkaZkConnectionTimeout}"
    zk-session-timeout="${kafkaZkSessionTimeout}" zk-sync-time="${kafkaZkSyncTime}" />

<!-- -->
<!-- Spring Integration -->
<!-- -->
<bean id="consumerProperties"
    class="org.springframework.beans.factory.config.PropertiesFactoryBean">
    <property name="properties">
        <props>
            <prop key="auto.commit.enable">${kafkaConsumerAutoCommitEnable}</prop>
            <prop key="auto.commit.interval.ms">${kafkaConsumerAutoCommitInterval}</prop>
            <prop key="fetch.min.bytes">${kafkaConsumerFetchMinBytes}</prop>
            <prop key="fetch.wait.max.ms">${kafkaConsumerFetchWaitMax}</prop>
            <prop key="auto.offset.reset">${kafkaConsumerOffsetReset}</prop>
        </props>
    </property>
</bean>
<!-- -->
<!-- Channels -->
<!-- -->
<int:channel id="kafka1">
    <int:interceptors>
        <int:wire-tap channel="kafkaWiretap" />
    </int:interceptors>
</int:channel>
<!-- -->
<!-- Consumer Contexts -->
<!-- -->
<int-kafka:consumer-context id="consumerContext1"
    consumer-timeout="${kafkaDataInTimeout}" zookeeper-connect="zookeeperConnect"
    consumer-properties="consumerProperties">
    <int-kafka:consumer-configurations>
        <int-kafka:consumer-configuration
            group-id="dataWriterSource" value-decoder="valueDecoder"
            key-decoder="valueDecoder" max-messages="${kafkaDataInMaxMessages}">
            <int-kafka:topic id="DATA_IN" streams="${kafkaDataInStreams}" />
        </int-kafka:consumer-configuration>
    </int-kafka:consumer-configurations>
</int-kafka:consumer-context>
<!-- -->
<!-- Inbound Channel Adapters -->
<!-- -->
<int-kafka:inbound-channel-adapter
    id="kafkaInboundChannelAdapter1" kafka-consumer-context-ref="consumerContext1"
    auto-startup="${kafkaConsumerChannelAutoStartup}" channel="kafka1">
    <int:poller fixed-delay="10" time-unit="MILLISECONDS"
        max-messages-per-poll="1000" />
</int-kafka:inbound-channel-adapter>

The topic has 600 partitions and receives a lot of messages. The configuration of the consumer context is:

####################################
# KAFKA Consumers Configuration.
####################################
# General consumer properties
kafkaConsumerAutoCommitEnable=true
kafkaConsumerAutoCommitInterval=500
kafkaConsumerFetchMinBytes=1
kafkaConsumerFetchWaitMax=100
kafkaConsumerOffsetReset=largest

# Consumers
# Data In
kafkaDataInTimeout=500
kafkaDataInMaxMessages=5000
kafkaDataInStreams=4
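
For reference, my understanding of what kafkaDataInTimeout=500 means (assuming the consumer-timeout attribute ends up as consumer.timeout.ms on the high-level consumer): the stream iterator throws ConsumerTimeoutException when no message arrives within 500 ms, which is what the catch block in the library code further down relies on. Outside Spring Integration, the equivalent plain-consumer loop would look roughly like this (zookeeper address and properties are placeholders, not my actual application code):

    import java.util.*;
    import kafka.consumer.*;
    import kafka.javaapi.consumer.ConsumerConnector;
    import kafka.message.MessageAndMetadata;

    public class TimeoutSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181");  // placeholder
            props.put("group.id", "dataWriterSource");
            props.put("consumer.timeout.ms", "500");            // same value as kafkaDataInTimeout
            props.put("auto.offset.reset", "largest");

            ConsumerConnector connector =
                    Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
            Map<String, Integer> topicCountMap = Collections.singletonMap("DATA_IN", 1);
            KafkaStream<byte[], byte[]> stream =
                    connector.createMessageStreams(topicCountMap).get("DATA_IN").get(0);

            ConsumerIterator<byte[], byte[]> it = stream.iterator();
            try {
                while (it.hasNext()) {                           // throws after 500 ms of silence
                    MessageAndMetadata<byte[], byte[]> mm = it.next();
                    System.out.println("partition " + mm.partition() + ", offset " + mm.offset());
                }
            }
            catch (ConsumerTimeoutException cte) {
                // no message within consumer.timeout.ms; the iterator stays usable after this
            }
            finally {
                connector.shutdown();
            }
        }
    }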

Now, as far as I can tell, there is some kind of problem: either with how I have configured the poller for the consumer, or there is a bug in the following piece of code in ConsumerConfiguration.java:

private Map<String, Map<Integer, List<Object>>> executeTasks(
        final List<Callable<List<MessageAndMetadata<K, V>>>> tasks) {

    final Map<String, Map<Integer, List<Object>>> messages = new ConcurrentHashMap<String, Map<Integer, List<Object>>>();
    messages.putAll(getLeftOverMessageMap());

    try {
        for (final Future<List<MessageAndMetadata<K, V>>> result : this.executorService.invokeAll(tasks)) {
            if (!result.get().isEmpty()) {
                final String topic = result.get().get(0).topic();
                if (!messages.containsKey(topic)) {
                    messages.put(topic, getPayload(result.get()));
                }
                else {

                    final Map<Integer, List<Object>> existingPayloadMap = messages.get(topic);
                    getPayload(result.get(), existingPayloadMap);
                }
            }
        }
        // ... (rest of executeTasks omitted in this excerpt)

public ConsumerMetadata<K, V> getConsumerMetadata() {
    return consumerMetadata;
}

public Map<String, Map<Integer, List<Object>>> receive() {
    count = messageLeftOverTracker.getCurrentCount();
    final Object lock = new Object();

    final List<Callable<List<MessageAndMetadata<K, V>>>> tasks = new LinkedList<Callable<List<MessageAndMetadata<K, V>>>>();

    for (final List<KafkaStream<K, V>> streams : createConsumerMessageStreams()) {
        for (final KafkaStream<K, V> stream : streams) {
            tasks.add(new Callable<List<MessageAndMetadata<K, V>>>() {
                @Override
                public List<MessageAndMetadata<K, V>> call() throws Exception {
                    final List<MessageAndMetadata<K, V>> rawMessages = new ArrayList<MessageAndMetadata<K, V>>();
                    try {
                        while (count < maxMessages) {
                            final MessageAndMetadata<K, V> messageAndMetadata = stream.iterator().next();
                            synchronized (lock) {
                                if (count < maxMessages) {
                                    rawMessages.add(messageAndMetadata);
                                    count++;
                                }
                                else {
                                    messageLeftOverTracker.addMessageAndMetadata(messageAndMetadata);
                                }
                            }
                        }
                    }
                    catch (ConsumerTimeoutException cte) {
                        LOGGER.debug("Consumer timed out");
                    }
                    return rawMessages;
                }
            });
        }
    }
    return executeTasks(tasks);
}

Line 104 (final MessageAndMetadata<K, V> messageAndMetadata = stream.iterator().next();) is not synchronized and may get into conflict with line 135 (if (!result.get().isEmpty()) {).
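
While this is unresolved, the workaround I am considering on my side is to copy the stream-draining logic into my own component and make it defensive. This is only my own sketch written against the quoted receive() code (it reuses its stream and maxMessages, needs kafka.consumer.ConsumerIterator imported, and ignores the shared count and the left-over tracker for brevity); it is not a patch from the project. The idea is to hoist the iterator once per stream, drive the loop with hasNext(), and stop touching the iterator as soon as it reports the failed state, instead of calling stream.iterator().next() repeatedly:

    // Defensive per-stream drain loop (my own sketch, not the library's code).
    // Assumes the same consumer.timeout.ms setting, so hasNext() throws
    // ConsumerTimeoutException when no message arrives in time.
    final ConsumerIterator<K, V> it = stream.iterator();  // hoist the iterator once
    final List<MessageAndMetadata<K, V>> rawMessages = new ArrayList<MessageAndMetadata<K, V>>();
    try {
        while (rawMessages.size() < maxMessages && it.hasNext()) {
            rawMessages.add(it.next());
        }
    }
    catch (ConsumerTimeoutException cte) {
        // nothing more within consumer.timeout.ms; return what we have
    }
    catch (IllegalStateException ise) {
        // "Iterator is in failed state": stop using this iterator and return what we have
    }
    return rawMessages;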

Any help from the Spring Integration Kafka people would be really great. The question is: what is going on, and how can we solve the issue?

Thanks in advance, Francisco

  • Please take a look and see whether you are facing a similar issue: http://stackoverflow.com/questions/28046294/consuming-from-kafka-failed-iterator-is-in-failed-state – Artem Bilan Aug 24 '15 at 14:03
  • I already did; the version used there is 1.0.0, and the suggestion is to move to the latest version in order to solve the issue. As I said, I have version 1.1.2, where it is supposed to be solved, yet the issue https://jira.spring.io/browse/INTEXT-112 is still there. Another thing that puzzles me is that the lines where it happens are not exactly the same (I know that can be because of the version differences). Bottom line: I am using a version that is supposed to solve it, but the problem is still there. – Francisco Aug 24 '15 at 19:02
