
This is a follow-up question to Reading the same message several times from Kafka. If there is a better way to ask this without posting a new question, let me know. In that post, Gary mentions

"But you will still see later messages first if they have already been retrieved so you will have to discard those too."

Is there a clean way to discard messages already read by poll() after calling seek()? I started implementing logic to do this by saving the initial offset (in recordOffset) and incrementing it on success. On failure, I call seek() and set recordOffset to record.offset(). Then, for every new message, I check whether record.offset() is greater than recordOffset; if it is, I simply call acknowledge(), thereby "discarding" all the previously read messages. Here is the code -

    // in onMessage()...
    if (record.offset() > recordOffset) {
        // record was already fetched before the seek(); discard it
        acknowledgment.acknowledge();
        return;
    }

    try {
        processRecord(record);
        recordOffset = record.offset() + 1;
        acknowledgment.acknowledge();
    } catch (Exception e) {
        // remember the failed offset and rewind so it is redelivered
        recordOffset = record.offset();
        consumerSeekCallback.seek(record.topic(), record.partition(), record.offset());
    }

The problem with this approach is that it gets complicated with multiple partitions. Is there an easier/cleaner way?
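
To make the multi-partition complication concrete: the single recordOffset field would have to become a map keyed by partition. Below is only a rough sketch of that variant (not tested; TopicPartition comes from the Kafka client, everything else mirrors the snippet above):

    // Sketch only: per-partition variant of the bookkeeping above.
    // Needs org.apache.kafka.common.TopicPartition and a ConcurrentHashMap field.
    private final Map<TopicPartition, Long> nextOffsets = new ConcurrentHashMap<>();

    // in onMessage()...
    TopicPartition tp = new TopicPartition(record.topic(), record.partition());
    Long expected = nextOffsets.get(tp);
    if (expected != null && record.offset() > expected) {
        acknowledgment.acknowledge();   // already fetched before the seek(); discard
        return;
    }

    try {
        processRecord(record);
        nextOffsets.put(tp, record.offset() + 1);
        acknowledgment.acknowledge();
    } catch (Exception e) {
        nextOffsets.put(tp, record.offset());
        consumerSeekCallback.seek(record.topic(), record.partition(), record.offset());
    }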

EDIT 1 Based on Gary's suggestion below, I tried adding an errorHandler like this -

    @KafkaListener(topicPartitions =
            {@org.springframework.kafka.annotation.TopicPartition(topic = "${kafka.consumer.topic}", partitions = { "1" })},
            errorHandler = "SeekToCurrentErrorHandler")

Is there something wrong with this syntax as I get "Cannot resolve method 'errorHandler'"?

EDIT 2 After Gary explained the two error handlers, I removed the above errorHandler and added the following to the config file -

    @Bean
    public ConcurrentKafkaListenerContainerFactory kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory factory = new ConcurrentKafkaListenerContainerFactory();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(kafkaProps()));
        factory.getContainerProperties().setAckOnError(false);
        factory.getContainerProperties().setErrorHandler(new SeekToCurrentErrorHandler());
        factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.MANUAL);
        return factory;
    }

When I start the application, I get this error now...

    java.lang.NoSuchMethodError: org.springframework.util.Assert.state(ZLjava/util/function/Supplier;)V
        at org.springframework.kafka.listener.adapter.MessagingMessageListenerAdapter.determineInferredType(MessagingMessageListenerAdapter.java:396)

Here is line 396 -

    Assert.state(!this.isConsumerRecordList || validParametersForBatch,
                () -> String.format(stateMessage, "ConsumerRecord"));
    Assert.state(!this.isMessageList || validParametersForBatch,
                () -> String.format(stateMessage, "Message<?>"));

2 Answers


Starting with version 2.0.1, if the container's ErrorHandler is a RemainingRecordsErrorHandler, such as the SeekToCurrentErrorHandler, the remaining records (including the failed one) are sent to the error handler instead of the listener.

This allows the SeekToCurrentErrorHandler to reposition every partition so the next poll will return the unprocessed record(s).

    /**
     * An error handler that seeks to the current offset for each topic in the remaining
     * records. Used to rewind partitions after a message failure so that it can be
     * replayed.
     *
     * @author Gary Russell
     * @since 2.0.1
     *
     */
    public class SeekToCurrentErrorHandler implements RemainingRecordsErrorHandler
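
Conceptually, a handler of this kind just remembers the first unprocessed offset per partition and seeks back to it. The following is only a rough illustration of that idea, not the actual SeekToCurrentErrorHandler source:

    // Illustration only: rewind each partition to its first unprocessed record,
    // so the next poll() returns the failed record and everything after it again.
    @Override
    public void handle(Exception thrownException, List<ConsumerRecord<?, ?>> records,
            Consumer<?, ?> consumer) {
        Map<TopicPartition, Long> firstOffsets = new LinkedHashMap<>();
        for (ConsumerRecord<?, ?> record : records) {
            firstOffsets.putIfAbsent(
                    new TopicPartition(record.topic(), record.partition()), record.offset());
        }
        firstOffsets.forEach((tp, offset) -> consumer.seek(tp, offset));
    }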

EDIT

There are two types of error handler. The KafkaListenerErrorHandler (specified in the annotation) works at the listener level; it is wired into the listener adapter that invokes the @KafkaListener method and thus only has access to the current record.
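
For comparison, a listener-level handler is just a bean referenced by name from the annotation. A minimal sketch follows (bean and method names here are placeholders, not from the question):

    // Sketch only: a KafkaListenerErrorHandler sees just the failed record's Message.
    @Bean
    public KafkaListenerErrorHandler listenerErrorHandler() {
        return (message, exception) -> {
            // message.getPayload() holds the failed record's payload; later,
            // already-fetched records are not available at this level
            return null;
        };
    }

    // referenced by bean name on the listener
    @KafkaListener(topics = "${kafka.consumer.topic}", errorHandler = "listenerErrorHandler")
    public void listen(String payload) {
        // ... normal processing; throwing here routes to listenerErrorHandler
    }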

The second error handler (configured on the listener container) works at the container level and thus has access to the remaining records. The SeekToCurrentErrorHandler is a container-level error handler.

It is configured on the container properties in the container factory...

    @Bean
    public ConcurrentKafkaListenerContainerFactory kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory factory = new ConcurrentKafkaListenerContainerFactory();
        factory.setConsumerFactory(this.consumerFactory);
        factory.getContainerProperties().setAckOnError(false);
        factory.getContainerProperties().setErrorHandler(new SeekToCurrentErrorHandler());
        factory.getContainerProperties().setAckMode(AckMode.RECORD);
        return factory;
    }
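
With that factory and AckMode.RECORD, the listener itself can stay simple; a minimal sketch (topic property and method names are placeholders), where throwing from the listener is what hands the remaining records to the error handler:

    // Sketch: any exception thrown here reaches the container-level error handler,
    // which re-seeks the partition so the record is redelivered on the next poll().
    @KafkaListener(topics = "${kafka.consumer.topic}")
    public void listen(ConsumerRecord<String, String> record) {
        processRecord(record); // may throw
    }
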
Gary Russell
  • Gary - thanks for your suggestion. I am getting an error (posted in edited question) when I add an errorHandler. Also, I am new to gradle and not sure if just updating the version in build.gradle is enough to get to 2.0.1. I added - compile 'org.springframework.kafka:spring-kafka:2.0.1.RELEASE'. I will keep trying...just wanted to let you know where I am stuck with this. – rmulay Nov 28 '17 at 05:25
  • Gary - I updated my code to set the error handler at the container level as you have shown. Now I get an assertion failure on start-up. I added some details in the question. Any idea why? – rmulay Nov 28 '17 at 17:45
  • Spring Kafka 2.x requires Spring Framework 5 (currently 5.0.2). – Gary Russell Nov 28 '17 at 18:02
  • It took the whole day to upgrade to 2.0.1...but finally I was able to test this and it works! I will do some more thorough testing with a larger number of messages next to be absolutely sure. Thanks Gary! – rmulay Nov 29 '17 at 01:51
  • I might be doing something wrong, but the code above does not work in `spring-kafka-2.2.9.RELEASE`. The error is `The method setErrorHandler(SeekToCurrentErrorHandler) is undefined for the type ContainerProperties` – jumping_monkey Oct 16 '19 at 07:04
  • The error handler was moved to the factory starting with version 2.2. Always check the ["What's New?"](https://docs.spring.io/spring-kafka/docs/2.2.10.RELEASE/reference/html/#class-and-package-changes) chapter in the documentation. – Gary Russell Oct 16 '19 at 12:58

You are going the right way, and yes, you have to deal with the different partitions as well. There is a FilteringMessageListenerAdapter, but you still have to write the logic.
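
If you stay on that approach, the discard check can at least be factored into a RecordFilterStrategy that the adapter applies before your listener. A rough sketch (delegateListener and expectedOffsetFor are placeholders for your own code):

    // Sketch only: the adapter discards records for which the strategy returns true.
    RecordFilterStrategy<String, String> discardAlreadyFetched = record ->
            record.offset() > expectedOffsetFor(record);   // your per-partition bookkeeping

    FilteringMessageListenerAdapter<String, String> filtering =
            new FilteringMessageListenerAdapter<>(delegateListener, discardAlreadyFetched);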

Artem Bilan
  • Also see `SeekToCurrentErrorHandler` (explained in my answer). – Gary Russell Nov 26 '17 at 15:24
  • If for some reason I cannot migrate all our services to 2.0.1, it is good to know that I was thinking about this right. I'd rather not have to implement this logic...it is messy for sure. – rmulay Nov 29 '17 at 02:08