
I have a Kafka consumer class with a main-topic listener and a DLQ listener. When the main-topic listener fails to process a ConsumerRecord, the error handler configured in my bean factory publishes the record to the DLQ topic, and the DLQ listener then processes the message successfully. But when I restart the consumer application, the main-topic listener consumes that same message again, even though it was already processed via the DLQ. Can someone please help me prevent the main-topic listener from re-consuming messages that the DLQ has already processed? Thank you in advance!

KafkaConsumer.java

import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Component
public class KafkaConsumer {

    @Autowired
    private DbService dbService; // your persistence service (type name assumed)

    // MAIN TOPIC LISTENER
    @KafkaListener(id = "main-topic", topics = "main-topic", groupId = "main",
            containerFactory = "kafkaListenerContainerFactory", clientIdPrefix = "main-topic")
    public void mainListener(ConsumerRecord<String, String> consumerRecord, Acknowledgment ack) {
        // Converts the record value into a domain object and saves it to the DB.
        dbService.saveTodb(consumerRecord.value(), new ObjectMapper());
        ack.acknowledge();
    }

    // DLQ LISTENER
    @KafkaListener(id = "DLQ-topic", topics = "DLQ-topic", groupId = "main",
            clientIdPrefix = "DLQ", autoStartup = "false")
    public void dlqListener(ConsumerRecord<String, String> consumerRecord, Acknowledgment ack) {
        dbService.saveTodb(consumerRecord.value(), new ObjectMapper());
        ack.acknowledge();
    }
}

KafkaBeanFactory.java

import com.fasterxml.jackson.core.JsonProcessingException;
import org.apache.kafka.common.TopicPartition;
import org.springframework.boot.autoconfigure.kafka.ConcurrentKafkaListenerContainerFactoryConfigurer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.listener.DeadLetterPublishingRecoverer;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

@Configuration
public class KafkaBeanFactory {

    @Bean(name = "kafkaListenerContainerFactory")
    public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory(
            ConcurrentKafkaListenerContainerFactoryConfigurer configurer,
            ConsumerFactory<Object, Object> kafkaConsumerFactory, KafkaTemplate<Object, Object> template) {
        ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
        configurer.configure(factory, kafkaConsumerFactory);
        // Failed records are published to the DLQ topic, keeping the original partition.
        var recoverer = new DeadLetterPublishingRecoverer(template,
                (record, ex) -> new TopicPartition("DLQ-topic", record.partition()));
        var errorHandler = new DefaultErrorHandler(recoverer, new FixedBackOff(3, 20000));
        errorHandler.addRetryableExceptions(JsonProcessingException.class, DBException.class);
        errorHandler.setAckAfterHandle(true);
        factory.setCommonErrorHandler(errorHandler);
        return factory;
    }
}

application.yaml

spring:
  kafka:
    bootstrap-servers: localhost:9092 # sample value
    client-id: main&DLQ
    properties:
      security:
        protocol: SASL_SSL
      sasl:
        mechanism: PLAIN
        jaas:
          config: org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="<string>";
    consumer:
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      group-id: main
      enable-auto-commit: false
      auto-offset-reset: earliest
    listener:
      ack-mode: MANUAL_IMMEDIATE
KGT
1 Answer


You need to set the commitRecovered property on the default error handler.

    /**
     * Set to true to commit the offset for a recovered record.
     * The container must be configured with
     * {@link org.springframework.kafka.listener.ContainerProperties.AckMode#MANUAL_IMMEDIATE}.
     * Whether or not the commit is sync or async depends on the container's syncCommits
     * property.
     * @param commitRecovered true to commit.
     */
    @Override
    public void setCommitRecovered(boolean commitRecovered) { // NOSONAR enhanced javadoc
        super.setCommitRecovered(commitRecovered);
    }
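
Applied to the kafkaListenerContainerFactory bean from the question, that would look something like the following sketch (only the setCommitRecovered call is new; the rest is the existing error-handler setup):

var errorHandler = new DefaultErrorHandler(recoverer, new FixedBackOff(3, 20000));
errorHandler.addRetryableExceptions(JsonProcessingException.class, DBException.class);
errorHandler.setAckAfterHandle(true);
// Commit the offset of the recovered (DLQ-published) record so the main-topic
// listener does not re-consume it after a restart.
errorHandler.setCommitRecovered(true);
factory.setCommonErrorHandler(errorHandler);

Your application.yaml already sets listener.ack-mode: MANUAL_IMMEDIATE, which is the ack mode this property requires.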
Gary Russell