I have a Spring Boot application which listens to a Kafka topic and sends each record to a service for further processing. The service might fail at times; the exception scenario is mentioned in the comments. For now, I have mocked the service success and exception scenarios myself.

Listener code:

@Autowired
PlanitService service;

@KafkaListener(
        topics = "${app.topic}",
        groupId = "notifGrp",
        containerFactory = "storeKafkaListener")
public void processStoreNotify(StoreNotify store) throws RefrigAlarmNotifyException {
    service.planitStoreNotification(store);

    // Some other logic which throws the custom exception
    // RefrigAlarmNotifyException
}

The consumer factory configurations are as below:

    @Bean
    public ConsumerFactory<String, StoreNotify> storeConsumerFactory() {
        Map<String, Object> config = new HashMap<>();
        config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getConsumerBootstrapServers());
        config.put(ConsumerConfig.GROUP_ID_CONFIG, "notifGrp");
        config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
        config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");

        try (ErrorHandlingDeserializer2<String> headerErrorHandlingDeserializer = new ErrorHandlingDeserializer2<>(
                new StringDeserializer());
                ErrorHandlingDeserializer2<StoreNotify> errorHandlingDeserializer = new ErrorHandlingDeserializer2<>(
                        new JsonDeserializer<>(StoreNotify.class, objectMapper()))) {
            return new DefaultKafkaConsumerFactory<>(config, headerErrorHandlingDeserializer,
                    errorHandlingDeserializer);
        }
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, StoreNotify> storeKafkaListener() {
        ConcurrentKafkaListenerContainerFactory<String, StoreNotify> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(storeConsumerFactory());
        factory.getContainerProperties().setAckOnError(false);
        factory.getContainerProperties().setAckMode(AckMode.RECORD);
        //factory.setMessageConverter(new ByteArrayJsonMessageConverter());     

        DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(kafkaTemplate,
                (r, e) -> {

                    LOGGER.error("Exception is of type: ", e);
                    if (e instanceof RestClientException) {
                        LOGGER.error("RestClientException while processing {} ", r.value(), e);
                        return new TopicPartition(storeDeadLtrTopic, r.partition());
                    }
                    else {
                        LOGGER.error("Generic exception while processing {} ", r.value(), e);
                        return new TopicPartition(storeErrorTopic, r.partition());
                    }
                });
        factory.setErrorHandler(new SeekToCurrentErrorHandler(recoverer, new FixedBackOff(0L, 0L)));
        return factory;
    }

As the REST service throws RestClientException, the record should go into the if block mentioned above. Regarding FixedBackOff, I don't want SeekToCurrentErrorHandler to do any retry processing, so I passed the second parameter as 0L; I just want it to send the record to the specified topic. Correct me if I am wrong. The exception stack trace is:

org.springframework.kafka.listener.ListenerExecutionFailedException: Listener method 'public void com.demo.ran.consumer.StoreKafkaConsumer.processStoreNotifMessage(com.demo.ran.model.StoreNotify) throws com.demo.ran.exception.RefrigAlarmNotifyException' threw exception; nested exception is org.springframework.web.client.RestClientException: Service exception; nested exception is org.springframework.web.client.RestClientException: Service exception
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.decorateException(KafkaMessageListenerContainer.java:1742) ~[spring-kafka-2.3.5.RELEASE.jar:2.3.5.RELEASE]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeErrorHandler(KafkaMessageListenerContainer.java:1730) ~[spring-kafka-2.3.5.RELEASE.jar:2.3.5.RELEASE]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:1647) ~[spring-kafka-2.3.5.RELEASE.jar:2.3.5.RELEASE]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords(KafkaMessageListenerContainer.java:1577) ~[spring-kafka-2.3.5.RELEASE.jar:2.3.5.RELEASE]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:1485) ~[spring-kafka-2.3.5.RELEASE.jar:2.3.5.RELEASE]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:1235) ~[spring-kafka-2.3.5.RELEASE.jar:2.3.5.RELEASE]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:985) ~[spring-kafka-2.3.5.RELEASE.jar:2.3.5.RELEASE]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:905) ~[spring-kafka-2.3.5.RELEASE.jar:2.3.5.RELEASE]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_241]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_241]
    at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_241]
Caused by: org.springframework.web.client.RestClientException: Service exception
    at com.demo.ran.service.PlanitService.planitStoreNotification(PlanitService.java:53) ~[classes/:na]
    at com.demo.ran.consumer.StoreKafkaConsumer.processStoreNotifMessage(StoreKafkaConsumer.java:48) ~[classes/:na]
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_241]
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_241]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_241]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_241]
    at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:171) ~[spring-messaging-5.2.3.RELEASE.jar:5.2.3.RELEASE]
    at org.springframework.messaging.handler.invocation.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:120) ~[spring-messaging-5.2.3.RELEASE.jar:5.2.3.RELEASE]
    at org.springframework.kafka.listener.adapter.HandlerAdapter.invoke(HandlerAdapter.java:48) ~[spring-kafka-2.3.5.RELEASE.jar:2.3.5.RELEASE]
    at org.springframework.kafka.listener.adapter.MessagingMessageListenerAdapter.invokeHandler(MessagingMessageListenerAdapter.java:326) ~[spring-kafka-2.3.5.RELEASE.jar:2.3.5.RELEASE]
    at org.springframework.kafka.listener.adapter.RecordMessagingMessageListenerAdapter.onMessage(RecordMessagingMessageListenerAdapter.java:86) ~[spring-kafka-2.3.5.RELEASE.jar:2.3.5.RELEASE]
    at org.springframework.kafka.listener.adapter.RecordMessagingMessageListenerAdapter.onMessage(RecordMessagingMessageListenerAdapter.java:51) ~[spring-kafka-2.3.5.RELEASE.jar:2.3.5.RELEASE]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeOnMessage(KafkaMessageListenerContainer.java:1696) ~[spring-kafka-2.3.5.RELEASE.jar:2.3.5.RELEASE]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeOnMessage(KafkaMessageListenerContainer.java:1679) ~[spring-kafka-2.3.5.RELEASE.jar:2.3.5.RELEASE]
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:1634) ~[spring-kafka-2.3.5.RELEASE.jar:2.3.5.RELEASE]
    ... 8 common frames omitted
Swapnil

1 Answer

You don't need to use manual acks for this use case; simply configure a SeekToCurrentErrorHandler and throw the exception to the container; it will discard unprocessed records, perform the seeks and redeliver the failed message.

See the documentation.

You can configure the error handler with a DeadLetterPublishingRecoverer which can be used to send the record to a dead letter topic after some number of retries.
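
For illustration, here is a minimal sketch of that combination (not the poster's exact configuration): the handler retries a failed record twice, one second apart, and then hands it to a DeadLetterPublishingRecoverer that publishes to the default "<original topic>.DLT" topic. It assumes a suitably configured KafkaTemplate bean and would sit inside the container factory bean method shown in the question.

    // Sketch: retry twice with a 1 second back-off, then publish the record
    // to the default "<original topic>.DLT" dead-letter topic.
    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(kafkaTemplate);
    SeekToCurrentErrorHandler errorHandler =
            new SeekToCurrentErrorHandler(recoverer, new FixedBackOff(1000L, 2L));
    factory.setErrorHandler(errorHandler);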

You can configure which exceptions are retryable.
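
The exact API for that depends on the spring-kafka version. As a hedged sketch only: in newer versions (2.8 and later, where SeekToCurrentErrorHandler has been superseded by DefaultErrorHandler), the classification can be adjusted roughly as below; this is not the API of the 2.3.x version used in the question.

    // Sketch for spring-kafka 2.8+: DefaultErrorHandler replaces SeekToCurrentErrorHandler.
    // RefrigAlarmNotifyException is treated as fatal: no retries, the recoverer runs immediately.
    DefaultErrorHandler handler = new DefaultErrorHandler(recoverer, new FixedBackOff(1000L, 2L));
    handler.addNotRetryableExceptions(RefrigAlarmNotifyException.class);
    factory.setCommonErrorHandler(handler);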

Regarding this catch block from your original listener:

        } catch (Exception exception) {
            LOGGER.error("Exception while calling the service  ", exception);
            // Ignore the record
        }

You must not "eat" the exception like that; let it propagate to the container.

When using MANUAL acks, you must add the Acknowledgment as a parameter and ack it.
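
A minimal sketch of what that looks like, assuming the container factory is switched to AckMode.MANUAL (the question currently uses AckMode.RECORD) and that Acknowledgment is org.springframework.kafka.support.Acknowledgment:

    @KafkaListener(topics = "${app.topic}", groupId = "notifGrp",
            containerFactory = "storeKafkaListener")
    public void processStoreNotify(StoreNotify store, Acknowledgment ack) {
        service.planitStoreNotification(store);
        // Acknowledge only after successful processing; if an exception is thrown,
        // the record is not acked and the container's error handler takes over.
        ack.acknowledge();
    }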

Gary Russell
  • Thanks for the information. I will go through this approach. Any idea why the existing code did not work? In the above code I forgot to include Acknowledgment as a parameter, but in the actual code it is there. – Swapnil Mar 18 '20 at 13:59
  • Since you are catching the exception, the container knows nothing about it. Even if you threw the exception, the default error handler just logs the exception and moves on. – Gary Russell Mar 18 '20 at 14:11
  • Yeah, that makes sense. – Swapnil Mar 18 '20 at 15:34
  • I might be wrong, please correct me. As you mentioned in the above comments, I should not eat the generic Exception; it should be propagated to the container. But it might be the case that the exception is a genuine one, and even if the record is redelivered, it will throw the same exception again and again. What should I do in that case? – Swapnil Apr 03 '20 at 19:53
  • See [the documentation](https://docs.spring.io/spring-kafka/docs/2.4.5.RELEASE/reference/html/#annotation-error-handling). If you add a `SeekToCurrentErrorHandler` you can configure 1) how many times to retry, based on the exception type, and 2) how long to delay before making the next delivery attempt. This allows you to control which exceptions are fatal and should not be retried. By default there are 10 delivery attempts with no back-off. Certain exceptions are considered fatal by default. Again, see the reference manual. When retries are exhausted you can log or send to another topic. – Gary Russell Apr 03 '20 at 20:00
  • I tried implementing the solution you provided, with reference to https://docs.spring.io/spring-kafka/docs/2.4.5.RELEASE/reference/html/#seek-to-current, but I am facing a problem with an exception: org.springframework.kafka.listener.ListenerExecutionFailedException: Listener failed. The sample mentioned in the document listens for a String object, and I have a model object. Do I have to set any message converter? Please suggest. – Swapnil Apr 07 '20 at 14:13
  • Edit the question to add the current state of your code and show the full stack trace. – Gary Russell Apr 07 '20 at 14:26
  • I updated the code and a few comments in the original question. If you need any other details, please let me know. – Swapnil Apr 07 '20 at 17:35
  • The top-level exception is `ListenerExecutionFailedException`; you need `if (e.getCause() instanceof RestClientException) {` (see the sketch after these comments). – Gary Russell Apr 07 '20 at 17:58
  • That worked as per the expectations. Thanks for the suggestion! – Swapnil Apr 07 '20 at 18:57
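
For completeness, a sketch of the destination resolver from the question with the fix from the last comments applied, i.e. checking the cause of the wrapping ListenerExecutionFailedException; the topic names and kafkaTemplate are the ones from the question:

    DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(kafkaTemplate,
            (r, e) -> {
                // The listener exception arrives wrapped in ListenerExecutionFailedException,
                // so inspect the cause rather than the top-level exception.
                if (e.getCause() instanceof RestClientException) {
                    LOGGER.error("RestClientException while processing {} ", r.value(), e);
                    return new TopicPartition(storeDeadLtrTopic, r.partition());
                }
                LOGGER.error("Generic exception while processing {} ", r.value(), e);
                return new TopicPartition(storeErrorTopic, r.partition());
            });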