
Spring Cloud Stream's Kafka Streams binder does not retry on a deserialization error, even with the configuration below. The expectation is that it should retry according to the configured retry policy and, once retries are exhausted, push the failed message to a DLQ.

The configuration is as follows:

```properties
spring.cloud.stream.bindings.input_topic.consumer.maxAttempts=7
spring.cloud.stream.bindings.input_topic.consumer.backOffInitialInterval=500
spring.cloud.stream.bindings.input_topic.consumer.backOffMultiplier=10.0
spring.cloud.stream.bindings.input_topic.consumer.backOffMaxInterval=100000
spring.cloud.stream.bindings.input_topic.consumer.defaultRetryable=true
```
```java
public interface MyStreams {

    String INPUT_TOPIC = "input_topic";
    String INPUT_TOPIC2 = "input_topic2";
    String ERROR = "apperror";
    String OUTPUT = "output";

    @Input(INPUT_TOPIC)
    KStream<String, InObject> inboundTopic();

    @Input(INPUT_TOPIC2)
    KStream<Object, InObject> inboundTOPIC2();

    @Output(OUTPUT)
    KStream<Object, outObject> outbound();

    @Output(ERROR)
    MessageChannel outboundError();
}
```

```java
@StreamListener(MyStreams.INPUT_TOPIC)
@SendTo(MyStreams.OUTPUT)
public KStream<Key, outObject> processSwft(KStream<Key, InObject> myStream) {
    return myStream.mapValues(this::transform);
}
```

The `metadataRetryOperations` field in `KafkaTopicProvisioner` is always null, so a new `RetryTemplate` is created in `afterPropertiesSet()`:

```java
public KafkaTopicProvisioner(KafkaBinderConfigurationProperties kafkaBinderConfigurationProperties,
        KafkaProperties kafkaProperties) {
    Assert.isTrue(kafkaProperties != null, "KafkaProperties cannot be null");
    this.adminClientProperties = kafkaProperties.buildAdminProperties();
    this.configurationProperties = kafkaBinderConfigurationProperties;
    this.normalalizeBootPropsWithBinder(this.adminClientProperties, kafkaProperties, kafkaBinderConfigurationProperties);
}

public void setMetadataRetryOperations(RetryOperations metadataRetryOperations) {
    this.metadataRetryOperations = metadataRetryOperations;
}

public void afterPropertiesSet() throws Exception {
    if (this.metadataRetryOperations == null) {
        RetryTemplate retryTemplate = new RetryTemplate();

        SimpleRetryPolicy simpleRetryPolicy = new SimpleRetryPolicy();
        simpleRetryPolicy.setMaxAttempts(10);
        retryTemplate.setRetryPolicy(simpleRetryPolicy);

        ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
        backOffPolicy.setInitialInterval(100L);
        backOffPolicy.setMultiplier(2.0D);
        backOffPolicy.setMaxInterval(1000L);
        retryTemplate.setBackOffPolicy(backOffPolicy);

        this.metadataRetryOperations = retryTemplate;
    }
}
```
NiBa

2 Answers


> Spring cloud Kafka stream does not retry upon deserialization error even after specific configuration.

The behavior you are seeing matches the default settings of Kafka Streams when it encounters a deserialization error.

From https://docs.confluent.io/current/streams/faq.html#handling-corrupted-records-and-deserialization-errors-poison-pill-records:

> LogAndFailExceptionHandler implements DeserializationExceptionHandler and is the default setting in Kafka Streams. It handles any encountered deserialization exceptions by logging the error and throwing a fatal error to stop your Streams application. If your application is configured to use LogAndFailExceptionHandler, then an instance of your application will fail-fast when it encounters a corrupted record by terminating itself.

I am not familiar with Spring's facade for Kafka Streams, but you probably need to configure the desired `org.apache.kafka.streams.errors.DeserializationExceptionHandler` instead of configuring retries (retries are meant for a different purpose). Alternatively, implement your own custom handler (see the link above for more information), and then configure Spring/KStreams to use it.
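As an illustration only, the built-in log-and-continue handler could be selected through the binder's Streams configuration. The property path below follows the Kafka Streams binder docs linked in the comments and should be verified against your binder version:

```properties
# Assumption: property path per the Kafka Streams binder docs; verify for your version.
spring.cloud.stream.kafka.streams.binder.configuration.default.deserialization.exception.handler=org.apache.kafka.streams.errors.LogAndContinueExceptionHandler
```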

miguno

The retry configuration only works with `MessageChannel`-based binders. With the KStream binder, Spring just helps build the topology in a prescribed way; it is not involved in the message flow once the topology is built.

The next version of spring-kafka (used by the binder) adds the `RecoveringDeserializationExceptionHandler` (commit here); while it can't help with retry, it can be used with a `DeadLetterPublishingRecoverer` to send the failed record to a dead-letter topic.
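Once that release is out, it could presumably be wired in through the same Streams configuration property as other handlers (the class name and property path below are assumptions to check against the actual release); the `DeadLetterPublishingRecoverer` itself would have to be supplied programmatically, since it is an object rather than a string value:

```properties
# Assumption: handler class name and property path; verify against the spring-kafka release.
spring.cloud.stream.kafka.streams.binder.configuration.default.deserialization.exception.handler=org.springframework.kafka.streams.RecoveringDeserializationExceptionHandler
```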

You can use a RetryTemplate within your processors/transformers to retry specific operations.
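A minimal, self-contained sketch of that pattern. This hand-rolled loop only stands in for spring-retry's `RetryTemplate.execute(...)`; the helper name `withRetry` and the parameter values are made up for illustration:

```java
import java.util.function.Supplier;

// Sketch only: a hand-rolled retry-with-backoff helper illustrating what
// spring-retry's RetryTemplate does. In real code, use RetryTemplate.execute(...).
public class RetrySketch {

    /**
     * Runs the given operation, retrying on RuntimeException with exponential
     * backoff, and rethrows the last failure once attempts are exhausted.
     */
    public static <T> T withRetry(Supplier<T> operation, int maxAttempts, long initialBackoffMs) {
        if (maxAttempts < 1) {
            throw new IllegalArgumentException("maxAttempts must be >= 1");
        }
        long backoffMs = initialBackoffMs;
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return operation.get();
            } catch (RuntimeException ex) {
                last = ex;
                if (attempt < maxAttempts) {
                    try {
                        Thread.sleep(backoffMs);
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        throw last;
                    }
                    backoffMs *= 2; // like ExponentialBackOffPolicy with multiplier 2.0
                }
            }
        }
        throw last; // exhausted; a surrounding handler (e.g. DLQ) takes over
    }
}
```

Inside a custom delegating `Serde`, the real deserializer call (e.g. one that contacts the schema registry) would be passed in as the operation.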

Gary Russell
  • There is minimal support for DLQ within the Kafka streams binder today, basically sending the failed record to a DLQ if it is configured. However, no retry mechanisms are provided as that's only applicable for message channel based binders. See this docs section for more details. https://cloud.spring.io/spring-cloud-static/spring-cloud-stream-binder-kafka/2.2.0.RELEASE/spring-cloud-stream-binder-kafka.html#_error_handling – sobychacko Jun 26 '19 at 13:16
  • We will try to integrate with the `RecoveringDeserializationExceptionHandler` that Gary mentioned in the next version of the binder. – sobychacko Jun 26 '19 at 13:25
  • Thanks @GaryRussell and @sobychacko for the clarification. I have configured `serdeError=sendtodlq`, which is sending the deserialization failures to a DLQ. The intent here is to retry contacting the schema registry if the failure is due to network issues. Can you please explain more about '**You can use a RetryTemplate within your processors/transformers to retry specific operations.**'? The only place I have found so far to handle deserialization failures with KStream binders is `spring.cloud.stream.kafka.streams.binder.configuration.default.deserialization.exception.handler`. – NiBa Jun 29 '19 at 07:23
  • `RecoveringDeserializationExceptionHandler` would be really helpful to add more custom behaviours for these failures. Also, our requirement is to stop the kafka stream upon deserialization error then fix and reset the order of messages. – NiBa Jun 29 '19 at 07:28
  • `>Can you please explain more around...` - that was for any errors in custom processors/transformers. To retry deserialization you can use a custom `Serde` to wrap the real `Serde` and use a `RetryTemplate` there. – Gary Russell Jun 29 '19 at 14:46
  • I just added a `RetryingDeserializer` to spring-kafka; [PR here](https://github.com/spring-projects/spring-kafka/pull/1153). – Gary Russell Jul 03 '19 at 13:17