
I am trying to retry my consumer on exception. My consumer looks like below:

    @Bean
    public Consumer<Message<String>> input() {
       return message -> {
           String output = service.getValues();
       };
    }

Below are the approaches I have tried.

1. Building a `RetryTemplate` inside the consumer:

       @Bean
       public Consumer<Message<String>> input() {
          return message -> {
             RetryTemplate retryTemplate = new RetryTemplate();
             RetryPolicy retryPolicy = new SimpleRetryPolicy(2);
             FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
             backOffPolicy.setBackOffPeriod(600000);

             retryTemplate.setBackOffPolicy(backOffPolicy);
             retryTemplate.setRetryPolicy(retryPolicy);

             retryTemplate.execute(context -> {
                try {
                   String output = service.getValues();
                }
                catch (Exception e) {
                   throw new IllegalStateException(e);
                }
                return null;
             });
          };
       }

2. Declaring a `@StreamRetryTemplate` bean:

       @StreamRetryTemplate
       public RetryTemplate myRetryTemplate() {
          return new RetryTemplate();
       }
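(As an aside on approach 2: a bare `new RetryTemplate()` keeps Spring Retry's defaults, i.e. a `SimpleRetryPolicy` with 3 attempts and no back-off. If custom values are wanted, the policies from approach 1 would presumably need to be moved onto this bean instead; a sketch reusing the same classes:)

```java
@StreamRetryTemplate
public RetryTemplate myRetryTemplate() {
   RetryTemplate retryTemplate = new RetryTemplate();
   // Same values as in approach 1: 2 attempts total, 10-minute fixed back-off
   retryTemplate.setRetryPolicy(new SimpleRetryPolicy(2));
   FixedBackOffPolicy backOffPolicy = new FixedBackOffPolicy();
   backOffPolicy.setBackOffPeriod(600000);
   retryTemplate.setBackOffPolicy(backOffPolicy);
   return retryTemplate;
}
```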
3. Instead of a `RetryTemplate`, setting maxAttempts and back-off values in consumer properties like below:

       spring.cloud.stream.bindings.input-in-0.consumer.maxAttempts=2
       spring.cloud.stream.bindings.input-in-0.consumer.backOffInitialInterval=600000
       spring.cloud.stream.bindings.input-in-0.consumer.backOffMaxInterval=600000
       spring.cloud.stream.bindings.input-in-0.consumer.backOffMultiplier=1.0
       spring.cloud.stream.bindings.input-in-0.consumer.defaultRetryable=false
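(For reference, spring-retry's `maxAttempts` counts the first delivery too, so `maxAttempts=2` means one initial try plus one retry. A plain-Java sketch of that semantic, not the binder's actual code; `attemptWithRetry` is a made-up helper:)

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RetrySemantics {
    // Rough model of spring-retry: maxAttempts includes the initial call,
    // so maxAttempts=2 means 1 initial try + 1 retry.
    static int attemptWithRetry(int maxAttempts, long backOffMillis, Runnable work) {
        int attempts = 0;
        while (true) {
            attempts++;
            try {
                work.run();
                return attempts; // success: report how many tries it took
            } catch (RuntimeException e) {
                if (attempts >= maxAttempts) {
                    throw e; // retries exhausted; binder would now recover (e.g. DLQ)
                }
                try {
                    Thread.sleep(backOffMillis); // fixed back-off between attempts
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw e;
                }
            }
        }
    }

    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();
        try {
            attemptWithRetry(2, 10, () -> {
                calls.incrementAndGet();
                throw new IllegalStateException("downstream unavailable");
            });
        } catch (IllegalStateException expected) {
            // retries exhausted after 2 attempts
        }
        System.out.println("calls=" + calls.get()); // prints calls=2
    }
}
```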

Nothing is working; it seems to be retrying with the default retry settings only. To my surprise, it is retrying more than 10 times. Maybe I am missing some basic initial setup. Please help me understand what I am doing wrong.

Below are my consumer properties printed to the console on startup:

[pool-8-thread-2] INFO  o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values: 
    allow.auto.create.topics = true
    auto.commit.interval.ms = 5000
    auto.offset.reset = latest
    bootstrap.servers = [server.domian:9092]
    check.crcs = true
    client.dns.lookup = use_all_dns_ips
    client.id = consumer-fsf-gateway-5
    client.rack = 
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = true
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = group_id_val
    group.instance.id = null
    heartbeat.interval.ms = 3000
    interceptor.classes = []
    internal.leave.group.on.close = true
    internal.throw.on.fetch.stable.offset.unsupported = false
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 500
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = kafka
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = SASL_SSL
    security.providers = null
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
  • Your example is inconclusive. For example, your binding properties point to a binding called `poppyPants`, but I don't see it, as your function is called `input` and the expected binding name would be `input-in-0`. Also, what are you doing configuring a `RetryTemplate` inside your consumer? Please read the https://docs.spring.io/spring-cloud-stream/docs/current/reference/html/spring-cloud-stream.html#_retry_template section. If you still can't make it work, please provide a link to a running sample somewhere on GitHub where we can look – Oleg Zhurakousky Jul 28 '22 at 12:41
  • @OlegZhurakousky thanks for the response; that was a typo, the binding is input-in-0 only. I updated my question, please check – mhvb Jul 30 '22 at 17:33

1 Answer


The configurations provided in the question do work for retry. The issue was that, because of a lot of previous failures, it was retrying the earlier requests: all the previously failed records had queued up, and those had to be processed first before new ones came through.
