
I have implemented the Kafka consumer as follows:

@Bean
public Consumer<Message<String>> input() {
    return message -> {
        // external service call
    };
}

If any exception occurs in the external call or in the consumer, I want to retry the consumer after 10 minutes. Can anyone tell me how to achieve this? I have gone through a few SO links and the Spring docs, but I didn't understand them fully. I am new to this and want to try it.
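My understanding so far is that a fixed 10-minute retry would be expressed with binding-level consumer properties something like the following (the binding name poppyPants is specific to my setup, and I am not sure this is correct):

```properties
# Retry the consumer once after a failure (2 attempts in total)
spring.cloud.stream.bindings.poppyPants.consumer.maxAttempts=2
# Wait 10 minutes (600000 ms) before the retry
spring.cloud.stream.bindings.poppyPants.consumer.backOffInitialInterval=600000
spring.cloud.stream.bindings.poppyPants.consumer.backOffMaxInterval=600000
# Keep the interval fixed rather than growing exponentially
spring.cloud.stream.bindings.poppyPants.consumer.backOffMultiplier=1.0
```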

Links I have gone through:

https://docs.spring.io/spring-cloud-stream/docs/Elmhurst.RELEASE/reference/htmlsingle/index.html#_retry_template

Backoff settings for spring cloud stream rabbit

Spring cloud stream kafka consumer error handling and retries issues

Update: I have added the properties below, but on an exception it retries the consumer more than 8 times, and the interval between retries is only 1 second:

spring.cloud.stream.bindings.poppyPants.consumer.maxAttempts=2
spring.cloud.stream.bindings.poppyPants.consumer.backOffInitialInterval=900000
spring.cloud.stream.bindings.poppyPants.consumer.backOffMaxInterval=900000
spring.cloud.stream.bindings.poppyPants.consumer.backoffMultiplier=1.0
spring.cloud.stream.bindings.poppyPants.consumer.defaultRetryable=false

Update 2: below are my consumer config properties from the console:

00:51:17.996 [pool-8-thread-2] INFO  o.a.k.c.consumer.ConsumerConfig - ConsumerConfig values: 
    allow.auto.create.topics = true
    auto.commit.interval.ms = 5000
    auto.offset.reset = latest
    bootstrap.servers = [server.domian:9092]
    check.crcs = true
    client.dns.lookup = use_all_dns_ips
    client.id = consumer-fsf-gateway-5
    client.rack = 
    connections.max.idle.ms = 540000
    default.api.timeout.ms = 60000
    enable.auto.commit = true
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = group_id_val
    group.instance.id = null
    heartbeat.interval.ms = 3000
    interceptor.classes = []
    internal.leave.group.on.close = true
    internal.throw.on.fetch.stable.offset.unsupported = false
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 500
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 30000
    retry.backoff.ms = 100
    sasl.client.callback.handler.class = null
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = kafka
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.login.callback.handler.class = null
    sasl.login.class = null
    sasl.login.refresh.buffer.seconds = 300
    sasl.login.refresh.min.period.seconds = 60
    sasl.login.refresh.window.factor = 0.8
    sasl.login.refresh.window.jitter = 0.05
    sasl.mechanism = GSSAPI
    security.protocol = SASL_SSL
    security.providers = null
    send.buffer.bytes = 131072
    session.timeout.ms = 10000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
    ssl.endpoint.identification.algorithm = https
    ssl.engine.factory.class = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLSv1.3
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer

Is anything wrong here? Is any extra config required? @Gary Russell, any suggestions?

xxz

1 Answer


You can use the retry properties of ConsumerProperties, such as maxAttempts, backOffInitialInterval, backOffMaxInterval and backOffMultiplier. These configure the default RetryTemplate. If you need to customize it further, you can provide your own RetryTemplate bean.
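For example, a custom template for a single 10-minute retry could be sketched like this (assuming Spring Cloud Stream 3.x with spring-retry on the classpath; the class and method names here are illustrative):

```java
import org.springframework.cloud.stream.annotation.StreamRetryTemplate;
import org.springframework.context.annotation.Configuration;
import org.springframework.retry.support.RetryTemplate;

@Configuration
public class RetryConfig {

    // @StreamRetryTemplate registers this bean so the binder uses it
    // instead of the default RetryTemplate built from the binding properties.
    @StreamRetryTemplate
    public RetryTemplate retryTemplate() {
        return RetryTemplate.builder()
                .maxAttempts(2)          // initial call + 1 retry
                .fixedBackoff(600_000L)  // wait 10 minutes between attempts
                .build();
    }
}
```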

Oleg Zhurakousky
  • Do these properties apply only in the exception case, until the request succeeds or the limit is reached? If it succeeds I don't want to retry it; I just want a retry in the exception case only. – xxz Jul 26 '22 at 09:43
  • Yes, those properties only apply when your consumer method throws an exception. – sobychacko Jul 26 '22 at 14:54
  • I have added these properties, but the retry happens more than 8 times, which is not as expected: spring.cloud.stream.bindings.poppyPants.consumer.maxAttempts=2 spring.cloud.stream.bindings.poppyPants.consumer.backOffInitialInterval=900000 spring.cloud.stream.bindings.poppyPants.consumer.backOffMaxInterval=900000 spring.cloud.stream.bindings.poppyPants.consumer.backoffMultiplier=1.0 spring.cloud.stream.bindings.poppyPants.consumer.defaultRetryable=false – xxz Jul 27 '22 at 09:08
  • Updated the question with my changes. – xxz Jul 27 '22 at 09:08