I'm trying to configure my consumer with an exponential backoff, where the message is retried a fixed number of times and the backoff period is applied between attempts. But I don't get the expected behaviour.
This is my Java code:
```java
@EnableBinding({
    MessagingConfiguration.EventTopic.class
})
public class MessagingConfiguration {

    public interface EventTopic {

        String INPUT = "events-channel";

        @Input(INPUT)
        @Nonnull
        SubscribableChannel input();
    }
}
```
```java
@StreamListener(MessagingConfiguration.EventTopic.INPUT)
void handle(@Nonnull Message<Event> event) {
    throw new RuntimeException("FAILING!");
}
```
If I try the following configuration:
```yaml
spring.cloud.stream:
  bindings:
    events-channel:
      content-type: application/json
      destination: event-develop
      group: group-event-service
      consumer:
        max-attempts: 2
```
After all retries (20*) I get this message:
Backoff FixedBackOff{interval=0, currentAttempts=10, maxAttempts=9} exhausted for ConsumerRecord(...
*2 (consumer.max-attempts) * 10 (FixedBackOff.currentAttempts) = 20 retries
All these retries occur with a 1-second delay (the default backoff period).
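If I've understood the two retry layers correctly, the arithmetic can be sketched in plain Java (no Spring; the nesting of the binder's RetryTemplate inside the container's redeliveries is my assumption based on the log above):

```java
// Sketch of the two nested retry layers that would produce 20 invocations:
// the listener container redelivers the record up to 10 times
// (FixedBackOff{interval=0, maxAttempts=9} = 1 initial try + 9 retries),
// and on each delivery the binder retries the handler max-attempts (2) times.
public class RetryLayersSketch {

    static int handlerInvocations(int containerDeliveries, int binderMaxAttempts) {
        int invocations = 0;
        for (int delivery = 0; delivery < containerDeliveries; delivery++) { // container redelivers
            for (int attempt = 0; attempt < binderMaxAttempts; attempt++) {  // binder retries in-memory
                invocations++; // handle(...) throws each time, triggering the next attempt
            }
        }
        return invocations;
    }

    public static void main(String[] args) {
        // maxAttempts=9 in the log means 9 retries after the first delivery -> 10 deliveries
        System.out.println(handlerInvocations(10, 2)); // prints 20
    }
}
```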
If I change the configuration to:
```yaml
spring.cloud.stream:
  bindings:
    events-channel:
      content-type: application/json
      destination: event-develop
      group: group-event-service
      consumer:
        max-attempts: 8
        # Times in milliseconds
        back-off-initial-interval: 1000
        back-off-max-interval: 60000
        back-off-multiplier: 2
```
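To double-check my expectations, here is a plain-Java sketch (no Spring) of the delay sequence I believe these settings should produce between the 8 attempts:

```java
// Sketch of the exponential backoff delay sequence: one delay between each
// pair of consecutive attempts, starting at back-off-initial-interval,
// multiplied by back-off-multiplier and capped at back-off-max-interval.
public class BackOffSequenceSketch {

    static long[] delays(long initialInterval, long maxInterval, double multiplier, int maxAttempts) {
        long[] result = new long[maxAttempts - 1]; // maxAttempts attempts -> maxAttempts - 1 delays
        long interval = initialInterval;
        for (int i = 0; i < result.length; i++) {
            result[i] = interval;
            interval = Math.min((long) (interval * multiplier), maxInterval); // cap at max interval
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(delays(1000L, 60000L, 2.0, 8)));
        // prints [1000, 2000, 4000, 8000, 16000, 32000, 60000]
    }
}
```

That is 7 delays: 1000, 2000, 4000, 8000, 16000, 32000 and 60000 ms (64000 capped at back-off-max-interval).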
The backoff period is applied correctly across the 8 retries (max-attempts), BUT once those 8 retries are exhausted a new cycle of retries starts indefinitely, blocking the topic.
In a future version I may implement a more sophisticated error-handling system, but for now I only need to discard the message after the retries and move on to the next one.
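For reference, the closest thing to "discard after the retries" I can picture is a container customizer along these lines (a rough sketch assuming spring-kafka's SeekToCurrentErrorHandler and spring-cloud-stream's ListenerContainerCustomizer; the bean and names are my guesses, I haven't got this working):

```java
import org.springframework.cloud.stream.config.ListenerContainerCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

// Sketch: replace the container's error handler so the record is logged and
// skipped once retries are exhausted, instead of being redelivered forever.
@Configuration
public class DiscardAfterRetriesConfiguration {

    @Bean
    public ListenerContainerCustomizer<AbstractMessageListenerContainer<byte[], byte[]>> containerCustomizer() {
        return (container, destinationName, group) -> container.setErrorHandler(
                new SeekToCurrentErrorHandler(
                        // Recoverer: just drop the record after the retries are exhausted
                        (record, exception) -> System.err.println("Discarding " + record),
                        // No extra container-level retries
                        new FixedBackOff(0L, 0L)));
    }
}
```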
What am I doing wrong?
I have read a lot of questions/answers here, the official documentation and some tutorials on the internet, but I didn't find a solution to avoid the infinite retry loop.
P.S.: I'm working with spring-cloud-stream (3.1.1) and spring-kafka (2.6.6).