We are using Spring, Kafka (Spring Kafka 2.4.5, Spring Cloud Stream 3.0.1) and OpenShift, with the setup below: multiple brokers/topics, each topic with 8 partitions and a replication factor of 3, and multiple Spring Boot consumers.
We get the exception below when we bring down one of the brokers as part of resiliency testing, and we keep getting the same error even after the broker is brought back up.
org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.
2020-05-19 18:39:57.598 ERROR [service,,,] 1 --- [ad | producer-5] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='{49, 50, 49, 50, 54, 53, 56}' and payload='{123, 34, 115, 111, 117, 114, 99, 101, 34, 58, 34, 72, 67, 80, 77, 34, 44, 34, 110, 97, 109, 101, 34...' to topic topicname
2020-05-19 18:39:57.598 WARN [service,,,] 1 --- [ad | producer-5] o.a.k.clients.producer.internals.Sender : [Producer clientId=producer-5] Received invalid metadata error in produce request on partition topicname-4 due to org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.. Going to request metadata update now
I searched on Google and most answers say that setting the retry value to more than 1 will help, but since the error persists even after the broker is back up, I am not sure whether that will work.
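If that retry suggestion applies here, my understanding is that it would go through the binder/binding producer configuration, something like the lines below (property values are my guess and not tested):

spring.cloud.stream.kafka.binder.requiredAcks=all
spring.cloud.stream.kafka.bindings.outputChannel.producer.configuration.retries=10
spring.cloud.stream.kafka.bindings.outputChannel.producer.configuration.retry.backoff.ms=1000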
This is what I currently have in the properties file:
spring.cloud.stream.kafka.binder.brokers=${output.server}
spring.cloud.stream.kafka.binder.requiredAcks=1
spring.cloud.stream.bindings.outputChannel.destination=${output.topic}
spring.cloud.stream.bindings.outputChannel.content-type=application/json
and one line of code that sends messages through the Spring Cloud Stream output channel:
`client.outputChannel().send(MessageBuilder.withPayload(message).setHeader(KafkaHeaders.MESSAGE_KEY, message.getId().getBytes()).build());`
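For context, `client` is a plain Spring Cloud Stream binding interface with an `@Output` channel, roughly like the sketch below (interface and method names simplified on my side; only the `outputChannel` binding name matters for the properties above):

```java
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.messaging.MessageChannel;

// Simplified sketch of the binding interface behind client.outputChannel();
// it is enabled on a configuration class via @EnableBinding(Client.class).
public interface Client {

    @Output("outputChannel")
    MessageChannel outputChannel();
}
```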
Please help me.
Thanks, Rams