I have a Spring Boot worker that listens to a Kafka topic with 20 partitions. I created the following listener:
@KafkaListener(topics = "mytopic")
public void listen(@Payload(required = true) IncomingMessage msg,
                   @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
                   @Header(KafkaHeaders.OFFSET) long offset) {
    // print the partition and offset of every incoming message, then process it
    System.out.println("partition=" + partition + ", offset=" + offset);
    // ... message handling ...
}
In this listener I print the partition, and found that all my messages arrive on the same partition, number 17. This means the producer side is writing every message to the same partition.
My container factory is a ConcurrentKafkaListenerContainerFactory, so I want to handle multiple events at the same time. Since each partition is consumed by at most one consumer thread, that only helps if the messages are spread across different partitions.
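For context, the factory is configured roughly like this (a simplified sketch; the bean wiring and the concurrency value are illustrative, not my exact code):

@Bean
public ConcurrentKafkaListenerContainerFactory<String, IncomingMessage> kafkaListenerContainerFactory(
        ConsumerFactory<String, IncomingMessage> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, IncomingMessage> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // up to 20 consumer threads, but a thread only gets work if its assigned partitions receive messages
    factory.setConcurrency(20);
    return factory;
}

With everything landing on partition 17, only one of those threads ever does any work.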
The producer side is kafkacat running on a Linux machine, which produces events into this topic. It seems like kafkacat doesn't have a way to distribute messages across partitions in a round-robin fashion.
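For comparison, a plain Java producer that sends records without a key spreads them over the topic's partitions by default; that is roughly the behavior I want to reproduce from the command line (a minimal sketch, with the class name and broker address as placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KeylessProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 100; i++) {
                // null key: the default partitioner spreads these records over the topic's partitions
                producer.send(new ProducerRecord<>("mytopic", null, "message-" + i));
            }
        }
    }
}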
The problem is that I must use a CLI tool to produce the events, not a service. Is there any way to overcome this issue with a CLI tool? I couldn't find any CLI tool that does this.
I thought about maintaining a partition number in a file: read it, pass it to the kafkacat command as the target partition, and then increment it modulo the number of partitions. But this is not thread-safe.
Note: the messages don't have a key; I only produce values and don't care about the key.