I use the Confluent.Kafka 1.9.2 C# library to create a single Kafka consumer that listens to a topic with several partitions. Currently the consumer drains all messages from the first partition and only then moves on to the next. As I understand from the KIP, I can avoid this behavior and achieve round-robin consumption by lowering the `max.partition.fetch.bytes`
parameter. I changed this value to 5000 bytes and pushed 10000 messages to the first partition and 1000 to the second; the average message size is 2000 bytes, so the consumer should move between partitions every 2-3 messages (if I understand correctly). But it still drains the first partition before consuming from the second one. My only guess as to why it doesn't work as expected is the latest comment here, which says that such an approach can't work with several brokers; the Kafka cluster I use has 6 brokers. Could that be the reason, or is it something else?
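For reference, a minimal sketch of the setup described above, assuming the bootstrap servers, group id, and topic name are placeholders. The `MaxPartitionFetchBytes` property on `ConsumerConfig` maps to `max.partition.fetch.bytes`, and the loop prints each message's source partition so any interleaving (or the lack of it) is visible:

```csharp
using System;
using System.Threading;
using Confluent.Kafka;

class PartitionInterleaveCheck
{
    static void Main()
    {
        var config = new ConsumerConfig
        {
            BootstrapServers = "localhost:9092",  // placeholder
            GroupId = "interleave-test",          // placeholder
            AutoOffsetReset = AutoOffsetReset.Earliest,
            // Maps to max.partition.fetch.bytes: the cap on how much
            // a single partition can contribute to one fetch response.
            MaxPartitionFetchBytes = 5000,
        };

        using var consumer = new ConsumerBuilder<Ignore, byte[]>(config).Build();
        consumer.Subscribe("my-topic");           // placeholder

        // Log the partition of each consumed message to observe
        // whether consumption alternates between partitions.
        for (int i = 0; i < 100; i++)
        {
            var cr = consumer.Consume(CancellationToken.None);
            Console.WriteLine($"#{i}: partition {cr.Partition.Value}, offset {cr.Offset.Value}");
        }

        consumer.Close();
    }
}
```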

- The default behavior should be to group records of all partitions together per poll, not read one partition at a time. – OneCricketeer Nov 23 '22 at 16:45
- Please correct me if I've understood wrong: the consumer polls N messages that are fetched from every partition and grouped together (if the `max.partition.fetch.bytes` value allows such behavior)? So it should take messages from the second partition anyway. – Dmitriy Marov Nov 23 '22 at 17:07
- I've never explicitly set that config, so I'm not sure about it, but I definitely don't think it affects single partitions, rather the entire fetch request over all partitions... If some partitions have larger messages than others, it's possible you might not see them at all, rather than the behavior of "one partition at a time" without any extra config – OneCricketeer Nov 23 '22 at 17:15
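To make the distinction the comments are circling concrete: `max.partition.fetch.bytes` caps what a single partition can contribute to one fetch response, while `fetch.max.bytes` caps the fetch response as a whole across all partitions. A sketch of both settings side by side (the values are illustrative, not recommendations):

```csharp
var config = new ConsumerConfig
{
    // Per-partition cap within one fetch response
    // (max.partition.fetch.bytes).
    MaxPartitionFetchBytes = 5000,

    // Cap on the whole fetch response across all partitions
    // (fetch.max.bytes); to be meaningful it must be at least
    // as large as the per-partition cap.
    FetchMaxBytes = 52428800, // e.g. 50 MB
};
```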