I have a Kafka consumer running in a Spring application.
I am trying to configure the consumer with fetch.max.wait.ms and fetch.min.bytes.
I would like the consumer to wait until either 15,000,000 bytes of messages are available or 1 minute has passed.
consumerProps.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 60000);
consumerProps.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 15000000);
factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerProps));
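For context, here is the full factory configuration in one place — a minimal sketch, assuming a standard Spring Kafka setup (the bean name, bootstrap servers, and String deserializers are illustrative, not from my actual code):

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
public class ConsumerConfiguration {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        Map<String, Object> consumerProps = new HashMap<>();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        // the broker should hold each fetch until ~15 MB are available or 1 minute passes
        consumerProps.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 60000);
        consumerProps.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 15000000);

        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerProps));
        return factory;
    }
}
```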
I know this configuration has an effect, because once it was set I started to get org.apache.kafka.common.errors.DisconnectException.
To resolve it I increased request.timeout.ms:
consumerProps.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, 120000);
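The disconnects make sense to me: request.timeout.ms bounds how long the client waits for a broker response, while fetch.max.wait.ms is how long the broker may legitimately hold a fetch open, so the former has to exceed the latter. A trivial sanity check of the values above (helper name is mine, just for illustration):

```java
public class FetchTimeoutSanity {

    // true when the client is willing to wait longer than the broker
    // may hold a fetch open while accumulating fetch.min.bytes
    static boolean safe(int fetchMaxWaitMs, int requestTimeoutMs) {
        return requestTimeoutMs > fetchMaxWaitMs;
    }

    public static void main(String[] args) {
        // 120000 > 60000: the client outwaits the broker, no disconnect
        System.out.println(safe(60_000, 120_000) ? "ok" : "risk of DisconnectException");
    }
}
```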
This resolved the errors, but the behavior is not as expected:
The consumer is picking up messages very frequently, in small amounts nowhere near fetch.min.bytes.
Within a single minute it will sometimes do multiple fetches.
It works OK on my local dev environment when I test it with Spring EmbeddedKafka, but it doesn't work in production (MSK).
What can explain this? Is it possible it doesn't work well on MSK?
Are there other properties that play a role here or could get in the way?
Is it correct to say that, assuming I always stay under fetch.min.bytes, I won't see more than one fetch per minute?
What is the expected behavior when new records are written while a poll is in progress? Does it affect the current poll or the next one?
(Other properties defined for this consumer: session.timeout.ms, max.poll.records, max.partition.fetch.bytes.)
====== EDIT ======
After some investigation I discovered something: the configuration works as expected when the consumer reads from a topic with a single partition.
When reading from a topic with multiple partitions, the fetch timing becomes unpredictable.
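One detail that may be relevant here, as far as I understand the Kafka protocol: the consumer sends a separate fetch request to each broker that leads one of its assigned partitions, and fetch.min.bytes / fetch.max.wait.ms are evaluated per request, not per consumer. EmbeddedKafka is a single broker, so there is only one fetch in flight; on an MSK cluster, partitions spread across brokers mean several independent fetches, each of which can fill up or time out on its own. If that is right, the upper bound on fetch responses while staying under fetch.min.bytes would look something like this (broker counts are illustrative):

```java
public class FetchFrequency {

    // Upper bound on fetch responses per minute while traffic stays under
    // fetch.min.bytes: each broker's fetch request times out independently
    // once per fetch.max.wait.ms, so the bounds multiply.
    static int maxFetchesPerMinute(int brokersLeadingAssignedPartitions, int fetchMaxWaitMs) {
        return brokersLeadingAssignedPartitions * (60_000 / fetchMaxWaitMs);
    }

    public static void main(String[] args) {
        // single broker (EmbeddedKafka): at most 1 fetch per minute, as expected
        System.out.println(maxFetchesPerMinute(1, 60_000));
        // partitions led by 3 brokers (e.g. MSK): up to 3 fetches per minute
        System.out.println(maxFetchesPerMinute(3, 60_000));
    }
}
```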