I am using spring-kafka 2.2.8 and writing a simple async producer with the settings below:
producer config key : compression.type and value is : none
producer config key : request.timeout.ms and value is : 10000
producer config key : acks and value is : all
producer config key : batch.size and value is : 33554431
producer config key : delivery.timeout.ms and value is : 1210500
producer config key : retry.backoff.ms and value is : 3000
producer config key : key.serializer and value is : class org.apache.kafka.common.serialization.StringSerializer
producer config key : security.protocol and value is : SSL
producer config key : retries and value is : 3
producer config key : value.serializer and value is : class io.confluent.kafka.serializers.KafkaAvroSerializer
producer config key : max.in.flight.requests.per.connection and value is : 1
producer config key : linger.ms and value is : 1200000
producer config key : client.id and value is : <<my app name>>
I've printed the above producer settings using the code snippet below:
DefaultKafkaProducerFactory<?, ?> defaultKafkaProducerFactory = (DefaultKafkaProducerFactory<?, ?>) mykafkaProducerFactory;
// getConfigurationProperties() returns the Map<String, Object> the factory was configured with
defaultKafkaProducerFactory.getConfigurationProperties().forEach((key, value) ->
    System.out.println("producer config key : " + key + " and value is : " + value)
);
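For reference, the factory might have been configured along these lines (a sketch only, not necessarily my actual setup; the bootstrap server is a placeholder, the SSL properties are left out, and Spring Boot auto-configuration from application properties would be an equally likely source):

// Sketch: one way such a factory could be built from a plain properties map
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9093"); // placeholder
props.put(ProducerConfig.ACKS_CONFIG, "all");
props.put(ProducerConfig.BATCH_SIZE_CONFIG, 33554431);
props.put(ProducerConfig.LINGER_MS_CONFIG, 1200000);
props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 1);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class);
ProducerFactory<String, Object> mykafkaProducerFactory = new DefaultKafkaProducerFactory<>(props);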
Now I'm creating a KafkaTemplate with autoFlush set to false by calling the constructor below:
public KafkaTemplate(ProducerFactory<K, V> producerFactory, boolean autoFlush)
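In code, the template is created roughly like this (a minimal sketch; the generic types are an assumption based on the serializers listed above):

// Sketch: create the template on top of the existing factory with autoFlush = false
KafkaTemplate<String, Object> kafkaTemplate =
        new KafkaTemplate<>(mykafkaProducerFactory, false);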
Now I have an async producer producing 10 messages over a span of 10 seconds. Surprisingly, all 10 messages were published onto the topic within a few seconds, and I'm sure the combined size of these 10 messages is far less than my batch.size of 33554431.
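The sends themselves look roughly like this (a sketch; the topic name, key, and the buildAvroRecord() payload builder are placeholders, and InterruptedException handling is omitted):

// Requires org.springframework.util.concurrent.ListenableFuture and
// org.springframework.kafka.support.SendResult
for (int i = 0; i < 10; i++) {
    // send() is async in spring-kafka 2.2.x and returns a ListenableFuture<SendResult<K, V>>
    ListenableFuture<SendResult<String, Object>> future =
            kafkaTemplate.send("my-topic", "key-" + i, buildAvroRecord(i)); // hypothetical payload builder
    future.addCallback(
            result -> System.out.println("sent, offset = " + result.getRecordMetadata().offset()),
            ex -> System.err.println("send failed: " + ex.getMessage()));
    Thread.sleep(1000); // roughly one message per second, as described above
}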
Now my question is:
- Why are the messages being published immediately instead of waiting for either linger.ms to elapse or batch.size to be reached before sending?