
I am getting the error below: org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s).

 Caused by: org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for pipeline-demo-0: 60125 ms has passed since last append

 2020-04-26 16:11:14.927 ERROR o.s.k.s.LoggingProducerListener - Exception thrown when sending a message with key='null' and payload='KafkaMessage(message={grx_projectCode=Value(v=demo, dataType=STRING), grx_gid=Value(v=5e5207a8-881d-...' to topic pipeline-demo and partition 0:
     org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for pipeline-demo-0: 60125 ms has passed since last append

 2020-04-26 16:11:14.927 ERROR i.t.g.c.c.s.i.DumpToKafkaServiceImpl - Dump to kafka exception
     org.springframework.kafka.core.KafkaProducerException: Failed to send; nested exception is org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for pipeline-demo-0: 60125 ms has passed since last append

I have tried multiple combinations of a bigger timeout and a smaller batch size with linger.ms set to 0, but I am still getting this error.

Consumer configs:

event.topic=events
consumer.threads=1
max.poll.records=1000
max.poll.interval.ms=120000
max.partition.fetch.bytes=1048576
fetch.max.bytes=524288000
fetch.min.bytes=1
fetch.max.wait.ms=500

Producer configs:

retries=2
batch.size=100
linger.ms=0
buffer.memory=17179869184
acks=all
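
For context, the "60125 ms has passed since last append" expiry is governed by the producer's timeout settings rather than by batch.size or linger.ms. A minimal sketch of raising those knobs inside producerConfigs() below; the values are illustrative assumptions, not recommendations:

    // Sketch: timeout-related producer settings (illustrative values).
    // Before Kafka 2.1, the accumulator expiry behind "ms has passed since last
    // append" is driven by request.timeout.ms; from 2.1 on, delivery.timeout.ms
    // bounds the total time a record may wait in the buffer plus in-flight retries.
    props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 120000);   // per-request timeout
    props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 180000);  // total send deadline (Kafka 2.1+)
    props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 60000);          // max time send()/partitionsFor() may block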

Code for the producer:

    @Override
    public void send(String topic, KafkaMessage kafkaMessage, String partitionBy, String correlationId) {
        Integer partition = null;
        if (!StringUtils.isEmpty(partitionBy)) {
            try {
                // Mirror Kafka's default partitioner: murmur2 hash of the
                // partition key, modulo the topic's partition count
                int numPartitions = template.partitionsFor(topic).size();
                partition = Utils.abs(Utils.murmur2(partitionBy.getBytes())) % numPartitions;
            } catch (Exception e) {
                log.error("Unable to get partitions for topic", e);
            }
        }

        // Null key and no headers; the record is routed by the explicit partition
        // (or by the producer's default partitioning when partition is null)
        ProducerRecord<Integer, KafkaMessage> record =
                new ProducerRecord<>(topic, partition, null, kafkaMessage, null);
        ListenableFuture<SendResult<Integer, KafkaMessage>> future = template.send(record);
        future.addCallback(new ListenableFutureCallback<SendResult<Integer, KafkaMessage>>() {

            @Override
            public void onSuccess(SendResult<Integer, KafkaMessage> result) {
                MeterFactory.getEventsSavedMeter().mark();
            }

            @Override
            public void onFailure(Throwable ex) {
                log.error("Dump to kafka exception ", ex);
                MeterFactory.getEventsSaveFailedMeter().mark();
            }
        });
    }
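
As an aside, the manual murmur2 computation above reproduces what Kafka's DefaultPartitioner already does for keyed records. An alternative sketch, assuming a hypothetical String-keyed template (not part of the code above), would pass partitionBy as the record key and let Kafka route it:

    // Hypothetical sketch: assumes a KafkaTemplate<String, KafkaMessage> whose
    // producer is configured with StringSerializer for keys. Kafka's default
    // partitioner then applies the same murmur2(keyBytes) % numPartitions logic.
    ProducerRecord<String, KafkaMessage> keyed =
            new ProducerRecord<>(topic, partitionBy, kafkaMessage);
    stringKeyedTemplate.send(keyed);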

Code for the config, KafkaProducerConfig.java:

@Configuration
public class KafkaProducerConfig {

    @Value("${bootstrap.servers}")
    private String bootstrapServers;

    @Value("${retries}")
    private String retries;

    @Value("${batch.size}")
    private String batchSize;

    @Value("${linger.ms}")
    private String lingerMilliSeconds;

    @Value("${buffer.memory}")
    private String bufferMemory;

    @Value("${acks}")
    private String acks;

    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.RETRIES_CONFIG, retries);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, batchSize);
        props.put(ProducerConfig.LINGER_MS_CONFIG, lingerMilliSeconds);
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, bufferMemory);
        props.put(ProducerConfig.ACKS_CONFIG, acks);
        // Keys are Integer, so serialize them with IntegerSerializer
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        return props;
    }

    @Bean
    public ProducerFactory<Integer, KafkaMessage> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @Bean
    public KafkaTemplate<Integer, KafkaMessage> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }

}

1 Answer


Kafka doesn't send records immediately. It batches them and sends a batch once it reaches the configured batch.size or once linger.ms elapses.

Given that the error messages show only a few records expiring, you're sending too little data to fill batches and not flushing the producer.
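
For illustration, a minimal sketch of flushing at batch boundaries, assuming the question's template (the batch collection here is hypothetical):

    // Minimal sketch: drain the producer's accumulator after a burst of sends.
    // KafkaTemplate.flush() blocks until buffered records are transmitted, so
    // call it at batch boundaries rather than after every record.
    for (KafkaMessage msg : batch) {
        template.send(topic, msg);
    }
    template.flush();  // pushes any partially filled batches onto the wire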

  • flush will make my producer synchronous, and that is not desired – Alaukik Srivastava Apr 27 '20 at 06:38
  • Then send more data – OneCricketeer Apr 27 '20 at 18:25
  • It's not the case; messages are going really fast, but somehow it is still showing this error. I tried a batch size of 100 while polling 3000 records (this jar consumes a topic, filters, and produces to another topic) with a timeout of 60 sec, and I still get this error, so there is something else going on. This is intermittent. – Alaukik Srivastava Apr 28 '20 at 21:41