I am working on a Kafka Streams application based on Spring Boot and Java 8; we use Kafka clients version 2.5.0. I noticed that sometimes (not always), when forwarding a record from a punctuator, the operation fails with a NullPointerException.
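For reference, the forwarding happens roughly like this (a simplified sketch, not my exact code; the class name, schedule, and the hard-coded key/value are placeholders, and the state-store bookkeeping is omitted):

    import java.time.Duration;
    import org.apache.kafka.streams.processor.AbstractProcessor;
    import org.apache.kafka.streams.processor.ProcessorContext;
    import org.apache.kafka.streams.processor.PunctuationType;

    // Simplified sketch: buffers records and periodically forwards them
    // downstream (the sink eventually writes to reply-reminder-push-sender).
    public class ReminderProcessor extends AbstractProcessor<String, String> {

        @Override
        public void init(ProcessorContext context) {
            super.init(context);
            // Wall-clock punctuator; in the real code the key and value
            // come from a state store rather than being hard-coded.
            context.schedule(Duration.ofSeconds(30), PunctuationType.WALL_CLOCK_TIME,
                    timestamp -> context.forward("some-key", "some-reminder-payload"));
        }

        @Override
        public void process(String key, String value) {
            // Store the record for later forwarding (omitted).
        }
    }

Here is the stack trace of one of the failures: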
Caused by: org.apache.kafka.streams.errors.StreamsException: task [2_2] Abort sending since an error caught with a previous record (timestamp 1603721062667) to topic reply-reminder-push-sender due to java.lang.NullPointerException
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:240)
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:111)
    at org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:89)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:201)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:180)
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:133)
    ... 24 common frames omitted
Caused by: java.lang.NullPointerException: null
    at org.apache.kafka.common.record.DefaultRecord.sizeOf(DefaultRecord.java:613)
    at org.apache.kafka.common.record.DefaultRecord.recordSizeUpperBound(DefaultRecord.java:633)
    at org.apache.kafka.common.record.DefaultRecordBatch.estimateBatchSizeUpperBound(DefaultRecordBatch.java:534)
    at org.apache.kafka.common.record.AbstractRecords.estimateSizeInBytesUpperBound(AbstractRecords.java:135)
    at org.apache.kafka.common.record.AbstractRecords.estimateSizeInBytesUpperBound(AbstractRecords.java:125)
    at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:914)
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:862)
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:181)
    ... 29 common frames omitted
It looks like the library throws the NullPointerException while calculating the size of the record headers, but I don't think I am creating or updating any headers myself.
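If I read the stack trace correctly, the producer iterates over the record's headers to estimate the batch size before sending, so a null entry in the headers would explain the exception. As a sanity check, I believe a plain-producer sketch like the following would hit the same code path with Kafka clients 2.5.0 (untested; the broker address and topic name are placeholders):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.header.Header;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class NullHeaderRepro {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                ProducerRecord<String, String> record =
                        new ProducerRecord<>("some-topic", "key", "value");
                // A null Header element: when KafkaProducer.doSend() estimates
                // the record size it reads each header's key, which I expect
                // to throw the same NullPointerException in DefaultRecord.sizeOf.
                record.headers().add((Header) null);
                producer.send(record);
            }
        }
    }

But I cannot see where such a null header would come from in my topology. Is there any way to fix it? Thanks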