
We are trying to make our Java KafkaProducer resilient: if Kafka is unreachable for some reason, we store the failed messages in files and retry once Kafka comes back up. To check periodically whether Kafka is available, we create a new KafkaProducer object every minute, and we have observed a "Too many open files" issue: sockets are being opened that are never closed, even though we call the producer's close() method. Since this process runs as a daemon, how can we release the sockets held by the producer so that we don't run into this issue?

We get the log message

[org.apache.kafka.common.utils.KafkaThread] (kafka-admin-client-thread | consumer-id7035)
Uncaught exception in thread 'kafka-admin-client-thread | consumer-id7035'

each time the KafkaProducer object is created. I need help figuring out how to close the producer cleanly while we keep creating new KafkaProducer objects to attempt resending messages when Kafka is down.

I tried using AdminClient to check whether Kafka is available, but even that caused the same issue.

If the producer is not able to connect to Kafka, it should come out cleanly without holding on to sockets or any other resources it may have acquired.
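One way to avoid the leak is to make sure the old producer is always closed before a replacement is constructed, and never to let a failed close() abort the retry loop. The sketch below illustrates that close-before-recreate pattern; FakeProducer is a stand-in for KafkaProducer (used here only so the example is self-contained), and the commented close(Duration) call shows what the real Kafka client API would look like.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Stand-in for KafkaProducer: tracks open instances the way a real
// producer holds sockets and I/O threads until close() completes.
class FakeProducer implements AutoCloseable {
    static final AtomicInteger OPEN = new AtomicInteger();
    FakeProducer() { OPEN.incrementAndGet(); }
    @Override public void close() { OPEN.decrementAndGet(); }
}

public class ProducerLifecycle {
    private FakeProducer producer = new FakeProducer();

    // Replace a failed producer, always closing the old one first so its
    // resources are released before a new instance is built.
    void recreateProducer() {
        try {
            // With the real client: producer.close(Duration.ofSeconds(5));
            producer.close();
        } catch (Exception ignored) {
            // close() should not normally throw, but never let it
            // abort the retry loop and skip the reassignment below.
        }
        producer = new FakeProducer();
    }

    public static void main(String[] args) {
        ProducerLifecycle p = new ProducerLifecycle();
        for (int i = 0; i < 100; i++) {
            p.recreateProducer();   // simulate 100 retry cycles
        }
        // Only one producer instance is ever open at a time.
        System.out.println(FakeProducer.OPEN.get());  // prints 1
    }
}
```

If the old producer were simply overwritten without a close (as in the catch block quoted in the comments below), each retry cycle would leave another set of sockets and threads behind, which matches the steadily growing lsof counts described in the question.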

  • Which version of Kafka are you using? It would be great to check which exception escapes and leads to the log you are seeing. – Alexandre Dupriez Aug 03 '19 at 05:58
  • Kafka 2.1.1. We get an IllegalStateException while sending data, hence we try to reinitialize the KafkaProducer object, which is causing the connection leak. – rajukemburi Aug 03 '19 at 06:01
  • Do you have the full stack trace? – Alexandre Dupriez Aug 03 '19 at 06:01
  • ProducerRecord record = new ProducerRecord<>(topic, key, payLoad); PublisherCallback publisherCallback = new PublisherCallback(topic, payLoad, reProcessingPath, maxDiskSizeMB, lock); try { producer.send(record, publisherCallback); } catch (IllegalStateException e) { publisherCallback.cretaeJsonForFailedMessages(); producer = new KafkaProducer(props); throw e; } – rajukemburi Aug 03 '19 at 06:03
  • The above code runs in a loop, where it keeps trying; I will get the exception stack trace shortly. – rajukemburi Aug 03 '19 at 06:25
  • 1234: publisher failed to publish data: java.lang.IllegalStateException: Cannot perform operation after producer has been closed
        at org.apache.kafka.clients.producer.KafkaProducer.throwIfProducerClosed(KafkaProducer.java:847)
        at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:856)
        at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:840)
    – rajukemburi Aug 03 '19 at 06:36
  • Since the Kafka connection failed, this says the producer is closed, but it is still not releasing the resources it holds. – rajukemburi Aug 03 '19 at 06:37
  • The output of watch "(date;lsof -p 17026|grep 'pipe\|socket'|wc -l)|tee -a log" shows that the pipes and sockets keep increasing until we hit the too many open files issue. – rajukemburi Aug 03 '19 at 07:11
  • Sat Aug 3 00:07:48 PDT 2019 250
    Sat Aug 3 00:07:50 PDT 2019 250
    Sat Aug 3 00:07:52 PDT 2019 250
    Sat Aug 3 00:07:54 PDT 2019 250
    Sat Aug 3 00:07:56 PDT 2019 250
    Sat Aug 3 00:07:59 PDT 2019 250
    Sat Aug 3 00:08:01 PDT 2019 250
    Sat Aug 3 00:08:03 PDT 2019 250
    Sat Aug 3 00:08:05 PDT 2019 250
    Sat Aug 3 00:08:07 PDT 2019 253
    Sat Aug 3 00:08:09 PDT 2019 253
    Sat Aug 3 00:08:12 PDT 2019 253
    Sat Aug 3 00:08:14 PDT 2019 253
    Sat Aug 3 00:08:16 PDT 2019 253
    – rajukemburi Aug 03 '19 at 07:13
  • hello, please help – rajukemburi Aug 05 '19 at 18:11
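Since both KafkaProducer and AdminClient spawn background threads and open several sockets just to be constructed, a cheaper way to probe whether Kafka is reachable every minute is a single TCP connect to the broker's bootstrap address, closed immediately. This is a hedged sketch, not the Kafka client API; the host and port are placeholders for your bootstrap server.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class BrokerCheck {
    // Lightweight availability probe: open one TCP socket to the broker
    // and close it right away. try-with-resources guarantees the socket
    // is released whether the connect succeeds or fails, so a periodic
    // check cannot accumulate file descriptors.
    static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;   // connection refused or timed out
        }
    }

    public static void main(String[] args) {
        // Placeholder broker address; substitute your bootstrap server.
        System.out.println(isReachable("localhost", 9092, 500));
    }
}
```

A TCP connect only proves the port is open, not that the broker is healthy, so it is best used as a cheap pre-check before attempting a real send with the one long-lived producer, rather than as a replacement for Kafka-level error handling.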

0 Answers