
We have a Spring Boot app that ran out of heap memory last week. Since the `-XX:+HeapDumpOnOutOfMemoryError` flag was enabled, we could see that, in a 4 GB heap, around 3.7 GB was occupied by `com.sun.jmx.mbeanserver.NamedObject` instances.

All of these objects had key/value entries of the form `client-id=producer-XXXX,type=producer-metrics`.

We searched our logs and found that these producers had been closed a few weeks earlier.

Why are these objects not getting garbage collected? Is this the default behavior of JMX beans? We haven't seen this in any other application. Can we disable JMX for Kafka producers?
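
For diagnosis, here is a minimal sketch (the class name is hypothetical) that lists the producer-metrics MBeans registered on the platform MBeanServer; the `kafka.producer:type=producer-metrics,*` pattern is an assumption based on the key/value entries above:

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ProducerMBeanCheck {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Kafka's JmxReporter registers producer metrics as, e.g.,
        // kafka.producer:type=producer-metrics,client-id=producer-1
        Set<ObjectName> names = server.queryNames(
                new ObjectName("kafka.producer:type=producer-metrics,*"), null);
        System.out.println("Registered producer-metrics MBeans: " + names.size());
        names.forEach(System.out::println);
    }
}
```

If this count keeps climbing after producers have supposedly been closed, the MBeanServer is what is keeping them reachable.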

We're using Spring Boot 2.1.5.RELEASE along with Spring Kafka 2.2.6.RELEASE.

Code:

ProducerRecord<String, String> record = new ProducerRecord<>("topic", message);
producer.send(record, (RecordMetadata metadata, Exception exception) -> {
    if (exception != null) {
        // Pass the Throwable as the last argument so SLF4J logs the full stack trace
        logger.error("Exception in posting response to Kafka", exception);
    } else {
        logger.info("Request sent to Kafka: Offset: {}", metadata.offset());
    }
    // A new producer is created and closed for every message (see getProducer below)
    producer.close();
});

public KafkaProducer<String, String> getProducer(String bootstrapServers) {
    Properties property = new Properties();
    property.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    property.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    property.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    return new KafkaProducer<>(property);
}
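
Per the `KafkaProducer` javadoc, the producer is thread-safe and a single long-lived instance is the intended usage. A minimal sketch of that pattern (the `SharedKafkaSender` class is hypothetical, not from our app):

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SharedKafkaSender implements AutoCloseable {

    private static final Logger logger = LoggerFactory.getLogger(SharedKafkaSender.class);

    // One long-lived, thread-safe producer shared by all sends; its JMX
    // MBeans are registered once and unregistered when close() runs.
    private final KafkaProducer<String, String> producer;

    public SharedKafkaSender(String bootstrapServers) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        this.producer = new KafkaProducer<>(props);
    }

    public void send(String topic, String message) {
        producer.send(new ProducerRecord<>(topic, message), (metadata, exception) -> {
            if (exception != null) {
                logger.error("Exception in posting response to Kafka", exception);
            } else {
                logger.info("Request sent to Kafka: Offset: {}", metadata.offset());
            }
            // No per-send close(): the producer lives for the application's lifetime.
        });
    }

    @Override
    public void close() {
        producer.close(); // unregisters this producer's JMX MBeans
    }
}
```

With this, one set of MBeans is registered at startup and removed by the single `close()` at shutdown.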
Apurv
  • Can you show your code and configuration? And show how you are using producers. By default, a single producer is used for all operations. When using transactions, a cache of producers is maintained (or a producer per group/topic.partition when a consumer starts the transaction). Producers are only physically closed when the producer factory is destroyed (I have confirmed that the MBean is unregistered at that time). It's not clear why you would have so many producer instances in the cache unless they are not being `close()`d, which is when they are returned to the cache. – Gary Russell Sep 09 '20 at 13:28
  • Hi @GaryRussell We're not using transactions. Here's the code: ```ProducerRecord<String, String> record = new ProducerRecord<>("topic", message); producer.send(record, (RecordMetadata metadata, Exception exception) -> { if (exception != null) { logger.error("Exception in posting response to Kafka {}", exception); logger.error(exception.getMessage()); } else { logger.info("Request sent to Kafka: Offset: {} ", metadata.offset()); } producer.close(); });``` As part of the producer, we're setting the String serializers & the bootstrap servers. – Apurv Sep 10 '20 at 07:35
  • Don't put code in comments; edit the question instead. You need to show how you are creating the producer, not how you are using it. This does not appear to be using spring-kafka. Are you using Spring's `DefaultKafkaProducerFactory`? – Gary Russell Sep 10 '20 at 12:52 (see the sketch after these comments)
  • Hi Gary, we're using ```org.apache.kafka.clients.producer.KafkaProducer``` – Apurv Sep 14 '20 at 07:45
  • So why did you tag this question with [tag:spring-kafka]? You are using the Kafka API directly, not Spring. See the javadocs for `KafkaProducer`; you don't need to create a producer for each send; use the same one each time. – Gary Russell Sep 14 '20 at 13:07
  • @GaryRussell can we disable JMXReporter in spring-kafka? – z0mb1ek Sep 01 '21 at 17:39
  • Don't ask new questions in comments on old ones; ask a new one. I just looked at the code and I don't see a way to disable it; it is internal to the kafka-clients and seems to be created unconditionally. Ask a new question tagged with [tag:apache-kafka]. – Gary Russell Sep 01 '21 at 17:55
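
Following up on the `DefaultKafkaProducerFactory` comment above, a minimal sketch of that wiring (the configuration class and the `localhost:9092` address are placeholders):

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // The factory hands out one shared producer and only physically closes
        // it (unregistering its MBeans) when the factory itself is destroyed
        // with the application context, per the comment above.
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate(ProducerFactory<String, String> producerFactory) {
        return new KafkaTemplate<>(producerFactory);
    }
}
```

Sends would then go through `kafkaTemplate.send("topic", message)`, and nothing ever calls `close()` on a producer directly.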

0 Answers