
According to the Kafka documentation, a heap allocation of 6 GB is good enough for a broker, but I am constantly getting heap space out-of-memory errors in my Kafka deployment, even with 9 GB of heap allocated.

So my questions are:

  • What producer and consumer configurations affect the heap space? (A sketch of the settings I mean follows this list.)
  • How do the number of topics and partitions per topic affect it?
  • How do I compute the heap space required for my Kafka setup?
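
For context, here is a rough sketch of the kind of client-side settings I am asking about in the first question. These are the memory-related producer and consumer configs I know of (the values shown are just the Kafka defaults, not a recommendation), and as far as I understand they bound the clients' own heap rather than the broker's:

```java
import java.util.Properties;

public class ClientMemorySettings {
    public static void main(String[] args) {
        // Producer settings that bound how much heap the producer client itself can use.
        Properties producer = new Properties();
        producer.put("bootstrap.servers", "broker-host:9092"); // placeholder host
        producer.put("buffer.memory", "33554432");    // total bytes buffered for unsent records (default 32 MB)
        producer.put("batch.size", "16384");          // per-partition batch buffer (default 16 KB)
        producer.put("max.request.size", "1048576");  // largest single produce request (default 1 MB)

        // Consumer settings that bound how much data a single fetch pulls into the consumer's heap.
        Properties consumer = new Properties();
        consumer.put("bootstrap.servers", "broker-host:9092");
        consumer.put("group.id", "example-group");               // placeholder group
        consumer.put("fetch.max.bytes", "52428800");              // max bytes per fetch response (default 50 MB)
        consumer.put("max.partition.fetch.bytes", "1048576");     // max bytes per partition per fetch (default 1 MB)

        System.out.println("producer: " + producer);
        System.out.println("consumer: " + consumer);
    }
}
```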
indraneel
  • Producer and consumers have their own heap space. You shouldn't be running them on the broker. Have you enabled JMX monitoring on your brokers? You should set that up to determine where the problem exists. Also, what version of Kafka? – OneCricketeer Nov 02 '18 at 20:23
  • Did you look at https://www.cloudera.com/documentation/kafka/latest/topics/kafka_performance.html? – tk421 Nov 02 '18 at 22:59
  • @cricket_007 Yes, the producer and consumers are on different machines. I'm not able to figure out the issue from the monitoring systems. It's Kafka 1.1 – indraneel Nov 05 '18 at 15:23
  • @tk421 Yes, I checked that. It has some pointers but is still not very clear. – indraneel Nov 05 '18 at 15:23
  • How many topics*partitions do you have? We run at most 20 brokers, and on average less than 1000 partitions per broker, and it's fairly stable, even at 100K messages per sec. It's been that way since 0.10. – OneCricketeer Nov 05 '18 at 16:17
  • @cricket_007 We have about 10 topics, with the partition count going up to 200 only for certain topics. The incoming message rate is about 100K, but the outgoing rate is around 10M. – indraneel Nov 08 '18 at 06:51
  • Have you been able to add `HeapDumpOnOutOfMemoryError` JVM parameters and create a heap dump for analysis? – OneCricketeer Nov 08 '18 at 15:13
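
Following up on the JMX suggestion above: this is a minimal sketch of what I can run against a broker to watch its heap via the standard `java.lang:type=Memory` MBean, assuming the broker JVM was started with JMX enabled (e.g. `JMX_PORT=9999`); the host and port here are placeholders:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BrokerHeapCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder address; point this at a broker started with JMX enabled.
        String url = "service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi";
        try (JMXConnector connector = JMXConnectorFactory.connect(new JMXServiceURL(url))) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // Proxy for the broker's java.lang:type=Memory MBean.
            MemoryMXBean memory = ManagementFactory.newPlatformMXBeanProxy(
                    connection, ManagementFactory.MEMORY_MXBEAN_NAME, MemoryMXBean.class);
            MemoryUsage heap = memory.getHeapMemoryUsage();
            System.out.printf("heap used=%d MB, committed=%d MB, max=%d MB%n",
                    heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);
        }
    }
}
```

Sampling this over time, or pulling a dump with `-XX:+HeapDumpOnOutOfMemoryError` as suggested, should show whether heap growth tracks the number of partitions or the fetch sizes.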

0 Answers