
Is there a way to limit or define the maximum memory usage of a Kafka Streams application? I have enabled caching with my state stores, but when I deploy in OpenShift my pods get OOM killed. I have checked that I have no memory leaks and that all my state store iterators are being closed.
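For reference, the record cache that "caching with my state stores" refers to is only bounded on the heap via cache.max.bytes.buffering; a minimal sketch of setting such a bound (the application id, bootstrap servers and the 64 MB value are placeholders, not taken from my actual setup), noting that this does not cover RocksDB's off-heap allocations:

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    public class StreamsCacheConfig {

        // Streams properties with an explicit bound on the on-heap record cache.
        public static Properties boundedCacheProps() {
            final Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");    // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            // Total on-heap record cache shared by all threads of this instance
            // (default is 10 MB); this does NOT cap RocksDB's off-heap memory.
            props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 64L * 1024 * 1024);
            return props;
        }
    }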

I have updated my RocksDbConfigSetter to follow the recommendations in https://github.com/facebook/rocksdb/wiki/Setup-Options-and-Basic-Tuning#other-general-options, but with no luck.
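For comparison, a memory-bounded RocksDBConfigSetter along the lines of the Kafka Streams documentation, which shares a single block cache and write-buffer budget across all stores, looks roughly like the sketch below (assuming Kafka Streams 2.3+; the class name and the 256 MB / 64 MB budgets are placeholder assumptions, not my actual values):

    import java.util.Map;

    import org.apache.kafka.streams.state.RocksDBConfigSetter;
    import org.rocksdb.BlockBasedTableConfig;
    import org.rocksdb.Cache;
    import org.rocksdb.LRUCache;
    import org.rocksdb.Options;
    import org.rocksdb.WriteBufferManager;

    // Shares one block cache and one write-buffer budget across every RocksDB
    // instance of the application, so total off-heap usage stays bounded
    // instead of growing with the number of stores/partitions.
    public class BoundedMemoryRocksDBConfig implements RocksDBConfigSetter {

        // Placeholder budgets -- tune to the pod's memory limit.
        private static final long TOTAL_OFF_HEAP_BYTES = 256L * 1024 * 1024;
        private static final long TOTAL_MEMTABLE_BYTES = 64L * 1024 * 1024;

        // Static so the same objects are shared by all stores in the JVM.
        private static final Cache CACHE =
                new LRUCache(TOTAL_OFF_HEAP_BYTES, -1, false, 0.1);
        private static final WriteBufferManager WRITE_BUFFER_MANAGER =
                new WriteBufferManager(TOTAL_MEMTABLE_BYTES, CACHE);

        @Override
        public void setConfig(final String storeName, final Options options,
                              final Map<String, Object> configs) {
            final BlockBasedTableConfig tableConfig =
                    (BlockBasedTableConfig) options.tableFormatConfig();
            // All stores read through the shared block cache.
            tableConfig.setBlockCache(CACHE);
            // Memtables are counted against the same shared budget.
            options.setWriteBufferManager(WRITE_BUFFER_MANAGER);
            // Track index and filter blocks in the cache so they are bounded too.
            tableConfig.setCacheIndexAndFilterBlocks(true);
            tableConfig.setCacheIndexAndFilterBlocksWithHighPriority(true);
            tableConfig.setPinTopLevelIndexAndFilter(true);
            options.setTableFormatConfig(tableConfig);
        }

        @Override
        public void close(final String storeName, final Options options) {
            // Cache and write buffer manager are shared across stores,
            // so they must not be closed per store here.
        }
    }

Such a setter gets registered through rocksdb.config.setter (StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG), and the pod's memory limit then has to cover this off-heap budget plus the JVM heap (-Xmx) and some headroom.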

When I look at the state store directory, its size is about 2 GB. I currently have 50 GB of memory allocated to the application and it still OOMs.

Chris
  • Atm, it's not possible to define a strict limit on memory usage. – Matthias J. Sax Apr 29 '19 at 22:14
  • So am I supposed to give my OpenShift pods more and more memory until they do not OOM? I am already at 20GB. Does startup (rebuilding the state store) use more memory? The app was running fine, and when I went to restart the pod it OOMed. – Chris Apr 29 '19 at 22:18
  • It looks like there might be more to your problem than what you summarized here. – miguno Apr 30 '19 at 10:31
  • Try to estimate your required memory: https://docs.confluent.io/current/streams/sizing.html – Matthias J. Sax Apr 30 '19 at 12:34
  • I am using the defaults, so it's 112 MB * 20 partitions with a key-value store, i.e. 2240 MB. I am confused: is that in addition to the memory usage of the actual data as described in scenario 3? (A rough tally is sketched after these comments.) – Chris Apr 30 '19 at 16:04
  • I am encountering the same scenario even with RocksDB tuning. Is there any soft limit on memory, or can setting the heap size ("-Xms -Xmx") help? In scenario 3 from the documentation, if we have a state store size of 20GB, do we need 20GB of memory to hold that data in memory? – Nishu Tayal May 07 '19 at 08:23
  • Hey @MatthiasJ.Sax, has the situation changed? Can we set a strict memory usage? – Renato Mefi Sep 28 '20 at 18:35
  • I guess :) -- https://issues.apache.org/jira/browse/KAFKA-8215 -- https://issues.apache.org/jira/browse/KAFKA-8324 -- https://issues.apache.org/jira/browse/KAFKA-8323 -- https://issues.apache.org/jira/browse/KAFKA-8637 -- Worth upgrading to at least 2.3. – Matthias J. Sax Sep 29 '20 at 06:20
  • @MatthiasJ.Sax We are also seeing memory issues in our environment and trying to implement RocksDBConfigSetter. https://stackoverflow.com/questions/65814205/kafka-streams-limiting-off-heap-memory – SunilS Jan 20 '21 at 17:13
  • @Chris Were you able to resolve the issue? – Marc Nov 29 '22 at 16:55
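Putting the figures from the comment thread into a rough back-of-the-envelope estimate, in the spirit of the sizing guide linked above (the 112 MB per store and the 20 partitions come from the comments; the heap and record-cache numbers are assumptions):

    public class StreamsMemoryEstimate {

        public static void main(final String[] args) {
            // Figures from the comment thread; rough defaults, not measurements.
            final long perStoreOffHeapMb = 112; // approximate default per RocksDB store instance
            final int partitions = 20;          // roughly one store instance per partition/task
            final long recordCacheMb = 10;      // cache.max.bytes.buffering default
            final long heapMb = 1024;           // placeholder for whatever -Xmx is set to

            final long rocksDbMb = perStoreOffHeapMb * partitions;  // ~2240 MB off-heap
            final long totalMb = rocksDbMb + recordCacheMb + heapMb;

            System.out.printf("RocksDB off-heap ~%d MB, JVM total ~%d MB%n", rocksDbMb, totalMb);
            // The container limit has to cover this total plus metaspace, thread
            // stacks and headroom; -Xmx alone does not cap the RocksDB portion,
            // which is why a pod can be OOM killed while the Java heap looks fine.
        }
    }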

0 Answers