
We are having occasional issues with a legacy Java application consuming too much memory (and - no - there is no memory leak involved here). The application runs on a VM that has been assigned 16 GB of memory, 14 GB of which are assigned to the application.

Using the "Melody" monitor we observe e.g. this memory (heap) usage behavior: Java memory consumption (as seen by application)

The left, low part (at around 2 GB) is a lazy Sunday, where the application essentially idled, while the right-hand side is Monday morning, when we got some activity and memory consumption reached about 11 GB - which looks fairly normal and uncritical for our application.

However, the physical memory consumption view looks like this: [screenshot: Physical memory usage]

Here the machine/VM (which runs only our application) "idles" at almost 12 GB. When consumption rises, the machine hits the 16 GB limit, starts to swap (visible on another view not shown here), and after about 15 minutes at >100% memory consumption the system's "out-of-memory killer" strikes and terminates the process. The blank section that follows shows the period during which the application was down and we were analyzing a couple of things (including taking these screenshots) before we restarted it.
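In case it matters for suggestions: JDK 8 can break down where that resident memory actually sits if the JVM is started with native memory tracking enabled. A check of roughly this form should show it (the PID is a placeholder; NMT adds some runtime overhead, so it is presumably only something to enable temporarily):

    # start the JVM with native memory tracking at summary level
    java -XX:NativeMemoryTracking=summary -server -XX:+UseParallelGC -XX:GCTimeRatio=40 ...

    # later, while the application idles at ~12 GB resident, ask for the breakdown
    jcmd <PID> VM.native_memory summary

That should at least tell us how much of the ~12 GB is committed heap as opposed to metaspace, thread stacks, GC bookkeeping and so on.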

My question is: can one tweak the Java VM (v1.8.0_121 in our case) to return (part of) the physical memory that it is apparently hogging?

My gut feeling is that if the JVM had returned at least part of the RAM that it no longer needs (at least judging from the first diagram), then the system would have had more margin for the following period of high load.

If, referring to the example shown, the physical memory had been decreased from ~12 GB to, say, 6 GB or less during the day-long idle phase, then the peak on the right would probably not have gone beyond the 16 GB limit and the application would probably not have crashed.
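For what it's worth, the gap we are talking about is between the heap the application actually uses and the heap the JVM keeps committed. A simple way to watch both during such an idle phase would be jstat (the PID is a placeholder; the column meanings are as documented for JDK 8's jstat):

    # sample heap/GC statistics every 10 seconds
    jstat -gc <PID> 10s
    # EC/OC are the committed eden/old generation capacities (in KB),
    # EU/OU the parts actually in use; a large OC combined with a small OU
    # is precisely the memory we would like the JVM to hand back to the OS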

Any ideas or suggestions on this? What are suitable JVM options to force a return of unused memory, at least after a longer low-usage period like the one we see here?

We are currently using these JVM options for memory tuning: -server -XX:+UseParallelGC -XX:GCTimeRatio=40
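For illustration, one candidate tweak would be switching to G1, which (as far as we understand) honors -XX:MinHeapFreeRatio/-XX:MaxHeapFreeRatio when it resizes the heap on 8u20 and later, and giving it explicit shrink thresholds. The -Xms/-Xmx values and the jar name below are placeholders, not our real settings:

    # sketch only: G1 with aggressive shrink thresholds and an explicit heap range
    java -server -XX:+UseG1GC -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -Xms2g -Xmx12g -jar ourapp.jar

Whether that would actually make the resident memory drop during a quiet Sunday, or whether it still needs an explicit full GC (e.g. jcmd <PID> GC.run) to trigger the shrink, is exactly what we cannot judge ourselves.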

mmo
  • *"there is no memory leak involved here"* - could you elaborate? the more i read the more it sounds like memory leak somewhere.. – Bagus Tesa Jan 23 '23 at 11:56
  • Also, read [this Q&A](https://stackoverflow.com/a/38287515). It would be nice to know which Java runtime you use, whether your app involves JNDI, which technologies are used (LDAP?), etc. It's detective work, and we **can't** replicate the problem, which makes it harder for everyone involved. – Bagus Tesa Jan 23 '23 at 12:01
  • I don't quite follow: it seems that your application never uses more than 12 GB, but the physical memory spikes too. How are you running your application? Are you limiting the amount of memory available? – matt Jan 23 '23 at 13:37
  • Have you considered upgrading to Java 17? It has many garbage collector improvements over v8, and additional options for tweaking it. – egeorge Jan 23 '23 at 15:32
  • @Bagus Tesa: Of course one can never be 100% sure, but the "no memory leak" claim comes from misc. checks using MemoryAnalyzer to analyze memory dumps, and also from the fact that the heap doesn't grow over time (as can be seen in the first sketch), not even when the application runs for several weeks. The Java version is mentioned in my description (v1.8.0_121) and we use neither JNDI nor LDAP. We *do* use JDBC to access the DB and Hibernate/JPA on top of it. – mmo Jan 24 '23 at 14:58
  • @Eric George Yes, we have considered upgrading the Java version, but we use a bunch of libraries that can't cope with Java >= v9 (the usual "Java module woes"...) and we simply don't have the time and money to upgrade "everything", so we are currently still stuck with Java 1.8 (as so many are...). – mmo Jan 24 '23 at 15:03
  • @matt: you are right - the spikes at ~8:00 and at 9:00-10:00 seem to hit the application's physical memory consumption directly. I have no explanation for this. I would have expected that at least the earlier, smaller spike would be completely absorbed by the free heap space (which - subtracting the used heap space at that time (~2 GB) from the physical consumption (~12 GB) - should amount to ~10 GB of free heap), but obviously that's not the case. – mmo Jan 24 '23 at 15:09
  • @Bagus Tesa To clarify: with "...that the heap doesn't grow over time..." I meant the heap size when the system is idling (as in the left part of the first sketch). It always returns to about 2 GB. It *does* consume (considerably) more when it's busy, but when idling it always returns to that figure - which is more or less determined by some extensive caching that we do. – mmo Jan 24 '23 at 15:18
  • Are you setting -Xmx? – matt Jan 24 '23 at 15:51
  • @mmo Is it right to assume that your JVM is Oracle's Java? We have many breeds of runtimes these days. *"which is more or less given by some extensive caching that we do"* - again, we can only make blind guesses, given that we can't easily replicate your problem. But to be honest, you should start by replicating the environment and then narrow down, bit by bit, the offending thing that makes your machine use that much RAM - like turning off that cache thing you just mentioned. Also, read the link I gave you; it has stuff related to the `-Xmx` flag and how Java handles the heap generally. – Bagus Tesa Jan 25 '23 at 01:58
  • @Bagus Tesa We are still using AdoptOpenJDK v8 (1.8.0_121). We downloaded it from the AdoptOpenJDK website back then (meanwhile - as you probably know - that project has been migrated to Adoptium and later to Eclipse Temurin). – mmo Feb 01 '23 at 09:09

0 Answers