
The memory usage of our Java app (as reported by top) grows continuously over the course of days, to 8 GB and more.

When taking a heap dump to figure out whether there was a leak, like so:

jmap -dump:live,format=b,file=/tmp/myapp.hprof 17393
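
A roughly equivalent way to take the same dump is jcmd; my assumption here is that GC.heap_dump without the -all option likewise dumps only live objects and therefore forces the same kind of full GC first:

jcmd 17393 GC.heap_dump /tmp/myapp.hprof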

I noticed that after taking the dump, memory usage shrank considerably. The reason can be found in the GC log; see the second line here:

[781944.122s][info][gc] GC(412) Pause Young (Normal) (G1 Evacuation Pause) 3948M->1021M(5256M) 38.430ms
[782877.123s][info][gc] GC(413) Pause Full (Heap Dump Initiated GC) 3929M->471M(1712M) 200.948ms

So my idea was to make G1 do a full GC periodically (1-minute interval for testing only), like so:

java -XX:G1PeriodicGCInterval=60000 -XX:G1PeriodicGCSystemLoadThreshold=0.9 ...
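
For completeness, the full launch command for this test looks roughly as follows; the jar name is just a placeholder, and -Xlog:gc is only there so the periodic collections show up in the log:

java -XX:G1PeriodicGCInterval=60000 -XX:G1PeriodicGCSystemLoadThreshold=0.9 -Xlog:gc -jar myapp.jar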

Now when I check the logs, periodic GCs are being done when conditions are met, but they aren't full GCs:

[240.042s][info][gc] GC(12) Pause Young (Concurrent Start) (G1 Periodic Collection) 405M->404M(2328M) 16.047ms
[240.042s][info][gc] GC(13) Concurrent Mark Cycle
[240.343s][info][gc] GC(13) Pause Remark 404M->388M(1360M) 2.564ms
[240.479s][info][gc] GC(13) Pause Cleanup 389M->389M(1360M) 0.106ms
[240.485s][info][gc] GC(13) Concurrent Mark Cycle 442.918ms

[300.043s][info][gc] GC(14) Pause Young (Prepare Mixed) (G1 Periodic Collection) 392M->386M(1360M) 0.882ms

[360.047s][info][gc] GC(15) Pause Young (Mixed) (G1 Periodic Collection) 390M->378M(1360M) 1.824ms

[420.049s][info][gc] GC(16) Pause Young (Concurrent Start) (G1 Periodic Collection) 383M->378M(1360M) 1.144ms
[420.049s][info][gc] GC(17) Concurrent Mark Cycle
[420.382s][info][gc] GC(17) Pause Remark 378M->378M(1360M) 1.080ms
[420.489s][info][gc] GC(17) Pause Cleanup 378M->378M(1360M) 0.105ms
[420.495s][info][gc] GC(17) Concurrent Mark Cycle 445.553ms

The G1 tuning docs state:

-XX:+G1PeriodicGCInvokesConcurrent

If set, periodic garbage collections trigger a concurrent marking or continue the existing collection cycle, otherwise trigger a Full GC.

As I am not setting G1PeriodicGCInvokesConcurrent, I would expect full GCs. What am I missing?

N.B. This is Java 17:

$ java -version
openjdk version "17.0.7" 2023-04-18 LTS
OpenJDK Runtime Environment Corretto-17.0.7.7.1 (build 17.0.7+7-LTS)
OpenJDK 64-Bit Server VM Corretto-17.0.7.7.1 (build 17.0.7+7-LTS, mixed mode, sharing)

2 Answers


You don't have a memory leak; otherwise the heap would not have shrunk that much after the full GC, and you would eventually see an OutOfMemoryError and/or high CPU usage in the collector.

You should not try to change the garbage collector's default configuration. Increasing the number of full GCs will hurt performance for no gain. The collector's algorithm is complex and runs a full GC only when it is actually needed, and nothing you mention indicates such a need; most improvements in Java GC algorithms aim precisely at reducing the need for full GCs, because they are expensive. Changing these settings should only be done with an understanding of each flag and thorough performance testing of every change.

Instead, analyze the heap dumps you are already taking to understand what is using that memory. Note that heap dump analysis requires an understanding of how objects and references consume Java heap memory.
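
As a quick first pass, before loading the .hprof into a dedicated analyzer, a class histogram already shows which classes dominate the live heap. A sketch using the pid from your question (note that the live option itself triggers a full GC):

jmap -histo:live 17393 | head -n 30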

  • Thanks. Regarding changing the default configuration, I refer you to https://openjdk.org/jeps/346 to see why this can be useful. In particular: `Since G1 tries hard to completely avoid full GCs, and only triggers a concurrent cycle based on Java heap occupancy and allocation activity, it will not return Java heap memory in many cases unless forced to do so externally. […] Even during phases where the VM only uses a fraction of its assigned memory resources due to inactivity, G1 will retain all of the Java heap.` – Evgeniy Berezovsky Jun 09 '23 at 03:31
  • This doesn't mean that there is a problem with it; it is working as designed. Note that the GC time increased 20+ times because of doing a full GC. – aled Jun 09 '23 at 09:59

Thanks @StephenC for the comment on my question! I have tried -XX:+PrintFlagsFinal and it confirms your suspicion:

$ journalctl -u myapp|grep Periodic
 6月 09 11:48:27 filinux01 start-nnd[86668]: uintx G1PeriodicGCInterval                     = 600000                                 {manageable} {command line}
 6月 09 11:48:27 filinux01 start-nnd[86668]: bool G1PeriodicGCInvokesConcurrent            = true                                      {product} {default}
 6月 09 11:48:27 filinux01 start-nnd[86668]: double G1PeriodicGCSystemLoadThreshold          = 0.900000                               {manageable} {command line}
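
As an aside, the {manageable} attribute above means that G1PeriodicGCInterval (and the load threshold) can even be changed on a running JVM via jcmd, no restart required; the pid and the one-hour value below are just examples:

jcmd <pid> VM.set_flag G1PeriodicGCInterval 3600000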

So I added -XX:-G1PeriodicGCInvokesConcurrent to the JVM arguments and now it works as expected:

[0.006s][info][gc] Using G1
[0.527s][info][gc] GC(0) Pause Young (Normal) (G1 Evacuation Pause) 39M->13M(776M) 3.500ms
[12.049s][info][gc] GC(1) Pause Full (G1 Periodic Collection) 24M->13M(136M) 15.156ms
[18.075s][info][gc] GC(2) Pause Full (G1 Periodic Collection) 13M->13M(80M) 25.214ms
[24.098s][info][gc] GC(3) Pause Full (G1 Periodic Collection) 13M->13M(80M) 20.581ms