
I have a Java process running inside a container orchestrated by Kubernetes, and I was observing a high memory footprint in docker stats.

I have -Xmx set to 40 GB, and docker stats was reporting 34.5 GiB of memory. To get a better understanding of heap usage, I tried to take a heap dump of the running process with the command below:

jmap -dump:live,format=b,file=/tmp/dump.hprof $pid

But this caused a container restart. The generated dump file is around 9.5 GiB, but Eclipse Memory Analyzer reports that the file is incomplete and cannot open it:

Invalid HPROF file: Expected to read another 1,56,84,83,080 bytes, but only 52,84,82,104 bytes are available for heap dump record

I didn't find much information in the kubelet logs or the container logs, except for a liveness probe failure, which could have been caused by the heap dump.

I have been unable to recreate the issue so far. I just want to understand what could have happened and whether the heap dump could have interfered with my running process. I understand that the -dump:live flag forces a GC cycle before collecting the heap dump; could that have interfered with my running process?
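For reference, my understanding is that writing a heap dump pauses the JVM at a safepoint even without the live option; dropping live only skips the forced full GC. A variant I could try next time (same example path and $pid as above) would be:

jmap -dump:format=b,file=/tmp/dump.hprof $pid

On JDK 8 and later, jcmd $pid GC.heap_dump /tmp/dump.hprof is an alternative that dumps only reachable objects by default (add -all to include unreachable ones).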

wypul

1 Answer


I have faced situations like this when the JVM was under stress. Could you attach the hs_err_pid log file? After checking that file, you can configure the OS to generate a core dump (in case your system has not generated a core file yet), and you should be able to extract the heap dump from the core file.
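As a rough sketch (assuming JDK 8 and a shell inside the container; the Java path and core file name below are placeholders), enabling core dumps and extracting a heap dump from the resulting core could look like this:

ulimit -c unlimited
jmap -dump:format=b,file=/tmp/from-core.hprof /usr/lib/jvm/java-8-openjdk/bin/java /tmp/core.12345

On JDK 9 and later, jhsdb jmap --binaryheap --exe <java> --core <corefile> replaces the second command.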

At the following link you can read about a similar issue that was related to a JDK bug.

Furthermore, I recommend enabling the garbage collector log and the automatic heap dump on OutOfMemoryError with the flags below. Kindly attach the gc.log after enabling it, and allocate enough storage space to save the heap dumps.

-XX:+HeapDumpOnOutOfMemoryError -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M -Xloggc:/some/path/gc.log
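Note that these GC-logging flags apply to JDK 8; on JDK 9 and later they were replaced by unified logging. A rough equivalent, keeping the same example path and adding -XX:HeapDumpPath so the dump lands on storage with enough space, would be:

-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/some/path -Xlog:gc*:file=/some/path/gc.log:time,uptime:filecount=10,filesize=10M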
rcastell