I have a Java process running inside a container orchestrated by Kubernetes, and I observed a high memory footprint in docker stats. The heap is configured with -Xmx40g, and docker stats was reporting 34.5 GiB of memory. To get a better understanding of heap usage, I tried to take a heap dump of the running process with the command below:
jmap -dump:live,format=b,file=/tmp/dump.hprof $pid
But this caused a container restart. The generated dump file is around 9.5 GiB, but Eclipse Memory Analyzer reports that the file is incomplete and cannot open it:
Invalid HPROF file: Expected to read another 1,56,84,83,080 bytes, but only 52,84,82,104 bytes are available for heap dump record
I didn't find much information in the kubelet logs or the container logs, except for a liveness probe failure, which could itself have been caused by the heap dump.
I have been unable to reproduce the issue so far. I just want to understand what could have happened, and whether taking a heap dump can interfere with my running process. I understand that the -dump:live flag forces a full GC cycle before collecting the heap dump; could that have interfered with my running process?
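For what it's worth, my understanding is that the live flag corresponds to the boolean argument of HotSpotDiagnosticMXBean.dumpHeap, so the same code path can be exercised in-process. A minimal sketch (class name is mine; requires a HotSpot JVM, since com.sun.management is HotSpot-specific):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.InputStream;
import java.lang.management.ManagementFactory;
import java.nio.file.Files;
import java.nio.file.Path;

public class HeapDumpDemo {
    public static void main(String[] args) throws Exception {
        Path out = Files.createTempFile("dump", ".hprof");
        Files.delete(out); // dumpHeap refuses to write to an existing file

        HotSpotDiagnosticMXBean diag =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // The second argument matches jmap's live flag:
        // true => force a full GC first, then dump only reachable objects.
        diag.dumpHeap(out.toString(), true);

        // A complete HPROF file starts with an ASCII version banner.
        try (InputStream in = Files.newInputStream(out)) {
            System.out.println(new String(in.readNBytes(18)));
        }
        Files.delete(out);
    }
}
```

The forced full GC is a stop-the-world pause, so on a 40 GB heap I assume it could stall the process long enough for the liveness probe to fail; a non-live dump (dropping the live flag, or passing false above) skips that GC at the cost of a larger file.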