How much time does it generally take to profile a Java application that consumes about 100 GB of memory on a 150 GB machine? I started profiling about 2 hours ago and it is only 20% done so far. Total memory used by the JVM since I started profiling has gone up to 150 GB (close to the RAM size). Is it normal for high-memory processes to take a huge amount of time when profiled with YourKit, or am I doing something wrong? Is it possible that, since memory usage has reached the RAM size, lots of disk swapping is happening, which is slowing down the memory profiling? How can I make this process faster? If it is not possible to make it faster, what are the other ways to investigate a memory leak in a Java application?

Rishi Kesh Dwivedi

1 Answer

Well, your JVM is big :)

If you have a running JVM, the fastest way to get some information about the objects in your heap is to take a jmap histogram:

jmap -histo:live <pid>

It will print all live objects in the heap (:live forces a GC first), the number of instances per class, and their (shallow) sizes.

Of course it isn't suited for complicated analysis, but it is often enough to find a leak, especially a big one: compare the histogram with one taken before the leak.

See the documentation at http://docs.oracle.com/javase/7/docs/technotes/tools/share/jmap.html.
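As a minimal sketch of that comparison, assuming you have saved two histograms to files (for example jmap -histo:live <pid> > before.txt, and later > after.txt), the small hypothetical helper below parses the usual "num: #instances #bytes class name" columns and prints the classes whose total byte counts grew the most. The class name HistoDiff, the file arguments, and the top-20 cutoff are all assumptions, not part of jmap itself.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper: diffs two saved "jmap -histo" outputs and shows
// which classes account for the biggest growth in shallow bytes.
public class HistoDiff {
    public static void main(String[] args) throws IOException {
        Map<String, Long> before = parse(args[0]);   // e.g. before.txt
        Map<String, Long> after = parse(args[1]);    // e.g. after.txt

        // Compute per-class growth in bytes between the two snapshots.
        Map<String, Long> growth = new HashMap<>();
        for (Map.Entry<String, Long> e : after.entrySet()) {
            long delta = e.getValue() - before.getOrDefault(e.getKey(), 0L);
            if (delta > 0) {
                growth.put(e.getKey(), delta);
            }
        }

        // Print the 20 classes that grew the most.
        growth.entrySet().stream()
              .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
              .limit(20)
              .forEach(e -> System.out.printf("%,15d bytes   %s%n", e.getValue(), e.getKey()));
    }

    // Assumes histogram data lines look like: "   1:   123456   7890123  [B"
    private static Map<String, Long> parse(String file) throws IOException {
        Map<String, Long> bytesByClass = new HashMap<>();
        for (String line : Files.readAllLines(Paths.get(file))) {
            String[] cols = line.trim().split("\\s+");
            if (cols.length >= 4 && cols[0].endsWith(":")) {
                bytesByClass.merge(cols[3], Long.parseLong(cols[2]), Long::sum);
            }
        }
        return bytesByClass;
    }
}

You could just as well diff the two files with ordinary text tools; the point is simply that the histogram, crude as it is, gives you something cheap to compare while the application keeps running.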

yannick1976
  • The problem with "jmap -histo" is that it prints shallow sizes. So in my case I see byte arrays, char arrays and Strings taking a huge amount of memory, but I can't really trace back to the classes in my code that are responsible for this memory. The call tree in YourKit works pretty nicely in this case. – Rishi Kesh Dwivedi Apr 09 '15 at 21:59
  • Very often you will also find a huge number of your own objects right after the Java core objects. Of course that doesn't always work, for example if you have just one byte array of 70GB, or if just one of your objects holds an array with one billion strings inside. But it does work very often. – yannick1976 Apr 09 '15 at 22:05