
So I've been trying to track down a good way to monitor when the JVM might be heading towards an OOM situation. The best way that seems to work with our app is to track back-to-back concurrent mode failures through CMS. This indicates that the tenured pool is filling up faster than it can be cleaned, or that each collection is reclaiming very little.

The JMX bean for tracking GCs has very generic information, such as memory usage before/after a collection, and that information has been inconsistent at best. Is there a better way I can monitor this potential warning sign of a dying JVM?
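For reference, the kind of generic data I'm polling now comes from the standard platform beans, roughly like this (the collector and pool names shown in the comments are examples and vary by JVM/collector configuration):

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;

    public class GcPoll {
        public static void main(String[] args) {
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                // e.g. "ConcurrentMarkSweep" -- only counts and total time, no cause
                System.out.printf("%s: count=%d time=%dms%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                // e.g. "CMS Old Gen" -- usage measured after the last collection
                System.out.printf("%s: usedAfterLastGc=%s%n",
                        pool.getName(), pool.getCollectionUsage());
            }
        }
    }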

Zack
  • Are you trying to do this from within the VM, or are you happy to use some external scripts instead? Which JMX beans are you using right now? Which JVM (incl. version) are you using? – Matt Mar 29 '11 at 20:04
  • It won't necessarily be in the same JVM, but it can be. I am using multiple beans, but for the memory criteria I'm using the memory and CMS garbage collection ones. I am using 1.6. – Zack Mar 31 '11 at 19:48

2 Answers


Assuming you're using the Sun JVM, I am aware of two options:

  1. Memory management MXBeans (API ref starts here), which you appear to be using already. Note that there are some HotSpot-specific internal ones you can get access to; see this blog for an example of how to use them.
  2. jstat: the command reference is here; you'll want the -gccause option. You can either write a script to launch it and parse the output or, theoretically, spawn a process from the host JVM (or another one) that reads the output stream from jstat to detect the GC causes. I don't think the cause reporting is 100% comprehensive, though, and I don't know a way to get this info programmatically from Java code. (See the sketch after this list.)
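A minimal sketch of both approaches, assuming a HotSpot JVM: casting the platform collector beans to the com.sun.management subinterface for per-collection detail, and spawning jstat -gccause against a target pid. The cause strings jstat prints vary by HotSpot version, so the substring matched below is an assumption you would need to verify against your JVM's actual output:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.lang.management.ManagementFactory;
    import com.sun.management.GcInfo;

    public class GcCauseMonitor {

        // Option 1: HotSpot's internal subinterface adds per-collection detail
        // (duration, per-pool usage before/after) beyond the generic bean.
        static void dumpLastGcInfo() {
            for (java.lang.management.GarbageCollectorMXBean gc
                    : ManagementFactory.getGarbageCollectorMXBeans()) {
                if (gc instanceof com.sun.management.GarbageCollectorMXBean) {
                    GcInfo info = ((com.sun.management.GarbageCollectorMXBean) gc).getLastGcInfo();
                    if (info != null) {
                        System.out.printf("%s: duration=%dms usageAfter=%s%n",
                                gc.getName(), info.getDuration(), info.getMemoryUsageAfterGc());
                    }
                }
            }
        }

        // Option 2: spawn "jstat -gccause <pid> <interval_ms>" and scan its output.
        // The substring matched here is an assumption -- check what your HotSpot
        // version actually prints in the LGCC/GCC columns.
        static void watchJstat(String pid) throws Exception {
            Process p = new ProcessBuilder("jstat", "-gccause", pid, "10000").start();
            BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
            String line;
            while ((line = r.readLine()) != null) {
                if (line.toLowerCase().contains("concurrent mode failure")) {
                    System.err.println("WARNING: possible concurrent mode failure: " + line);
                }
            }
        }

        public static void main(String[] args) throws Exception {
            dumpLastGcInfo();
            if (args.length > 0) {
                watchJstat(args[0]);    // pid of the JVM to monitor
            }
        }
    }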
Matt
  • So using the GC MX bean is what I am doing right now. The only information available is very generic and can only give me overall memory usage from before and after. I was hoping to be able to do this from within my application, but if the only way is to make a system call then so be it. I'll check out jstat. – Zack Mar 30 '11 at 18:05

With the standard JRE 1.6 GC, heap utilization can trend upwards over time, with the garbage collector running less and less frequently depending on the nature of your application and your maximum specified heap size. That said, it is hard to say what is going on without more information.

A few methods to investigate further:

You could take a heap dump of your application while it is running using jmap, and then inspect the heap using jhat to see which objects are in the heap at any given time.
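If you'd rather trigger the dump from inside the process instead of attaching with jmap, HotSpot also exposes a diagnostic MXBean that writes the same .hprof format jhat can read. A minimal sketch, assuming a HotSpot JVM; the output path is just an example:

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    public class HeapDumper {
        public static void main(String[] args) throws Exception {
            HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",   // HotSpot-specific bean
                    HotSpotDiagnosticMXBean.class);
            // Dump live objects only; inspect the resulting file with jhat.
            diag.dumpHeap("/tmp/app-heap.hprof", true);
        }
    }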

You could also run your application with -XX:+HeapDumpOnOutOfMemoryError, which will automatically produce a heap dump on the first OutOfMemoryError the JVM encounters.

You could create a monitoring bean specific to your application, with accessor methods you can hit from a remote JMX client, for example methods that return the sizes of queues and other collections that are likely places of memory utilization in your program.
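A minimal sketch of such an application-specific bean; the QueueMonitor/getQueueDepth names and the ObjectName are illustrative, not from any library:

    // QueueMonitorMBean.java -- the management interface (standard MBean naming: <Impl>MBean)
    public interface QueueMonitorMBean {
        int getQueueDepth();
    }

    // QueueMonitor.java -- implementation registered with the platform MBean server
    import java.lang.management.ManagementFactory;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import javax.management.ObjectName;

    public class QueueMonitor implements QueueMonitorMBean {
        private final Queue<?> queue;

        public QueueMonitor(Queue<?> queue) {
            this.queue = queue;
        }

        public int getQueueDepth() {
            return queue.size();   // shows up as an attribute in jconsole or any remote JMX client
        }

        public static void main(String[] args) throws Exception {
            Queue<String> work = new ConcurrentLinkedQueue<String>();
            ManagementFactory.getPlatformMBeanServer().registerMBean(
                    new QueueMonitor(work),
                    new ObjectName("myapp:type=QueueMonitor"));   // domain/name are arbitrary
            Thread.sleep(Long.MAX_VALUE);   // keep the JVM alive so you can connect and watch QueueDepth
        }
    }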

HTH

RandomUser
  • Unfortunately it's a bit more complicated than that. I need to monitor for pressure on tenured memory relatively constantly; I am not able to take a heap dump every 10 seconds, for example. But even then, I'm not so much concerned with the % of memory being used as with when promotion to the tenured generation fails. Thanks for the response though! – Zack Mar 29 '11 at 22:27