
We have been having issues with our ColdFusion server and getting the JRE configured properly. To troubleshoot, we installed Oracle JRockit and switched our jvm.config over to it to try to find any memory leaks.

Once we installed JRockit, our server ran better than ever. We kept the JRockit program and console open for several days, and our memory usage stayed under 200 MB. When we finally closed the program on the server, the memory usage problem returned immediately.

Here is a screenshot of the Java heap from FusionReactor to illustrate what is going on.

I could not post this directly here since I do not have enough reputation points yet: http://www.weblisters.com/icm/FusionReactorJavaHeap-JRockit-Console.png

Here are the main settings from our jvm.config file:

java.home=C:/Progra~2/Java/jrockit-jdk1.6.0_33-R28.2.4-4.1.0/jre  

java.args=-server -Xms1024m -Xmx1024m  -Xgc:parallel -Dsun.io.useCanonCaches=false -Dcoldfusion.rootDir={application.home}/ -XX:+HeapDumpOnOutOfMemoryError -Xmanagement:ssl=false,authenticate=false,autodiscovery=true
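For reference, heap usage like the FusionReactor graph shows can also be sampled from inside any JVM with the standard `Runtime` API. A minimal sketch (the class name `HeapCheck` is illustrative, not part of the original setup):

```java
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // totalMemory - freeMemory = heap currently in use;
        // maxMemory reflects the -Xmx cap (1024m in the config above)
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        long maxMb = rt.maxMemory() / (1024 * 1024);
        System.out.println("Heap used: " + usedMb + " MB of " + maxMb + " MB max");
    }
}
```

Logging this periodically from a scheduled task gives a rough sanity check against what the monitoring tools report.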

This error was thrown right after we closed the JRockit console: Error: Not enough storage is available to process this command in tsStartJavaThread (src/jvm/threads/vmthread/lifecycle.c:1096). Attempting to allocate 1G bytes. There is insufficient native memory for the Java Runtime Environment to continue.

Does anyone know why garbage collection (GC) appears to work so much better with the JRockit Console window open and running? We can't leave it open as a permanent solution.

  • What commands were you using to open your console? What was the syntax? – Mark A Kruger Aug 04 '12 at 17:19
  • I suspect that JRockit is preempting your java args and you are running on different settings. The error you are getting indicates that your hardware is resource constrained. It needs 1 GB of *contiguous* physical memory to operate. I see this error on VMs pretty often. – Mark A Kruger Aug 04 '12 at 17:23
  • After further investigation, it appears that our server's memory usage drop and performance boost occur when we are running the memory leak tool inside of JRockit. In the Oracle JRockit Mission Control user interface, we right-click on the JVM for the CF instance that is running and then click Start Memleak. While the Memleak test is running, our memory usage drops to ~48 MB. [The moment we close the memleak](http://www.diigo.com/item/image/2u621/5k27). [After starting Memleak](http://www.diigo.com/item/image/2u621/nwbx) – billvsd Aug 05 '12 at 10:06
  • This is a physical server with 8 GB of RAM. Unfortunately, our hosting provider only licenses us the CF 8 Enterprise 32-bit version. That error was only one of several errors we had, such as out-of-memory errors when the server was under higher load. – billvsd Aug 05 '12 at 10:09
  • While running JRockit and the Memleak test, we noticed that loading pages on our server would cause visible jumps in the memory usage, but GC would occur not long after. We have now turned off the JRockit JRE and are back to JRE6.0_24 (32 bit version). – billvsd Aug 05 '12 at 10:57
  • The -XX:+UseParallelGC did not seem to be doing GC. We added in -XX:+UseConcMarkSweepGC and it appears to be running GC every 15 seconds or so by looking at the heap graphs. I'm not sure why the memory usage while running a memleak test would be 48mb, back to CF with -XX:+UseConcMarkSweepGC levels off at about 400MB. – billvsd Aug 05 '12 at 10:58
  • Your experience with the memleak is backwards (ha). Memleak testing is supposed to make your memory climb, right? Weird! As for 32-bit - CF 8 Enterprise will run on 64-bit. A different host might be in order (talk to me!). – Mark A Kruger Aug 06 '12 at 14:01
  • I like that low-pause collector as well in many cases. But one piece of advice I have is that (unless you are crashing - which you were) don't be too concerned about how much memory is actually being used. The JVM grabs memory and releases it in a way that can seem random and disconnected from your app or traffic or whatever. But obviously if you are crashing that's a different story. Here's a link you might find useful on that topic http://www.coldfusionmuse.com/index.cfm/2012/4/27/knowning-a-normal-jvm-heap - sorry if you know all that already - you seem pretty sharp to me :) – Mark A Kruger Aug 06 '12 at 14:04

1 Answer


I thought that I would post an update with the resolution that ended up working for us. I am still not sure exactly why GC seemed to run so much better while using JRockit (specifically during the memory leak test), but we have found a pair of JVM settings that let us control how frequently GC is invoked.

-Dsun.rmi.dgc.client.gcInterval=27000 -Dsun.rmi.dgc.server.gcInterval=27000

These two settings control how often the RMI distributed garbage collector forces a full GC. The values are in milliseconds, so 27000 triggers a collection roughly every 27 seconds, which we needed instead of the default interval. We also updated our entire java.args line based on a few great blog articles (linked at the bottom). Here is the updated java.args that has our server running like it should.

java.args= -server -DJINTEGRA_NATIVE_MODE -DJINTEGRA_PREFETCH_ENUMS -Xmx1024m -Xms1024m -XX:MaxPermSize=192m -XX:PermSize=192m  -XX:+UseParallelGC -Dsun.rmi.dgc.client.gcInterval=27000 -Dsun.rmi.dgc.server.gcInterval=27000 -Dcoldfusion.rootDir={application.home}/ -Djava.compiler=NONE -Xnoagent -Xdebug 
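To check whether a collector flag such as -XX:+UseParallelGC is actually collecting (the question we struggled with above), the standard `java.lang.management` API reports per-collector counts and times. A minimal sketch (the class name `GcStats` is illustrative):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;

public class GcStats {
    public static void main(String[] args) {
        // Each registered collector (young and old generation) exposes
        // how many collections it has run and the total time spent,
        // so sampling this over time shows whether GC is firing at all.
        List<GarbageCollectorMXBean> gcs =
                ManagementFactory.getGarbageCollectorMXBeans();
        for (GarbageCollectorMXBean gc : gcs) {
            System.out.println(gc.getName()
                    + ": count=" + gc.getCollectionCount()
                    + ", timeMs=" + gc.getCollectionTime());
        }
    }
}
```

If the counts never move while the heap graph climbs, the configured collector is not the one doing the work, which is the kind of mismatch we suspected between our jvm.config and what JRockit was applying.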

Blog articles:

Trunkful.com CF_GEMS How to Tune the JVM Part 1

Trunkful.com CF_GEMS How to Tune the JVM Part 2
