
I am running a web application on a Jetty server started with Maven. The application holds a large pool of static objects in Lists and Maps, which accounts for its 2.8 GB of physical memory usage. After several hours, the server hangs at maximum CPU usage. This happens without any user interaction or requests made to the server.

I have noticed that during those several hours, while the server is still running fine, memory usage slowly drops to 1.7 GB. My suspicion is that this could be a garbage collection related issue.

Questions:

  1. Could it be that the GC hangs while erroneously collecting or inspecting my large object pools and their references?
  2. How would I go about debugging and fixing this issue?

Note that on Windows I do not have this problem. Once the application starts and fills its pool, it occupies 3.4 GB and stays exactly the same without ever crashing.

Server startup and environment:

setenforce 0
export MAVEN_OPTS="-Xmx5120m -Xms5120m -XX:+UseConcMarkSweepGC -Xgcthreads1 -XX:MaxGCPauseMillis=2000 -XX:GCTimeRatio=10"
sudo nohup mvn -D jetty.port=80 jetty:run &

Operating system:

Ubuntu 12.04.1 LTS

Java:

OpenJDK Runtime Environment (IcedTea6 1.11.5) (6b24-1.11.5-0ubuntu1~12.04.1)
OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)

Maven:

Apache Maven 3.0.4

Jetty:

8.1.8.v20121106
Emir Pasic
  • is this showing up during development with jetty:run, or are you trying to run a production-type instance using both Maven and jetty:run? If it is the latter, it is by no means a recommended deployment solution. – jesse mcconnell Jan 17 '13 at 19:08
  • It is the latter, i.e. a production-type instance. It is a very lightweight service, hence the decision to go with Jetty. Although this is the first time I have used Jetty in production, I found that many companies run Jetty in production. Are you suggesting that I should move to something more heavyweight such as Glassfish, JBoss, or TomEE+, or that I should run the instance as a jar? Thx – Emir Pasic Jan 17 '13 at 22:59
  • 2
    no, just don't run it via the jetty-maven-plugin, either write a little embedded jetty wrapper or deploy it into a normal jetty distribution...is it not jetty that is the issue, it is a production deployment scenario using maven tooling which neither maven nor its jetty plugin were intended to be used for :) – jesse mcconnell Jan 17 '13 at 23:16
  • Thank you for your input. After running Jetty under different configurations, I eliminated Jetty as a possible cause. However, I took your advice and am now running the app using jetty-runner: `sudo nohup java -server -Xmx5120m -Xms5120m -XX:+UseConcMarkSweepGC -XX:MaxGCPauseMillis=2000 -XX:GCTimeRatio=10 -jar target/dependency/jetty-runner.jar --port 80 --path / target/geocoder.war &` – Emir Pasic Jan 22 '13 at 10:08

1 Answer


It is hard to say whether incorrect GC behavior is causing the system to hang. I think you can take a few steps to gather more information:

  1. Add `-verbose:gc -Xloggc:/home/admin/logs/gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps` to your JVM arguments; these will give you more information about GC activity.
  2. Collect thread dumps regularly to see what is going on in your application at runtime.
  3. Take a heap dump when the machine is about to die; it can be analyzed with Eclipse MAT (Memory Analyzer Tool).
  4. When the CPU spikes, use `top -H -p <pid>` to find the dominating threads and locate them in the thread dump; then you can usually pin down which line of code is misbehaving.
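The steps above can be sketched as shell commands (`<pid>` and the log path are placeholders; exact tool availability depends on the JDK install):

```shell
# 1. GC logging flags appended to the existing MAVEN_OPTS (paths are examples)
export MAVEN_OPTS="$MAVEN_OPTS -verbose:gc -Xloggc:/home/admin/logs/gc.log \
  -XX:+PrintGCDetails -XX:+PrintGCDateStamps"

# 2. Thread dump of the running JVM (repeat every few minutes)
jstack -l <pid> > thread-dump-$(date +%s).txt

# 3. Heap dump for offline analysis in Eclipse MAT
jmap -dump:format=b,file=heap.hprof <pid>

# 4. Per-thread CPU usage; convert the hot thread's PID to hex and match it
#    against the nid=0x... field in the jstack output
top -H -p <pid>
```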

Here is a really good article on How to Analyze Java Thread Dumps.
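If the thread dumps suggest a deadlock, the JVM can also report it programmatically via the standard `java.lang.management` API. A minimal sketch (the class name is illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DeadlockCheck {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // Returns the ids of threads deadlocked on monitors or on
        // java.util.concurrent ownable synchronizers, or null if none.
        long[] ids = threads.findDeadlockedThreads();
        if (ids == null) {
            System.out.println("No deadlocked threads");
            return;
        }
        for (ThreadInfo info : threads.getThreadInfo(ids)) {
            System.out.println(info.getThreadName()
                    + " blocked on " + info.getLockName()
                    + " held by " + info.getLockOwnerName());
        }
    }
}
```

Running a check like this periodically from a monitoring thread can flag a deadlock without waiting for the hang; `jstack -l` prints an equivalent analysis at the bottom of its output when it finds one.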

Gavin Xiong
  • Following up on your suggestions and the article (especially the `jstack` command) allowed me to debug and trace a possible cause of my problem. I am still waiting to verify the fix, but it seems that neither GC nor Jetty was causing the hang; rather, it was a deadlock in my use of DBCP's PoolingDataSource, i.e. a programming error on my part. – Emir Pasic Jan 22 '13 at 11:29