
Looking for a solution, or some tips on how to figure out what is wrong.

I'm looking at heap dumps with the VisualVM tool, which just shows that references are being held. Is there a better tool I can use? Is there anything I can run from the command line to release these references? Forcing GC from jconsole doesn't work; it only postpones the lockup by about 5 days.

Our Linux server gets the following Java OOM every 10-14 days:

Apr 18, 2012 1:34:55 PM org.apache.jk.core.MsgContext action
WARNING: Error sending end packet
java.net.SocketException: Broken pipe
    at java.net.SocketOutputStream.socketWrite0(Native Method)
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:109)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
    at org.apache.jk.common.ChannelSocket.send(ChannelSocket.java:508)
    at org.apache.jk.common.JkInputStream.endMessage(JkInputStream.java:112)
    at org.apache.jk.core.MsgContext.action(MsgContext.java:293)
    at org.apache.coyote.Response.action(Response.java:182)
    at org.apache.coyote.Response.finish(Response.java:304)
    at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:204)
    at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:282)
    at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:744)
    at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:674)
    at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:866)
    at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:684)
    at java.lang.Thread.run(Thread.java:636)
Apr 18, 2012 1:34:55 PM org.apache.jk.common.ChannelSocket processConnection
WARNING: processCallbacks status 2
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid20051.hprof ...
Heap dump file created [1147163590 bytes in 149.230 secs]
Apr 18, 2012 1:59:14 PM ServerCommunicatorAdmin reqIncoming
WARNING: The server has decided to close this client connection.
Apr 18, 2012 1:59:14 PM ServerCommunicatorAdmin reqIncoming
WARNING: The server has decided to close this client connection.
user1356863
    Unless you're sending some enormous data strings, I suspect that the problem is the slow accumulation of clutter over time, not some specific cause. – Hot Licks Apr 25 '12 at 17:44
  • I've used Eclipse Memory Analyzer Tool in the past to troubleshoot `OutOfMemoryError`s, but in my case, it's because one object was holding onto an 8 GB SQL result object. You can try using MAT, or you might have to do live memory monitoring if the problem is what Hot Licks describes (just increasing memory consumption). – wkl Apr 25 '12 at 17:46
  • 1
    Still, you need a profiler to be able to analyze memory growth and where your growth is coming from. In my experience in situations like this, mot of the growth did in fact come from one or two sources. But that may or may not be applicable here – ControlAltDel Apr 25 '12 at 17:48
  • Yes, it's typical in Java that there are a handful of areas where data is somehow being retained (unlike C environments where there can be hundreds of leaks). But usually that handful doesn't amount to too much until you run things a long time, and the repeated handfuls build into a mountain. Some sort of heap analyzer is called for. – Hot Licks Apr 25 '12 at 17:53
  • Any chance you are using serialized object stream here? – Gray Apr 25 '12 at 19:01
  • No chance of using a serialized object stream, sorry. My coworker also suggested MAT ... I will have to try that. From VisualVM we see instances are being held. Is there any way to clear these instances? jconsole GC doesn't work. – user1356863 Apr 25 '12 at 19:34
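The kind of slow accumulation the comments describe can be sketched in a few lines; the cache and handler below are hypothetical illustrations, not code from the actual application. The point is that forcing a GC (from jconsole or anywhere else) cannot release these objects, because every entry is still strongly reachable from a static field:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a slow leak: objects reachable from a static field
// are never eligible for collection, so forcing GC cannot free them.
public class LeakSketch {
    // Hypothetical application-level cache that is only ever added to.
    static final Map<String, byte[]> CACHE = new HashMap<>();

    static void handleRequest(String key) {
        // Each request retains a little more memory; nothing ever removes it.
        CACHE.put(key, new byte[1024]);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            handleRequest("request-" + i);
        }
        System.gc(); // cannot reclaim strongly referenced entries
        System.out.println("retained entries: " + CACHE.size());
    }
}
```

Only the application itself can release such memory, by evicting entries, bounding the cache, or holding the values through weak/soft references; no external GC trigger will do it.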

2 Answers


If fixing the slow accumulation of clutter really doesn't help, you can think about fine-tuning your VM; have a look at the VM args (run `java -X`). Some of these may be interesting for you (but they might only prolong the time until the OOM error):

-Xms set initial Java heap size
-Xmx set maximum Java heap size
-Xss set java thread stack size
-Xprof output cpu profiling data
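As a sketch, these flags are combined on the launch command line; the jar name, paths, and sizes below are made-up examples for illustration, not recommendations:

```shell
# Hypothetical example: 512 MB initial / 1 GB max heap, 256 KB thread
# stacks, and a heap dump written automatically on OOM.
# app.jar and /var/log/tomcat are placeholders.
java -Xms512m -Xmx1g -Xss256k \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/var/log/tomcat \
     -jar app.jar
```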

f4lco

Use -Xmx to set the maximum heap size, and -Xss to set the thread stack size. The important thing to remember is that if you create a lot of threads, you should set the stack size as small as possible to maximise the number of threads you can have (you'll get an OOM error when you try to create a thread and there isn't enough space for its stack). And the bigger your heap, the less space there is for thread stacks, so you'll probably need to experiment a bit to find the optimum split between heap and stack.

I've managed quite well using a Linux server with 1 GB of memory (a few years old now but still working quite happily) with 256M for the heap and 64K per thread (-Xmx256m -Xss64k). You should also try the snappily-named -XX:+HeapDumpOnOutOfMemoryError flag, which writes a .hprof dump when the OOM occurs; you can then analyse it with the Eclipse Memory Analyzer to see if you have any memory leaks.
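For live monitoring without waiting for the next OOM, the stock JDK tools can be pointed at the running Tomcat process; `<pid>` below is a placeholder for the actual process id:

```shell
# Watch heap occupancy and GC activity, sampling every 5 seconds.
jstat -gcutil <pid> 5000

# Take an on-demand heap dump of live (strongly reachable) objects
# for offline analysis in MAT or VisualVM.
jmap -dump:live,format=b,file=heap.hprof <pid>
```

If the heap's old-generation column in jstat keeps climbing after full GCs, that is consistent with the slow accumulation described in the comments, and comparing two dumps taken days apart in MAT should show which objects are growing.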

user1636349