
I have a simple application which transfers data from one machine to another. As the application runs, the heap size increases slowly. So I dumped the heap and analysed it, and I found that zmq.poll.Poller objects consume the biggest amount of memory. They belong to the thread 'iothread-2':

The heap screenshot is here (screenshot link not preserved).

A demo of my application looks like this:

import org.zeromq.ZMQ;

public class Demo {
    public static void main(String[] args) throws Exception {
        ZMQ.Context context = ZMQ.context(1);
        ZMQ.Socket socket = context.socket(ZMQ.DEALER);
        socket.connect("tcp://localhost:5550");
        ZMQ.Poller poller = context.poller(1);
        poller.register(socket, ZMQ.Poller.POLLIN);

        while (!Thread.currentThread().isInterrupted()) {
            poller.poll(5000);
            if (poller.pollin(0)) {
                socket.send("message");        // send message to another machine
                String msg = socket.recvStr(); // get the reply

                // do some stuff
                Thread.sleep(1000);
            }
        }
    }
}

When I checked the Poller object in the heap, I found there were 4 million HashMap$Node entries, and the value of each node is an ArrayList backed by an array of 10 null objects.

The heap was dumped with this command:
jmap -dump:live,format=b,file=dump.hprof [pid]

The JDK is 1.8.0_131, the OS is CentOS 7.2.1511, and the jeromq version is 0.4.2.

Did I use the Poller incorrectly? Thanks very much to anyone who helps!

feng chen

1 Answer


The issue seems rather to be related to missing resource management:

The native API documentation is strict on this:

The zmq_msg_close() function shall inform the ØMQ infrastructure that any resources associated with the message object referenced by msg are no longer required and may be released. Actual release of resources associated with the message object shall be postponed by ØMQ until all users of the message or underlying data buffer have indicated it is no longer required.

Applications should ensure that zmq_msg_close() is called once a message is no longer required, otherwise memory leaks may occur. Note that this is NOT necessary after a successful zmq_msg_send().

Try to include proper explicit message disposal and you ought to see an improvement (yet this depends on the jeromq version, garbage-collection dynamics et al.).
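
For illustration only, here is a minimal sketch of the receive path from the question with explicit message disposal, assuming the `org.zeromq.ZMsg` helper that ships with jeromq; the variable names and the try/finally layout are illustrative, not code from the question, and this is a sketch of the disposal pattern rather than a verified fix for this exact leak:

import org.zeromq.ZMsg;

// inside the polling loop from the question:
while (!Thread.currentThread().isInterrupted()) {
    poller.poll(5000);
    if (poller.pollin(0)) {
        socket.send("message");            // send the request
        ZMsg reply = ZMsg.recvMsg(socket); // receive the reply as a ZMsg
        if (reply != null) {
            try {
                // do some stuff with the reply
            } finally {
                reply.destroy();           // explicitly release the message frames
            }
        }
        Thread.sleep(1000);
    }
}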

user3666197
    Thanks for your post, do you mean using ZMsg? ZMsg msg = ZMsg.recvMsg(socket); msg.destroy(); – feng chen Mar 27 '18 at 01:47
  • You are welcome. As posted above, this is heavily dependent on how well jeromq follows all the native API functionality. Normally **all** message-instance objects ought to be explicitly destroyed (i.e. allowed to be deallocated and released inside the non-local threads, inside the respective `Context`-instance I/O threads), and in the very same way **all** `Socket`-instance objects ought to be explicitly `.close()`-ed and **all** `Context`-instance objects ought to be explicitly and gracefully `.term()`-inated whenever any such resource becomes useless (a sketch of this cleanup pattern follows these comments). Deferring to do so, or not doing it at all, is a ticket to hell... – user3666197 Mar 27 '18 at 05:42
  • As far as I know, unlike jzmq, JeroMQ is a pure Java implementation of ZeroMQ. So there are no native API calls in JeroMQ, right? And unused messages should be GC'ed. The only problem is the Multimap in the Poller, which kept null ArrayLists in memory. I've updated the question and added a screenshot. – feng chen Mar 27 '18 at 10:54
  • I had a similar issue. Memory did not get freed unless I explicitly called ZMsg.destroy(). There are byte[] values in the ZFrame that seem to not get picked up by GC unless you do this. – Chad Juliano Aug 18 '20 at 16:52
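
To illustrate the socket and context cleanup described in the comment above, here is a minimal, hypothetical shutdown sketch for the demo from the question; the try/finally arrangement is an assumption about how the asker's code could be structured, not code from the original post:

ZMQ.Context context = ZMQ.context(1);
ZMQ.Socket socket = context.socket(ZMQ.DEALER);
try {
    socket.connect("tcp://localhost:5550");
    // ... polling loop from the question ...
} finally {
    socket.close();  // explicitly close every Socket instance
    context.term();  // gracefully terminate the Context and its I/O threads
}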