
I wrote a simple test in Java (JDK 7) for a ZeroMQ PUB socket publishing data over a multicast channel on Windows 7, using OpenPGM 5.2.122. I tried JZMQ versions 2.2.0, 2.1.3, and 2.1.0 on top of ZeroMQ 3.2.3. The test file is below.

import org.zeromq.ZMQ;

public class ZMQMulticastPubSocketTest
{
    public static void main(String[] args)
    {
        ZMQ.Context ctx = ZMQ.context(1);
        ZMQ.Socket pub = ctx.socket(ZMQ.PUB);
        pub.setLinger(0);
        pub.setRate(10000000);
        pub.setSendBufferSize(24000000);

        pub.connect("epgm://10.100.20.19;239.9.9.11:5556");
        //pub.bind("tcp://*:5556");
        while (true)
        {
            pub.sendMore("TESTTOPIC");
            pub.send("Data_Data_Data_Data_Data_Data_Data_Data_Data_Data_Data_Data_Data_Data_Data_Data_Data_Data_Data_Data_".getBytes(), 0);
        }
    }
}

I notice that the memory footprint of the process keeps increasing until the computer runs out of memory. It doesn't crash (I am sure malloc() failures are handled internally). I also tried it on our Linux servers, where it consumed 22 GB of RAM before I took the process down. Is there a memory leak in the JZMQ wrapper for multicast?

If I changed the code above to bind to a TCP address instead (the commented-out line), the memory footprint stayed stable and barely increased.

I also wrote a C version of the above code, given below; it did not have the same growing memory footprint for multicast.

#include "stdafx.h"
#include "zmq.h"
#include "zmq_utils.h"
#include <assert.h>
#include <string.h>   /* for strlen(); <string> is the C++ header and does not declare it */

static int
s_send (void *socket, char *string) {
    int size = zmq_send (socket, string, strlen (string), 0);
    return size;
}

static int
s_sendmore (void *socket, char *string) {
    int size = zmq_send (socket, string, strlen (string), ZMQ_SNDMORE);
    return size;
}

int main(int argc, char* argv[])
{
    void *context = zmq_ctx_new ();
    void *publisher = zmq_socket (context, ZMQ_PUB);
    int rc = zmq_bind (publisher, "epgm://10.100.20.19;239.9.9.11:5556");
    assert (rc == 0);
    int sockOpt = 1000000;  /* ZMQ_RATE, ZMQ_LINGER and ZMQ_SNDBUF take an int in ZeroMQ 3.x */
    rc = zmq_setsockopt (publisher, ZMQ_RATE, &sockOpt, sizeof(sockOpt));
    sockOpt = 0;
    rc = zmq_setsockopt (publisher, ZMQ_LINGER, &sockOpt, sizeof(sockOpt));
    sockOpt = 24000000;
    rc = zmq_setsockopt (publisher, ZMQ_SNDBUF, &sockOpt, sizeof(sockOpt));

    char* topic = "TESTTOPIC";

    while (1)
    {
        s_sendmore(publisher, topic);
        s_send(publisher, "Data_Data_Data_Data_Data_Data_Data_Data_Data_Data_Data_Data_Data_Data_Data_Data_Data_Data_Data_Data_");
    }

    return 0;
}

Does anyone have any idea about why this may be happening?

Thanks.

Chinmay Nerurkar
  • Are you using zmq pure java implementation, or just the Java bindings wrapped over the core zmq library? – raffian Jun 17 '13 at 23:08
  • I am using JZMQ wrapper on top of libzmq C library. – Chinmay Nerurkar Jun 17 '13 at 23:12
  • Did you build core with openpgm? – raffian Jun 17 '13 at 23:14
  • Yes. I built libzmq with OpenPGM version 5.2.122. I can get a subscriber to subscribe to data on the multicast channel on another machine. But I have this issue with the publisher. – Chinmay Nerurkar Jun 17 '13 at 23:22
  • I also have the similar problem with the JZMQ client. The memory footprint keeps growing until OOM finally. I check the jzmq code, there's a zmq.YQueue implementation used by the Pipe class. This YQueue will increase infinitely if you keep pushing data to it. – Stanley Shi Feb 28 '15 at 03:06
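Stanley's observation in the last comment can be illustrated without any ZeroMQ code: the pipe between the application thread and the I/O thread is effectively an unbounded queue, so if the PGM sender drains more slowly than the application pushes, the backlog grows without bound unless a high-water mark caps it. A stdlib-only sketch of that failure mode (the names and the 10:1 drain rate are made up for illustration, not jzmq internals):

```java
import java.util.ArrayDeque;

// Illustration only: a producer pushing into an unbounded in-memory queue
// while a slow consumer drains one message per ten pushes. With no cap the
// backlog grows without bound; with a high-water mark it stays bounded.
public class PipeBacklogSketch
{
    public static int backlog(int pushes, int hwm)
    {
        ArrayDeque<byte[]> pipe = new ArrayDeque<>();
        for (int i = 0; i < pushes; i++)
        {
            if (hwm > 0 && pipe.size() >= hwm)
            {
                continue; // a PUB socket drops the message once the pipe is full
            }
            pipe.addLast(new byte[100]);
            if (i % 10 == 0 && !pipe.isEmpty())
            {
                pipe.removeFirst(); // slow drain: one message per ten pushes
            }
        }
        return pipe.size();
    }

    public static void main(String[] args)
    {
        System.out.println(backlog(1_000_000, 0));    // → 900000 (and still growing)
        System.out.println(backlog(1_000_000, 1000)); // → 1000 (capped)
    }
}
```

With no cap the backlog ends at 900,000 queued messages and keeps climbing with the message count; with a cap of 1,000 it never exceeds the high-water mark, which is the behavior an effective send HWM is supposed to enforce on a real PUB socket.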

2 Answers


Make sure to set the High Water Mark (HWM), otherwise your application will eventually run out of memory.
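In the JZMQ example's terms, that would look something like the sketch below. This is an assumption about the fix, not tested against the asker's setup: setHWM is the jzmq setter for the high-water mark, and socket options generally have to be set before connect/bind to affect the connection.

```java
import org.zeromq.ZMQ;

// Sketch: cap the send queue so a slow PGM sender drops messages
// instead of buffering them without bound.
public class BoundedPub
{
    public static void main(String[] args)
    {
        ZMQ.Context ctx = ZMQ.context(1);
        ZMQ.Socket pub = ctx.socket(ZMQ.PUB);
        pub.setLinger(0);
        pub.setHWM(1000); // high-water mark, in messages; set before connect()
        pub.connect("epgm://10.100.20.19;239.9.9.11:5556");
        // ... publish as before ...
    }
}
```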

Schildmeijer

I notice a couple of discrepancies between the two examples. The JZMQ example's rate is 10,000,000 while the C example's is 1,000,000. The more interesting discrepancy is that you are connecting to epgm://10.100.20.19;239.9.9.11:5556 from Java but binding to it from C.

Can you verify the memory leak is still the case if you change connect to bind in the Java example?

Trevor Bernard
  • I thought connect and bind were interchangeable for PGM. I changed the connect() to bind() for JZMQ and I get the same result as above: the memory footprint keeps increasing. I also changed zmq_bind() to zmq_connect() in the C version of the code and ZMQ_RATE to 10000000; the C version still works fine. Does the Java code not give you the same result I see (possibly indicating some issue in my library setup)? Here are shots of my task manager: http://i.imgur.com/oUf4M1X.jpg http://i.imgur.com/8VwUom0.jpg – Chinmay Nerurkar Jun 27 '13 at 15:50