
I'm working with HiveMQ and I have two clients, one on each thread, plus a main thread that starts the two worker threads. I've been using System.nanoTime() to time both the sending (on one thread) and the receiving (on the other thread) so I can add up the values to get the total time to send and receive a given number of messages. I used a synchronized block with wait() and notify() in Java so that the timers start at nearly the same time, since one thread has to wake up the other. My final time to send and receive 15 messages varies by about 0-350 milliseconds between runs. For context, the server and the clients are running on the same machine, using localhost as the server address. Is there any way I could get more precise (less varying) timings across my threads? I want as much precision as possible.

Code for the subscriber (receiving client):

    scheduler.waitToReceive();  // makes the SubThread wait until the PubThread is ready to send a message

    System.out.println("Timer started");
    startTime = System.nanoTime();

    for (int i = 1; i <= 15; i++) {
        // receives the message via the "publishes" instance;
        // .get() returns the Mqtt5Publish if present or throws NoSuchElementException
        Mqtt5Publish receivedMessage = receivingClient1.receive(MESSAGEWAITTIME, TimeUnit.SECONDS).get();
        PubSubUtility.convertMessage(receivedMessage);  // converts the Mqtt5Publish to a String and prints it
    }
    endTime = System.nanoTime();

Code for the publisher (sending client):

    readyToSend = true;
    scheduler.notifyStartReceive();  // notifies the SubThread that it can start receiving messages
    startTime = System.nanoTime();

    for (int i = 1; i <= 15; i++) {
        publisher.publishWith()
                 .topic(publisherTopic)      // publishes to the specified topic
                 .qos(MqttQos.EXACTLY_ONCE)  // sets the quality of service to 2
                 .payload(convertedMessage)  // the contents of the message
                 .send();
    }
    endTime = System.nanoTime();
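
In case it matters, the scheduler the two threads share is essentially a monitor built on synchronized/wait()/notify(). This is only a simplified sketch: the method names waitToReceive() and notifyStartReceive() match my real code, the rest is illustrative:

    public class Scheduler {

        private boolean readyToSend = false;  // set by the publisher thread when it is about to publish

        // Called by the subscriber thread; blocks until the publisher signals it is ready.
        public synchronized void waitToReceive() throws InterruptedException {
            while (!readyToSend) {   // loop guards against spurious wakeups
                wait();
            }
        }

        // Called by the publisher thread just before it starts publishing.
        public synchronized void notifyStartReceive() {
            readyToSend = true;
            notify();                // wakes the subscriber thread waiting in waitToReceive()
        }
    }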
Chigozie A.
  • You haven't said how many messages this 350ms is for, what the network between you and the broker is, or what machine either the client or the broker is running on. What makes you think 350ms isn't an acceptable amount of variance for the test? – hardillb Jun 28 '19 at 20:24
  • @hardillb I updated my post with more info. I was really unsure whether 350 milliseconds is acceptable given my conditions, which is why I wanted to ask someone. I also wanted to know if there is any way I could make the execution time more consistent. – Chigozie A. Jun 28 '19 at 20:50
  • Only you can decide if 350ms is acceptable; it's your system. And without knowing the size/shape of the machine in question and whatever else it's running at the same time, this question is basically unanswerable. – hardillb Jun 28 '19 at 20:55
  • P.S. it's probably Java garbage collection on either the broker or the client introducing the delay. – hardillb Jun 28 '19 at 20:59

1 Answer


Whenever you are doing performance testing, you need to ask yourself EXACTLY what you want to measure and what can impact the results. Then the test needs to reflect that.

As for meaningful results: 15 messages is not enough to draw any conclusions. You could have run the test at precisely the moment something else was running on your system, making the numbers slower than usual. My rule of thumb is: run the test for at least 10 seconds to reduce the impact of random interference. On the other hand, running it too long increases the likelihood of interference.
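
For example, rather than a fixed count of 15 messages, you could keep sending and receiving until a time budget is used up and report a rate. A rough sketch, where sendAndReceiveOne() is just a placeholder for your existing publish-and-receive step:

    import java.util.concurrent.TimeUnit;

    public class ThroughputTest {

        public static void main(String[] args) {
            long budgetNanos = TimeUnit.SECONDS.toNanos(10);  // run for at least ~10 seconds
            long start = System.nanoTime();
            long messages = 0;

            while (System.nanoTime() - start < budgetNanos) {
                sendAndReceiveOne();   // placeholder for publishing one message and receiving it
                messages++;
            }

            double elapsedSeconds = (System.nanoTime() - start) / 1_000_000_000.0;
            System.out.printf("%d messages in %.2f s -> %.1f msg/s%n",
                    messages, elapsedSeconds, messages / elapsedSeconds);
        }

        private static void sendAndReceiveOne() {
            // placeholder: publish one message and block until it is received
        }
    }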

Then run the test at least 3 times and compute the average and standard deviation. We don't accept tests with a standard deviation of more than 10% of the mean.
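
A rough sketch of that acceptance check in plain Java (the run times below are just placeholder values; feed in your own measured totals):

    import java.util.Arrays;

    public class RunStats {

        public static void main(String[] args) {
            // total elapsed times of three runs, in milliseconds (placeholder example values)
            double[] runsMillis = {1520.0, 1480.0, 1610.0};

            double mean = Arrays.stream(runsMillis).average().orElse(0.0);
            double variance = Arrays.stream(runsMillis)          // population variance
                                    .map(t -> (t - mean) * (t - mean))
                                    .average()
                                    .orElse(0.0);
            double stddev = Math.sqrt(variance);

            System.out.printf("mean = %.1f ms, stddev = %.1f ms (%.1f%% of mean)%n",
                    mean, stddev, 100.0 * stddev / mean);

            if (stddev > 0.10 * mean) {
                System.out.println("More than 10% spread -> rerun the test");
            }
        }
    }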

Gambit Support