My aim is to measure MQTT device-to-device message latency (not throughput), and I'm looking for feedback on my code hacks. The setup is simple: a single device serving as both endpoints (an old Linux PC with two terminal sessions, one running the subscriber sample app and the other running the publisher sample app) and the default broker at tcp://m2m.eclipse.org:1883. I inserted time-capturing code fragments into the C-language publish/subscribe sample apps in the src/samples folder.
Below are the changes. Please provide feedback.
Changes to the subscribe sample app (MQTTAsync_subscribe.c)
Inserted the lines below at the top of the msgarrvd (message arrived) function:
//print arrival time (gettimeofday requires #include <sys/time.h> at the top of the file)
struct timeval tv;
gettimeofday(&tv, NULL);
printf("Message arrived: %ld.%06ld\n", tv.tv_sec, tv.tv_usec);
Changes to the publish sample app (MQTTAsync_publish.c)
Inserted the lines below at the top of the onSend (callback) function:
//print delivery-confirmation time (gettimeofday requires #include <sys/time.h> at the top of the file)
struct timeval tv;
gettimeofday(&tv, NULL);
printf("Message with token value %d delivery confirmed at %ld.%06ld\n",
       response->token, tv.tv_sec, tv.tv_usec);
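Similarly, here is a sketch of how the fragment sits in the publisher's delivery-confirmation callback, assuming the sample's onSend success-callback signature; the sample's own disconnect logic is elided:

#include <stdio.h>
#include <sys/time.h>
#include "MQTTAsync.h"

/* Sketch: the publisher's onSuccess callback for the send, with the timestamp code at the top.
   The sample's own body (setting its finished flag, disconnecting, etc.) is omitted. */
void onSend(void *context, MQTTAsync_successData *response)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    printf("Message with token value %d delivery confirmed at %ld.%06ld\n",
           response->token, tv.tv_sec, tv.tv_usec);

    /* ... the sample's existing onSend body continues here ... */
}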
With these changes (subtracting the time the message arrived at the subscriber from the time delivery was confirmed at the publisher), I get anywhere between 0.5 and 1 millisecond.
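Since that number comes from subtracting two printed timestamps by hand, here is a minimal sketch of the arithmetic itself (timeval_diff_us and the example timestamps are made up, just to make the subtraction explicit):

#include <stdio.h>
#include <sys/time.h>

/* Hypothetical helper: difference between two gettimeofday() timestamps in microseconds.
   "confirmed" is the publisher's delivery-confirmation time, "arrived" is the subscriber's
   arrival time, matching the subtraction described above. */
static long timeval_diff_us(struct timeval confirmed, struct timeval arrived)
{
    return (confirmed.tv_sec - arrived.tv_sec) * 1000000L
         + (confirmed.tv_usec - arrived.tv_usec);
}

int main(void)
{
    /* Example with made-up timestamps that are 0.75 ms apart. */
    struct timeval arrived   = { .tv_sec = 1500000000, .tv_usec = 100250 };
    struct timeval confirmed = { .tv_sec = 1500000000, .tv_usec = 101000 };
    printf("latency: %ld us\n", timeval_diff_us(confirmed, arrived));  /* prints 750 */
    return 0;
}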
Questions
Does this make sense as a rough latency benchmark?
Is this the round-trip time?
Is the round-trip time in the right ballpark? Should it be less? More?
Is it the one-way time?
Should I design the latency benchmark in a different way (perhaps along the lines of the sketch after this list)? I only need rough measurements (I'm comparing with XMPP).
I'm using the default QoS value (1). Should I change it?
The publisher takes a finite amount of time to connect (and disconnect). Should those times be included in the measurement?
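To make the one-way question concrete: a one-way measurement would need a timestamp on the publishing side at send time rather than at delivery confirmation. Below is a sketch of one possible variant, not what I measured above; publish_with_timestamp is a hypothetical wrapper around the sample's send, and onSend is the callback shown earlier:

#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include "MQTTAsync.h"

void onSend(void *context, MQTTAsync_successData *response);  /* the sample's callback, as above */

/* Sketch of a variant: timestamp taken immediately before MQTTAsync_sendMessage, so that
   (arrival time at subscriber) - (publish time) approximates one-way latency.
   Both processes run on the same PC, so they share one clock. */
void publish_with_timestamp(MQTTAsync client, const char *topic, const char *payload, int qos)
{
    MQTTAsync_responseOptions opts = MQTTAsync_responseOptions_initializer;
    MQTTAsync_message pubmsg = MQTTAsync_message_initializer;
    struct timeval tv;

    opts.onSuccess = onSend;
    opts.context = client;
    pubmsg.payload = (void *)payload;
    pubmsg.payloadlen = (int)strlen(payload);
    pubmsg.qos = qos;
    pubmsg.retained = 0;

    gettimeofday(&tv, NULL);
    printf("Publishing at: %ld.%06ld\n", tv.tv_sec, tv.tv_usec);
    MQTTAsync_sendMessage(client, topic, &pubmsg, &opts);
}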