
I'm testing the performance of IBM MQ (the latest version, running in a local Docker container) using a persistent queue.

On the producer side, I can get higher throughput by running multiple producing applications in parallel.

However, on the consumer side, I cannot increase the throughput by running consumer processes in parallel. On the contrary, the throughput is even worse with multiple consumers than with a single consumer.

What could be the reason for the poor consuming performance?

It shouldn't be a hardware limit, since I'm comparing consumption against production on the same machine, and the consumers do nothing but get messages, with no other processing.

Does the GET commit each message individually? I can't find an explicit commit method in PyMQI, though.
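
(For reference: PyMQI exposes the unit-of-work calls on the queue manager connection rather than on the Queue object, so a get made under MQGMO_SYNCPOINT is completed with qmgr.commit() or undone with qmgr.backout(), as the updated example further down shows. The snippet below is a minimal illustrative sketch reusing the connection details from the scripts below; it is not the benchmarked code.)

#!/usr/bin/env python3
# Sketch only: a single destructive get under syncpoint, completed with an
# explicit commit on the queue manager connection.

import pymqi

qmgr = pymqi.connect('QM1', 'DEV.APP.SVRCONN', '127.0.0.1(1414)')
queue = pymqi.Queue(qmgr, 'DEV.QUEUE.1')

gmo = pymqi.GMO(Options=pymqi.CMQC.MQGMO_WAIT
                | pymqi.CMQC.MQGMO_FAIL_IF_QUIESCING
                | pymqi.CMQC.MQGMO_SYNCPOINT)
gmo.WaitInterval = 1000  # milliseconds

msg = queue.get(None, None, gmo)  # the message is now part of an open unit of work
qmgr.commit()                     # removes it for good; qmgr.backout() would undo the get

queue.close()
qmgr.disconnect()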

put_demo.py

#!/usr/bin/env python3

import pymqi
import time

queue_manager = 'QM1'
channel = 'DEV.APP.SVRCONN'
host = '127.0.0.1'
port = '1414'
queue_name = 'DEV.QUEUE.1'
message = b'Hello from Python!'
conn_info = '%s(%s)' % (host, port)
nb_messages = 1000

t0 = time.time()
qmgr = pymqi.connect(queue_manager, channel, conn_info)
queue = pymqi.Queue(qmgr, queue_name)
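
# Note: no MQPMO_SYNCPOINT is used below, so the queue manager hard-commits
# each persistent message individually as it is put.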

for i in range(nb_messages):
    try:
        queue.put(message)
    except pymqi.MQMIError as e:
        print(f"Fatal error: {str(e)}")

queue.close()
qmgr.disconnect()
t1 = time.time()
print(f"tps: {nb_messages/(t1-t0):.0f} nb_message_produced: {nb_messages}")

get_demo.py

#!/usr/bin/env python3

import pymqi
import time
import os

queue_manager = 'QM1'
channel = 'DEV.APP.SVRCONN'
host = '127.0.0.1'
port = '1414'
queue_name = 'DEV.QUEUE.1'
conn_info = '%s(%s)' % (host, port)
nb_messages = 1000
nb_messages_consumed = 0

t0 = time.time()
qmgr = pymqi.connect(queue_manager, channel, conn_info)
queue = pymqi.Queue(qmgr, queue_name)
gmo = pymqi.GMO(Options = pymqi.CMQC.MQGMO_WAIT | pymqi.CMQC.MQGMO_FAIL_IF_QUIESCING)
gmo.WaitInterval = 1000
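
# Note: no MQGMO_SYNCPOINT is used, so each destructive get of a persistent
# message is committed individually by the queue manager.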

while nb_messages_consumed < nb_messages:
    try:
        msg = queue.get(None, None, gmo)
        nb_messages_consumed += 1
    except pymqi.MQMIError as e:
        if e.reason == pymqi.CMQC.MQRC_NO_MSG_AVAILABLE:  # 2033
            # No message within the wait interval; keep polling.
            pass
        else:
            raise

queue.close()
qmgr.disconnect()
t1 = time.time()
print(f"tps: {nb_messages_consumed/(t1-t0):.0f} nb_messages_consumed: {nb_messages_consumed}")

run results

> for i in {1..10}; do ./put_demo.py & done
tps: 385 nb_message_produced: 1000
tps: 385 nb_message_produced: 1000
tps: 383 nb_message_produced: 1000
tps: 379 nb_message_produced: 1000
tps: 378 nb_message_produced: 1000
tps: 377 nb_message_produced: 1000
tps: 377 nb_message_produced: 1000
tps: 378 nb_message_produced: 1000
tps: 374 nb_message_produced: 1000
tps: 374 nb_message_produced: 1000

> for i in {1..10}; do ./get_demo.py & done
tps: 341 nb_messages_consumed: 1000
tps: 339 nb_messages_consumed: 1000
tps: 95 nb_messages_consumed: 1000
tps: 82 nb_messages_consumed: 1000
tps: 82 nb_messages_consumed: 1000
tps: 82 nb_messages_consumed: 1000
tps: 82 nb_messages_consumed: 1000
tps: 82 nb_messages_consumed: 1000
tps: 82 nb_messages_consumed: 1000
tps: 82 nb_messages_consumed: 1000

get_demo.py updated version using syncpoint and batch commit

#!/usr/bin/env python3

import pymqi
import time
import os

queue_manager = 'QM1'
channel = 'DEV.APP.SVRCONN'
host = '127.0.0.1'
port = '1414'
queue_name = 'DEV.QUEUE.1'
conn_info = '%s(%s)' % (host, port)
nb_messages = 1000
commit_batch = 10
nb_messages_consumed = 0

t0 = time.time()
qmgr = pymqi.connect(queue_manager, channel, conn_info)
queue = pymqi.Queue(qmgr, queue_name)
gmo = pymqi.GMO(Options = pymqi.CMQC.MQGMO_WAIT | pymqi.CMQC.MQGMO_FAIL_IF_QUIESCING | pymqi.CMQC.MQGMO_SYNCPOINT)
gmo.WaitInterval = 1000

while nb_messages_consumed < nb_messages:
    try:
        msg = queue.get(None, None, gmo)
        nb_messages_consumed += 1
        if nb_messages_consumed % commit_batch == 0:
            qmgr.commit()
    except pymqi.MQMIError as e:
        if e.reason == pymqi.CMQC.MQRC_NO_MSG_AVAILABLE:  # 2033
            # No message within the wait interval; keep polling.
            pass
        else:
            raise

qmgr.commit()  # commit any final partial batch before closing
queue.close()
qmgr.disconnect()
t1 = time.time()
print(f"tps: {nb_messages_consumed/(t1-t0):.0f} nb_messages_consumed: {nb_messages_consumed}")

Thanks.

zwush
  • You need to provide more details to help answer this question. 1. What version of MQ is running on the queue manager you connect to? 2. What version of MQ are you using for the client-side libraries that pymqi loads (ex: `mqic.so`)? 3. Please post your reading app and your writing app. Is the queue that the app puts to and gets from the same QLOCAL, or do they exist on two different queue managers? Do you specify `MQPMO_SYNCPOINT` when you open the queue for PUT and `MQGMO_SYNCPOINT` when you open the queue for GET? – JoshMc Feb 07 '20 at 14:44
  • The links you posted show just doing one put and one get inside a connect-open-get/put-close-disconnect pattern. Please tell us whether you run that in a loop, or whether you put the loop round the put/get. Better still, post your code in the question so that it is clear to us. – Morag Hughson Feb 07 '20 at 20:12
  • @MoragHughson Thank you for your reply. I just added a loop around the put/get; the rest of the code is exactly the same as the samples from the links. – zwush Feb 07 '20 at 21:47
  • @JoshMc 1. & 2. I am using MQ v9.1 for both the QM and the client. 3. I am using exactly the same code as the samples from the links, except that I added a loop around the put/get. Both puts and gets target the same QM / QLOCAL. I didn't specify `MQPMO_SYNCPOINT` or `MQGMO_SYNCPOINT`. Thanks. – zwush Feb 07 '20 at 21:53
  • I think where you put the loop is key: is it really just around the "put", or is it around the connect/disconnect? If the latter, then move it to the former. In many cases adding syncpoint to the puts and gets and `qmgr.commit` after even just 1 message can improve performance, but with that being said, IBM did add some efficiencies for apps that do not use sync point with persistent messages. You can gain more by batching up more than 1 message per commit. I still think providing a working example of your put and get app will help all to better understand. – JoshMc Feb 07 '20 at 23:48
  • @zwush - given that the most important part of your code is the bit you haven't shown us, it would be much better if you added your ACTUAL code to the question, rather than a link to a sample that is somewhat like the code you are using. Also, you must have made other changes in order to be able to get the next message and not a message with the same message id. We really need to see YOUR code. – Morag Hughson Feb 09 '20 at 22:49
  • @MoragHughson I have put my code above. Thanks. – zwush Feb 10 '20 at 09:39
  • Are you running `put_demo.py` and `get_demo.py` at the same time or do you wait for put to finish before starting get? – JoshMc Feb 10 '20 at 09:55
  • I ran the put and the get separately. – zwush Feb 10 '20 at 10:18
  • Persistent messages put outside a unit of work are forced to commit after each message. This can cause lock contention with multiple getters. – JoshMc Feb 10 '20 at 10:38
  • Please try adding `MQGMO_SYNCPOINT` to your `gmo` and `qmgr.commit()` to your loop after each get. Then try increasing the number of messages you get before each commit, ex 10 or 50. – JoshMc Feb 10 '20 at 10:39
  • @JoshMc Thanks for your suggestion. The GET performs much better with MQGMO_SYNCPOINT and a batch commit. I tried 10 and the tps is several times higher. – zwush Feb 10 '20 at 10:59
  • Any improvement with a commit after just 1 message? – JoshMc Feb 10 '20 at 13:06
  • @JoshMc No, in fact it's 20% worse with 1 commit per message + `MQGMO_SYNCPOINT` than without `MQGMO_SYNCPOINT`. If I understood correctly, they both commit for each message; I don't get why there is such a difference. – zwush Feb 10 '20 at 13:13
  • Not sure why it is worse. How about batching or single puts with `MQPMO_SYNCPOINT`? – JoshMc Feb 10 '20 at 13:40
  • @JoshMc Same for the PUT. Batching is better but single commit with `MQPMO_SYNCPOINT` is worse. – zwush Feb 10 '20 at 13:56
  • Would you mind showing your updated example with syncpoint and commit for single and batch? – JoshMc Feb 10 '20 at 14:40
  • I added the updated example to the post. – zwush Feb 10 '20 at 14:58
  • When you did the single-message commit, was this line not present: `if nb_messages_consumed % commit_batch == 0:`? I would assume that calculation would not account for the 20% difference, but I just wanted to check. Is the QM local, and do you actually connect to 127.0.0.1? Or is the QM on another host? – JoshMc Feb 10 '20 at 16:06
  • No, I set commit_batch to 1 for the single-commit case. I will give it another try without that line. The QM is local, running in a Docker container. – zwush Feb 10 '20 at 16:22

0 Answers