
I am using a Collector node in my message flow. It is configured to collect 50 messages or to wait for 30 seconds. Under load testing, WebSphere MQ sometimes reports that a long-running transaction has been detected, and the PID corresponds to the PID of the application's execution group. The question is: is it possible that the Collector node does not commit its internal transaction while waiting for the messages or for the timeout to expire?

  • To answer your question: yes, it does not commit until it times out or reaches the count. See https://www.ibm.com/support/knowledgecenter/en/SSMKHH_10.0.0/com.ibm.etools.mft.doc/ac37820_.htm – JoshMc Sep 07 '19 at 07:29
  • @JoshMc, yeah, please post it. Actually, I didn't understand from the link provided that the transaction does not end until all messages have been received, but I trust your experience with WebSphere – gisly Sep 09 '19 at 06:55

2 Answers


Transactionality is specified on the MQInput node. This is described on the IIB v10 Knowledge Center page Developing integration solutions > Developing message flows > Message flow behavior > Changing message flow behavior > Configuring transactionality for message flows > Configuring MQ nodes for transactions:

  • If you set the property to Yes (the default option): if a transaction is not already inflight, the node starts a transaction.

The Collector Node does not commit until it times out or reaches the count. See the IIB v10 KC page Reference > Message flow development > Built-in nodes > Collector node

All input messages that are received under sync point from a transaction or thread by the Collector node are stored in internal queues. Storing the input messages under sync point ensures that the messages remain in a consistent state for the outgoing thread to process; such messages are available only at the end of the transaction or thread that propagates the input messages.

A new transaction is created when a message collection is complete, and the completed collection is propagated to the next node.
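
To make the timing concrete, here is a minimal conceptual sketch in Python of the collect-or-expire behaviour described above. It is not IIB internals: get_message is a hypothetical callable, and the count of 50 and the 30-second expiry are simply the values from the question. The point is only that nothing is committed until the loop ends.

    import time

    def collect(get_message, target_count=50, expiry_seconds=30):
        """Conceptual model only: messages are held under sync point
        (uncommitted) until either the target count is reached or the
        collection expiry elapses. get_message is a hypothetical callable
        returning the next message or None."""
        held = []                        # read under sync point, not yet committed
        started = time.monotonic()
        while len(held) < target_count:
            remaining = expiry_seconds - (time.monotonic() - started)
            if remaining <= 0:
                break                    # expiry reached: propagate what we have
            msg = get_message(timeout=remaining)
            if msg is not None:
                held.append(msg)         # the unit of work stays open the whole time
        # only here does the transaction end and the collection propagate
        return held

This is why the queue manager can see the execution group's unit of work as long-running while a collection is still filling.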

JoshMc

Whenever you configure a node (of those eligible per the IBM documentation) to work under a transaction, it does not commit until the unit of work completes. In your case, since up to 50 messages (if they arrive within 30 seconds) are collected in one unit of work, the message flow containing the Collector node and all the other nodes in that flow commits only once all 50 messages have been successfully processed. During this time the queue manager has to maintain the in-flight state in its recovery logs, which is why the log configuration described below may need to be increased. Any large unit of work can cause this issue, regardless of which node is used.
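
As a rough illustration of how much log data such a unit of work can hold open, here is a small Python sketch; the message size and overhead figures are assumptions for illustration, not values from the question or from IBM documentation.

    # Illustrative sizing only - message size and overhead are assumptions.
    messages_per_collection = 50        # Collector node quantity from the question
    avg_message_bytes = 256 * 1024      # assumed average message size (256 KB)
    log_overhead_factor = 1.3           # assumed allowance for log record headers

    open_uow_bytes = messages_per_collection * avg_message_bytes * log_overhead_factor
    print(f"~{open_uow_bytes / (1024 * 1024):.1f} MB of log data held, "
          f"for up to 30 seconds, per collection")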

Since your issue involves an MQ long-running transaction, ensure the queue manager has enough log space for transaction handling.

To increase the MQ log space, edit the qm.ini file at the path below and increase the primary and secondary log file counts.

        ==> IBM\WebSphere MQ\qmgrs\QMNAME\qm.ini

Below are the settings you have to increase. By default they are 3 and 2. Ensure you have enough disk space for whatever values you increase them to, and restart your queue manager once the qm.ini file has been updated.

               Log:
                  LogPrimaryFiles=3
                  LogSecondaryFiles=2

Link to the MQ log configuration documentation: https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.0.0/com.ibm.mq.con.doc/q018710_.htm
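
For comparison, the total recovery log capacity is roughly (LogPrimaryFiles + LogSecondaryFiles) × LogFilePages × 4096 bytes. The sketch below assumes LogFilePages=4096; check the actual value in your own qm.ini, as the default varies by platform and MQ version.

    # Hedged capacity estimate - LogFilePages is an assumption here,
    # read the real value from your qm.ini.
    log_primary_files = 3       # LogPrimaryFiles (default)
    log_secondary_files = 2     # LogSecondaryFiles (default)
    log_file_pages = 4096       # assumed LogFilePages
    page_bytes = 4096           # each MQ log page is 4 KB

    total_bytes = (log_primary_files + log_secondary_files) * log_file_pages * page_bytes
    print(f"Total log capacity: ~{total_bytes / (1024 * 1024):.0f} MB")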

Hope this helps.

Rohan
  • Thanks, that's what we've done. But the question was about whether it's the collector node which causes the long-running transaction – gisly Sep 09 '19 at 06:57