
We have a streaming application (IBM InfoSphere Streams) that consumes from an input_queue (JMS queue) and writes its output to an output_queue (JMS queue). We currently handle in excess of 300k messages per hour, but at times the application gets clogged (due to the limited resources available): messages queue up internally and the output_queue no longer receives them at the same rate. To fix this, we have no option but to inhibit ingestion from the input_queue and restart the application with ingestion re-enabled. During this restart the internally queued messages are lost, so we are looking for a way to recover them when Streams has to be stopped.

One approach we thought of is to have another queue inside Streams that stores each message as it is ingested and removes it once it has been posted to the output_queue. If Streams has to be stopped, we can replay the messages in this internal queue first and then resume processing messages from the input_queue.
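The buffering idea above can be sketched as a small "in-flight" store: record each message when it enters Streams, clear it only after it has been posted to the output_queue, and replay whatever remains after a restart. Below is a minimal sketch; the class and method names are ours (not a Streams or JMS API), and a real deployment would back the store with durable storage rather than an in-memory map:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical in-flight buffer for the approach described above.
// A message is recorded when Streams ingests it and removed only
// after it has been posted to the output_queue, so anything still
// in the buffer at shutdown is exactly what must be replayed.
public class InFlightBuffer {
    // LinkedHashMap preserves ingestion order, so replay keeps ordering.
    // In practice this map would be backed by disk or a persistent queue.
    private final Map<String, String> pending = new LinkedHashMap<>();

    public synchronized void onIngest(String messageId, String payload) {
        pending.put(messageId, payload);
    }

    public synchronized void onPostedToOutput(String messageId) {
        pending.remove(messageId);
    }

    // Messages to replay before resuming consumption from input_queue.
    public synchronized Collection<String> toReplay() {
        return new ArrayList<>(pending.values());
    }
}
```

On restart, the application would first feed `toReplay()` back through its processing graph, then re-enable ingestion from the input_queue.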

Now the question is: how efficient is this approach, and can anyone suggest workarounds? I have been told that posting to a queue over the network is an expensive operation, so what options do we have? Suggestions are most welcome. Thanks.

Sal
  • Can you provide more information on your setup? Here's how I understand the problem. You have this setup: MQ server (input_queue) ----> Streams application ----> MQ server (output_queue). When you say the application is clogged up, can you elaborate on where the bottleneck is? Is it on the MQ server or in the Streams application? If it's the Streams application getting congested, you solved the problem by restarting the Streams application. – Samantha Chan Jan 31 '17 at 19:33
  • However, with this approach, the messages that were already ingested by Streams are lost? The messages that are still on the input_queue remain there. You are trying to figure out how to solve the congestion problem in Streams and make sure no messages are lost? – Samantha Chan Jan 31 '17 at 19:35
  • It's not the MQ server, as it still has the messages queued up for processing. When Streams becomes congested, the ingestion rate drops drastically. So, to beat the uncertainty, we inhibit the queue from which Streams consumes the messages and then stop Streams; before starting Streams again we re-enable the queue. During this stop-and-start of Streams, the messages inside Streams get lost. – Sal Jan 31 '17 at 19:58
  • OK, I understand. I am thinking about the best way to save and restore messages that are consumed in Streams. However, I think the root of the problem is that your Streams application is congested. I recommend spending some time tuning the application to avoid the congestion problem in the first place. Streams is designed to run for a long time without a restart, and this stopping and restarting of the Streams application is not a best practice. – Samantha Chan Jan 31 '17 at 20:12
  • Have you seen this performance tuning guide? https://developer.ibm.com/streamsdev/2014/09/07/optimizing-streams-applications/ Do you know why your application is congested? – Samantha Chan Jan 31 '17 at 20:12

0 Answers