
I have a very basic architecture:

The Web API queues a message in a Queue, and a Worker (a background processing service) dequeues and processes the message.

The problem is that the Web API doesn't know when the message has been processed by the Worker.

Can the Worker notify the Queue that the message has been processed successfully, and the Queue then send a "Process Complete" event back to the Web API?

One solution I was thinking of:

After the Web API queues the message, it checks the message status every couple of seconds:

If the message status is "Peek-Lock", the message is still being processed.

If the message is not found in the Queue, it has been processed (successfully or not, it doesn't matter).
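The polling idea above can be sketched with a minimal, hypothetical in-memory stand-in for the queue (no Azure SDK involved): the Web API side polls on an interval until the Worker has removed the message.

```python
import threading
import time

# Hypothetical stand-in for "the message is still on the queue":
# the Event is set when the Worker finishes, i.e. the message is gone.
processed = threading.Event()

def worker(delay):
    """Simulates the Worker: after `delay` seconds the message is processed."""
    time.sleep(delay)
    processed.set()

def poll_until_done(interval=0.05, timeout=2.0):
    """Simulates the Web API polling the queue every `interval` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if processed.is_set():      # message no longer on the queue
            return True             # => it has been processed
        time.sleep(interval)        # still locked ("Peek-Lock"), keep waiting
    return False                    # gave up before completion

threading.Thread(target=worker, args=(0.2,), daemon=True).start()
print(poll_until_done())  # True once the worker finishes
```

As the answer below notes, this kind of polling ties up the requester and scales poorly with many in-flight messages, which is why a reply queue is usually preferred.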

But isn't there a pre-made solution for this from Microsoft?

Ron

1 Answer


Many architectures that need this type of reporting back handle it with a separate queue, going from the processor back to the requester. That's why there's actually a ReplyTo property on the BrokeredMessage object. In this approach the requester has its own queue that it is watching as well. When it creates a message, it sets the ReplyTo property and sends the message on. When the worker processes the message, it sends a completion message back to the requester using the queue path provided by the original message.
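A minimal sketch of that Request/Reply flow, using in-memory `queue.Queue` objects rather than Service Bus (the `reply_to` field here mirrors the role of BrokeredMessage's ReplyTo property; all names are illustrative):

```python
import queue

work_queue = queue.Queue()   # Web API -> Worker
api_replies = queue.Queue()  # Worker -> Web API (the "ReplyTo" queue)

def send_request(body):
    # The requester stamps each message with the path of its own reply queue.
    work_queue.put({"body": body, "reply_to": api_replies})

def process_one():
    msg = work_queue.get()
    result = msg["body"].upper()  # stand-in for the real work
    # The completion message goes back on whatever queue the request named.
    msg["reply_to"].put({"status": "complete", "result": result})

send_request("job-42")
process_one()
print(api_replies.get())  # {'status': 'complete', 'result': 'JOB-42'}
```

Because the reply queue travels with the message, the worker never needs to know in advance which front-end instance to answer; it simply replies to whatever queue the request named.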

Depending on your needs, you may have a single reply queue for your entire front end, or each instance may have its own queue. Note that in a distributed system with machines that can be transient, having each Web API front end maintain its own queue can introduce some complexity.

Usually this is done when the requester needs to know something is completed so it can communicate that in some manner. For example, a web request comes in to do processing. The request is put on the queue and processed in the back end, and a completion message is returned to the front end, where it's picked up and a notification is sent to the user (in some cases via SignalR, which with a backplane in place means you don't have to worry about which front-end server received the response message).

Other than communicating the completion either directly to the requester or via a queue, there is nothing that allows you to watch for completion of a message from another machine. Checking the message status won't help you, because that information won't change unless you are fetching a fresh reference to the message regularly, which will not scale well if you have a lot of messages you're dealing with.

MikeWo
  • Thanks Mike. So if I understand you correctly, I'll have one master queue between the Web API and all the Worker instances, and for every job (message) in the master queue I will have another sub-queue just for communication between the Web API and the specific Worker that processes the job? – Ron Oct 22 '15 at 06:55
  • One queue to handle the Web-API to Worker path. Then either a single queue that all Web-API servers are watching, or a queue per Web-API server if the specific originating Web-API server needs to know the process was completed. It depends on what the Web-API servers are doing with the response. – MikeWo Oct 22 '15 at 10:44
  • So for the simplest scenario I need two queues: 1 - for the Web API to post jobs into; 2 - for the Web API to watch for completion of a job. Did I get it right? If you have any external links to articles about it I will be thankful :) Thanks Mike – Ron Oct 22 '15 at 12:25
  • Yes, in the simplest case you would have two queues. The pattern is called the Request/Response messaging pattern. For an Azure specific example you can look at http://www.cloudcasts.net/devguide/Default.aspx?id=13050 . For more general info, you can look it up in the Enterprise Integration Patterns resources - http://www.enterpriseintegrationpatterns.com/patterns/messaging/RequestReply.html . – MikeWo Oct 26 '15 at 15:39
  • Can you elaborate on that "backplane", please? If a user's browser established a web socket to a specific server in a front-end cluster, the response queue needs to be listened to by that server and not the others, right? – UserControl Jun 19 '17 at 21:23
    @UserControl A backplane is used to help scale SignalR so that messages are sent to all the front end servers, then the one that has the actual connection with the browser can then send the response. See https://learn.microsoft.com/en-us/aspnet/signalr/overview/performance/scaleout-with-windows-azure-service-bus or https://learn.microsoft.com/en-us/aspnet/signalr/overview/performance/scaleout-with-redis for different options to use a back plane with SignalR. – MikeWo Jun 20 '17 at 11:48