
In my solution there are two separate pipelines handling incoming requests: one handles user requests and the other handles file uploads.

There is a chance that two concurrent requests may act on the same Cosmos DB document. To handle this, I am thinking of implementing optimistic concurrency using ETags.

The microservice in each pipeline (the one that posts the message to the queue) first fetches the ETag from the Cosmos collection, then posts the message along with the latest ETag to its respective queue (Azure Event Hubs). The queue processor then verifies that the ETag still matches before updating the Cosmos document, roughly as in the sketch below.
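For context, here is a minimal sketch of the conditional update I have in mind in the queue processor, using the Python azure-cosmos SDK. The account, database, container, and function names are placeholders, not my actual code:

```python
from azure.cosmos import CosmosClient
from azure.core import MatchConditions

# Placeholder connection details.
client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("<db>").get_container_client("<container>")

def apply_update(doc_id: str, partition_key: str, etag_from_message: str, changes: dict):
    """Update the document only if its ETag still matches the one carried in the message."""
    doc = container.read_item(item=doc_id, partition_key=partition_key)
    doc.update(changes)
    # The If-Match condition makes Cosmos reject the write with 412 (Precondition Failed)
    # if the document was modified after the ETag was captured.
    container.replace_item(
        item=doc_id,
        body=doc,
        etag=etag_from_message,
        match_condition=MatchConditions.IfNotModified,
    )
```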

If a 412 (Precondition Failed) occurs during ETag verification,

should the queue processor retry on its own, by fetching a new ETag from the Cosmos collection?

(or)

should the queue processor instead put the message back on the queue with the freshly fetched ETag, so it gets reprocessed?

What is the recommended way to implement retries for 412 scenarios?
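To make option (a) concrete, continuing the sketch above (same `container` and imports), the in-process retry I have in mind would look roughly like this; the retry budget and helper name are assumptions:

```python
from azure.cosmos import exceptions

def process_with_retry(doc_id: str, partition_key: str, etag_from_message: str,
                       changes: dict, max_attempts: int = 3) -> bool:
    """Option (a): on 412, re-read the document to get the fresh ETag and retry in-process."""
    etag = etag_from_message
    for attempt in range(max_attempts):
        doc = container.read_item(item=doc_id, partition_key=partition_key)
        if attempt > 0:
            # After a 412, fall back to the ETag just read instead of the one in the message.
            etag = doc["_etag"]
        doc.update(changes)
        try:
            container.replace_item(
                item=doc_id,
                body=doc,
                etag=etag,
                match_condition=MatchConditions.IfNotModified,
            )
            return True
        except exceptions.CosmosHttpResponseError as err:
            if err.status_code != 412:
                raise
            # Another writer won the race; loop and try again with the latest ETag.
    return False  # out of attempts: dead-letter or re-enqueue the message (option b)
```

Option (b) would be the `return False` branch above: instead of looping, the processor would immediately re-publish the message with the newly read ETag and let it be handled on a later pass.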

  • Event Hubs is not a queue. It is a persistent stream that can be read in a forward-only manner and from which you cannot remove a message. It does not sound like a good fit for this scenario. You may want to consider Service Bus as an alternative. – Jesse Squire Dec 03 '22 at 21:26
  • @JesseSquire, consider it as Service Bus, but I need to know the best way of handling 412 scenarios. – pingpong2020 Dec 04 '22 at 17:04

0 Answers