
I have a serverless function that receives orders, about 30 per day. This function depends on a third-party API to perform some additional lookups and checks. However, this external endpoint isn't 100% reliable, and I need to be able to store order requests if the other API is unavailable for a couple of hours (or more).

My initial thought was to split the function in two: the first part would receive orders and do some initial checks, such as validating the order, then post the request to a message queue or pub/sub system. On the other side, a consumer reads orders and tries to perform the API requests; if the API isn't available, the orders are posted back to the queue.
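Roughly what I have in mind, as a C# sketch using the in-process Azure Functions model (the queue name, the validation, and the `CallThirdPartyApiAsync` helper are placeholders):

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class OrderFunctions
{
    // Producer: receive the order over HTTP, validate it, enqueue it.
    [FunctionName("ReceiveOrder")]
    public static async Task<IActionResult> ReceiveOrder(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        [Queue("incoming-orders")] IAsyncCollector<string> queue)
    {
        string orderJson = await new StreamReader(req.Body).ReadToEndAsync();

        if (string.IsNullOrWhiteSpace(orderJson))   // stand-in for real validation
            return new BadRequestObjectResult("Invalid order.");

        await queue.AddAsync(orderJson);
        return new AcceptedResult();
    }

    // Consumer: try the third-party call; an unhandled exception returns the
    // message to the queue, and after the maximum dequeue count it is moved
    // to the poison queue.
    [FunctionName("ProcessOrder")]
    public static async Task ProcessOrder(
        [QueueTrigger("incoming-orders")] string orderJson)
    {
        await CallThirdPartyApiAsync(orderJson);    // hypothetical helper
    }

    private static Task CallThirdPartyApiAsync(string orderJson) =>
        Task.CompletedTask;                         // placeholder
}
```

As far as I understand, the Storage Queue trigger already retries a failed message a few times (five dequeues by default) before moving it to a poison queue, so a multi-hour outage would still need poison-queue handling or a longer visibility timeout.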

However, someone suggested simply using an Azure Durable Function for the requests and storing the current backlog in the function state, using the Aggregator pattern (especially since the API will be working fine 99.99% of the time). This would make the architecture a lot simpler.
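If I understand the suggestion correctly, the backlog would live in a durable entity, something like this sketch (class-based entity; `OrderBacklog` and its operations are names I made up):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Newtonsoft.Json;

// Durable entity holding the backlog of unprocessed orders in its state.
[JsonObject(MemberSerialization.OptIn)]
public class OrderBacklog
{
    [JsonProperty("pending")]
    public List<string> Pending { get; set; } = new List<string>();

    // Called for each incoming order.
    public void Add(string orderJson) => Pending.Add(orderJson);

    // Called when the API is reachable again; hands back and clears the backlog.
    public List<string> Drain()
    {
        var all = new List<string>(Pending);
        Pending.Clear();
        return all;
    }

    [FunctionName(nameof(OrderBacklog))]
    public static Task Run([EntityTrigger] IDurableEntityContext ctx)
        => ctx.DispatchAsync<OrderBacklog>();
}
```

A client function would then signal `Add` for each incoming order, and an orchestration would call `Drain` once the API responds again.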

What are the advantages and disadvantages of using one over the other, and am I missing any important considerations? I would appreciate any insight or other suggestions you have. Let me know if additional information is needed.

picklepick
  • If you expect that 3rd-party API to be available 99.9% of the time, then it is simpler for you to have a `retry policy` with exponential backoff (i.e., an increasing time interval between retry attempts). If you exceed X attempts, then you can put the order in a queue for later processing. I don't think a durable function is apt here, as it comes with its own list of constraints for orchestrator functions. – Anand Sowmithiran Jun 10 '22 at 08:55
  • If you expect the 3rd-party API to have much more frequent downtime, it is better to use the queue to store the unprocessed orders. Having said that, if the 3rd-party API produces only **transient errors**, then Durable Functions' built-in [retry options](https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.webjobs.durableorchestrationcontext.callactivitywithretryasync?view=azure-dotnet-legacy) can be used. – Anand Sowmithiran Jun 10 '22 at 09:22
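For reference, the built-in retry that the second comment links to looks roughly like this in the current in-process Durable Functions model (a sketch; the `OrderOrchestrator` and `CheckOrder` names are placeholders):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class OrderOrchestrator
{
    [FunctionName("OrderOrchestrator")]
    public static async Task Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        string order = context.GetInput<string>();

        var retry = new RetryOptions(
            firstRetryInterval: TimeSpan.FromSeconds(30),
            maxNumberOfAttempts: 10)
        {
            BackoffCoefficient = 2.0,                 // exponential backoff
            MaxRetryInterval = TimeSpan.FromHours(1), // cap the delay
        };

        // Each failed attempt of the "CheckOrder" activity is retried on
        // the schedule above; the orchestration waits durably in between.
        await context.CallActivityWithRetryAsync("CheckOrder", retry, order);
    }
}
```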

1 Answer


You could solve this problem with the Durable Task Framework, Azure Storage Queues, or Service Bus queues, but at your transaction volume, I think that's overcomplicating the solution.

If you're dealing with ~30 orders per day, consider one of the simpler solutions:

  • Use Polly, a well-supported .NET resilience and fault-handling library (first sketch after this list).
  • Write request information to your database, and have an Azure Functions timer trigger read it periodically and finish processing any orders that aren't marked complete (second sketch after this list).
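For the Polly option, a minimal sketch might look like this (Polly v7 syntax; the endpoint URL, retry count, and backoff are illustrative):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;

public static class OrderChecker
{
    private static readonly HttpClient Http = new HttpClient();

    // Retry up to 5 times on exceptions or non-success status codes,
    // with exponential backoff: 2s, 4s, 8s, 16s, 32s.
    private static readonly IAsyncPolicy<HttpResponseMessage> Retry =
        Policy
            .Handle<HttpRequestException>()
            .OrResult<HttpResponseMessage>(r => !r.IsSuccessStatusCode)
            .WaitAndRetryAsync(5, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

    public static Task<HttpResponseMessage> CheckAsync(string orderJson) =>
        Retry.ExecuteAsync(() =>
            Http.PostAsync("https://thirdparty.example.com/check",  // placeholder URL
                new StringContent(orderJson)));
}
```

Keep in mind that Polly only retries within a single invocation, so it covers transient blips; for an outage lasting hours you still want somewhere durable to park the order, which is what the second option gives you.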
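And a sketch of the timer-trigger option (runs every five minutes; `GetPendingOrdersAsync`, `CallThirdPartyApiAsync`, and `MarkCompleteAsync` are hypothetical stand-ins for your own data-access and API code):

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Timers;

public static class RetryPendingOrders
{
    // Sweep every 5 minutes; orders that still fail stay pending and are
    // picked up again on the next run, so hours of downtime are tolerated.
    [FunctionName("RetryPendingOrders")]
    public static async Task Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer)
    {
        foreach (string order in await GetPendingOrdersAsync())
        {
            try
            {
                await CallThirdPartyApiAsync(order);
                await MarkCompleteAsync(order);
            }
            catch (HttpRequestException)
            {
                // Third-party API still down; leave the order pending.
            }
        }
    }

    // Hypothetical helpers; replace with your own data access and API calls.
    private static Task<List<string>> GetPendingOrdersAsync() =>
        Task.FromResult(new List<string>());
    private static Task CallThirdPartyApiAsync(string order) => Task.CompletedTask;
    private static Task MarkCompleteAsync(string order) => Task.CompletedTask;
}
```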

The Durable Task Framework is great once you get into serious volume, but it has a non-trivial learning curve.

Rob Reagan