
When publishing a large number of events to a topic (where the retry policy and time-to-live are in the minutes), many fail to get delivered to the subscribed functions. Does anyone know of any settings or approaches to ensure scaling reacts quickly enough to avoid dropping them?

I am creating an Azure Function app that essentially passes events to an Event Grid topic at a high rate, while other functions subscribed to the topic handle the events. These events are meant to be short-lived and not persist longer than a set number of minutes. Ideally, I want to see the app scale to handle the load without dropping events. The overall goal is for each event to trigger an outbound call to my own API endpoint, to test performance/load.
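For illustration, a minimal sketch of one such subscriber function in the Python programming model (the target URL and names are hypothetical, and a function.json with an eventGridTrigger binding is assumed):

```python
import logging

import azure.functions as func
import requests

# Reuse one session across invocations so outbound connections are
# pooled, per the "manage connections" guidance for Azure Functions.
session = requests.Session()

# Hypothetical endpoint of the API under load test.
TARGET_URL = "https://example.com/api/load-test"


def main(event: func.EventGridEvent):
    logging.info("Handling Event Grid event %s", event.id)
    # Each delivered event triggers one outbound API call.
    response = session.post(TARGET_URL, json=event.get_json())
    response.raise_for_status()
```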

I have reviewed documentation on MSDN and in other locations, but not much fits my scenario (most of it talks in terms of incoming events rather than outbound HTTP calls).

For scaling, I have looked into the host.json settings for HTTP (as there are none for Event Grid triggers, and Event Grid triggers appear to be similar to HTTP triggers), and setting those seems to have made some improvements.
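The settings in question look roughly like this (the values here are illustrative, not a recommendation):

```json
{
  "version": "2.0",
  "extensions": {
    "http": {
      "maxOutstandingRequests": 200,
      "maxConcurrentRequests": 100,
      "dynamicThrottlesEnabled": true
    }
  }
}
```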

The end result I expect is that every publish to the topic endpoint gets delivered to a function and executed, with a low delivery-failure/drop rate.

What I am seeing is that when publishing many events to the topic (at a consistent rate), the majority of events get dead-lettered/dropped.
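For context, the subscription uses a short TTL, limited delivery attempts, and a dead-letter destination, along these lines (resource names and IDs are placeholders):

```bash
az eventgrid event-subscription create \
  --name load-test-sub \
  --source-resource-id $TOPIC_ID \
  --endpoint $FUNCTION_ENDPOINT \
  --event-ttl 5 \
  --max-delivery-attempts 3 \
  --deadletter-endpoint $STORAGE_BLOB_CONTAINER_ID
```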

1 Answer


The Consumption plan is limited by the computing power assigned to your function. In essence, there are limits up to which it can scale, and beyond that it becomes the bottleneck.

I suggest having a look at the limitations.

And here you can find some insights about the differences in computing power.

If you want to enable automatic scaling, or to scale out the number of VM instances, I suggest using an App Service plan. The cheapest option where scaling is supported is the Standard pricing tier.
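A sketch of what that could look like with the Azure CLI (all names are placeholders):

```bash
# Standard (S1) is the cheapest tier that supports scale-out.
az appservice plan create \
  --name my-plan --resource-group my-rg --sku S1

az functionapp create \
  --name my-func-app --resource-group my-rg \
  --plan my-plan --storage-account mystorageacct

# Optionally, autoscale the plan between 1 and 5 instances.
az monitor autoscale create \
  --resource-group my-rg \
  --resource my-plan --resource-type Microsoft.Web/serverfarms \
  --name my-autoscale --min-count 1 --max-count 5 --count 1
```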

kgalic
  • Thanks again @kgalic. I have reviewed the limitations link you provided previously and, from what I can tell, have implemented all the suggestions (managing connections, and modifying the HTTP settings in host.json). Based on what you have mentioned, it seems the Consumption plan has issues scaling to something like 600 events a second (I have tried lower rates and still see issues)? – user2395731 Apr 25 '19 at 16:06
  • In cloud computing, and especially with PaaS services like Azure Functions, it is hard to find where the border is. It depends on computing power, code complexity, etc. The way it works is that you run some tests, load tests, and see whether the results are within your latency requirements, as in your case. If you are not happy, that tells you something, right :) ? So you mitigate these issues by either choosing another service, potentially, in some cases, or by scaling up the existing one. – kgalic Apr 25 '19 at 16:28