
I have an Azure function app triggered by an HttpRequest. The function app reads the request, tosses one copy of it into a storage table for safekeeping and sends another copy to a queue for further processing by another element of the system. I have a client running an ApacheBench test that reports approximately 148 requests per second processed. That rate of processing will not be enough for our expected load.

My understanding of function apps is that they should spawn as many instances as needed to handle the load sent to them. But this function app might not be scaling out quickly enough, as it's only handling those 148 requests per second. I need it to handle at least 200 requests per second.

I’m not 100% sure the problem is on my end, though. In analyzing the performance of my function app I found a LOT of 429 errors. What I found online, particularly https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-request-limits, suggests that these errors could be due to too many requests being sent from a single IP. Would several ApacheBench 10K and 20K request load tests within a given day cause the 429 error?

However, if that’s not it, if the problem is with my function app, how can I force my function app to spawn more instances more quickly? I assume this is the way to get more throughput per second. But I’m still very new at working with function apps so if there is a different way, I would more than welcome your input.

Maybe the Premium app service plan that’s in public preview would handle more throughput? I’ve thought about switching over to that and running a quick test but am unsure if I’d be able to switch back?

Maybe EventHub is something I need to investigate? Is that something that might increase my apparent throughput by catching more requests and holding on to them until the function app could accept and process them?

Thanks in advance for any assistance you can give.

Denise

2 Answers


You don't provide much context about your app, but here are a few steps you can take to improve it:

  1. If you want more control, you need to use an App Service plan with Always On enabled to avoid cold starts. You will also need to configure autoscaling, since in this plan you are responsible for scaling and autoscale is not enabled by default.

  2. Your Azure Function must be fully async. You have external dependencies, so you don't want to block a thread while calling them.

  3. Look at the limits. You can tweak them using host.json (see the sketch after this list).
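
For point 3, a minimal host.json sketch of the HTTP concurrency settings in the v2 runtime; the values here are illustrative, not a tuned recommendation:

```json
{
  "version": "2.0",
  "extensions": {
    "http": {
      "maxOutstandingRequests": 200,
      "maxConcurrentRequests": 100,
      "dynamicThrottlesEnabled": true
    }
  }
}
```

maxOutstandingRequests caps how many requests may be queued or in flight at once (requests beyond that are rejected with 429), maxConcurrentRequests caps how many HTTP-triggered functions run in parallel on an instance, and dynamicThrottlesEnabled lets the host return 429 on its own when it considers itself overloaded.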

A 429 error means the function is too busy to process your request, so probably when you are writing to the table you are not using async and are blocking the thread.
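
To illustrate the async point, here is a minimal sketch (not the poster's actual code) of an HTTP-triggered function writing to a table and a queue through IAsyncCollector<T> and AddAsync rather than the blocking ICollector<T>.Add. The function name, binding targets, and the Payload type are assumptions made up for the example:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class IngestFunction
{
    // POCO written to Table storage; PartitionKey and RowKey are required by the binding.
    public class Payload
    {
        public string PartitionKey { get; set; }
        public string RowKey { get; set; }
        public string Content { get; set; }
    }

    [FunctionName("Ingest")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        [Table("payloads")] IAsyncCollector<Payload> outTable,
        [Queue("payloads")] IAsyncCollector<string> outQueue,
        ILogger log)
    {
        // Read the request body without blocking a thread-pool thread.
        string body = await new StreamReader(req.Body).ReadToEndAsync();

        var payload = new Payload
        {
            PartitionKey = "ingest",
            RowKey = Guid.NewGuid().ToString(),
            Content = body
        };

        // AddAsync is awaited instead of calling the blocking Add.
        await outTable.AddAsync(payload);
        await outQueue.AddAsync(body);

        return new OkResult();
    }
}
```

The only structural difference from the ICollector version discussed in the comments below is that the collector calls are awaited, so the thread is released while the storage writes are in flight.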

Vova Bilyachat
  • I wasn't sure how much to provide. The function itself really does nothing else but write to the table and queue. However, I will definitely look into making sure it's fully async. And thank you so much for your assistance! – Denise May 10 '19 at 12:09
  • @Denise then post your code. I think the issue is with async and the plan. If you are using sync, writing to a blob and a queue will usually take some time, and if you are blocking then that would explain why it is so slow – Vova Bilyachat May 10 '19 at 12:23
  • Thanks! Actually, based on your first comment I went looking at the async of my code. What I had I've copied below. But I just found that in addition to the ICollector (which you can see below is what I had used to refer to the table and the queue) there's also an IAsyncCollector. Seems that if I use that object type and the AddAsync method (i.e., outTable.AddAsync(thisPayload, CancellationToken cancellationToken = default(CancellationToken));) then maybe that would be the way to go. So thanks for sending me down that path!! :-) – Denise May 10 '19 at 14:44
  • Seems comments are length limited... so I'll just include the signature and the two lines where I add to the table and queue: public static async Task Run(HttpRequest req, ICollector outTable, ICollector outQueue, ILogger log) // get request body, create a couple of GUIDs and strings that are needed // build the thisPayload and Content JSON outTable.Add(thisPayload); outQueue.Add(Content); – Denise May 10 '19 at 14:46
  • Could not get that to format right so I don't know if it's going to be helpful at all. But THANK YOU VERY MUCH for sending me down this path as it feels right. Thanks! – Denise May 10 '19 at 14:48
  • @Denise to insert code you would need to update the question. Yes, you must always use IAsyncCollector since it's async. – Vova Bilyachat May 10 '19 at 22:09
  • @Denise if you still need my help, update the question; if you are all good now, please accept the answer :) thank you – Vova Bilyachat May 10 '19 at 22:11
  • Does that mean clicking the green checkmark? If so, I just did. If there's something else I need to do, please let me know. – Denise May 13 '19 at 13:46

Function apps work very well and scale as advertised. The issue could be that the requests come from a single IP and Azure is treating them as a DDoS attack. You can do the following:

Azure DevOps Load Test

You can load test using one of the Azure services; I am quite sure they have better criteria for handling IPs: Azure DevOps Load Test.

Provision a VM in Azure

The way I normally do it is to provision a VM (Windows 10 Pro) in Azure and use JMeter to run the load test. I have used this method and it works fine. You can provision a couple of VMs and subdivide the load between them.

Use professional Load testing services

If possible, you can use services like Loader.io. They use sophisticated algorithms to run the load test and provision a bunch of VMs to run the same test.

Use Application Insights

If you are not already, you should be using Application Insights to get a better look from the server's perspective. Go to the Live Metrics Stream and see how many instances are provisioned to handle the load test. You can easily look into any events and error logs that arise and investigate, and you can deep-dive into each associated dependency to pinpoint the problem.

Imran Arshad
  • Thank you! I was wondering if that might be part of the issue. Thank you for the testing suggestions. – Denise May 10 '19 at 12:10