
I have a project with one API (httpTrigger) function and one queueTrigger.

When jobs are being processed by the queueTrigger, the API becomes slow/unavailable. It seems my Function App only accepts one invocation at a time.

Not sure why. Must be a setting somewhere.

host.json:

{
  "version": "2.0",
  "extensions": {
    "queues": {
      "batchSize": 1,
      "maxDequeueCount": 2,
      "newBatchThreshold": 0,
      "visibilityTimeout": "00:01:00"
    }
  },
  "logging": ...,
  "extensionBundle": ...,
  "functionTimeout": "00:10:00"
}

The batchSize is set to 1 because I only want one job to process at a time. But that shouldn't affect my API, should it? Isn't that setting only for the queue trigger?
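(If I read the storage queue settings right, per-instance queue concurrency works out to batchSize + newBatchThreshold = 1 + 0 = 1, which is exactly what I want for the queue, just not for HTTP.)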

function.json for the API:

{
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "route": "trpc/{*segments}"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ],
  "scriptFile": "../dist/api/index.js"
}

function.json for the queueTrigger:

{
  "bindings": [
    {
      "name": "import",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "process-job",
      "connection": "AZURE_STORAGE_CONNECTION_STRING"
    },
    {
      "type": "queue",
      "direction": "out",
      "name": "$return",
      "queueName": "process-job",
      "connection": "AZURE_STORAGE_CONNECTION_STRING"
    }
  ],
  "scriptFile": "../dist/process-job.js"
}

Other settings in Azure that may be relevant:

FUNCTIONS_WORKER_PROCESS_COUNT = 4

Scale-out options in the Azure portal may also be relevant (sorry, the screenshot was in Swedish).


Update: I tried raising the maximum burst to 8 and switching to dynamic concurrency.

No success.

My feeling is that the jobs occupy 100% of the CPU and the API then becomes slow or times out, regardless of concurrency settings.
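To illustrate what I mean (a hypothetical sketch, not my actual job code), a handler like this blocks the Node.js event loop, so HTTP invocations served by the same worker process stall until it finishes:

// Hypothetical CPU-bound queue handler: the synchronous loop never yields
// back to the event loop, so this worker process can't serve httpTrigger
// invocations until the job completes.
import { AzureFunction, Context } from "@azure/functions";

const queueTrigger: AzureFunction = async function (context: Context, importItem: unknown): Promise<void> {
  let acc = 0;
  for (let i = 0; i < 1_000_000_000; i++) {
    acc += i % 7; // stand-in for real CPU-heavy processing
  }
  context.log(`job done: ${acc}`);
};

export default queueTrigger;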

Joe

2 Answers


Firstly, I would suggest you put each of your functions in its own Function App so that they're isolated and one can't impact the other. Then you don't have to mess with settings at all.

If you're a glutton for punishment and are resolute in keeping these in the same Function App, then I have just one comment:

FUNCTIONS_WORKER_PROCESS_COUNT should be set to a lower value, not a higher one, if you think your processes are exhausting the underlying VM's resources. This is the limit the host reaches before it starts a new instance: if it is set to a low number, you'll get more instances rather than your existing instances becoming oversaturated with work.
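For example, for local testing the equivalent goes in local.settings.json (a sketch; in Azure it is an application setting on the Function App, and the value here is just illustrative):

{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "FUNCTIONS_WORKER_PROCESS_COUNT": "1"
  }
}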

Dean MacGregor

If your queue trigger runs multiple invocations, your function may be hitting its concurrency limit. You are running multiple functions in a single Function App and have restricted scaling to a single instance, so under the default concurrency behaviour both functions run on the same instance. Because the queue trigger's batch size is set to 1, queue processing takes time and drags down the performance of everything else on that instance. You can enable dynamic concurrency in your Function App, so that invocations scale dynamically to match your triggers' demands.

Add this setting in your host.json:

{
  "version": "2.0",
  "concurrency": {
    "dynamicConcurrencyEnabled": true,
    "snapshotPersistenceEnabled": true
  }
}
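Note that, per the documentation, dynamic concurrency currently applies to the Queue storage, Blob storage, and Service Bus triggers; HTTP request concurrency is not governed by it.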

As I have enabled dynamic concurrency, I do not need to set the batch size or the other queue trigger settings, as they are ignored.


As you have set the batch size to 1, your queue trigger uses static concurrency, so you need to configure the concurrency (e.g. MaxConcurrentSessions) yourself for your Function App to scale with the triggers.

You can also increase the number of worker processes with this setting:

FUNCTIONS_WORKER_PROCESS_COUNT 


And try increasing maxDequeueCount in your host.json; this setting determines the number of times a message can be dequeued before it is moved to the poison queue, and setting it too low can hurt your function's throughput.
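For example (the value is illustrative; 5 is the documented default):

{
  "version": "2.0",
  "extensions": {
    "queues": {
      "maxDequeueCount": 5
    }
  }
}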

Also, try scaling your Function App out to more than one instance and run the functions again.

Additionally, you can visit Diagnose and solve problems in your Function App and select Availability and performance to get insights into its performance.


Refer to this MS document: Concurrency in Azure Functions | Microsoft Learn

SiddheshDesai
  • I tried changing to dynamic concurrency. No improvement. Where exactly do I set MaxConcurrentSessions? And why do you suggest changing maxDequeueCount while also suggesting dynamic concurrency? From my understanding, no queue options are necessary when using dynamic concurrency? – Joe Apr 14 '23 at 21:28
  • MaxConcurrentSessions is not directly supported in host.json for the queue trigger. You need to set WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT to the maximum number of instances you want your Function App to scale to, and set maxConcurrentCalls to the same value; refer here: https://stackoverflow.com/questions/64471610/azure-servicebus-maxconcurrentcalls-totally-ignored# Also, did you check Diagnose and solve problems? Does it give any insight? – SiddheshDesai Apr 16 '23 at 09:00