
I have created an Azure durable function which calls about 8 activity functions. These activity functions create about 100 threads, which start HTTP requests. The data I receive has to be transformed, and this process needs some computing power. For example: locally, all functions need about 3 minutes until they are finished.

Now I want to publish my Azure function to Azure on the Consumption plan, to benefit from its advantages. The problem I have now is that the function takes more than 10 minutes, which exceeds the maximum execution time.

I do not want to use an App Service plan. I am looking for a way of increasing the core count with the scale controller, or some behaviour which makes the scale controller increase the performance.

If possible I do not want to change my code architecture. I thought about splitting up the durable function into smaller pieces and starting each function with an HTTP request manually. But since the functions interact with each other, this seems like a very big change to the code without knowing whether it would make the scale controller use more instances.

MJohnyJ
  • When you talk about 'the function takes more than 10 minutes' which function are you talking about? It would be fine for the orchestrator function to last longer than 10 minutes since the state is persisted to storage. It is only an issue if an activity function takes longer than 10 minutes. – Marc Sep 17 '19 at 07:19
  • Try to make the activity functions as small as possible (as @Marc mentioned, they are subject to the 10 minute timeout); think about having one http request per activity function. You can still start activities in parallel from the orchestrator function, then wait for them to finish and do your final processing. – ayls Sep 17 '19 at 08:16

1 Answer


In the Consumption plan you won’t be able to control scale directly. That said, you can at least control how many activities are executed on a single instance. If you set maxConcurrentActivityFunctions to something like 1, then even before the app has scaled out, only one activity will be processed at a time on each instance, so an activity isn’t sharing compute with too many other concurrent activity executions.
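As a minimal sketch, that setting lives in the durableTask section of host.json (the value of 1 here is just the example from above; tune it to your workload):

```json
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "maxConcurrentActivityFunctions": 1
    }
  }
}
```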

The overall scale-out is driven by the length of the activity queue. So it’s in your best interest to fan out to as many smaller activities as you can, rather than only a few big ones that themselves fan out internally. Using the setting above you should hopefully at least be able to dedicate more CPU to the activities that are executing.
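As a rough illustration of that fan-out (not your actual code), assuming the Python Durable Functions programming model and a hypothetical FetchAndTransform activity that handles a single HTTP request, the orchestrator could look something like this:

```python
import azure.durable_functions as df

def orchestrator_function(context: df.DurableOrchestrationContext):
    # Assumed input: the list of URLs that the ~100 threads were fetching before
    urls = context.get_input()

    # Fan out: one small activity per URL, so the activity queue length
    # (which drives scale-out) grows with the amount of work
    tasks = [context.call_activity("FetchAndTransform", url) for url in urls]

    # Fan in: wait for all activities, then do the final processing
    results = yield context.task_all(tasks)
    return results

main = df.Orchestrator.create(orchestrator_function)
```

Each small activity then stays well under the 10-minute limit, and the longer activity queue gives the scale controller a clearer signal to add instances.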

jeffhollan
  • I am afraid the solution did not help but your comment made me understand what Azure functions with a consumption plan are for. I switched from using the Azure functions to using a docker container, timed by an azure logic app. I could scale that container very well with 4 vcores and now it does exactly what I what, reliable and pretty cheap since it is always dynamically created. That was the final solution for me, thank you! – MJohnyJ Sep 30 '19 at 09:22