With Azure Web Apps I can scale on a number of useful metrics such as disk I/O, network I/O, CPU, and memory usage. So far, the Azure Functions documentation seems to say that memory usage is the only metric you get for scaling on the dynamic service plan. From what I've read, there is also some additional undocumented magic that determines how it scales; I really wish it were documented fully.
If I have a function app whose workload sometimes uses a lot of CPU, sometimes a lot of disk or network I/O, and sometimes mostly RAM (or any mixture of those, or even just the first two), will Functions scaling still work?
More specifically, for my intended case: the Function uses a queue trigger, and the resource requirements vary from one execution to the next, but every job fits within the selected memory tier. Would the scaling take into account factors other than memory, such as CPU, I/O, the number of messages in the queue, or the number of messages being processed on one machine, in order to spread the load onto additional machines?
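For concreteness, here is roughly the shape of the function I have in mind (a minimal Python sketch, not our real app; the "jobs" queue name, the "AzureWebJobsStorage" connection setting, and the run_job body are placeholders):

```python
import azure.functions as func

app = func.FunctionApp()

def run_job(payload: str) -> None:
    # Stand-in for the real work: depending on the message this might be
    # CPU-heavy, download/upload-heavy, or memory-heavy, but it always
    # stays within the plan's memory limit.
    ...

# Placeholder queue name and storage connection setting.
@app.queue_trigger(arg_name="msg", queue_name="jobs",
                   connection="AzureWebJobsStorage")
def process_job(msg: func.QueueMessage) -> None:
    run_job(msg.get_body().decode("utf-8"))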
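```

The question is whether the scale controller would notice that an instance processing messages like these is saturated on CPU or I/O, even though memory never gets tight.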
My worry is that if the jobs use little memory but a lot of CPU or network bandwidth, I will end up with many jobs piled onto one machine, running slowly while they wait on those other resources, instead of the load being spread across multiple instances.
Our current solution/service only scales based on CPU, so Functions looks like swapping one problem for another similar one. Some jobs were taking forever to finish because they are I/O-bound and the service kept packing them onto a small number of worker bees. We actually wrote code that spins on the CPU to fake high usage, in order to suggest to the service that it should scale up, which is pretty crappy and wasteful.
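To illustrate, the hack boils down to something like this (a simplified Python sketch of the idea, not our actual code; io_bound_job is just a stand-in for the real work):

```python
import multiprocessing

def _burn_cpu(stop_event) -> None:
    # Tight loop doing pointless arithmetic so the host's CPU metric climbs
    # even though the real job is sitting in I/O waits.
    x = 0
    while not stop_event.is_set():
        x = (x * 31 + 7) % 1_000_003

def run_with_fake_cpu_load(io_bound_job) -> None:
    # Keep one core busy in a side process while the I/O-bound job runs,
    # purely to nudge a CPU-based autoscaler into adding instances.
    stop = multiprocessing.Event()
    burner = multiprocessing.Process(target=_burn_cpu, args=(stop,), daemon=True)
    burner.start()
    try:
        io_bound_job()  # the real, I/O-bound work
    finally:
        stop.set()
        burner.join(timeout=5)
```

The spin does nothing for the job itself; it only exists to push the CPU metric up, which is exactly the kind of waste I'd like to stop paying for.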