Before I even start, I'll say that I was not 100% sure whether SO is the appropriate SX for this question. Let me know if I should ask this on some other SX.
The question is about FaaS in general, but if you can better explain this in the context of a particular FaaS platform/provider, that's great as well.
I'm currently reading up on serverless computing (FaaS to be more specific) and trying to get myself somewhat comfortable with the subject.
Now almost everywhere I turn, I encounter the following statements about FaaS:
1) Most FaaS platforms support down-to-zero scaling, i.e., a function that receives no traffic runs on zero instances;
2) FaaS providers charge their users based on their function execution time (usually measured in ms);
3) Potential cold starts (i.e., spinning up a new function instance instead of reusing a warm one) are an issue in FaaS, as they considerably degrade the performance of your application.
Points 1 and 2 are considered benefits - you get exactly what you need (including nothing at all, if applicable) and pay for exactly what you get.
Point 3 is considered a drawback - the request takes considerably more time to complete. I've seen authors describe cold starts as a sign that FaaS platforms are not yet mature. I've seen practitioners say that they set up periodic requests just to keep their functions from becoming inactive and "going under", which would trigger a cold start the next time the function is called (something like the sketch below).
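For concreteness, here's roughly how I picture the cold start and the keep-warm workaround (a minimal Python sketch in the style of AWS Lambda; the keep_warm event field is something I made up for illustration, not a real platform feature):

```python
import time

# Module-level code runs once per function instance - this is the
# cold start cost (heavy imports, SDK clients, config loading, ...).
_start = time.time()
# ... e.g., import big libraries, open database connections ...
INIT_SECONDS = time.time() - _start

def handler(event, context):
    # Keep-warm workaround: a scheduled ping short-circuits here,
    # keeping the instance alive without doing any real work.
    if event.get("keep_warm"):
        return {"status": "warm"}
    # A real request: on a warm instance, INIT_SECONDS was already
    # paid for by some earlier invocation; on a cold one, this
    # invocation pays it right now.
    return {"status": "ok", "init_seconds": INIT_SECONDS}
```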
My question is - why are cold starts viewed as undesirable instead of as a trade-off?
What I mean is, considering that the user pays for execution time in FaaS, wouldn't it usually be in their best interest to avoid keeping warm but idle function instances around? To me it looks like a cost vs. high availability decision. Do I misunderstand something? Does a warm but idle function instance not count towards one's execution time? Even if it doesn't:
a) isn't it undesirable from the providers' perspective (having to allocate resources that are neither used nor paid for)?
b) sending periodic keep-warm requests (as mentioned above) surely does cost you something, right? (I sketch a rough estimate below.)
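For (b), a back-of-envelope calculation (the prices are illustrative placeholders roughly in the ballpark of published per-request and per-GB-second rates, not any particular provider's actual pricing):

```python
# Keep-warm ping every 5 minutes, all month long:
pings_per_month = (60 / 5) * 24 * 30            # = 8640 invocations
price_per_request = 0.20 / 1_000_000            # e.g., $0.20 per 1M requests
price_per_gb_second = 0.0000166667              # e.g., ~$0.0000167 per GB-second
gb_seconds = pings_per_month * 0.128 * 0.1      # 128 MB instance, ~100 ms per ping

cost = pings_per_month * price_per_request + gb_seconds * price_per_gb_second
print(f"~${cost:.4f}/month")                    # tiny, but not zero
```

So keeping a function warm looks very cheap in absolute terms, but you're still paying for availability rather than for useful work - which is exactly why it looks like a trade-off to me rather than a plain drawback.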