How can I control the distribution of requests inside a pod? For example: I have one pod with one container that runs a Node.js "Hello world" server with a 10-second sleep per request. At first, without any scaling, I just want to hold other incoming requests until the container has finished processing the current request.
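For context, the container runs roughly the following (a minimal sketch assuming plain Node.js `http` on port 8080; the real handler may differ, but the 10-second delay is the important part):

```js
// Minimal sketch of the workload: a "Hello world" server where each
// request takes ~10 seconds before a response is sent.
const http = require('http');

const server = http.createServer((req, res) => {
  // Simulate 10 seconds of processing before responding.
  setTimeout(() => {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Hello world\n');
  }, 10000);
});

server.listen(8080, () => {
  console.log('Listening on :8080');
});
```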
I'm trying to implement a simple Function-as-a-Service platform with Kubernetes.