
This may be a silly question but I'm curious to know the answer:

If I am running a Kubernetes cluster on AWS (EKS), which autoscaling policy will take precedence: the Auto Scaling policy on the node group behind the load balancer, or the autoscaling policy on the pods themselves?

> Pod scaling is different from EC2 instance scaling. One scales pods (number of running containers) and the other scales the number of nodes in your cluster. Generally, you should not use an autoscaling policy for your ASG and instead use the [k8s cluster autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler). – jordanm Dec 30 '20 at 19:56

1 Answer


The k8s Cluster Autoscaler does not scale worker nodes based on CPU/memory utilization. It scales up when pods fail to schedule because no node has enough free CPU/memory. Pod autoscaling, by contrast, can be configured to add pods when CPU/memory usage crosses a threshold.
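As a sketch of that pod-level threshold, here is a minimal HorizontalPodAutoscaler manifest; the deployment name `web` and the 70% CPU target are illustrative assumptions, not from the question:

```yaml
# Hypothetical HPA: scales the "web" Deployment between 2 and 10
# replicas when average CPU utilization crosses 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web          # assumed deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```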

So in a workflow combining a pod autoscaler (e.g. the Horizontal Pod Autoscaler) with a node autoscaler (the k8s Cluster Autoscaler), the sequence is:

Load on the services increases -> pod CPU/memory threshold is crossed -> the pod autoscaler adds pods -> the new pods stay pending because no worker node has enough free CPU/memory -> the Cluster Autoscaler adds a node -> the pending pods are scheduled on the new node.
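One detail the chain above relies on: scheduling (and therefore the Cluster Autoscaler) acts on *requested* resources, not live usage, so the pods must declare resource requests. A minimal sketch, with an assumed deployment name, placeholder image, and illustrative request values:

```yaml
# Pods without resource requests never look "too big" for a node,
# so the Cluster Autoscaler would have nothing to react to.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # assumed name, matching the HPA sketch style
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx  # placeholder image
          resources:
            requests:   # these values drive scheduling decisions
              cpu: "500m"
              memory: "256Mi"
```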

If your question is about a different node autoscaling strategy, such as an ASG policy that scales on instance CPU/memory metrics, then the answer to what autoscales first will be different. Please provide the specific use case if so.