
While using both HPA and Cluster Autoscaler in Kubernetes, I have the scenario below.

A maximum of 3 pods can fit on a single node. I have set up HPA with min replicas of 15 and max of 39. At first, I had a total of 5 nodes, which could accommodate all 15 pods. As load increased, more than 15 pods spun up, triggering the Cluster Autoscaler. When the peak passed, HPA scaled the pod count back down to 15. I had hoped that HPA would remove pods from the nodes that had only 1 pod left, so that the cluster (node) size would return to 5. Instead, I found that 9 nodes remained (6 nodes with 2 pods, and 3 nodes with 1 pod). For cost efficiency, I want 5 nodes to accommodate all 15 pods.
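For reference, my cluster-autoscaler Helm values look roughly like the sketch below. The ASG name is a placeholder, and the `extraArgs` entries are the scale-down knobs I understand are relevant (I have not tuned them yet, so these show the defaults):

```yaml
# Rough sketch of values for the cluster-autoscaler Helm chart.
# The ASG name below is a placeholder, not my real group name.
autoscalingGroups:
  - name: my-node-asg   # placeholder
    minSize: 5
    maxSize: 13
extraArgs:
  # A node becomes a scale-down candidate when the sum of requested
  # resources drops below this fraction of allocatable (default 0.5).
  scale-down-utilization-threshold: 0.5
  # How long a node must stay unneeded before it is removed (default 10m).
  scale-down-unneeded-time: 10m
```

My understanding is that even with these settings, the autoscaler only removes a node when its remaining pods can be rescheduled elsewhere; it does not proactively repack pods across nodes, which may be why the 1-pod and 2-pod nodes stayed around.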

Would this be possible? Thank you.

Piljae Chae
    Possible, can you list the parameters you used to install your autoscaler in your question. – gohm'c Jan 20 '22 at 07:38
  • @gohm'c Hi, I just installed Cluster Autoscaler with autoscalingGroups specified (name, maxSize, minSize). – Piljae Chae Jan 21 '22 at 09:43
  • autoscalingGroups refer to auto discovery of node group/pool. This setting is not related to your question. You need to elaborate if you are using on-prem K8S, or cloud managed K8S (eg. EKS? GKE?), and how you install your cluster autoscaler (e.g manifest, helm?). Be specific about the parameters you used for the installation. – gohm'c Jan 21 '22 at 09:51
  • @gohm'c Sure, I'm using EKS 1.19, and cluster autoscaler was installed via Helm with the autoscalingGroups option I've specified above. – Piljae Chae Jan 21 '22 at 10:02

1 Answer


I believe this is a known limitation: the Cluster Autoscaler removes nodes but does not repack pods onto fewer nodes. Projects such as the descheduler (https://github.com/kubernetes-sigs/descheduler) have sprung up to re-optimise pod spreads. I haven't tried the descheduler myself, but it sounds like something that may help you.
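As a rough illustration of the idea (thresholds here are made up, assuming the `descheduler/v1alpha1` policy API), a `LowNodeUtilization` policy evicts pods from underutilized nodes; the scheduler then places the replacements on fuller nodes, after which the Cluster Autoscaler can remove the drained nodes:

```yaml
# Illustrative descheduler policy; the threshold numbers are
# placeholders you would tune for your own cluster.
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        # Nodes below these values are considered underutilized
        # and have their pods evicted.
        thresholds:
          "pods": 20
        # Nodes below these values (but above `thresholds`) are
        # considered acceptable targets for the evicted pods.
        targetThresholds:
          "pods": 50
```

Note that the descheduler only evicts pods; you would still rely on your HPA/PodDisruptionBudgets to keep availability during the shuffle, and on the Cluster Autoscaler to actually delete the emptied nodes.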

Blender Fox