
I was exploring resource quotas in Kubernetes. My problem statement: someone accidentally set a very large memory limit (e.g. 10Gi), and that triggered unwanted autoscaling.

I want to cap the resource quota. I was reading about Limit Ranges (https://kubernetes.io/docs/concepts/policy/limit-range/) and Resource Quotas per PriorityClass (https://kubernetes.io/docs/concepts/policy/resource-quotas/). I want to cap the memory and CPU requests/limits values for a pod/container. What are the best practices or recommendations for such a use case?
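For reference, a LimitRange can enforce a per-container maximum directly via its `max` field; a minimal sketch (the name, namespace, and values here are illustrative, not from the question):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-caps        # illustrative name
  namespace: team-blue       # illustrative namespace
spec:
  limits:
    - type: Container
      max:                   # containers asking for more than this are rejected
        cpu: "2"
        memory: 2Gi
      default:               # applied when no limit is set
        cpu: 500m
        memory: 512Mi
      defaultRequest:        # applied when no request is set
        cpu: 250m
        memory: 256Mi
```

With this in place, a container declaring a `memory` limit of 10Gi would be rejected at admission time rather than scheduled.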

cloudbud
  • You won't be able to cap limits/requests for a pod. LimitRanges can be used to set defaults when none are set. ResourceQuotas can cap object counts or the sum of requests/limits within a single namespace. You probably want to use ResourceQuotas, and prevent end users from editing those objects themselves. – SYN Aug 24 '22 at 17:37
  • Shall I use resource quota with priorities? – cloudbud Aug 24 '22 at 20:23
  • A ResourceQuota with a priority class? Adding a CPU limit/request at the namespace level doesn't make sense to me, as new applications might be created in the future. – cloudbud Aug 24 '22 at 20:39
  • With priorityClasses + scoped ResourceQuotas, you would have to trust your devs to properly set a priorityClass on their pods; otherwise, they won't be subject to the quota. As a general rule, priorityClasses are not meant for absolute resource allocation, but rather for scheduling priority relative to other workloads in your cluster. Setting limits/requests at the namespace level might not make sense to you, but then again: if you want to actually limit resource allocation beyond end users' mistakes, this is what you're looking for. When someone wants to deploy something and needs more, they'd ask. – SYN Aug 24 '22 at 20:44
  • The limit range can be used to cap the CPU and memory limits for a pod, in my opinion. – cloudbud Aug 24 '22 at 20:46
  • indeed, you're right. – SYN Aug 24 '22 at 21:11
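The namespace-level mechanism discussed in the comments above is a ResourceQuota; a minimal sketch (the name, namespace, and values are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota           # illustrative name
  namespace: team-blue       # illustrative namespace
spec:
  hard:
    requests.cpu: "1"        # sum of CPU requests across the namespace
    requests.memory: 4Gi
    limits.cpu: "2"
    limits.memory: 8Gi
    pods: "10"
```

Note that a ResourceQuota caps the namespace *total*, so a single pod could still consume most of it; pairing it with a LimitRange that sets a per-container `max` covers both failure modes.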

1 Answer


If you use Terraform and EKS Blueprints, you can define the quotas per team, as explained here:

  # EKS Application Teams

  application_teams = {
    # First Team
    team-blue = {
      "labels" = {
        "appName"     = "example",
        "projectName" = "example",
        "environment" = "example",
        "domain"      = "example",
        "uuid"        = "example",
      }
      "quota" = {
        "requests.cpu"    = "1000m",
        "requests.memory" = "4Gi",
        "limits.cpu"      = "2000m",
        "limits.memory"   = "8Gi",
        "pods"            = "10",
        "secrets"         = "10",
        "services"        = "10"
      }
      manifests_dir = "./manifests"
      # Belows are examples of IAM users and roles
      users = [
        "arn:aws:iam::123456789012:user/blue-team-user",
        "arn:aws:iam::123456789012:role/blue-team-sso-iam-role"
      ]
    }

    # Second Team
    team-red = {
      "labels" = {
        "appName"     = "example2",
        "projectName" = "example2",
      }
      "quota" = {
        "requests.cpu"    = "2000m",
        "requests.memory" = "8Gi",
        "limits.cpu"      = "4000m",
        "limits.memory"   = "16Gi",
        "pods"            = "20",
        "secrets"         = "20",
        "services"        = "20"
      }
      manifests_dir = "./manifests2"
      users = [

        "arn:aws:iam::123456789012:role/other-sso-iam-role"
      ]
    }
  }

In my case, I created a quota per namespace in the values.yaml for each cluster and added them with a `for` expression:

main.tf

locals {
  app_namespaces         = var.app_namespaces
}

...

application_teams = {
 for name, values in local.app_namespaces : name => {
   quota = values.quota
  }
} 

values.yaml

app_namespaces:
  backend:
    roles:
      - Backend-Engineers
    quota:
      requests.cpu: 1000m
      requests.memory: 4Gi
      limits.cpu: 2000m
      limits.memory: 8Gi
      pods: 10
      secrets: 10
      services: 10
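If you are not using EKS Blueprints, a plain `kubernetes_resource_quota` resource from the hashicorp/kubernetes provider achieves the same namespace cap; a sketch under the same values as above (the resource name and namespace are illustrative):

```hcl
resource "kubernetes_resource_quota" "backend" {
  metadata {
    name      = "backend-quota"   # illustrative name
    namespace = "backend"         # illustrative namespace
  }

  spec {
    hard = {
      "requests.cpu"    = "1000m"
      "requests.memory" = "4Gi"
      "limits.cpu"      = "2000m"
      "limits.memory"   = "8Gi"
      "pods"            = "10"
      "secrets"         = "10"
      "services"        = "10"
    }
  }
}
```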
Payomeke