I am creating a k8s custom controller: when a custom resource is created, some additional resources are created along with it, namely a ConfigMap, a Deployment, and a Service. The project was scaffolded with kubebuilder. If controller.go includes logic to watch ConfigMaps, the pod is terminated as OOMKilled, exit code 137. Watching other types of objects such as Deployment, Service, and StatefulSet works fine. The relevant section of code is:

    err = c.Watch(&source.Kind{Type: &corev1.ConfigMap{}}, &handler.EnqueueRequestForOwner{
        IsController: true,
        OwnerType:    &ltmv1beta1.Ltm{},
    })
    if err != nil {
        log.Println(err)
        return err
    }
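
For context, EnqueueRequestForOwner with IsController: true only enqueues reconciles for ConfigMaps that carry a controller owner reference pointing at the Ltm object. A minimal sketch of stamping that reference when the ConfigMap is built — the helper name and the Ltm import path are hypothetical, not from the project:

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime"
        "sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

        ltmv1beta1 "example.com/ltm-operator/pkg/apis/ltm/v1beta1" // hypothetical module path
    )

    // newOwnedConfigMap is a hypothetical helper: it builds the ConfigMap and
    // sets a controller owner reference on it, so that ConfigMap events are
    // routed back to the owning Ltm resource by the watch above.
    func newOwnedConfigMap(owner *ltmv1beta1.Ltm, scheme *runtime.Scheme) (*corev1.ConfigMap, error) {
        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{
                Name:      owner.Name + "-config", // hypothetical naming scheme
                Namespace: owner.Namespace,
            },
        }
        if err := controllerutil.SetControllerReference(owner, cm, scheme); err != nil {
            return nil, err
        }
        return cm, nil
    }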

ltmv1beta1 is the CR's API package. This is almost identical to the example code generated by kubebuilder. The role also has the correct access rights granted:

      Resources                                                       Non-Resource URLs  Resource Names  Verbs
      ---------                                                       -----------------  --------------  -----
      services                                                        []                 []              [get list watch create update patch delete]
      configmaps                                                      []                 []              [get list watch create update patch delete]
      secrets                                                         []                 []              [get list watch create update patch delete]
      mutatingwebhookconfigurations.admissionregistration.k8s.io      []                 []              [get list watch create update patch delete]
      validatingwebhookconfigurations.admissionregistration.k8s.io    []                 []              [get list watch create update patch delete]
      statefulsets.apps                                               []                 []              [get list watch create update patch delete]
      ltms.ltm.k8s.io                                                 []                 []              [get list watch create update patch delete]
      deployments.apps/status                                         []                 []              [get update patch]
      ltms.ltm.k8s.io/status                                          []                 []              [get update patch]
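
In a kubebuilder project these rules are normally generated from RBAC markers in the controller source. For reference, a sketch of the marker that would produce the configmaps rule above (its placement next to the Reconcile function follows the scaffold's convention):

    // +kubebuilder:rbac:groups="",resources=configmaps,verbs=get;list;watch;create;update;patch;delete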

I could not figure out why this only happens with ConfigMaps. Thanks.

  • Hi, your pod has too little RAM; you need to increase it. Is your cluster on minikube? – Suresh Vishnoi May 05 '19 at 08:27
  • What is the pod definition, or more specifically, what are its resource limits? Does this work OK if you run the controller out of the cluster, i.e. on your development machine? – antweiss May 05 '19 at 08:56
  • It is a cluster on AWS. I checked: only a handful of ConfigMaps exist, versus tons of Deployments and Services. What puzzles me is why only watching ConfigMaps causes the problem. – Tony May 05 '19 at 17:16
  • Problem solved. Kubebuilder specified a default resource limit in manager.yaml. I removed it, let the cluster manage the resources, and it works fine (see the sketch below). Thanks all. – Tony May 05 '19 at 17:42
  • Please consider posting your solution as an answer to your question and marking it as accepted. It will help the community find your solution if somebody else has a similar problem. – MWZ May 24 '19 at 09:37
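
For anyone landing here: the kubebuilder-scaffolded config/manager/manager.yaml pins the manager container to a small default memory limit, so adding a ConfigMap informer (which caches the watched ConfigMaps in memory) can push the process past it. The exact values vary by kubebuilder version, but the scaffolded block looks roughly like the sketch below; deleting it, or raising the limit, stops the OOMKill:

    # config/manager/manager.yaml (values are approximate, per the scaffold)
    resources:
      limits:
        cpu: 100m
        memory: 30Mi
      requests:
        cpu: 100m
        memory: 20Mi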

0 Answers