
I have the following Kong deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-test-kong
  labels:
    app: local-test-kong
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-test-kong
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: local-test-kong
    spec:
      automountServiceAccountToken: false
      containers:
        - envFrom:
            - configMapRef:
                name: kong-env-vars
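                # supplies the KONG_* runtime settings; an assumed sketch of this ConfigMap follows the manifest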
          image: kong:2.6
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - /bin/sleep 15 && kong quit
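                  # the sleep gives Service endpoints time to drop the pod before kong quits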
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /status
              port: status
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          name: proxy
          ports:
            - containerPort: 8000
              name: proxy
              protocol: TCP
            - containerPort: 8100
              name: status
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /status
              port: status
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          resources: # ToDo
            limits:
              cpu: 256m
              memory: 256Mi
            requests:
              cpu: 256m
              memory: 256Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /kong_prefix/
              name: kong-prefix-dir
            - mountPath: /tmp
              name: tmp-dir
            - mountPath: /kong_dbless/
              name: kong-custom-dbless-config-volume
      terminationGracePeriodSeconds: 30
      volumes:
        - name: kong-prefix-dir
        - name: tmp-dir
        - configMap:
            defaultMode: 0555
            name: kong-declarative
          name: kong-custom-dbless-config-volume
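
The `kong-env-vars` ConfigMap referenced above is not shown; for the probes on port 8100 to work, it presumably configures Kong's status listener, roughly along these lines (a sketch; every value below is an assumption, not taken from the actual ConfigMap):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kong-env-vars
data:
  # assumed values; the real ConfigMap was not included in the question
  KONG_DATABASE: "off"
  KONG_DECLARATIVE_CONFIG: /kong_dbless/kong.yml
  KONG_PREFIX: /kong_prefix/
  KONG_PROXY_LISTEN: 0.0.0.0:8000
  KONG_STATUS_LISTEN: 0.0.0.0:8100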

I applied this YAML in GKE, then ran kubectl describe on its pod.

➜  kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
local-test-kong-678598ffc6-ll9s8   1/1     Running   0          25m
➜  kubectl describe pod/local-test-kong-678598ffc6-ll9s8
Name:         local-test-kong-678598ffc6-ll9s8
Namespace:    local-test-kong
Priority:     0
Node:         gke-paas-cluster-prd-tf9-default-pool-e7cb502a-ggxl/10.128.64.95
Start Time:   Wed, 23 Nov 2022 00:12:56 +0800
Labels:       app=local-test-kong
              pod-template-hash=678598ffc6
Annotations:  kubectl.kubernetes.io/restartedAt: 2022-11-23T00:12:56+08:00
Status:       Running
IP:           10.128.96.104
IPs:
  IP:           10.128.96.104
Controlled By:  ReplicaSet/local-test-kong-678598ffc6
Containers:
  proxy:
    Container ID:   containerd://1bd392488cfe33dcc62f717b3b8831349e8cf573326add846c9c843c7bf15e2a
    Image:          kong:2.6
    Image ID:       docker.io/library/kong@sha256:62eb6d17133b007cbf5831b39197c669b8700c55283270395b876d1ecfd69a70
    Ports:          8000/TCP, 8100/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Wed, 23 Nov 2022 00:12:58 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     256m
      memory:  256Mi
    Requests:
      cpu:      256m
      memory:   256Mi
    Liveness:   http-get http://:status/status delay=10s timeout=5s period=10s #success=1 #failure=3
    Readiness:  http-get http://:status/status delay=10s timeout=5s period=10s #success=1 #failure=3
    Environment Variables from:
      kong-env-vars  ConfigMap  Optional: false
    Environment:     <none>
    Mounts:
      /kong_dbless/ from kong-custom-dbless-config-volume (rw)
      /kong_prefix/ from kong-prefix-dir (rw)
      /tmp from tmp-dir (rw)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kong-prefix-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  tmp-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kong-custom-dbless-config-volume:
    Type:        ConfigMap (a volume populated by a ConfigMap)
    Name:        kong-declarative
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  25m   default-scheduler  Successfully assigned local-test-kong/local-test-kong-678598ffc6-ll9s8 to gke-paas-cluster-prd-tf9-default-pool-e7cb502a-ggxl
  Normal  Pulled     25m   kubelet            Container image "kong:2.6" already present on machine
  Normal  Created    25m   kubelet            Created container proxy
  Normal  Started    25m   kubelet            Started container proxy
➜  

I applied the same YAML to my local MicroK8s cluster (on macOS) and then ran kubectl describe on its pod.

➜  kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
local-test-kong-54cfc585cb-7grj8   1/1     Running   0          86s
➜  kubectl describe pod/local-test-kong-54cfc585cb-7grj8
Name:         local-test-kong-54cfc585cb-7grj8
Namespace:    local-test-kong
Priority:     0
Node:         microk8s-vm/192.168.64.5
Start Time:   Wed, 23 Nov 2022 00:39:33 +0800
Labels:       app=local-test-kong
              pod-template-hash=54cfc585cb
Annotations:  cni.projectcalico.org/podIP: 10.1.254.79/32
              cni.projectcalico.org/podIPs: 10.1.254.79/32
              kubectl.kubernetes.io/restartedAt: 2022-11-23T00:39:33+08:00
Status:       Running
IP:           10.1.254.79
IPs:
  IP:           10.1.254.79
Controlled By:  ReplicaSet/local-test-kong-54cfc585cb
Containers:
  proxy:
    Container ID:   containerd://d60d09ca8b77ee59c80ea060dcb651c3e346c3a5f0147b0d061790c52193d93d
    Image:          kong:2.6
    Image ID:       docker.io/library/kong@sha256:62eb6d17133b007cbf5831b39197c669b8700c55283270395b876d1ecfd69a70
    Ports:          8000/TCP, 8100/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Wed, 23 Nov 2022 00:39:37 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     256m
      memory:  256Mi
    Requests:
      cpu:      256m
      memory:   256Mi
    Liveness:   http-get http://:status/status delay=10s timeout=5s period=10s #success=1 #failure=3
    Readiness:  http-get http://:status/status delay=10s timeout=5s period=10s #success=1 #failure=3
    Environment Variables from:
      kong-env-vars  ConfigMap  Optional: false
    Environment:     <none>
    Mounts:
      /kong_dbless/ from kong-custom-dbless-config-volume (rw)
      /kong_prefix/ from kong-prefix-dir (rw)
      /tmp from tmp-dir (rw)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kong-prefix-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  tmp-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kong-custom-dbless-config-volume:
    Type:        ConfigMap (a volume populated by a ConfigMap)
    Name:        kong-declarative
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  92s   default-scheduler  Successfully assigned local-test-kong/local-test-kong-54cfc585cb-7grj8 to microk8s-vm
  Normal   Pulled     90s   kubelet            Container image "kong:2.6" already present on machine
  Normal   Created    90s   kubelet            Created container proxy
  Normal   Started    89s   kubelet            Started container proxy
  Warning  Unhealthy  68s   kubelet            Readiness probe failed: Get "http://10.1.254.79:8100/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  68s   kubelet            Liveness probe failed: Get "http://10.1.254.79:8100/status": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
➜  

It's the exact same deployment YAML. The deployment inside the GKE cluster runs fine with no complaints, but the deployment inside my local MicroK8s (on macOS) shows probe failures.
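
A quick way to narrow this down (a debugging sketch; it assumes `curl` is available inside the kong:2.6 image, otherwise busybox `wget` can be substituted) is to call the status endpoint from inside the failing pod:

# if Kong is listening on 8100, this should return the status JSON
kubectl exec -it local-test-kong-54cfc585cb-7grj8 -- \
  curl -sv http://localhost:8100/status

If that responds, Kong itself is healthy on 8100 and the failure is on the network path between the kubelet and the pod.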

What could I be missing here when deploying to MicroK8s (on macOS)?

Rakib

2 Answers


Your readiness and liveness probes are failing on the local pod on port 8100. It looks like a firewall rule is preventing internal pod and/or pod-to-pod communication.

As per the MicroK8s docs:

You may need to configure your firewall to allow pod-to-pod and pod-to-internet communication:

sudo ufw allow in on cni0 && sudo ufw allow out on cni0
sudo ufw default allow routed
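
Note that on macOS, MicroK8s runs inside a Multipass VM, so the firewall check has to happen inside that VM rather than on the Mac itself. A sketch (the VM name below is the default one created by the MicroK8s installer, and `cni0` follows the docs quote above; Calico setups may use a different interface):

multipass shell microk8s-vm
# inside the VM: check whether ufw is active at all before changing rules
sudo ufw status
sudo ufw allow in on cni0 && sudo ufw allow out on cni0
sudo ufw default allow routed
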
Rico
  • I see. This could be the reason. I am using MicroK8s on macOS, which (as per my finding) does not have `ufw`. Do you have any suggestions for that? – Rakib Nov 23 '22 at 03:51
  • I understand MicroK8s on macOS uses **Multipass** to run an Ubuntu instance on the Mac. Thanks for your direction. I will try this out and get back here :) – Rakib Nov 23 '22 at 04:46
  • Thanks for pointing me to this. I think there was some misconfiguration in my `microk8s-vm` that was launched by Multipass. I did a clean re-setup of MicroK8s by uninstalling and reinstalling my `microk8s-vm` via Multipass, and now the deployment passes all readiness & liveness probes as expected. Out of the box, inside a new `microk8s-vm`, running `multipass shell microk8s-vm` followed by `sudo ufw status` reports **Status: inactive**. So, with the default setup, I believe firewall rules have no impact in MicroK8s. – Rakib Nov 23 '22 at 05:36
  • That's good to hear. – Rico Nov 24 '22 at 00:27

I had exactly the same problem using MicroK8s, with the hostpath-storage and dns addons enabled. I don't deploy Kong, but RabbitMQ (here's my example project).

I got the following error:

Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  53s   default-scheduler  Successfully assigned default/hello-rabbit-server-0 to microk8s-vm
  Normal   Pulled     52s   kubelet            Container image "docker.io/bitnami/rabbitmq:3.10.19-debian-11-r4" already present on machine
  Normal   Created    52s   kubelet            Created container setup-container
  Normal   Started    52s   kubelet            Started container setup-container
  Normal   Pulled     21s   kubelet            Container image "docker.io/bitnami/rabbitmq:3.10.19-debian-11-r4" already present on machine
  Normal   Created    21s   kubelet            Created container rabbitmq
  Normal   Started    21s   kubelet            Started container rabbitmq
  Warning  Unhealthy  3s    kubelet            Readiness probe failed: dial tcp 10.1.254.78:5672: connect: connection refused

What fixed the issue for me was enabling the host-access addon in MicroK8s:

microk8s enable host-access

Now the readiness probes are working fine.
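
To double-check, the addon state and the pod can be re-inspected afterwards with standard MicroK8s commands:

# wait until the cluster reports ready and list the enabled addons
microk8s status --wait-ready
# then re-check the pod events once the probes have run again
microk8s kubectl describe pod hello-rabbit-server-0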

jonashackt