
I have installed a Kubernetes cluster on CentOS 8, but the node status shows NotReady, the coredns pods in the kube-system namespace are stuck in Pending, and the weave-net pod is in CrashLoopBackOff. I have re-installed as well, but the result is the same, and taint commands are not working. How can I fix this issue?

# kubectl get nodes
NAME          STATUS     ROLES    AGE   VERSION
K8s-Master   NotReady   master   42m   v1.18.8

# kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                  READY   STATUS             RESTARTS   AGE   IP                NODE          NOMINATED NODE   READINESS GATES
kube-system   coredns-66bff467f8-5vtjf              0/1      Pending            0          42m   <none>            <none>        <none>           <none>
kube-system   coredns-66bff467f8-pr6pt              0/1      Pending            0          42m   <none>            <none>        <none>           <none>
kube-system   etcd-K8s-Master                       1/1      Running            0          42m   90.91.92.93   K8s-Master        <none>           <none>
kube-system   kube-apiserver-K8s-Master             1/1      Running            0          42m   90.91.92.93   K8s-Master        <none>           <none>
kube-system   kube-controller-manager-K8s-Master    1/1      Running            0          42m   90.91.92.93   K8s-Master        <none>           <none>
kube-system   kube-proxy-pw2bk                      1/1      Running            0          42m   90.91.92.93   K8s-Master        <none>           <none>
kube-system   kube-scheduler-K8s-Master             1/1      Running            0          42m   90.91.92.93   K8s-Master        <none>           <none>
kube-system   weave-net-k4mdf                       1/2      CrashLoopBackOff   12         41m   90.91.92.93   K8s-Master        <none>           <none>

# kubectl describe pod coredns-66bff467f8-pr6pt --namespace=kube-system
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  70s (x33 over 43m)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

# kubectl describe node | grep -i taint
Taints:             node.kubernetes.io/not-ready:NoExecute

# kubectl taint nodes --all node.kubernetes.io/not-ready:NoExecute
error: node K8s-Master already has node.kubernetes.io/not-ready taint(s) with same effect(s) and --overwrite is false
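As an aside on the taint error above: the command as written tries to *add* the taint (which already exists), not remove it. Removing a taint uses a trailing `-`. Note, though, that `node.kubernetes.io/not-ready` is managed by the node lifecycle controller and will simply be re-applied while the node stays NotReady, so this only helps once the underlying CNI problem is fixed:

```shell
# Trailing "-" removes a taint; without it, kubectl tries to add one.
# The not-ready taint is re-added automatically until the node is Ready.
kubectl taint nodes --all node.kubernetes.io/not-ready:NoExecute-
```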

# kubectl describe pod weave-net-k4mdf --namespace=kube-system
Events:
  Type     Reason     Age                   From                  Message
  ----     ------     ----                  ----                  -------
  Normal   Scheduled  43m                   default-scheduler    Successfully assigned kube-system/weave-net-k4mdf to K8s-Master
  Normal   Pulling    43m                   kubelet, K8s-Master  Pulling image "docker.io/weaveworks/weave-kube:2.7.0"
  Normal   Pulled     43m                   kubelet, K8s-Master  Successfully pulled image "docker.io/weaveworks/weave-kube:2.7.0"
  Normal   Pulling    43m                   kubelet, K8s-Master  Pulling image "docker.io/weaveworks/weave-npc:2.7.0"
  Normal   Pulled     42m                   kubelet, K8s-Master  Successfully pulled image "docker.io/weaveworks/weave-npc:2.7.0"
  Normal   Started    42m                   kubelet, K8s-Master  Started container weave-npc
  Normal   Created    42m                   kubelet, K8s-Master  Created container weave-npc
  Normal   Started    42m (x4 over 43m)     kubelet, K8s-Master  Started container weave
  Normal   Created    42m (x4 over 43m)     kubelet, K8s-Master  Created container weave
  Normal   Pulled     42m (x3 over 42m)     kubelet, K8s-Master  Container image "docker.io/weaveworks/weave-kube:2.7.0" already present on machine
  Warning  BackOff    3m1s (x191 over 42m)  kubelet, K8s-Master  Back-off restarting failed container
  Normal   Pulled     33s (x4 over 118s)    kubelet, K8s-Master  Container image "docker.io/weaveworks/weave-kube:2.7.0" already present on machine
  Normal   Created    33s (x4 over 118s)    kubelet, K8s-Master  Created container weave
  Normal   Started    33s (x4 over 118s)    kubelet, K8s-Master  Started container weave
  Warning  BackOff    5s (x10 over 117s)    kubelet, K8s-Master  Back-off restarting failed container

# kubectl logs weave-net-k4mdf -c weave --namespace=kube-system
ipset v7.2: Set cannot be destroyed: it is in use by a kernel component
Arghya Sadhu
user4948798

1 Answer
ipset v7.2: Set cannot be destroyed: it is in use by a kernel component

The above error is caused by a race condition in weave's launch script: it creates a test ipset and immediately destroys it, and the destroy can fail while the kernel still holds a reference to the set.

As described in this issue, you can edit the weave daemonset YAML to add the command below as a workaround (it inserts a one-second sleep before the test set is destroyed):

              command:
                - /bin/sh
                - -c
                - sed '/ipset destroy weave-kube-test$/ i sleep 1' /home/weave/launch.sh | /bin/sh

So the weave daemonset would look like:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: weave-net
  annotations:
    cloud.weave.works/launcher-info: |-
      {
        "original-request": {
          "url": "/k8s/v1.13/net.yaml",
          "date": "Fri Aug 14 2020 07:36:34 GMT+0000 (UTC)"
        },
        "email-address": "support@weave.works"
      }
  labels:
    name: weave-net
  namespace: kube-system
spec:
  minReadySeconds: 5
  selector:
    matchLabels:
      name: weave-net
  template:
    metadata:
      labels:
        name: weave-net
    spec:
      containers:
        - name: weave
          command:
            - /bin/sh
            - -c
            - sed '/ipset destroy weave-kube-test$/ i sleep 1' /home/weave/launch.sh | /bin/sh
...
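To see what the workaround actually does, here is a quick check against a hypothetical stand-in for `/home/weave/launch.sh` (the real script is longer; only the `ipset destroy` line matters here). The sed expression inserts `sleep 1` before the `ipset destroy weave-kube-test` line, giving the kernel time to release the set:

```shell
# Stand-in for the relevant part of /home/weave/launch.sh
cat > /tmp/launch-sample.sh <<'EOF'
ipset create weave-kube-test hash:ip
ipset destroy weave-kube-test
EOF

# Same expression as in the workaround: insert "sleep 1" before the destroy
sed '/ipset destroy weave-kube-test$/ i sleep 1' /tmp/launch-sample.sh
```

This prints the sample script with `sleep 1` inserted immediately before the destroy line.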
Arghya Sadhu
  • I installed weave with this command `kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"`, so by default where does it store the `yaml` file? – user4948798 Aug 14 '20 at 06:51
  • The `weave` executable is not there in the `/opt/cni/bin/` path. – user4948798 Aug 14 '20 at 06:57
  • Don't apply the yaml directly. Download it locally via curl, edit it, and then apply it. Alternatively, edit the daemonset in place using `kubectl edit ds weave-net -n kube-system` – Arghya Sadhu Aug 14 '20 at 07:41
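Putting the comment's advice together, a sketch of the download-edit-apply flow. The `kubever` line mirrors the asker's install command; the filename `weave-net.yaml` is just a local working name:

```shell
# Download the manifest the asker applied directly, so it can be edited first
kubever=$(kubectl version | base64 | tr -d '\n')
curl -L "https://cloud.weave.works/k8s/net?k8s-version=$kubever" -o weave-net.yaml

# Edit weave-net.yaml to add the "command:" workaround shown in the answer,
# then re-apply it
kubectl apply -f weave-net.yaml
```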