root@master2:/home/osboxes# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                READY     STATUS    RESTARTS   AGE       IP             NODE
default       hello-kubernetes-55857678b4-4xbgd   1/1       Running   1          18h       10.244.1.138   node
default       hello-kubernetes-55857678b4-gtvn4   1/1       Running   1          18h       10.244.1.139   node
default       hello-kubernetes-55857678b4-wttht   1/1       Running   1          18h       10.244.1.140   node
kube-system   coredns-78fcdf6894-s4l8n            1/1       Running   1          18h       10.244.0.14    master2
kube-system   coredns-78fcdf6894-tfjps            1/1       Running   1          18h       10.244.0.15    master2
kube-system   etcd-master2                        1/1       Running   1          18h       10.0.2.15      master2
kube-system   kube-apiserver-master2              1/1       Running   1          18h       10.0.2.15      master2
kube-system   kube-controller-manager-master2     1/1       Running   1          18h       10.0.2.15      master2
kube-system   kube-flannel-ds-4br99               1/1       Running   1          18h       10.0.2.15      node
kube-system   kube-flannel-ds-6c2x9               1/1       Running   1          18h       10.0.2.15      master2
kube-system   kube-proxy-mf9fg                    1/1       Running   1          18h       10.0.2.15      node
kube-system   kube-proxy-xldph                    1/1       Running   1          18h       10.0.2.15      master2
kube-system   kube-scheduler-master2              1/1       Running   1          18h       10.0.2.15      master2
root@master2:/home/osboxes# kubectl exec -it hello-kubernetes-55857678b4-4xbgd sh
error: unable to upgrade connection: pod does not exist

What does this error indicate? I am able to docker exec ... into the container from the node.

I have set this cluster up myself using kubeadm.

Verbose:

kubectl -v=10 exec -it hello-kubernetes-55857678b4-4xbgd sh
I0703 08:44:01.250752   10307 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
I0703 08:44:01.252809   10307 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
I0703 08:44:01.254167   10307 loader.go:359] Config loaded from file /etc/kubernetes/admin.conf
I0703 08:44:01.255808   10307 round_trippers.go:386] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.11.0 (linux/amd64) kubernetes/91e7b4f" 'https://192.168.0.33:6443/api/v1/namespaces/default/pods/hello-kubernetes-55857678b4-4xbgd'
I0703 08:44:01.272882   10307 round_trippers.go:405] GET https://192.168.0.33:6443/api/v1/namespaces/default/pods/hello-kubernetes-55857678b4-4xbgd 200 OK in 16 milliseconds
I0703 08:44:01.273262   10307 round_trippers.go:411] Response Headers:
I0703 08:44:01.273485   10307 round_trippers.go:414]     Date: Tue, 03 Jul 2018 12:44:01 GMT
I0703 08:44:01.273692   10307 round_trippers.go:414]     Content-Type: application/json
I0703 08:44:01.273967   10307 round_trippers.go:414]     Content-Length: 2725
I0703 08:44:01.275168   10307 request.go:897] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"hello-kubernetes-55857678b4-4xbgd","generateName":"hello-kubernetes-55857678b4-","namespace":"default","selfLink":"/api/v1/namespaces/default/pods/hello-kubernetes-55857678b4-4xbgd","uid":"002c6e23-7e23-11e8-b38f-0800273a59cb","resourceVersion":"5725","creationTimestamp":"2018-07-02T18:09:02Z","labels":{"app":"hello-kubernetes","pod-template-hash":"1141323460"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"hello-kubernetes-55857678b4","uid":"001893c6-7e23-11e8-b38f-0800273a59cb","controller":true,"blockOwnerDeletion":true}]},"spec":{"volumes":[{"name":"default-token-n9t9d","secret":{"secretName":"default-token-n9t9d","defaultMode":420}}],"containers":[{"name":"hello-kubernetes","image":"paulbouwer/hello-kubernetes:1.4","ports":[{"containerPort":8080,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"default-token-n9t9d","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"node","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}]},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-02T18:09:02Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2018-07-03T12:32:26Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":null},{"type":"PodScheduled","status":"True","lastPr
obeTime":null,"lastTransitionTime":"2018-07-02T18:09:02Z"}],"hostIP":"10.0.2.15","podIP":"10.244.1.138","startTime":"2018-07-02T18:09:02Z","containerStatuses":[{"name":"hello-kubernetes","state":{"running":{"startedAt":"2018-07-03T12:32:26Z"}},"lastState":{"terminated":{"exitCode":255,"reason":"Error","startedAt":"2018-07-02T18:09:21Z","finishedAt":"2018-07-03T10:10:58Z","containerID":"docker://e82d0338a51aef35869b755b8020704367859855f043d80897e48f4e9c7da869"}},"ready":true,"restartCount":1,"image":"paulbouwer/hello-kubernetes:1.4","imageID":"docker-pullable://paulbouwer/hello-kubernetes@sha256:a9fc93acfbc734827a72107bf7f759745a66ea61758863c094c36e5f4f4b810b","containerID":"docker://4a7e472b35b776700e61605826655950501d114ce182dc178d79d0f50775281d"}],"qosClass":"BestEffort"}}
I0703 08:44:01.290627   10307 round_trippers.go:386] curl -k -v -XPOST  -H "X-Stream-Protocol-Version: v4.channel.k8s.io" -H "X-Stream-Protocol-Version: v3.channel.k8s.io" -H "X-Stream-Protocol-Version: v2.channel.k8s.io" -H "X-Stream-Protocol-Version: channel.k8s.io" -H "User-Agent: kubectl/v1.11.0 (linux/amd64) kubernetes/91e7b4f" 'https://192.168.0.33:6443/api/v1/namespaces/default/pods/hello-kubernetes-55857678b4-4xbgd/exec?command=sh&container=hello-kubernetes&container=hello-kubernetes&stdin=true&stdout=true&tty=true'
I0703 08:44:01.317914   10307 round_trippers.go:405] POST https://192.168.0.33:6443/api/v1/namespaces/default/pods/hello-kubernetes-55857678b4-4xbgd/exec?command=sh&container=hello-kubernetes&container=hello-kubernetes&stdin=true&stdout=true&tty=true 404 Not Found in 26 milliseconds
I0703 08:44:01.317938   10307 round_trippers.go:411] Response Headers:
I0703 08:44:01.317944   10307 round_trippers.go:414]     Date: Tue, 03 Jul 2018 12:44:01 GMT
I0703 08:44:01.317948   10307 round_trippers.go:414]     Content-Length: 18
I0703 08:44:01.317951   10307 round_trippers.go:414]     Content-Type: text/plain; charset=utf-8
F0703 08:44:01.318118   10307 helpers.go:119] error: unable to upgrade connection: pod does not exist

4 Answers


It turned out kubelet was using the wrong network interface, and therefore advertising the wrong node IP.

I had to manually set KUBELET_EXTRA_ARGS=--node-ip=ABCXYZ in /etc/default/kubelet on both the master and the worker node (replace ABCXYZ with each node's actual IP address), then restart kubelet.
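A minimal sketch of the fix follows. The address shown is the worker's IP from the `kubectl get nodes -o wide` output below and must be replaced per node; on a real node the line belongs in /etc/default/kubelet, but here it is written to a temp copy so the sketch is safe to run anywhere:

```shell
# Write the kubelet override to a temp file standing in for /etc/default/kubelet.
conf=$(mktemp)
printf 'KUBELET_EXTRA_ARGS=--node-ip=192.168.0.34\n' > "$conf"
cat "$conf"

# On the real node, apply the change with:
#   sudo systemctl daemon-reload
#   sudo systemctl restart kubelet
```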

You can check that the nodes have the correct IP addresses with:

kubectl get nodes -o wide 

Which outputs:

NAME      STATUS    ROLES     AGE       VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION      CONTAINER-RUNTIME
master2   Ready     master    19h       v1.11.0   192.168.0.33   <none>        Ubuntu 17.10   4.13.0-46-generic   docker://1.13.1
node      Ready     <none>    18h       v1.11.0   192.168.0.34   <none>        Ubuntu 17.10   4.13.0-16-generic   docker://1.13.1

TL;DR

Test whether any two nodes in the cluster are using the same IP address by running "kubectl get nodes -o wide". If so, set a new fixed IP, or, if DHCP is enabled, request a new lease by executing:

sudo dhclient -r yourNetworkInterface

sudo dhclient yourNetworkInterface
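The duplicate-IP check above can be sketched as a short pipeline. The node names and addresses below are sample data standing in for real `kubectl get nodes -o wide` output; with a live cluster you would pipe the command's output in instead:

```shell
# Sample output of `kubectl get nodes -o wide` (INTERNAL-IP is column 6).
sample='NAME      STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP
master2   Ready    master   19h   v1.11.0   10.0.2.15     <none>
node      Ready    <none>   18h   v1.11.0   10.0.2.15     <none>'

# Skip the header row, extract the INTERNAL-IP column, and print any
# address that appears more than once.
echo "$sample" | awk 'NR>1 {print $6}' | sort | uniq -d
```

If this prints anything, two nodes share an internal IP and `kubectl exec` cannot reach the right kubelet.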

Problem

After joining a new worker node to a running Kubernetes cluster, it was no longer possible to run "kubectl exec" against containers created after the node joined; instead, it failed with the error message "error: unable to upgrade connection: pod does not exist".


kubectl get pods -A showed that the problematic pod was running on the newly joined node. 


Cause 

kubectl get nodes -o wide revealed that the master and worker node were using the same IP address.


Solution 

Assign a new IP address to the new node: use dhclient if DHCP is enabled; otherwise, set a new fixed IP using netplan.
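For the netplan route, a minimal config sketch looks like the following. The interface name (enp0s8), addresses, and gateway are illustrative assumptions; adjust them to your environment:

```yaml
# /etc/netplan/01-netcfg.yaml -- illustrative only; interface name,
# address, and gateway are assumptions for this example.
network:
  version: 2
  ethernets:
    enp0s8:
      dhcp4: no
      addresses: [192.168.0.34/24]
      gateway4: 192.168.0.1
```

Apply the change with `sudo netplan apply`.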



On the Ubuntu/Debian family, append the IP of the worker node to the /etc/default/kubelet file:

cat /etc/default/kubelet
KUBELET_EXTRA_ARGS="--node-ip=192.168.56.XX"

    Just ensure that this is the IP of the network interface where the master can "see" the worker node. In my case, at first the two nodes had the same "internal IP" (on VirtualBox machines it's 10.0.2.15 by default), which seems to prevent `kubectl exec` from talking to the right node (since all it saw was 10.0.2.15) – Hieu Thai Jul 29 '22 at 07:54

In my case, I was using an Ubuntu VirtualBox image ('bento/ubuntu-20.04') and faced the same issue. The fix was to add the following extra line to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:

Environment="KUBELET_EXTRA_ARGS=--node-ip=172.16.0.xxx"

The actual IP should be the address of the ethernet interface attached to your VM; the control-plane (cp) node and the worker node must use different IPs. After the change, reload and restart the kubelet service:

sudo systemctl daemon-reload
sudo systemctl restart kubelet.service

Repeat on the other node.

To confirm, run "kubectl get node -o wide" from the control-plane node and make sure the new internal IPs are associated with both nodes.
