
kubeadm join on the slave finds the master, but the master never sees the slave:

user1@ubuntu:~$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
ubuntu    Ready     master    1h        v1.8.0

user1@ubuntu:~$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   etcd-ubuntu                             1/1       Running   0          1h
kube-system   kube-apiserver-ubuntu                   1/1       Running   0          1h
kube-system   kube-controller-manager-ubuntu          1/1       Running   0          1h
kube-system   kube-dns-545bc4bfd4-576sl               3/3       Running   0          1h
kube-system   kube-flannel-ds-fwqct                   1/1       Running   0          1h
kube-system   kube-proxy-fkk6m                        1/1       Running   0          1h
kube-system   kube-scheduler-ubuntu                   1/1       Running   0          1h
kube-system   kubernetes-dashboard-7f9dbb8685-b5gmh   1/1       Running   0          26m

user1@ubuntu:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:46:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
– economy
  • What is the terminal output when you run `kubeadm join ...` on the slave node? Is the kubelet service running on your slave node? – ichbinblau Oct 16 '17 at 01:59
  • Yes, the kubelet is running; the join reports [discovery] Successfully established connection... on the slave. I get a kubelet 'error syncing pod kube-dns' with Flannel, Weave, and Calico alike, and 'CrashLoopBackOff' for kube-dns on the master. I've seen others report a similar issue, but no solution was found. – bladerunner512 Oct 17 '17 at 14:43

2 Answers


From your output, the compute node was never registered; otherwise it would at least appear in the node list, even if only in the "NotReady" state.
Please provide the output of your kubeadm join ... as well as the corresponding kubelet logs, and make sure no firewall on the compute node is blocking the kubelet port.
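
For reference, those checks could look something like this on the compute node (a sketch, assuming a systemd-managed kubelet on its default port 10250 and ufw/iptables as the firewall; adjust for your setup):

# follow the kubelet logs (assumes the kubelet runs as a systemd unit)
slave@ubuntu:~$ sudo journalctl -u kubelet -f

# verify the kubelet is listening on its default port (10250)
slave@ubuntu:~$ sudo ss -tlnp | grep 10250

# look for firewall rules that could block traffic to/from the master
slave@ubuntu:~$ sudo iptables -L -n
slave@ubuntu:~$ sudo ufw status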

– onorua

The join appears to be successful:

slave@ubuntu:~# kubeadm join --token 888fb2.176443c7da1f21b9 192.168.80.158:6443 --discovery-token-ca-cert-hash sha256:43d13c540a4c70686b5a3bd54a0514eddcaf5d0f5876f5b3a059eee4de833609
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.09.0-ce. Max validated version: 17.03
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "192.168.80.158:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.80.158:6443"
[discovery] Requesting info from "https://192.168.80.158:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.80.158:6443"
[discovery] Successfully established connection with API Server "192.168.80.158:6443"
[bootstrap] Detected server version: v1.8.1
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)

Node join complete:
* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
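
As a cross-check on the master, these standard kubectl commands (a sketch, not from the original thread) show whether the node actually registered and whether its certificate signing request was approved:

# a successfully joined worker should appear here, even if NotReady
user1@ubuntu:~$ kubectl get nodes -o wide

# inspect certificate signing requests from joining kubelets
user1@ubuntu:~$ kubectl get csr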

However, the kubelet reports 'Error syncing pod kube-dns':

MESSAGE=I1017 08:58:20.458189 92898 kuberuntime_manager.go:748] Back-off 5m0s restarting failed container=dnsmasq pod=kube-dns...
MESSAGE=E1017 08:58:20.458293 92898 pod_workers.go:182] Error syncing pod b6b29930-aece-11e7-9319-000c2941e694 ("kube-dns...
MESSAGE=, failed to "StartContainer" for "dnsmasq" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=dnsmasq pod=kube-dns
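
A possible next step, not from the original thread: pull the logs and events of the failing dnsmasq container on the master to see why it keeps crashing (the pod name is taken from the kubectl get pods output above):

# logs of the dnsmasq container inside the kube-dns pod
user1@ubuntu:~$ kubectl -n kube-system logs kube-dns-545bc4bfd4-576sl -c dnsmasq

# logs of the previous (crashed) instance, if the container has already restarted
user1@ubuntu:~$ kubectl -n kube-system logs kube-dns-545bc4bfd4-576sl -c dnsmasq --previous

# events and restart reasons for the pod
user1@ubuntu:~$ kubectl -n kube-system describe pod kube-dns-545bc4bfd4-576sl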