
I am new to Kubernetes, so some of my questions may be basic.

My setup: 2 VMs (both running Ubuntu 16.04.2)

Kubernetes Version: 1.7.1 on both the Master Node (kube4local) and the Slave Node (kube5local).

My Steps:

1. On both the Master and Slave Nodes, installed the required Kubernetes packages (kubelet, kubeadm, kubectl, kubernetes-cni) and the Docker package (docker.io); a rough sketch of that install is shown right below.
2. On the Master Node, ran kubeadm init (its output follows the sketch):
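
The install itself was not captured in a transcript; the sketch below assumes the standard kubeadm apt repository for Ubuntu 16.04 (xenial), which is the usual way to get the packages listed above:

# Run on both nodes: add the Kubernetes apt repo and install the packages
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y docker.io kubelet kubeadm kubectl kubernetes-cni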

vagrant@kube4local:~$ sudo kubeadm init  
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 1.12
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [kube4local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.15]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 1051.552012 seconds
[token] Using token: 3c68b6.8c3f8d5a0a29a3ac
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 3c68b6.8c3f8d5a0a29a3ac 10.0.2.15:6443

vagrant@kube4local:~$ mkdir -p $HOME/.kube
vagrant@kube4local:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
vagrant@kube4local:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
vagrant@kube4local:~$ sudo kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount "weave-net" created
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
daemonset "weave-net" created

On the Slave Node:

Note: I am able to do a basic ping test, and ssh and scp commands between the master node running in VM1 and the slave node running in VM2 work fine.
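
Roughly, that check looked like the following, using the host-only IPs (paraphrased, not a saved transcript):

ping -c 3 192.168.56.105                     # master -> slave
ssh vagrant@192.168.56.105 hostname          # prints the slave's hostname
scp /etc/hosts vagrant@192.168.56.105:/tmp/  # simple file copy test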

Ran the join command. Output of the join command on the slave node:

vagrant@kube5local:~$ sudo kubeadm join --token 3c68b6.8c3f8d5a0a29a3ac 10.0.2.15:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 1.12
[preflight] WARNING: hostname "" could not be reached
[preflight] WARNING: hostname "" lookup : no such host
[preflight] Some fatal errors occurred:
        hostname "" a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`

Why do I get this error? My /etc/hosts is correct:

[preflight] WARNING: hostname "" could not be reached
[preflight] WARNING: hostname "" lookup : no such host

Output of status commands on the Master Node:

vagrant@kube4local:~$ sudo kubectl cluster-info
Kubernetes master is running at https://10.0.2.15:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

vagrant@kube4local:~$ sudo kubectl get nodes
NAME         STATUS    AGE       VERSION
kube4local   Ready     26m       v1.7.1
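
Only the master is listed. If a join ever succeeds but the node still does not appear here, the usual next step is to read the kubelet log on that node (a generic command, assuming the systemd unit installed by the kubeadm packages):

sudo journalctl -u kubelet --no-pager -n 50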

Output of ifconfig on Master Node(kube4local):

vagrant@kube4local:~$ ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:3a:c4:00:50
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

enp0s3    Link encap:Ethernet  HWaddr 08:00:27:19:2c:a4
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe19:2ca4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:260314 errors:0 dropped:0 overruns:0 frame:0
          TX packets:58921 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:334293914 (334.2 MB)  TX bytes:3918136 (3.9 MB)

enp0s8    Link encap:Ethernet  HWaddr 08:00:27:b8:ef:b6
          inet addr:192.168.56.104  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:feb8:efb6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:247 errors:0 dropped:0 overruns:0 frame:0
          TX packets:154 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:36412 (36.4 KB)  TX bytes:25999 (25.9 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:19922 errors:0 dropped:0 overruns:0 frame:0
          TX packets:19922 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:1996565 (1.9 MB)  TX bytes:1996565 (1.9 MB)

Output of /etc/hosts on Master Node(kube4local):

vagrant@kube4local:~$ cat /etc/hosts
192.168.56.104 kube4local   kube4local
192.168.56.105 kube5local   kube5local
127.0.1.1       vagrant.vm      vagrant
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Output of ifconfig on Slave Node(kube5local):

vagrant@kube5local:~$ ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:bb:37:ab:35
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

enp0s3    Link encap:Ethernet  HWaddr 08:00:27:19:2c:a4
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe19:2ca4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:163514 errors:0 dropped:0 overruns:0 frame:0
          TX packets:39792 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:207478954 (207.4 MB)  TX bytes:2660902 (2.6 MB)

enp0s8    Link encap:Ethernet  HWaddr 08:00:27:6a:f0:51
          inet addr:192.168.56.105  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe6a:f051/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:195 errors:0 dropped:0 overruns:0 frame:0
          TX packets:151 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:30463 (30.4 KB)  TX bytes:26737 (26.7 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Output of /etc/hosts on Slave Node(kube5local):

vagrant@kube5local:~$ cat /etc/hosts
192.168.56.104 kube4local   kube4local
192.168.56.105 kube5local   kube5local
127.0.1.1       vagrant.vm      vagrant
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

1 Answer


Nat, this is a bug in version v1.7.1. You can use version v1.7.0 instead, or skip the pre-flight checks:

kubeadm join --skip-preflight-checks 
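
With the token and master address from the question, the full command would be along these lines (substitute your own token and API server address):

sudo kubeadm join --skip-preflight-checks --token 3c68b6.8c3f8d5a0a29a3ac 10.0.2.15:6443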

You can refer to this thread for more details:

kubernets v1.7.1 kubeadm join hostname "" could not be reached error

sfgroups
  • Thx, that worked for me. But there was another problem: after adding the slave to the cluster, kubectl doesn't show the slave node, only the master: `vagrant@kube4local:~$ kubectl get nodes NAME STATUS AGE VERSION kube4local Ready 1h v1.7.1` – nat Jul 21 '17 at 13:03
  • You're using Vagrant, so the default route should be on the `enp0s8` interface. Can you show the default routes on both machines? – sfgroups Jul 21 '17 at 13:12
  • Of course. Master: `enp0s8 Link encap:Ethernet HWaddr 08:00:27:b8:ef:b6 inet addr:192.168.56.104 Bcast:192.168.56.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:feb8:efb6/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:156 errors:0 dropped:0 overruns:0 frame:0 TX packets:8 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:15711 (15.7 KB) TX bytes:648 (648.0 B)` – nat Jul 21 '17 at 13:19
  • Slave: `enp0s8 Link encap:Ethernet HWaddr 08:00:27:6a:f0:51 inet addr:192.168.56.105 Bcast:192.168.56.255 Mask:255.255.255.0 inet6 addr: fe80::a00:27ff:fe6a:f051/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:835 errors:0 dropped:0 overruns:0 frame:0 TX packets:366 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:142447 (142.4 KB) TX bytes:54315 (54.3 KB)` – nat Jul 21 '17 at 13:20
  • this is not showing the routes. post this command output `ip r` – sfgroups Jul 21 '17 at 13:38
  • Good. One more thing: when you reboot, it will go back to the other interface, so make sure you change it again after a reboot. – sfgroups Jul 21 '17 at 14:39
  • Yes, after a reboot the interface changes. Could you please explain how to properly configure the interface so that the nodes can see each other? (See the sketch after these comments.) – nat Jul 25 '17 at 11:13
  • Master route: `vagrant@kube4local:~$ ip r default via 192.168.56.104 dev enp0s8 10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15 10.32.0.0/12 dev weave proto kernel scope link src 10.32.0.1 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 192.168.56.0/24 dev enp0s8 proto kernel scope link src 192.168.56.104` – nat Jul 25 '17 at 11:13
  • Slave route: `vagrant@kube5local:~$ ip r 10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 192.168.56.0/24 dev enp0s8 proto kernel scope link src 192.168.56.105` – nat Jul 25 '17 at 11:14
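
A common workaround for this Vagrant two-NIC layout (enp0s3 is the NAT interface, enp0s8 the host-only one), offered here as an assumption rather than something confirmed in this thread, is to pin the API server and the kubelets to the host-only IPs so the nodes talk over 192.168.56.x:

# On the master: advertise the API server on the host-only IP (assumed fix, not from the thread)
sudo kubeadm init --apiserver-advertise-address=192.168.56.104

# On each node: make the kubelet register with its host-only IP.
# 10-kubeadm.conf is the systemd drop-in shipped by the kubeadm deb packages,
# whose ExecStart already expands $KUBELET_EXTRA_ARGS.
echo 'Environment="KUBELET_EXTRA_ARGS=--node-ip=192.168.56.105"' | sudo tee -a /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
sudo systemctl daemon-reload && sudo systemctl restart kubelet

Nodes would then join against the host-only address, e.g. kubeadm join --token <token> 192.168.56.104:6443, instead of the NAT address 10.0.2.15.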