
I have a Kubernetes cluster (1.14.0) running in Vagrant and have installed Calico.

I have installed the Kubernetes dashboard. When I use kubectl proxy to visit the dashboard, I get:

Error: 'dial tcp 192.168.1.4:8443: connect: connection refused'
Trying to reach: 'https://192.168.1.4:8443/'
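
For reference, this is how I reach the dashboard through the proxy (the URL below is the standard proxy path for a dashboard service deployed in kube-system; adjust the service name if yours differs):

kubectl proxy
# then open in a browser:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/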

Here are my pods (the dashboard pod is restarting frequently):

$ kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-etcd-cj928                          1/1     Running   0          11m
calico-node-4fnb6                          1/1     Running   0          18m
calico-node-qjv7t                          1/1     Running   0          20m
calico-policy-controller-b9b6749c6-29c44   1/1     Running   1          11m
coredns-fb8b8dccf-jjbhk                    1/1     Running   0          20m
coredns-fb8b8dccf-jrc2l                    1/1     Running   0          20m
etcd-k8s-master                            1/1     Running   0          19m
kube-apiserver-k8s-master                  1/1     Running   0          19m
kube-controller-manager-k8s-master         1/1     Running   0          19m
kube-proxy-8mrrr                           1/1     Running   0          18m
kube-proxy-cdsr9                           1/1     Running   0          20m
kube-scheduler-k8s-master                  1/1     Running   0          19m
kubernetes-dashboard-5f7b999d65-nnztw      1/1     Running   3          2m11s

Logs of the dashboard pod:

2019/03/30 14:36:21 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service account's configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
Refer to our FAQ and wiki pages for more information: https://github.com/kubernetes/dashboard/wiki/FAQ

I can telnet from both master and nodes to 10.96.0.1:443.
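
Note that this only tests from the host's network; a closer reproduction of what the dashboard pod experiences would be a check from inside the pod network, something like (busybox here is just a throwaway test image, not part of my setup):

kubectl run nettest --rm -it --image=busybox --restart=Never -- telnet 10.96.0.1 443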

What is misconfigured? The rest of the cluster seems to work fine, although I see these logs from the kubelet:

failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml"
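
That file is normally written by kubeadm init / kubeadm join, and the kubelet is pointed at it by a systemd drop-in (the paths below are the kubeadm defaults; yours may differ):

ls -l /var/lib/kubelet/config.yaml
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf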

kubelet seems to run fine on the master. The cluster was created with this command:

kubeadm init --apiserver-advertise-address="192.168.50.10" --apiserver-cert-extra-sans="192.168.50.10" --node-name k8s-master --pod-network-cidr=192.168.0.0/16
DenCowboy

3 Answers


You should define your hostname in /etc/hosts:

# hostname
YOUR_HOSTNAME
# nano /etc/hosts
YOUR_IP YOUR_HOSTNAME
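
For the cluster in the question that would look something like this (IP and hostname taken from the kubeadm init command in the question; add an entry for each node):

192.168.50.10 k8s-master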

If you have set the hostname on your master but it still did not work, try:

# systemctl stop kubelet
# systemctl stop docker
# iptables --flush
# iptables -t nat --flush
# systemctl start docker
# systemctl start kubelet

You should also install the dashboard before joining the worker nodes.

Disable your firewall as well.

And check that you have enough free RAM.

yasin lachini
  • Adding the hostname worked; can you explain why I had to do this? – DenCowboy Mar 30 '19 at 21:34
  • In Kubernetes everything is defined by name, because when a pod is destroyed it gets a new IP, so we have to define our hostnames; CoreDNS works with hostnames, not IPs. If you had checked your CoreDNS logs earlier, you would have seen an error in them. – yasin lachini Mar 31 '19 at 06:22
  • I am still having trouble with this. That whole list is fine, except I've already joined the worker nodes. Any idea how I can still set up the dashboard with the worker nodes connected? – Matthew Vine Dec 17 '21 at 19:49

Exclude the --node-name parameter from the kubeadm init command.

Try this command:

kubeadm init --apiserver-advertise-address=$(hostname -i) --apiserver-cert-extra-sans="192.168.50.10" --pod-network-cidr=192.168.0.0/16
P Ekambaram

For me, the issue was that I needed to create a NetworkPolicy that allowed egress traffic to the Kubernetes API.
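
A minimal sketch of such a policy, assuming the dashboard runs in kube-system with the label k8s-app: kubernetes-dashboard (the policy name is made up, and 10.96.0.1 is the ClusterIP from the question; depending on your CNI you may need to allow the API server's real endpoint IP instead of the service IP):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dashboard-apiserver-egress   # hypothetical name
  namespace: kube-system
spec:
  podSelector:
    matchLabels:
      k8s-app: kubernetes-dashboard        # assumed dashboard pod label
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.96.0.1/32             # kubernetes service ClusterIP from the question
      ports:
        - protocol: TCP
          port: 443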

bmoe24x