
I am getting a connection refused error while fetching the nodes of my Kubernetes cluster from the master. I have tried all the debugging methods I could find on the internet, but none of them work. My cluster has one master node and two worker nodes.

  1. kubectl get nodes

    master@master-vm:~$ kubectl get nodes
    The connection to the server X.X.X.X:6443 was refused - did you specify the right host or port?
    master@master-vm:~$
    
  2. Kubelet status

    master@master-vm:~/.kube$ systemctl status kubelet
    Active: active (running) since Tue 2021-03-16 19:53:33 IST; 20s ago
    kubelet.go:2263] node "master-vm" not found
    
  3. Docker Status

    master@master-vm:~$ systemctl status docker
     ● docker.service - Docker Application Container Engine
    Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
    Active: active (running) since Tue 2021-03-16 19:37:59 IST;
    
  4. Port 6443 details in netstat

    master@master-vm:~$ sudo netstat -pnlt | grep 6443
    [sudo] password for master:
    tcp6      76      0 :::6443                 :::*                    LISTEN      1107/kube-apiserver
    master@master-vm:~$
    
  5. Swap is OFF

    master@master-vm:~$ sudo swapoff -a
    master@master-vm:~$ free -m
                  total        used        free      shared  buff/cache   available
    Mem:           3936        1159        1468          11        1307        2545
    Swap:             0           0           0
    master@master-vm:~$
    
  6. Kubectl version

    master@master-vm:~$ kubectl version
    Client Version: version.Info{Major:"1", Minor:"17", 
    GitVersion:"v1.17.4",
    GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", 
    GitTreeState:"clean", 
    BuildDate:"2020-03-12T21:03:42Z", 
    GoVersion:"go1.13.8", 
    Compiler:"gc", 
    Platform:"linux/amd64"}
    
    The connection to the server X.X.X.X:6443 was refused - did you specify the right host or port?
    master@master-vm:~$
    
  7. Firewall Status

    master@master-vm:~$ sudo ufw status verbose #ubuntu
    Status: inactive
    master@master-vm:~$ sudo ufw disable #ubuntu
    Firewall stopped and disabled on system startup
    master@master-vm:~$
    
  8. /etc/hosts

    127.0.0.1       localhost
    127.0.1.1       master-vm
    
  9. Kubeadm Version

     master@master-vm:~$ kubeadm version
     kubeadm version: &version.Info{Major:"1", Minor:"17", 
     GitVersion:"v1.17.4", 
     GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", 
     GitTreeState:"clean", 
     BuildDate:"2020-03-12T21:01:11Z", 
     GoVersion:"go1.13.8", 
     Compiler:"gc", 
     Platform:"linux/amd64"}
     master@master-vm:~$ 
    
  10. Kubectl config view

     master@master-vm:~/.kube$ kubectl config view
     apiVersion: v1
     clusters:
     - cluster:
         certificate-authority-data: DATA+OMITTED
         server: https://X.X.X.X:6443
       name: kubernetes
     contexts:
     - context:
         cluster: kubernetes
         user: kubernetes-admin
       name: kubernetes-admin@kubernetes
     current-context: kubernetes-admin@kubernetes
     kind: Config
     preferences: {}
     users:
     - name: kubernetes-admin
       user:
         client-certificate-data: REDACTED
         client-key-data: REDACTED
     master@master-vm:~/
    

Any suggestions on how to rectify the above error?

Som

2 Answers


It looks to me like the API server is listening on IPv6 (notice the tcp6 in the netstat output).

Try initializing your cluster with

kubeadm init --apiserver-advertise-address=<private-ipv4 of master host>

--apiserver-advertise-address - the IP address the API server will advertise it is listening on. If not set, the default network interface will be used.
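
For example, a minimal sketch of re-initializing with an explicit advertise address (192.168.1.10 is only a placeholder for your master's private IPv4, and kubeadm reset wipes the existing control-plane state):

    # Find the master's private IPv4 (interface names vary, e.g. ens33/eth0)
    ip -4 addr show

    # Wipe the current control plane and re-initialize with the explicit address
    sudo kubeadm reset
    sudo kubeadm init --apiserver-advertise-address=192.168.1.10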

If this does not help, you may want to have a look at the API server logs. Either use docker logs to get them, or:

cat /var/log/pods/kube-system_kube-apiserver-master-vm_xxxxxxx/kube-apiserver/0.log
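
For the docker logs route, a rough sketch (the container ID is a placeholder taken from the docker ps output; the kubelet journal is another place where the restart reason usually shows up):

    # Find the (possibly exited) kube-apiserver container and dump its logs
    sudo docker ps -a | grep kube-apiserver
    sudo docker logs <container-id>

    # The kubelet journal often explains why the static pod keeps restarting
    sudo journalctl -u kubelet --no-pager | tail -n 50
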
Matt

Try running the following commands on the master node.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Next, try executing kubectl get nodes on the same master node; it should work.
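
If it still does not work, a quick sanity check (assuming no KUBECONFIG override is set elsewhere) is to confirm that kubectl is actually reading the copied config:

    # Should be empty or point to $HOME/.kube/config
    echo $KUBECONFIG

    # Should print kubernetes-admin@kubernetes
    kubectl config current-context

    kubectl get nodes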

skogul1997