
I am working on setting up a multi-node Kubernetes cluster spanning several hardware servers. I am using Calico and kubeadm.

I used Vagrant with Ansible and VirtualBox to set up nodes across the network on several hardware servers. It is working: the nodes joined the master with the join command using kubeadm, and the master recognizes the remote nodes.

vagrant@Server-1-MASTER:~$ kubectl get nodes
NAME                STATUS   ROLES    AGE     VERSION
master              Ready    master   3h37m   v1.18.2
server-1-worker-1   Ready    <none>   3h23m   v1.18.2
server-2-worker-1   Ready    <none>   171m    v1.18.2
server-2-worker-2   Ready    <none>   41m     v1.18.2
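
For reference, the workers were joined with the standard kubeadm flow, roughly as follows; the address, token, and hash below are placeholders, not the real values:

# on the master: print a fresh join command
kubeadm token create --print-join-command

# on each worker: run the printed command
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>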

I am now facing a configuration issue that I am trying to fix in order to set up rules and access for the worker nodes.

Issue: on the workers, the following error occurs:

no configuration has been provided, try setting KUBERNETES_MASTER environment variable

It happens with any kubectl command:

vagrant@Server-1-WORKER-1:~$ kubectl get pods
error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable

vagrant@Server-1-WORKER-1:~$ kubectl get ns
error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable

And the environment variable is empty:

vagrant@Server-1-WORKER-1:~$ echo $KUBECONFIG


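As far as I understand, kubectl looks at $KUBECONFIG first and then falls back to ~/.kube/config, so the error should mean that neither is available on the worker. A quick diagnostic, assuming default paths:

echo $KUBECONFIG          # empty, as shown above
echo $KUBERNETES_MASTER   # also unset
ls ~/.kube/config         # kubectl's default config location; presumably missing here
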
Could someone help me fix this problem and set the right variable value so that I get a complete working cluster?

Thank you.

Mohamed Zouari

2 Answers


Kubeadm does not automatically set up a kubeconfig file on the worker nodes. Copy the kubeconfig file located at /etc/kubernetes/admin.conf from the master node over to the worker nodes and set the KUBECONFIG environment variable to point to the path of the config file.
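
A sketch of those two steps, assuming the master is reachable as "master" over SSH with the vagrant user (admin.conf is root-owned, so make a readable copy on the master first):

# on the master: make a copy the vagrant user can read
sudo cp /etc/kubernetes/admin.conf /home/vagrant/admin.conf
sudo chown vagrant:vagrant /home/vagrant/admin.conf

# on the worker: fetch the file and point kubectl at it
scp vagrant@master:/home/vagrant/admin.conf $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
kubectl get nodes

Alternatively, copy the file to ~/.kube/config on the worker, which kubectl reads by default without any environment variable.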

Arghya Sadhu

Copy /etc/kubernetes/admin.conf from the master node to the same location on the worker node.
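
One way to do the copy, assuming the master is reachable as "master" and a vagrant-readable copy of the file exists there (as in the answer above, since admin.conf is root-owned):

# on the worker: fetch via a temp location, then move it into place as root
scp vagrant@master:/home/vagrant/admin.conf /tmp/admin.conf
sudo mkdir -p /etc/kubernetes
sudo mv /tmp/admin.conf /etc/kubernetes/admin.conf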

Then export the variable:

export KUBECONFIG=/etc/kubernetes/admin.conf
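
Note that the export only lasts for the current shell; adding the line to ~/.bashrc keeps it across logins. Afterwards, a command such as kubectl get nodes on the worker should list the nodes instead of erroring.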