
When I run the `kubectl version` command, I get the following error message:

kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout

How do I resolve this?

  • Hi, can you validate that your client is requesting the correct API server with the following command: `kubectl config view`? – Suresh Vishnoi Mar 13 '18 at 15:40
  • I guess so. I am a novice and learning K8s. Here's the output from config: ` apiVersion: v1 clusters: - cluster: certificate-authority-data: REDACTED server: https://192.168.178.24:6443 name: kubernetes contexts: - context: cluster: kubernetes user: kubernetes-admin name: kubernetes-admin@kubernetes current-context: kubernetes-admin@kubernetes kind: Config preferences: {} users: - name: kubernetes-admin user: client-certificate-data: REDACTED client-key-data: REDACTED` – IT_novice Mar 13 '18 at 16:03
  • Hi, are you running your cluster on minikube or AKS? – Suresh Vishnoi Mar 13 '18 at 16:05
  • I am running a cluster on Raspberry Pi: one master and two nodes. – IT_novice Mar 13 '18 at 16:06
  • `kubectl config use-context kubernetes` will help you – Suresh Vishnoi Mar 13 '18 at 16:06
  • Thanks. I had earlier installed minikube and just realized that I didn't uninstall it properly. Thanks again for your help. – IT_novice Mar 13 '18 at 16:08
  • So it's working now? I think when you delete minikube it does not remove the data from kubeconfig. – Suresh Vishnoi Mar 13 '18 at 16:08

15 Answers


You can get relevant information about the client-server status by using the following command.

kubectl config view 

Now you can update or set k8s context accordingly with the following command.

kubectl config use-context CONTEXT-CHOSEN-FROM-PREVIOUS-COMMAND-OUTPUT

You can take further action on the kubeconfig file; the following command will provide all the necessary information.

kubectl config --help
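Under the hood, `use-context` simply rewrites the `current-context` field of your kubeconfig file. Here is a self-contained sketch of that mechanism using plain shell tools on a throwaway file (the context names are illustrative, taken from typical minikube/kubeadm setups):

```shell
#!/bin/sh
# Create a throwaway kubeconfig with two contexts (illustrative names).
cat > /tmp/demo-kubeconfig <<'EOF'
apiVersion: v1
kind: Config
current-context: minikube
contexts:
- context:
    cluster: minikube
  name: minikube
- context:
    cluster: kubernetes
  name: kubernetes-admin@kubernetes
EOF

# List the available context names (roughly what `kubectl config get-contexts` shows).
grep '^  name: ' /tmp/demo-kubeconfig | awk '{print $2}'

# `kubectl config use-context kubernetes-admin@kubernetes` effectively rewrites
# the current-context field; emulate that with sed (portably, via a temp file).
sed 's/^current-context: .*/current-context: kubernetes-admin@kubernetes/' \
  /tmp/demo-kubeconfig > /tmp/demo-kubeconfig.new
mv /tmp/demo-kubeconfig.new /tmp/demo-kubeconfig

grep '^current-context' /tmp/demo-kubeconfig
```

In practice you would of course run `kubectl config use-context <name>` directly; the sketch only illustrates what changes in the file.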

You have to first run

minikube start

in your terminal. This will do the following for you:

 Restarting existing virtualbox VM for "minikube" ...
⌛  Waiting for SSH access ...
  "minikube" IP address is 192.168.99.100
  Configuring Docker as the container runtime ...
  Version of container runtime is 18.06.3-ce
⌛  Waiting for image downloads to complete ...
✨  Preparing Kubernetes environment ...
  Pulling images required by Kubernetes v1.14.1 ...
  Relaunching Kubernetes v1.14.1 using kubeadm ... 
⌛  Waiting for pods: apiserver proxy etcd scheduler controller dns
  Updating kube-proxy configuration ...
  Verifying component health ......
  kubectl is now configured to use "minikube"
  Done! Thank you for using minikube!

If you use minikube, you should run

kubectl config use-context minikube

If you use the latest Docker Desktop, which comes with Kubernetes, you should run

kubectl config use-context docker-for-desktop


I had the same issue when I tried to use the Kubernetes that is installed with Docker. It turned out that it was not enabled by default.

First I enabled Kubernetes in the Docker options, and then I changed the context to docker-for-desktop:

kubectl config get-contexts
kubectl config use-context docker-desktop

It solved the issue.


I was facing the same issue on Ubuntu 18.04.1 LTS.

The solution provided here worked for me.

Just putting the same data here:

  1. Get current cluster name and Zone:

    gcloud container clusters list

  2. Configure Kubernetes to use your current cluster:

    gcloud container clusters get-credentials [cluster name] --zone [zone]

Hope it helps.

  • In gcloud, when connecting to a cluster through the UI, `gcloud container clusters get-credentials ... --zone ... --project ..` is the first command executed. After that, kubectl works. – Chris Nov 02 '19 at 13:21
  • Even if the cluster name and zone are configured correctly as shown in the output of `clusters list`, please run the second command again. This happened to me when I created a new cluster. – asim Dec 08 '19 at 15:41
  • This saved my butt. Here are some more details on why: if I do a `kubectl config view` I can see that my auth provider was expired (`expiry: "--yesterday--"`). Takeaway: on gcloud, kube auth expiration does not surface in an intelligible manner. – Kevin Won Mar 04 '21 at 22:07
  • Also works with `--region [region]` – Chris Chiasson Sep 13 '22 at 23:56

This problem occurs because of minikube. Restarting minikube will solve the problem. Run the commands below and it will work:

minikube stop
minikube delete
minikube start

I was facing the same problem when accessing the GKE master from Google Cloud Shell.

Then I followed this GCloud doc to solve it.

  1. Open GCloud Shell

  2. Get External IP of the current GCloud Shell with:

    dig +short myip.opendns.com @resolver1.opendns.com

  3. Add this External IP into the "Master authorized networks" section of the GKE cluster - with a CIDR suffix of /32

After that, running kubectl get nodes from the GCloud Shell worked right away.

  • I tried it but it results in the error: "Invalid master authorized networks: network "35.233.xxx.xxx/32" is not a reserved network, which is required for private endpoints." – ekkis Jun 19 '21 at 00:14

I got a similar problem when I ran

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout

Here's what I tried and what finally worked.

First I installed Docker Desktop on Mac (version 2.0.0.3). Then I installed kubectl with

$ brew install kubectl
.....
==> Pouring kubernetes-cli-1.16.0.high_sierra.bottle.tar.gz
Error: The `brew link` step did not complete successfully
The formula built, but is not symlinked into /usr/local
Could not symlink bin/kubectl
Target /usr/local/bin/kubectl
already exists. You may want to remove it:
  rm '/usr/local/bin/kubectl'

To force the link and overwrite all conflicting files:
  brew link --overwrite kubernetes-cli

To list all files that would be deleted:
  brew link --overwrite --dry-run kubernetes-cli

Possible conflicting files are:
/usr/local/bin/kubectl -> /Applications/Docker.app/Contents/Resources/bin/kubectl
.....

That doesn't matter; we have already got kubectl. Then I installed minikube with

$ brew cask install minikube
...
==> Linking Binary 'minikube-darwin-amd64' to '/usr/local/bin/minikube'.
  minikube was successfully installed!

Start minikube for the first time (VirtualBox not installed):

$ minikube start
  minikube v1.4.0 on Darwin 10.13.6
  Downloading VM boot image ...
    > minikube-v1.4.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s
    > minikube-v1.4.0.iso: 135.73 MiB / 135.73 MiB [-] 100.00% 7.75 MiB p/s 18s
  Creating virtualbox VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
  Retriable failure: create: precreate: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path
...
  Unable to start VM
❌  Error: [VBOX_NOT_FOUND] create: precreate: VBoxManage not found. Make sure VirtualBox is installed and VBoxManage is in the path
  Suggestion: Install VirtualBox, or select an alternative value for --vm-driver
  Documentation: https://minikube.sigs.k8s.io/docs/start/
⁉️   Related issues:
    ▪ https://github.com/kubernetes/minikube/issues/3784

Install VirtualBox, then start minikube a second time (VirtualBox installed):

$ minikube start
  13:37:01.006849   35511 cache_images.go:79] CacheImage kubernetesui/dashboard:v2.0.0-beta4 -> /Users/kaka.go/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4 failed: read tcp 10.49.52.206:50350->104.18.125.25:443: read: operation timed out
  Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
E1002 13:37:33.632298   35511 start.go:706] Error caching images:  Caching images for kubeadm: caching images: caching image /Users/kaka.go/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4: read tcp 10.49.52.206:50350->104.18.125.25:443: read: operation timed out
❌  Unable to load cached images: loading cached images: loading image /Users/kaka.go/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4: stat /Users/kaka.go/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta4: no such file or directoryminikube v1.4.0 on Darwin 10.13.6
  Creating virtualbox VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
E1002 
  Downloading kubeadm v1.16.0
  Downloading kubelet v1.16.0
  Pulling images ...
  Launching Kubernetes ... 

  Error starting cluster: timed out waiting to elevate kube-system RBAC privileges: Temporary Error: creating clusterrolebinding: Post https://192.168.99.100:8443/apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings: dial tcp 192.168.99.100:8443: i/o timeout

  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
  https://github.com/kubernetes/minikube/issues/new/choose
❌  Problems detected in kube-addon-manager [b17d460ddbab]:
    error: no objects passeINFO:d  == Kuto apberneply
    error: no objectNsF Op:a == Kubernetssed tes ado appdon ely

Start minikube a third time:

$ minikube start
  minikube v1.4.0 on Darwin 10.13.6
  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
  Using the running virtualbox "minikube" VM ...
⌛  Waiting for the host to be provisioned ...
  Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
  Relaunching Kubernetes using kubeadm ... 

It still got stuck on Relaunching!

I enabled Kubernetes in Docker's Preferences, restarted my Mac, and switched the Kubernetes context to docker-for-desktop.

Oh, kubectl version works this time, but with the docker-for-desktop context:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:25:46Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Start minikube a fourth time (maybe after a system restart):

$ minikube start
  minikube v1.4.0 on Darwin 10.13.6
  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
  Starting existing virtualbox VM for "minikube" ...
⌛  Waiting for the host to be provisioned ...
  Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
  Relaunching Kubernetes using kubeadm ... 
⌛  Waiting for: apiserver proxy etcd scheduler controller dns
  Done! kubectl is now configured to use "minikube"

Finally, it works with the minikube context:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

I checked the firewall port and it was closed; I opened it and it started working.


If you are using Azure and have recently changed your password, try this:

az account clear
az login

After logging in successfully:

az aks get-credentials --name project_name --resource-group resource_group_name

Now when you run

kubectl get nodes

you should see something. Also, make sure you are using the correct kubectl context.


My problem was that I use two virtual networks on my VMs. The network that Kubernetes uses is always the one of the default gateway; however, the communication network between my VMs was the other one.

You can force Kubernetes to use a different network by using the following flags:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-cert-extra-sans=xxx.xxx.xxx.xxx --apiserver-advertise-address=xxx.xxx.xxx.xxx

Replace xxx.xxx.xxx.xxx with the communication IP address of your K8s master.
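The reason the wrong network gets picked is that kubeadm defaults to advertising the IP on the interface of the default route. A small sketch, using a canned `ip route` output and made-up addresses, of how the default-route IP can differ from the IP on the communication network:

```shell
#!/bin/sh
# Sample `ip route` output from a host with two networks (addresses are made up):
routes='default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15
192.168.50.0/24 dev eth1 proto kernel scope link src 192.168.50.10'

# kubeadm's default choice: the source IP of the default route (eth0 here).
default_ip=$(printf '%s\n' "$routes" | awk '/^default/ {for (i=1; i<NF; i++) if ($i == "src") print $(i+1)}')
echo "kubeadm would advertise: $default_ip"

# The address you actually want: the one on the VM-to-VM network (eth1 here).
comm_ip=$(printf '%s\n' "$routes" | awk '/dev eth1/ {for (i=1; i<NF; i++) if ($i == "src") print $(i+1)}')
echo "pass this to --apiserver-advertise-address: $comm_ip"
```

On a real node you would parse the output of `ip route` itself instead of the canned string, and pick the `src` address of the interface that carries your cluster traffic.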


I have two contexts, and I got this error when I was in the wrong one of the two; I switched the context and the error was resolved.

To see your current context: kubectl config current-context

To see the contexts you have: kubectl config view

To switch context: kubectl config use-context context-cluster-name


Adding this here so it can help someone with a similar problem.

In our case, we had to configure our VPC network to export its custom routes for VPC peering “gke-jn7hiuenrg787hudf-77h7-peer” in project “” to the control plane's VPC network.

The control plane's VPC network is already configured to import custom routes. This provides a path for the control plane to send packets back to on-premise resources.


Step 1: Run the following command to see the list of contexts:

kubectl config view

Step 2: Now switch to the context you want to work in.

kubectl config use-context [context-name]

For example:

kubectl config use-context docker-desktop

I faced the same issue; it might be that your IP was not added to the authorized networks list of the Kubernetes cluster. Simply navigate to:

GCP console -> Kubernetes Engine -> Click into the Clusters you wish to interact with

On the target cluster's page, look for:

Control plane authorized networks -> click pencil icon -> Add Authorized Network

Add your External Ip with a CIDR suffix of /32 (xxx.xxx.xxx.xxx/32).

One way to get your external IP on terminal / CMD:

curl -4 ifconfig.co
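A /32 suffix means the range contains exactly one address. A tiny sketch of building the authorized-network entry from an external IP (the address below is a documentation-range placeholder; substitute the output of the curl command):

```shell
#!/bin/sh
# Placeholder external IP; in practice use the output of `curl -4 ifconfig.co`.
ip="203.0.113.7"

# Basic sanity check: only digits and dots should appear in an IPv4 address.
case "$ip" in
  *[!0-9.]*|"") echo "not an IPv4 address: $ip" >&2; exit 1 ;;
esac

# Append /32 so the CIDR covers exactly this one host.
cidr="${ip}/32"
echo "$cidr"
```

The same value can also be set from the CLI with something like `gcloud container clusters update CLUSTER --enable-master-authorized-networks --master-authorized-networks "$cidr"` (check the flags against your gcloud version).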