
I am trying to install a kops cluster on AWS. As a prerequisite, I installed kubectl per these instructions:

https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl

but when I try to verify the installation, I get the error below.

ubuntu@ip-172-31-30-74:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

I am not sure why! I set up a cluster the same way earlier and everything worked fine. Now I wanted to set up a new cluster, but I am stuck on this.

Any help appreciated.

Shruthi Bhaskar
  • kubectl reads information from your ~/.kube/config. If you created a new cluster, ensure you have the proper ~/.kube/config – Koe Jul 10 '18 at 17:49
  • Possible duplicate of [kops - get wrong kubectl context](https://stackoverflow.com/questions/50582788/kops-get-wrong-kubectl-context) – Anton Kostenko Jul 11 '18 at 08:23

3 Answers


Two things:

  1. If every instruction was followed properly and you are still facing the same issue, @VAS's answer might help.
  2. In my case, however, I was trying to verify with kubectl as soon as I had deployed the cluster. Note that, depending on the size of the master and worker nodes, the cluster takes some time to come up.

Once the cluster was up, kubectl was able to communicate with it. As silly as it sounds, I simply waited about 15 minutes until my master was running successfully, and then everything worked fine.
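Instead of waiting blindly, you can poll readiness with `kops validate cluster`. A minimal sketch (the state store bucket and cluster name are placeholders you need to fill in):

```shell
# Point kops at your state store (placeholder bucket name)
export KOPS_STATE_STORE=s3://<somes3bucket>

# "kops validate cluster" exits non-zero until the cluster is healthy,
# so loop until it succeeds
until kops validate cluster --name <your_cluster_name>; do
  echo "Cluster not ready yet, retrying in 30s..."
  sleep 30
done

# At this point kubectl should be able to reach the API server
kubectl get nodes
```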

Shruthi Bhaskar

The connection to the server localhost:8080 was refused - did you specify the right host or port?

This error usually means that your kubectl config is not correct: it either points to the wrong address or contains the wrong credentials.
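To check which config file kubectl is actually reading and what it contains, you can use standard kubectl commands (safe to run even without cluster access):

```shell
# kubectl uses $KUBECONFIG if set, otherwise the default ~/.kube/config
echo "${KUBECONFIG:-$HOME/.kube/config}"

# Inspect the configured clusters, users and contexts
kubectl config view
```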

If you have successfully created a cluster with kops, you just need to export its connection settings to your kubectl config.

kops export kubecfg --name=<your_cluster_name> --config=~/.kube/config

If you want to use a separate config file for this cluster, you can do it by setting the environment variable:

export KUBECONFIG=~/.kube/your_cluster_name.config
kops export kubecfg --name your_cluster_name --config=$KUBECONFIG

You can also create a kubectl config for each team member using KOPS_STATE_STORE:

export KOPS_STATE_STORE=s3://<somes3bucket>
export NAME=<kubernetes.mydomain.com>
kops export kubecfg ${NAME}
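Once the config has been exported, you can sanity-check that kubectl now points at the cluster rather than localhost:8080:

```shell
# The current context should be your cluster name
kubectl config current-context

# cluster-info should print the API server URL of your kops cluster,
# not http://localhost:8080
kubectl cluster-info
```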
VAS

In my particular case, I forgot to configure kubectl after the installation, which resulted in exactly the same symptoms.

More specifically, I forgot to create and populate the config file in the $HOME/.kube directory. You can read about how to do this properly here, but the following should suffice to make the error go away:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
João Matos