
I've been following Kelsey Hightower's Kubernetes the Hard Way which walks you through manually setting up a k8s cluster.

This is not running on minikube - it's running on a remote VPS.

I'm on the step where I set up the k8s control plane.

However, when I try to run a health check against kube-apiserver I get the following:

$ kubectl cluster-info --kubeconfig admin.kubeconfig

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?

I'm not quite sure where to start debugging from here.
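
One direct check that might help separate kubectl problems from API server problems (just a sketch, reusing the CA path from the tutorial's layout) is to curl the API server's /healthz endpoint, bypassing kubectl entirely:

# Should print "ok" if the API server is actually reachable on 127.0.0.1:6443
$ curl --cacert /var/lib/kubernetes/ca.pem https://127.0.0.1:6443/healthz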

Configuration

All of the k8s control plane services (plus etcd) are running:

systemctl status kube-apiserver kube-controller-manager kube-scheduler etcd

# => All 4 return: "active (running)"
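
Since the unit restarts on failure, "active (running)" alone doesn't rule out a crash loop, so it may also be worth pulling the recent API server logs from journald (standard systemd commands, nothing tutorial-specific):

$ sudo journalctl -u kube-apiserver -n 50 --no-pager
# => last 50 log lines from the kube-apiserver unit, including any startup errors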

The kube-apiserver is configured to start with systemd:

 $ cat /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --advertise-address=$INTERNAL_IP_REDACTED \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/audit.log \
  --authorization-mode=Node,RBAC \
  --bind-address=0.0.0.0 \
  --client-ca-file=/var/lib/kubernetes/ca.pem \
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --etcd-cafile=/var/lib/kubernetes/ca.pem \
  --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \
  --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \
  --etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \
  --event-ttl=1h \
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
  --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \
  --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \
  --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \
  --runtime-config='api/all=true' \
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \
  --service-account-signing-key-file=/var/lib/kubernetes/service-account-key.pem \
  --service-account-issuer=https://$EXTERNAL_IP_REDACTED:6443 \
  --service-cluster-ip-range=10.32.0.0/24 \
  --service-node-port-range=30000-32767 \
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
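
For reference, whenever this unit file changes, the usual systemd reload/restart cycle applies (standard systemctl usage, not specific to this guide):

$ sudo systemctl daemon-reload
$ sudo systemctl restart kube-apiserver
$ systemctl status kube-apiserver --no-pager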

The kube-apiserver is definitely running and listening on port 6443:

 $ lsof -iTCP -sTCP:LISTEN -n -P | grep 6443
kube-apis 989442            root    7u  IPv6 9345693      0t0  TCP *:6443 (LISTEN)

Here is the admin.kubeconfig file, which points kubectl at the cluster on 127.0.0.1:6443:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tL...
    server: https://127.0.0.1:6443
  name: kubernetes-the-hard-way
contexts:
- context:
    cluster: kubernetes-the-hard-way
    user: admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: LS0tLS1CRU...
    client-key-data: LS0tLS1C....
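
To see exactly which URL kubectl dials and how the request fails, the same command can be run with higher verbosity (the -v flag is standard kubectl; admin.kubeconfig is the file above):

$ kubectl cluster-info --kubeconfig admin.kubeconfig -v=8
# => logs each HTTP request/response, including the server address being contacted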

The CA certificate and key are in place (created earlier in the tutorial):

$ ls -hlt /var/lib/kubernetes/ca*
-rw------- 1 root root 1.7K Dec 18 00:56 /var/lib/kubernetes/ca-key.pem
-rw-r--r-- 1 root root 1.3K Dec 18 00:56 /var/lib/kubernetes/ca.pem
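
To double-check which addresses the serving certificate actually covers (assuming kubernetes.pem, the file passed to --tls-cert-file in the unit above, is the serving cert), the SANs can be listed with openssl:

$ openssl x509 -in /var/lib/kubernetes/kubernetes.pem -noout -text | grep -A1 'Subject Alternative Name'
# => should include 127.0.0.1 plus the internal/external IPs the cert was issued for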

Finally, NGINX is configured to proxy port 80 traffic to the health check endpoint:

$ cat /etc/nginx/sites-available/kubernetes.default.svc.cluster.local
server {
  listen      80;
  server_name kubernetes.default.svc.cluster.local;

  location /healthz {
     proxy_pass                    https://127.0.0.1:6443/healthz;
     proxy_ssl_trusted_certificate /var/lib/kubernetes/ca.pem;
  }
}
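
One way to exercise this proxy locally (this mirrors the curl shown in the answer below, with the Host header matching the server_name above) is:

$ curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
# => expect HTTP/1.1 200 OK with body "ok" when nginx can reach the API server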

As mentioned above, I'm not really seeing what's wrong and I have no idea where else to start investigating.

Thank you!

  • Are you running `kubectl` on the same host where `kube-apiserver` is running? – AlexD Dec 23 '21 at 15:58
  • @AlexD - correct, I am running it on the same host. – abhchand Dec 23 '21 at 20:38
  • check output of `env |grep -i proxy`. – AlexD Dec 23 '21 at 20:45
  • What is the value for INTERNAL_IP_REDACTED? I am asking this because I want to make sure you are sending the request to the right target which is also whitelisted in the server certificate. – Rajesh Dutta Dec 24 '21 at 13:29
  • @RajeshDutta that redacted line is `--advertise-address=10.132.0.5`. The internal IP was determined from `ifconfig`. Thanks! – abhchand Dec 24 '21 at 16:41
  • @AlexD there is no output returned from `env | grep -i proxy`. Thanks! – abhchand Dec 24 '21 at 16:42
  • @abhchand 10.132.0.5 is internal IP address, and I think this is obtained from POD network range. Try to ping this ip from the node where you are trying kubectl command. I doubt you will get a response. If you want to access this outside the master node then you need a frontend load balancer or you need to advertise the kube-api server using a node IP(provided that IP address should be mentioned in the server certificate). – Rajesh Dutta Dec 25 '21 at 11:23

1 Answer


I've followed the same guide step by step to recreate the deployment.

I found that you might be running the command outside the controller VM.

Please issue the following command to log in to the controller VM:

gcloud compute ssh controller-0

Then retry the cluster-info command:

kubectl cluster-info --kubeconfig admin.kubeconfig

Here is my output when doing the test.

Example from controller-1 VM

RUNNING COMMAND INSIDE CONTROLLER-1 VM

xxxxxxx_ayaladeltoro@controller-1:~$ kubectl cluster-info --kubeconfig admin.kubeconfig
Kubernetes control plane is running at https://127.0.0.1:6443
 
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
xxxxxxx_ayaladeltoro@controller-1:~$ curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Fri, 24 Dec 2021 18:57:54 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive
Cache-Control: no-cache, private
X-Content-Type-Options: nosniff
X-Kubernetes-Pf-Flowschema-Uid: 88df1f3d-a43f-4f2d-b2b6-661e0cd190f2
X-Kubernetes-Pf-Prioritylevel-Uid: cd1dd298-6e5e-4091-aac6-baf7c816ba6b

EXITING CONTROLLER-1 VM

xxxxxxx_ayaladeltoro@controller-1:~$ exit
logout
Connection to 34.83.87.134 closed.

RUNNING COMMAND FROM CLOUD SHELL VM

xxxxxxx_ayaladeltoro@cloudshell:~ (ayaladeltoro-training-project)$ kubectl cluster-info --kubeconfig admin.kubeconfig
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
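
If you do need kubectl to work from outside the controller (for example from Cloud Shell), the kubeconfig has to point at an address that is both reachable from that machine and included in the API server certificate, as the comments above note, rather than 127.0.0.1. A rough sketch, where EXTERNAL_IP is a placeholder for whatever load balancer or node address your certificate actually covers, and ca.pem is assumed to be in the current directory as in the guide:

$ kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.pem \
    --embed-certs=true \
    --server=https://EXTERNAL_IP:6443 \
    --kubeconfig=admin.kubeconfig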