
I am having trouble bringing up my pods on my local Kubernetes cluster. It is installed on Ubuntu 18.04 (1 master VM, 1 worker VM).

Kubernetes-Master:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

Kubernetes-Slave:/var/lib/kubelet/pki$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

I noticed the following (slave = worker node):

Kubernetes-Master:~$ kubectl get nodes
NAME                STATUS     ROLES    AGE   VERSION
kubernetes-master   NotReady   master   62d   v1.17.0
kubernetes-slave    NotReady   <none>   62d   v1.17.0

By checking the node:

Kubernetes-Master:~$ kubelet
F1223 10:25:38.045551   20431 server.go:253] error reading /var/lib/kubelet/pki/kubelet.key, certificate and key must be supplied as a pair

Kubernetes-Slave:/var/lib/kubelet/pki$ kubelet
F1223 10:20:14.651684    3558 server.go:253] error reading /var/lib/kubelet/pki/kubelet.key, certificate and key must be supplied as a pair

Both VMs were down for a few days. After booting, one pod didn't start. One restart later, all pods stayed down:

Kubernetes-Master:~$ kubectl get all -o wide -n gitbucket
NAME                    TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE   SELECTOR
service/gitbucket-svc   ClusterIP   10.97.69.199   <none>        8080/TCP   67m   app=gitbucket

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                       SELECTOR
deployment.apps/gitbucket   0/1     0            0           67m   gitbucket    gitbucket/gitbucket:latest   app=gitbucket

NAME                                   DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES                       SELECTOR
replicaset.apps/gitbucket-67cc5686df   1         0         0       67m   gitbucket    gitbucket/gitbucket:latest   app=gitbucket,pod-template-hash=67cc5686df

Any idea what's going on?

Fabiansc
  • `F1223 10:25:38.045551 20431 server.go:253] error reading /var/lib/kubelet/pki/kubelet.key, certificate and key must be supplied as a pair` – Oleg Butuzov Dec 23 '19 at 10:21
  • Also read this [information](https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/#understanding-how-the-kubelet-checkpoints-config) – Oleg Butuzov Dec 23 '19 at 10:27
  • Any idea, why/how this came into place? I just rebooted having 2 days downtime. No configuration was changed. Further: I could not find how to resolve the key-pair issue.. :( – Fabiansc Dec 23 '19 at 11:41
  • It's not actually related to `key pair` issue, it's the user you logged in as doesn't have access to `/var/lib/kubelet` check who has access to `ls -la /var/lib/kubelet` probably it might be `root` try logging in as `superuser` or run `sudo kubelet` see what happens, atleast that error message disappears. – BinaryMonster Dec 24 '19 at 12:59
  • you are correct. I needed to run the kubelet-command using a different user with appropriate rights. – Fabiansc Dec 25 '19 at 18:23
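As BinaryMonster's comment suggests, the "certificate and key must be supplied as a pair" message here is really a read-permission problem on the kubelet's PKI directory. A minimal check, assuming the default kubeadm paths from the question (exact ownership may differ on your install):

```shell
# Check who owns the kubelet state directory (usually root)
ls -la /var/lib/kubelet
ls -la /var/lib/kubelet/pki

# Running kubelet by hand therefore needs elevated rights
sudo kubelet

# In a kubeadm setup, the kubelet normally runs as a systemd service instead;
# inspect it rather than starting kubelet manually:
sudo systemctl status kubelet
sudo journalctl -u kubelet --no-pager | tail -n 50
```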

2 Answers


You may have a problem with node authorization. Thanks to the Node authorizer, the kubelet is allowed to perform API operations.

Any request that is successfully authenticated (including an anonymous request) is then authorized. The default authorization mode is AlwaysAllow, which allows all requests (see kubelet authorization).

There are many possible reasons to subdivide access to the kubelet API:

  • anonymous auth is enabled, but anonymous users’ ability to call the kubelet API should be limited
  • bearer token auth is enabled, but arbitrary API users’ (like service accounts) ability to call the kubelet API should be limited
  • client certificate auth is enabled, but only some of the client certificates signed by the configured CA should be allowed to use the kubelet API

To subdivide access to the kubelet API, delegate authorization to the API server:

  1. ensure the authorization.k8s.io/v1beta1 API group is enabled in the API server
  2. start the kubelet with the --authorization-mode=Webhook and the --kubeconfig flags; the kubelet then calls the SubjectAccessReview API on the configured API server to determine whether each request is authorized
  3. the kubelet authorizes API requests using the same request attributes approach as the apiserver.
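The steps above correspond to a few fields in the kubelet's configuration. A minimal sketch, assuming a kubeadm-style install (the file path is an assumption; the field names are the documented KubeletConfiguration ones):

```yaml
# /var/lib/kubelet/config.yaml (KubeletConfiguration)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  webhook:
    enabled: true        # authenticate bearer tokens via the TokenReview API
  anonymous:
    enabled: false       # reject unauthenticated requests
authorization:
  mode: Webhook          # delegate each request to the API server via SubjectAccessReview
```

The equivalent command-line form is `kubelet --authorization-mode=Webhook --kubeconfig=/etc/kubernetes/kubelet.conf`.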

More information can be found here: pki-kubernetes.

Authentication in Kubernetes: auth-kubernetes.

Malgorzata

I think I found the issue. It is related to a change in CSINode behavior when switching from Kubernetes 1.16 to 1.17. I had a scheduled patch run (Ubuntu Landscape) after upgrading my memory, which migrated from 1.16 to 1.17. Details can be found here: Worker start to fail CSINodeInfo: error updating CSINode annotation

Upgrade details are documented here (works): https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
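For reference, the upgrade procedure from that page boils down to the following; the version pins match this question's 1.16 → 1.17 jump and should be adjusted to your target:

```shell
# On the control-plane (master) node
sudo apt-get update && sudo apt-get install -y kubeadm=1.17.0-00
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.17.0
sudo apt-get install -y kubelet=1.17.0-00 kubectl=1.17.0-00
sudo systemctl daemon-reload && sudo systemctl restart kubelet

# On each worker node (after draining it from the master with kubectl drain)
sudo apt-get install -y kubeadm=1.17.0-00
sudo kubeadm upgrade node
sudo apt-get install -y kubelet=1.17.0-00
sudo systemctl daemon-reload && sudo systemctl restart kubelet
```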

If you utilize ISTIO:

Istio (1.3.3 in my case) will block the upgrade. If you would like to execute the upgrade to Kubernetes 1.17, the easiest way to proceed is to uninstall Istio and re-install it after your update is completed. I could not find a defined migration path for Istio (only bug or feature discussions). Keep in mind:

  • re-establish the Kubernetes secret (istio-system namespace), including your certificates
  • re-adjust the istio-ingressgateway with your port directives (Istio edge gateway)
  • re-create your custom application gateway (if you chose to use one)
  • re-establish all virtual services

Example Configuration
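A minimal sketch of the gateway and virtual service that need re-creating, wired to the gitbucket service from the question; the resource names and the port 80 listener are placeholder assumptions, not defaults:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gitbucket-gateway        # placeholder name
  namespace: gitbucket
spec:
  selector:
    istio: ingressgateway        # bind to the default Istio edge gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: gitbucket-vs             # placeholder name
  namespace: gitbucket
spec:
  hosts:
  - "*"
  gateways:
  - gitbucket-gateway
  http:
  - route:
    - destination:
        host: gitbucket-svc      # the ClusterIP service shown above
        port:
          number: 8080
```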

Fabiansc