
I am debugging a DNS issue with MicroK8s on Ubuntu, where I cannot reach external services from inside a pod. I am now at the point where I discovered that microk8s kubectl get nodes returns 2 nodes, whereas to my understanding there should be only one (it is a single machine with a single installation):

NAME                  STATUS     ROLES    AGE   VERSION
hostname.domain.com   NotReady   <none>   47d   v1.19.3-34+a56971609ff35a
hostname              Ready      <none>   38h   v1.19.5-34+8af48932a5ef06

All pods / services / controllers are running on hostname, where DNS does not seem to work, so I tried to remove that node from the cluster as per https://stackoverflow.com/questions/35757620/how-to-gracefully-remove-a-node-from-kubernetes. After a restart of MicroK8s, the node came back.
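
For completeness, the removal steps from that answer boil down to draining the node and then deleting it, roughly like this (exact flags may differ between kubectl versions):

# evict workloads from the node, then remove it from the cluster
microk8s kubectl drain <node-name> --ignore-daemonsets --delete-local-data
microk8s kubectl delete node <node-name>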

Since both nodes share the same configuration, down to the same IP, I want to try and switch over to hostname.domain.com as the sole node. How can I do that?
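
For reference, the shared IP is visible in the INTERNAL-IP column of a wider node listing:

microk8s kubectl get nodes -o wide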


1 Answer


I lost patience, and since this is a development machine, I went down the brute-force route:

microk8s reset
sudo snap remove microk8s
sudo snap install microk8s --classic --channel=1.19

This solved all issues.
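
To confirm that DNS works afterwards, a quick check from a throwaway pod is something like this (busybox:1.28 is just an example image; kubernetes.default tests cluster DNS, google.com tests external resolution):

microk8s kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default
microk8s kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- nslookup google.com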
