
I set up two clusters with Rancher 2.5.x: a single-node management cluster running the Rancher server, and one "production" cluster which handles the application stacks.

This all worked fine. Then, during the upgrade of the Rancher server to 2.6, something apparently failed, and the Rancher server has been down ever since. The management cluster itself is still up; only the Rancher server is not. However, since all access is proxied through the Rancher server, I can no longer connect to either cluster via kubectl or helm.

I can see that all required containers on the management cluster are still up and running:

[screenshot of the running containers]
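In case it matters, this is roughly how I checked on the node itself (just a sketch; I'm assuming a Docker-based RKE setup here, so the container names used in the filter are guesses on my part):

```bash
# List the Kubernetes system containers on the management node
# (assumes an RKE/Docker-based setup; container names may differ)
docker ps --format 'table {{.Names}}\t{{.Status}}' | grep -E 'kube|etcd|rancher'
```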

I can also SSH to this server, so I do have access to all resources. But since I cannot connect to the cluster itself, I cannot fix the issue. I guess it would be fairly easy to just fix the Rancher Helm release to get it working again, but I have no idea how I could do that. I thought about running kubectl or helm locally on a node in the management cluster, but I don't know how to get a kubeconfig for that (see the sketch below for what I have in mind). The kubeconfig I used before connects to the Rancher server, which is exactly the thing that is broken now.
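This is the kind of thing I imagine doing on the node, but I don't know whether it actually works; the file paths below are assumptions on my part (based on the cluster being RKE-provisioned), and the node-level kubeconfig may not have enough permissions for this:

```bash
# Look for a kubeconfig that was left on the node itself
# (paths are guesses for an RKE-provisioned node, not verified)
ls -l /etc/kubernetes/ssl/kubecfg-kube-node.yaml 2>/dev/null
ls -l ~/.kube/config 2>/dev/null

# If one of them exists, point kubectl at the local API server
export KUBECONFIG=/etc/kubernetes/ssl/kubecfg-kube-node.yaml
kubectl --server https://127.0.0.1:6443 get nodes

# ...and then, ideally, inspect and roll back the broken Rancher Helm release
helm --kubeconfig "$KUBECONFIG" -n cattle-system history rancher
helm --kubeconfig "$KUBECONFIG" -n cattle-system rollback rancher
```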

Is there any chance to connect to the cluster without using the Rancher-generated kubeconfig?

Peter
