I set up a k8s cluster and managed to break my kubeconfig, apparently unrecoverably :( I do still have access to the nodes, though, and thereby to the containers running the control plane and etcd. Is there any way to retrieve a working kubeconfig from within the cluster?
I used Rancher to set up this cluster; unfortunately Rancher itself broke pretty badly when the IP of its host system changed and the Let's Encrypt certs expired.
All deployments are actually still running perfectly well, I just can't get access to the cluster anymore :(
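The only real lead I have so far is on the node side: if I understand the RKE layout correctly, every node should have per-component client certs and kubeconfigs under /etc/kubernetes/ssl that I could point kubectl at locally. The paths below are an assumption on my part (RKE1 defaults), I haven't verified them yet:

# on a control-plane node; file names assume RKE1 defaults (unverified)
ls /etc/kubernetes/ssl/
# expecting things like kube-ca.pem, kube-node.pem, kubecfg-kube-node.yaml, ...

# the component kubeconfigs should already point at the local apiserver,
# so this would only need kubectl on the node (or inside one of the
# rancher/hyperkube containers)
kubectl --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml get nodes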
This is my current kubeconfig:
apiVersion: v1
clusters:
- cluster:
    server: https://[broken-rancher-server]/k8s/clusters/c-7l92q
  name: my-cluster
- cluster:
    certificate-authority-data: UlZMG1VR3VLYUVMT...
    server: https://1.1.1.1:6443
  name: my-cluster-prod-cp-etcd-1
- cluster:
    certificate-authority-data: UlZMG1VR3VLYUVMT...
    server: https://1.1.1.2:6443
  name: my-cluster-prod-cp-etcd-2
contexts:
- context:
    cluster: my-cluster-prod-cp-etcd-1
    user: u-jv5hx
  name: my-cluster-prod-cp-etcd-1
- context:
    cluster: my-cluster-prod-cp-etcd-2
    user: u-jv5hx
  name: my-cluster-prod-cp-etcd-2
current-context: my-cluster
kind: Config
preferences: {}
users:
- name: u-jv5hx
  user:
    token: kubeconfig-u-jv5hx.c-7l92q:z2jjt5wx7xxxxxxxxxxxxxxxxxx7nxhxn6n4q
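(One thing I noticed while staring at this: current-context is my-cluster, i.e. the broken Rancher proxy entry, and there isn't even a context with that name, so the direct control-plane endpoints are only used if I select their context explicitly, along the lines of:

kubectl --kubeconfig ./kubeconfig.yaml --context my-cluster-prod-cp-etcd-1 get nodes

No idea yet whether the Rancher-issued token is still accepted by the apiservers directly, though.)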
Once I get access to this cluster again I can simply set up a new Rancher instance and import the cluster into it, but for that I need access first.
Any hint is greatly appreciated, since I've pretty much run out of ideas by now.
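Edit: one more idea I'm chewing on, based on how I understand RKE persists its cluster state (the configmap name, data key and jq paths below are from memory and may well be wrong, so treat this as a sketch): RKE supposedly keeps a full-cluster-state configmap in kube-system that includes the kube-admin client cert, which could be turned back into an admin kubeconfig roughly like this:

# on a control-plane node, reusing the node identity from the sketch above;
# the configmap name, data key and jq paths are assumptions, not verified.
# the sed just points the extracted config at the local apiserver, since
# this would run on a control-plane node anyway.
kubectl --kubeconfig /etc/kubernetes/ssl/kubecfg-kube-node.yaml \
  -n kube-system get configmap full-cluster-state -o json \
  | jq -r '.data["full-cluster-state"]' \
  | jq -r '.currentState.certificatesBundle["kube-admin"].config' \
  | sed 's|server:.*|server: https://127.0.0.1:6443|' \
  > kubeconfig-admin.yaml

kubectl --kubeconfig kubeconfig-admin.yaml get nodes

If someone can confirm whether that configmap actually exists on Rancher-provisioned clusters, that alone would already help a lot.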