
I have a Kubernetes cluster in Azure using AKS and I'd like to 'login' to one of the nodes. The nodes do not have a public IP.

Is there a way to accomplish this?

Greg Bala
  • You should be able to log in to AKS with `az aks get-credentials --resource-group myAKSResourceGroup --name myAKSCluster` – Hackerman Nov 21 '18 at 21:23
  • @Hackerman that logs in "kubectl", so to speak. I want to shell into one of the worker nodes... I want to get into the Linux shell of the actual VM – Greg Bala Nov 21 '18 at 21:26
  • Take a look at my answer. – Hackerman Nov 21 '18 at 23:59
  • I deleted my answer, and also thanks for the downvote. The chosen answer has really poor quality, but it's ok. – Hackerman Nov 22 '18 at 14:16
  • sorry @Hackerman, my downvote was meant to be feedback that you misunderstood my question; if there is a way to take it back, I will do that; maybe restore the answer and I will take my downvote away – Greg Bala Nov 26 '18 at 16:44

3 Answers


The procedure is described at length in the Azure documentation: https://learn.microsoft.com/en-us/azure/aks/ssh. It consists of running a pod that you use as a relay to SSH into the nodes, and it works perfectly fine.

You probably specified the SSH username and public key during cluster creation. If not, you have to configure your nodes to accept them as the SSH credentials:

$ az vm user update \
  --resource-group MC_myResourceGroup_myAKSCluster_region \
  --name node-name \
  --username theusername \
  --ssh-key-value ~/.ssh/id_rsa.pub
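
(If you are not sure of the exact MC_ node resource group name, you can query it from the cluster; the resource group and cluster names below are the ones used earlier in this thread and may differ in your setup):

$ az aks show --resource-group myAKSResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv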

To find your node names:

az vm list --resource-group MC_myResourceGroup_myAKSCluster_region -o table

When done, run a pod on your cluster with an SSH client inside; this is the pod you will use to SSH into your nodes:

kubectl run -it --rm my-ssh-pod --image=debian
# install an ssh client, as there is none in the Debian image
apt-get update && apt-get install openssh-client -y

On your workstation, get the name of the pod you just created:

$ kubectl get pods

Add your private key into the pod:

$ kubectl cp ~/.ssh/id_rsa pod-name:/id_rsa

Then, in the pod, connect via SSH to one of your nodes:

ssh -i /id_rsa theusername@10.240.0.4

(To find the node IPs, run this on your workstation:)

az vm list-ip-addresses --resource-group MC_myResourceGroup_myAKSCluster_region -o table
dbourcet

This Gist and this page have pretty good explanations of how to do it: SSH'ing into the nodes themselves, not shelling into the pods/containers.

Rico
  • Although sometimes SSH'ing into a node is necessary, you should be able to do anything you need using a DaemonSet, and this approach is also more scalable (as you scale your cluster, the DaemonSet is applied to every new node). You mount hostPaths from the node (or the whole fs) like in this [comment](https://github.com/Azure/AKS/issues/590#issuecomment-412075626); see the sketch after these comments. – Alessandro Vozza Nov 22 '18 at 08:11
  • thanks @Rico, I did not yet try this (I found another way of investigating my issue) but I read the article and this seems exactly what I wanted to do, so thanks for the answer! – Greg Bala Nov 22 '18 at 14:04
  • @alev this was just for investigating some networking at the node level, not really a production process – Greg Bala Nov 22 '18 at 14:05
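
A minimal sketch of the DaemonSet approach mentioned in the first comment above. The name node-debug, the debian image and the /host mount path are placeholders, not taken from the linked comment; the idea is simply to mount the node's root filesystem into a privileged pod on every node and chroot into it:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-debug            # placeholder name
spec:
  selector:
    matchLabels:
      app: node-debug
  template:
    metadata:
      labels:
        app: node-debug
    spec:
      hostPID: true
      hostNetwork: true
      containers:
      - name: shell
        image: debian         # any small image with a shell works
        command: ["sleep", "infinity"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: host-root
          mountPath: /host    # the node's filesystem is mounted here
      volumes:
      - name: host-root
        hostPath:
          path: /
EOF

# Pick the pod running on the node you are interested in and chroot into it:
kubectl get pods -l app=node-debug -o wide
kubectl exec -it <pod-on-that-node> -- chroot /host bash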

You can use this instead of SSH. It creates a tiny privileged pod and uses nsenter to access the node: https://github.com/mohatb/kubectl-wls
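
For reference, the same idea can be sketched with plain kubectl, without the plugin. This is only a sketch: the node name is a placeholder and it assumes the cluster allows privileged pods:

# Spawn a privileged pod pinned to one node and join the host's namespaces;
# with hostPID, target PID 1 is the node's init, and after entering the host
# mount namespace the "bash" that runs is the node's own shell.
# "aks-nodepool1-12345678-0" is a placeholder node name.
kubectl run node-shell --rm -it --restart=Never --image=alpine --overrides='
{
  "apiVersion": "v1",
  "spec": {
    "hostPID": true,
    "hostNetwork": true,
    "nodeName": "aks-nodepool1-12345678-0",
    "containers": [{
      "name": "node-shell",
      "image": "alpine",
      "stdin": true,
      "tty": true,
      "command": ["nsenter", "--target", "1", "--mount", "--uts", "--ipc", "--net", "--pid", "--", "bash"],
      "securityContext": { "privileged": true }
    }]
  }
}'

Once inside, you are in the node's mount, network and PID namespaces, so it behaves much like an SSH session on the node.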

mohab
  • The linked page might answer the question but it might become unavailable. So please add the relevant details from there here to make it persistent and findable via search. – Markus Aug 24 '20 at 13:24