37

How to SSH into a Kubernetes Node or Server hosted on AWS? I have hosted a Kubernetes Server and Node on AWS. I'm able to see the nodes and server from my local laptop with the kubectl get node command.

I need to create a persistent volume for my node but I'm unable to ssh into it.

Is there any specific way to ssh into the node or server?

asked by anish anil (edited by Christopher Peisert)

5 Answers

19

Use kubectl ssh node NODE_NAME

This kubectl plugin comes from https://github.com/luksa/kubectl-plugins, and I have verified that it works. It behaves similarly to the oc command in OpenShift.

P.S. This connects you to a freshly created pod on the specified node. In that sense, you do not get access to the node itself (as you wanted), but only to a privileged pod running on it.
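For reference, a minimal sketch of installing and invoking the plugin, assuming the kubectl-ssh script from the repository above is simply dropped onto your PATH (the raw URL and install directory here are assumptions, and NODE_NAME is a placeholder):

# Fetch the plugin script and make it executable somewhere on PATH
curl -LO https://raw.githubusercontent.com/luksa/kubectl-plugins/master/kubectl-ssh
chmod +x kubectl-ssh
sudo mv kubectl-ssh /usr/local/bin/

# kubectl picks up any executable named kubectl-<name> on PATH as a plugin, so:
kubectl ssh node NODE_NAME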

answered by 4t8dds (edited by mPrinC)
  • Didn't work for me. Kustomize Version: v4.5.4 – Hamza Saeed Dec 09 '22 at 08:19
  • @HamzaSaeed Hi, this is just some shell and YAML files that deploy an additional pod to SSH into. I'm not sure it has any relationship with Kustomize. An error log might help with debugging. – 4t8dds Dec 12 '22 at 02:09
  • Is there any way around it? I don't want to use third-party plugins on my production cluster due to security reasons. – Hamza Saeed Dec 12 '22 at 11:29
  • @HamzaSaeed Have you ever checked the file https://github.com/luksa/kubectl-plugins/blob/master/kubectl-ssh? It is just a few lines of bash and YAML. You can inspect it line by line within 5 minutes and remove any harmful code, if you find any. Furthermore, using the same idea to write a kubectl plugin is simple; you can write one on your own (see the small sketch after these comments). – 4t8dds Dec 13 '22 at 07:26
  • IMPORTANT: this addon will NOT execute in the context of the requested node but rather in the context of a newly created pod on that node – mPrinC Feb 05 '23 at 14:07
  • I do not recommend this plugin, because it uses a highly privileged security context, which would have to be authorized on your cluster (container isolation breakout, and chroot into the mounted host file system). – cactuschibre Feb 21 '23 at 15:00
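As the comment above notes, the plugin mechanism itself is simple: kubectl treats any executable named kubectl-<name> on your PATH as the subcommand kubectl <name>, so you can write your own instead of trusting a third-party script. A minimal illustration with a hypothetical plugin name:

# Hypothetical trivial plugin; a real one would create a privileged pod the way
# the kubectl-ssh script linked above does
cat > /usr/local/bin/kubectl-hello <<'EOF'
#!/bin/sh
echo "hello from a custom kubectl plugin"
EOF
chmod +x /usr/local/bin/kubectl-hello

kubectl hello   # kubectl dispatches this to the executable above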
10

Try this: ssh -i <path to the private key file> admin@<IP of the AWS kube instance>

The .pem (private key) file should be at $HOME/.ssh/kube_rsa
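A hedged, concrete version of the above; the key path and user are placeholders, and the SSH user depends on the node AMI (for example admin on Debian-based images, ubuntu on Ubuntu, ec2-user on Amazon Linux):

# Show each node together with its external IP
kubectl get nodes -o wide

# SSH to a node with the cluster's private key
ssh -i $HOME/.ssh/kube_rsa admin@<EXTERNAL-IP-OF-NODE>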

answered by Swapnil Pandey
  • Yeah, that's exactly how I usually log in to the other VMs, but this one simply fails: ssh -i "AWS_key.pem" ubuntu@ec2-54-177-**-***.us-west-1.compute.amazonaws.com returns "ssh: connect to host ec2-54-177-**-***.us-west-1.compute.amazonaws.com port 22: Operation timed out" – anish anil Jan 30 '18 at 13:01
  • Try to modify your security group. Refer: https://forums.aws.amazon.com/thread.jspa?threadID=66813 – Swapnil Pandey Jan 31 '18 at 04:44
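The timeout in the comment above usually means port 22 is not open in the node's security group. A hedged sketch of the fix with the AWS CLI; the security group ID and CIDR are placeholders, and you should restrict the CIDR to your own IP rather than 0.0.0.0/0:

# Allow inbound SSH to the nodes' security group from your workstation's IP
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 22 \
  --cidr 203.0.113.10/32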
1

I haven't tried this on AWS specifically, but you can get a shell onto a Node using the following trick.

If you need access to the underlying Nodes of your Kubernetes cluster (and you don't have direct access, which is common if you are hosting Kubernetes elsewhere), you can use the following DaemonSet to create Pods that you can log into with kubectl exec; from there you have access to the Node's IPC and its complete filesystem under /node-fs. To get a node console that feels just like you had SSHd in, log in and then run chroot /node-fs. It is inadvisable to keep this running, but if you need access to the node, it will help. Because it is a DaemonSet, it starts one of these Pods on each Node. (A short usage sketch follows the closing note after the manifest.)

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: privpod
spec:
  selector:
    matchLabels:
      mydaemon: privpod
  template:
    metadata:
      labels:
        mydaemon: privpod
    spec:
      # Share the node's network, PID, and IPC namespaces with the pod
      hostNetwork: true
      hostPID: true
      hostIPC: true
      containers:
        - name: privcontainer
          image: johnnyb61820/network-toolkit
          securityContext:
            privileged: true
          # Keep the container alive so you can kubectl exec into it
          command:
            - tail
            - "-f"
            - /dev/null
          volumeMounts:
            # The node's root filesystem, mounted at /node-fs (chroot target)
            - name: nodefs
              mountPath: /node-fs
            - name: devfs
              mountPath: /dev
      # hostPath volumes expose the node's / and /dev to the pod
      volumes:
        - name: nodefs
          hostPath:
            path: /
        - name: devfs
          hostPath:
            path: /dev

This is from Appendix C.13 of Cloud Native Applications with Docker and Kubernetes. I've found this useful especially if I need to deal with physical drives or something similar. It's not something you should leave running, but helps when you are in a pinch.
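A short, hedged usage sketch for the manifest above, assuming the image provides /bin/sh; the file name and pod name are placeholders:

# Apply the DaemonSet and find the privileged pod running on the node you need
kubectl apply -f privpod-daemonset.yaml
kubectl get pods -l mydaemon=privpod -o wide

# Exec into that pod and chroot into the node's filesystem mounted at /node-fs
kubectl exec -it <privpod-pod-name> -- /bin/sh
chroot /node-fs

# Remove the DaemonSet when you are done rather than leaving it running
kubectl delete daemonset privpod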

answered by johnnyb
0

Kubernetes nodes can be accessed the same way we SSH into other Linux machines. Just SSH to the external IP of that node and you can log in to it that way.
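For example, a hedged one-liner to print each node's name next to its ExternalIP address, so you know which IP to SSH to:

# Print "<node-name> <external-ip>" for every node in the cluster
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}'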

answered by NightOwl19
0

If the worker nodes are in a private subnet, you can use a bastion host with SSH agent forwarding, as described in https://aws.amazon.com/blogs/security/securely-connect-to-linux-instances-running-in-a-private-amazon-vpc/
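A hedged sketch of that pattern; host names, user names, and the key path are placeholders, and the worker node's security group must accept SSH from the bastion:

# Load the node key into your local ssh-agent instead of copying it to the bastion
ssh-add ~/.ssh/kube_rsa

# -A forwards your agent to the bastion host
ssh -A ec2-user@<bastion-public-ip>

# From the bastion, hop to the worker node's private IP; the forwarded agent
# supplies the key
ssh admin@<node-private-ip>

# Alternatively, do both hops in one command with ProxyJump
ssh -i ~/.ssh/kube_rsa -J ec2-user@<bastion-public-ip> admin@<node-private-ip>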