I am new to EKS and Kubernetes. Here is what happened:
- An EKS cluster was created with a specific IAM role.
- When I tried to connect to the cluster with kubectl, the commands failed with the error: You must be logged in to the server (Unauthorized)
I followed the steps detailed here:
https://aws.amazon.com/premiumsupport/knowledge-center/amazon-eks-cluster-access/
I assumed the role that created the EKS cluster, exported the credentials to a new profile named dev in my AWS credentials file, and ran AWS_PROFILE=dev kubectl get nodes (roughly the commands sketched below). It was able to list all my nodes.
Note: I had already run aws eks --region <region> update-kubeconfig --name <cluster-name>
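For reference, the assume-role and profile setup was roughly along these lines; the account ID, role name, and session name are placeholders, not my actual values:

# Assume the role that created the EKS cluster (placeholder ARN and session name)
aws sts assume-role \
    --role-arn arn:aws:iam::<account-id>:role/<cluster-creator-role> \
    --role-session-name eks-admin

# Copy the returned credentials into a new profile in ~/.aws/credentials:
# [dev]
# aws_access_key_id = <AccessKeyId from the output>
# aws_secret_access_key = <SecretAccessKey from the output>
# aws_session_token = <SessionToken from the output>

# With that profile, kubectl could reach the cluster
AWS_PROFILE=dev kubectl get nodes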
- Now I tried to add the role/SAML user that is trying to access the EKS cluster, by applying the ConfigMap below with AWS_PROFILE=dev kubectl apply -f aws-auth.yaml, where aws-auth.yaml is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:sts::******:assumed-role/aws_dev/abc@def.com
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
Notice that the role ARN is the SAML user assumed into the aws_dev role, i.e. the identity that tries to connect to the cluster.
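To clarify that last point, the rolearn above is the assumed-role ARN form that aws sts get-caller-identity reports when running under the SAML-federated aws_dev role; something like the following (illustrative output, not copied verbatim, account ID masked):

$ aws sts get-caller-identity
{
    "UserId": "AROAXXXXXXXXXXXXXXXXX:abc@def.com",
    "Account": "******",
    "Arn": "arn:aws:sts::******:assumed-role/aws_dev/abc@def.com"
}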
Once this was applied, the response was configmap/aws-auth configured
I then tried to run kubectl get nodes without AWS_PROFILE=dev, and it failed again with the error You must be logged in to the server (Unauthorized).
I also ran AWS_PROFILE=dev kubectl get nodes, which previously worked, but it now fails as well.
I am guessing the aws-auth information got messed up. Is there a way to revert the kubectl apply that was done above? Every kubectl command fails now. What might be happening, and how can I rectify this?