
I was trying to add permission to view nodes to my admin IAM user using the information in this article (https://aws.amazon.com/premiumsupport/knowledge-center/eks-kubernetes-object-access-error/) and ended up saving the configmap with a malformed mapUsers section (it didn't include the username at all).

Now every kubectl command returns an error like this: Error from server (Forbidden): nodes is forbidden: User "" cannot list resource "nodes" in API group "" at the cluster scope

How can I circumvent the corrupted configmap and regain access to the cluster? I found two questions on Stack Overflow, but as I am very new to Kubernetes, I am still baffled as to exactly what I need to do.

Mistakenly updated configmap aws-auth with rbac & lost access to the cluster

I have access to the root user, but kubectl doesn't work for that user either.

Is there another way to authenticate to the cluster?

Update 1

Yesterday I reproduced this problem on a new cluster: I still get this error even though I am the root user.

The structure of the configmap goes like this:

apiVersion: v1
data:
  mapRoles:  <default options>
  mapUsers: |
    - userarn: arn:aws:iam::<root id>:root
      username: # there should be a username value on this line, but it's missing in my configmap; presumably this is the cause
      groups:
      - system:bootstrappers
      - system:nodes

Update 2

Tried to use a serviceAccount token and got an error:

Error from server (Forbidden): configmaps "aws-auth" is forbidden: User "system:serviceaccount:kube-system:aws-node" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
user1379654

1 Answer


How did you create your cluster? The IAM user or IAM role that you used to create it is grandfathered in as a sysadmin. As long as you use the same credentials that you used for

aws eks create-cluster

you can run

aws eks update-kubeconfig, followed by using kubectl to modify the configmap and give other entities the permissions they need.
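As a minimal sketch of that repair, assuming the account-root ARN from the question (the account ID, the `admin` username, and the `system:masters` group here are illustrative placeholders, not required names):

```shell
#!/bin/sh
# Sketch only: write a repaired aws-auth manifest locally, then apply it with
# the cluster creator's credentials. ACCOUNT_ID, "admin", and system:masters
# are assumptions -- substitute your own values, and keep your cluster's
# existing mapRoles block (node bootstrap depends on it) when editing.
ACCOUNT_ID=111122223333

cat > aws-auth-fixed.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::${ACCOUNT_ID}:root
      username: admin
      groups:
      - system:masters
EOF

echo "wrote aws-auth-fixed.yaml"
# With the creator's credentials restored:
#   aws eks update-kubeconfig --name <cluster-name> --region <region>
#   kubectl apply -f aws-auth-fixed.yaml
```

Note that every list item under mapUsers needs its username key indented to the same level as userarn; the missing/mis-indented username is exactly what broke authentication here.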

You haven't said what you actually tried. Let's do some more troubleshooting:

  1. system:serviceaccount:kube-system:aws-node — this error is saying that THIS Kubernetes user does not have permission to modify configmaps. But that is completely correct: it SHOULDN'T. What command did you run to get that error? What were the contents of the kubeconfig context that generated that message? Did you run the command from a worker node, maybe?

  2. You said "I have access to the root user". Access in what way? Through the console? With an AWS_SECRET_ACCESS_KEY? You'll need the second. Assuming that's the case, run aws sts get-caller-identity and post the results.

  3. Root user or not, the only user with guaranteed access to the cluster is the one that created it. Are there ANY OTHER IAM users or roles in your account? Do you have CloudTrail enabled? If so, you could go back through the logs and verify that it was the root user that issued the create-cluster command.

  4. After running get-caller-identity, remove your ~/.kube/config file and run aws eks update-kubeconfig. Tell us the output from the command and the contents of the new config file.

  5. Run kubectl auth can-i '*' '*' with the new config and let us know the result.
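Steps 2, 4, and 5 above can be sketched as one script. This is a hedged outline, not a definitive procedure: CLUSTER and REGION are placeholders you must replace, and the credential check at the top turns the script into a harmless dry run when the AWS CLI isn't configured.

```shell
#!/bin/sh
# Hedged sketch of the troubleshooting steps above. CLUSTER and REGION are
# placeholder values -- substitute your real cluster name and region.
CLUSTER=my-cluster
REGION=us-east-1

if aws sts get-caller-identity >/dev/null 2>&1; then
    # Step 2: confirm which IAM identity the CLI is actually using.
    aws sts get-caller-identity
    # Step 4: rebuild kubeconfig from scratch with these credentials
    # (back up the old one rather than deleting it outright).
    mv "$HOME/.kube/config" "$HOME/.kube/config.bak" 2>/dev/null || true
    aws eks update-kubeconfig --name "$CLUSTER" --region "$REGION"
    # Step 5: check whether this identity is effectively cluster-admin.
    kubectl auth can-i '*' '*'
else
    echo "no AWS credentials detected; dry run only"
fi
```

If the identity that created the cluster is the one in use, the final can-i check should answer yes; any other answer means you are still authenticating as the wrong principal.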

Paul Becotte
  • Unfortunately, it's not "this early in the process": the cluster has been functioning for a long time, so it's very important to regain access to it. Yesterday I reproduced this problem on a new cluster: I still got this error even though I am the root user and the creator of the cluster. See the Update section in my question – user1379654 Sep 30 '21 at 08:25
  • 1. I ran the command on my local computer like this: kubectl --token= edit configmap aws-auth -n kube-system 2. I have access to the root user in every way. 3. There is only one user -- root. 4. There's no need to remove .kube/config; it's already properly configured for the root user. 5. $ kubectl auth can-i '*' '*' Output: – user1379654 Oct 05 '21 at 14:15