
We have an EKS cluster running Kubernetes version 1.21. We want to give admin access to the worker nodes. We modified the aws-auth ConfigMap and added "system:masters" to the groups for the EKS worker node role. Below is the snippet for the modified ConfigMap.

data:
  mapRoles: |
    - groups:
      - system:nodes
      - system:bootstrappers
      - system:masters
      rolearn: arn:aws:iam::686143527223:role/terraform-eks-worker-node-role
      username: system:node:{{EC2PrivateDNSName}}

After adding this section, the EKS worker nodes did get admin access to the cluster, but in the EKS console the node groups are now in a degraded state and show the error below in the Health issues section. We are not able to update the cluster because of this error. Please help.

Your worker nodes do not have access to the cluster. Verify if the node instance role is present and correctly configured in the aws-auth ConfigMap.

2 Answers


For an issue such as this one, a quick way to get more details is to look at the "Health issues" section on the EKS service page. As shown in the screenshot below, which reports the same error, there is an access-permissions issue with the specific role eks-quickstart-test-ManagedNodeInstance.

[Screenshot: EKS console Health issues panel showing the node access error for the role eks-quickstart-test-ManagedNodeInstance]
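
If you prefer the CLI, the same health information can be pulled with aws eks describe-nodegroup; a minimal sketch, assuming your cluster and node group are named my-cluster and my-nodegroup (placeholders):

# Show the Health issues reported for a managed node group
aws eks describe-nodegroup \
  --cluster-name my-cluster \
  --nodegroup-name my-nodegroup \
  --query 'nodegroup.health.issues'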

The aforementioned role lacks access to the cluster, and this can be fixed in the aws-auth ConfigMap as described below:

  1. Run the following command from the role/user which created the EKS cluster:

kubectl get cm aws-auth -n kube-system -o yaml > aws-auth.yaml

  2. Add the role, along with the required groups such as system:masters, to the mapRoles: section as shown below (a sketch of the complete file follows these steps):

mapRoles: |
  - rolearn: arn:aws:iam::<AWS-AccountNumber>:role/eks-quickstart-test-ManagedNodeInstance
    username: system:node:{{EC2PrivateDNSName}}
    groups:
      - system:bootstrappers
      - system:nodes
      - system:masters

  3. Apply the updated file to the cluster with the command:

kubectl apply -f aws-auth.yaml
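
For context, here is a minimal sketch of what the complete aws-auth.yaml typically looks like; the account number and role name are placeholders, and your file may also contain other mapRoles or mapUsers entries that must be preserved:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::<AWS-AccountNumber>:role/eks-quickstart-test-ManagedNodeInstance
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
        - system:masters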

This should resolve the permission issues and your cluster nodes should be visible as healthy and ready for pods to be scheduled.
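
To confirm, you can check that the worker nodes come back in a Ready state, for example:

# Nodes should report STATUS Ready once the aws-auth entry is correct
kubectl get nodes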

Vishwas M.R
  • This really helped me. I had updated the aws-auth ConfigMap when giving permission to a Lambda backend, and ended up overwriting the original config for the worker nodes. Thank you sir! – Alisson Reinaldo Silva Apr 24 '23 at 04:13

The error message indicates that the instance role (terraform-eks-worker-node-role) lacks the AWS managed policy AmazonEKSWorkerNodePolicy. Here's a troubleshooting guide for reference.
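
You can verify which managed policies are attached to the node role with the AWS CLI, for example (using the role name from the question):

# List the managed policies attached to the worker node role;
# AmazonEKSWorkerNodePolicy is expected to be among them
aws iam list-attached-role-policies --role-name terraform-eks-worker-node-role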

To give cluster admin to your agent pod, bind the cluster-admin ClusterRole to your agent pod's ServiceAccount:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: <of your own>
  namespace: <where your agent runs>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: <use by your agent pod>
  namespace: <where your agent runs>
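
The agent pod must then run under that ServiceAccount; a minimal pod sketch with placeholder names (the pod name and image here are only examples):

apiVersion: v1
kind: Pod
metadata:
  name: jenkins-agent              # placeholder name
  namespace: <where your agent runs>
spec:
  serviceAccountName: <use by your agent pod>   # must match the subject of the RoleBinding
  containers:
  - name: agent
    image: jenkins/inbound-agent   # placeholder image

Note that a RoleBinding scopes cluster-admin to the namespace it is created in; for access across all namespaces a ClusterRoleBinding can be used instead (see the comments below).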
gohm'c
  • Thanks. I checked, and this policy is attached to the IAM role. Whenever I remove the ```system:masters``` line from the aws-auth ConfigMap for the ```terraform-eks-worker-node-role``` role, everything starts working fine, but whenever I add it back the problem returns. – abhinav tyagi Sep 18 '22 at 19:49
  • `system:masters` is not needed for node. – gohm'c Sep 19 '22 at 01:11
  • We are running some dynamic Jenkins agents that come up as pods on the worker nodes. To let those pods deploy and update resources on Kubernetes, we thought the worker nodes needed admin access to the cluster, which is why we added ```system:masters```. Is there any other way we can give the cluster admin access those pods need? – abhinav tyagi Sep 19 '22 at 06:10
  • For that see the updated answer. – gohm'c Sep 19 '22 at 07:01
  • Thanks a lot. It solved my problem. I used ClusterRoleBinding to give access across all namespaces. – abhinav tyagi Sep 19 '22 at 14:52
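
For reference, a minimal sketch of the ClusterRoleBinding approach mentioned in the last comment; the binding name is a placeholder, and the subject must match the agent pod's ServiceAccount:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-agent-cluster-admin   # placeholder name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: <use by your agent pod>
  namespace: <where your agent runs>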