
I've been following this post to create user access to my Kubernetes cluster (running on Amazon EKS). I created a key and a CSR, approved the request, and downloaded the certificate for the user. Then I created a kubeconfig file with the key and certificate. When I run kubectl with this kubeconfig, I'm recognized as system:anonymous.
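The flow I followed looks roughly like this (a sketch; filenames and the CSR manifest name are examples, not necessarily the exact ones I used):

```shell
# 1. Generate a private key and a CSR with the username as the Common Name:
openssl genrsa -out test-user-2.key 2048
openssl req -new -key test-user-2.key -subj "/CN=test-user-2" -out test-user-2.csr

# 2. Submit the CSR to the cluster, approve it, and download the signed
#    certificate (these require cluster-admin credentials):
#    kubectl apply -f user-request-test-user-2.yaml
#    kubectl certificate approve user-request-test-user-2
#    kubectl get csr user-request-test-user-2 \
#      -o jsonpath='{.status.certificate}' | base64 --decode > test-user-2.crt
```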

$ kubectl --kubeconfig test-user-2.kube.yaml get pods
Error from server (Forbidden): pods is forbidden: User "system:anonymous" cannot list pods in the namespace "default"

I expected the user to be recognized, even if denied access.

$ kubectl --kubeconfig test-user-2.kube.yaml version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-18T11:37:06Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-28T20:13:43Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}


$ kubectl --kubeconfig test-user-2.kube.yaml config view
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: REDACTED
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: test-user-2
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: test-user-2
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

# running with my other account (which uses heptio-authenticator-aws)
$ kubectl describe certificatesigningrequest.certificates.k8s.io/user-request-test-user-2
Name:               user-request-test-user-2
Labels:             <none>
Annotations:        <none>
CreationTimestamp:  Wed, 01 Aug 2018 15:20:15 +0200
Requesting User:
Status:             Approved,Issued
Subject:
         Common Name:    test-user-2
         Serial Number:
Events:  <none>

I created a ClusterRoleBinding with the admin role (also tried cluster-admin) for this user, but that shouldn't matter for this step. I'm not sure how to debug further: 1) whether the user was actually created, or 2) whether I missed some configuration.
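One thing worth noting for debugging: with x509 client-certificate auth, the API server derives the username from the certificate's Common Name and group membership from its Organization entries, so the subject of the issued certificate is worth inspecting. The throwaway self-signed cert below just illustrates the subject fields; the real check would be `openssl x509 -in test-user-2.crt -noout -subject`:

```shell
# Generate a throwaway self-signed cert (illustration only; a real client
# cert must be signed by the cluster CA, not self-signed):
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
  -subj "/O=devs/CN=test-user-2" -days 1

# Print the subject the API server would parse: CN = username, O = group.
openssl x509 -in demo.crt -noout -subject
```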

Any help is appreciated!

Eren Güven

3 Answers


As mentioned in this article:

When you create an Amazon EKS cluster, the IAM entity (user or role, for example for federated users) that creates the cluster is automatically granted system:masters permissions in the cluster's RBAC configuration. To grant additional AWS users or roles the ability to interact with your cluster, you must edit the aws-auth ConfigMap within Kubernetes.

  1. Check if you have aws-auth ConfigMap applied to your cluster:

    kubectl describe configmap -n kube-system aws-auth
    
  2. If the ConfigMap is present, skip this step and proceed to step 3. If the ConfigMap is not applied yet, do the following:

Download the stock ConfigMap:

curl -O https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/aws-auth-cm.yaml

Adjust it by setting rolearn: to your NodeInstanceRole ARN. To find the NodeInstanceRole value, check out this manual; you will find it at steps 3.8 - 3.10.

data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>

Apply this config map to the cluster:

kubectl apply -f aws-auth-cm.yaml

Wait for the cluster nodes to become Ready:

kubectl get nodes --watch

  3. Edit the aws-auth ConfigMap and add users to it according to the example below:

    kubectl edit -n kube-system configmap/aws-auth
    
    # Please edit the object below. Lines beginning with a '#' will be ignored,
    # and an empty file will abort the edit. If an error occurs while saving this file will be
    # reopened with the relevant failures.
    #
    apiVersion: v1
    data:
      mapRoles: |
        - rolearn: arn:aws:iam::555555555555:role/devel-worker-nodes-NodeInstanceRole-74RF4UBDUKL6
          username: system:node:{{EC2PrivateDNSName}}
          groups:
            - system:bootstrappers
            - system:nodes
      mapUsers: |
        - userarn: arn:aws:iam::555555555555:user/admin
          username: admin
          groups:
            - system:masters
        - userarn: arn:aws:iam::111122223333:user/ops-user
          username: ops-user
          groups:
            - system:masters
    

Save and exit the editor.

  4. Create a kubeconfig for your IAM user following this manual.
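For reference, a kubeconfig that authenticates through aws-iam-authenticator typically has this shape (the server endpoint, CA data, and cluster name below are placeholders, not values from this cluster):

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: <endpoint-url>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "<cluster-name>"
```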
VAS

I got this back from AWS support today.

Thanks for your patience. I have just heard back from the EKS team. They have confirmed that the aws-iam-authenticator has to be used with EKS and, because of that, it is not possible to authenticate using certificates.

I haven't heard whether this is expected to be supported in the future, but it is definitely broken at the moment.

JohnJ

This seems to be a limitation of EKS: even though the CSR is approved, the user cannot authenticate. I used the same procedure on another Kubernetes cluster and it worked fine.

Eren Güven
  • Recent response from AWS Premium Support: "I have checked in our internal tools and I can see that this is an open feature request with the EKS service team and is therefore not yet implemented. Unfortunately, we will not be able to provide an ETA on when this feature will be developed and released but I would suggest you subscribe or track the following blogs/feeds in order to know about the availability of this feature whenever it is available." – Eren Güven Nov 29 '18 at 11:31