
I'm using an EKS cluster. I have a script which calls the 'watch' API on a custom resource. When I run this script from my laptop using my cluster-admin credentials, events arrive as expected. However, whenever I run the script inside a pod using the in-cluster credentials, no events ever arrive, yet there are no authentication or other errors. It doesn't appear to be a namespace problem: I see the same behaviour whether or not the resources are created in the namespace where the pod runs and where its ServiceAccount is defined.

What could be causing this?

The API request I'm making is:

GET /apis/mydomain.com/v1/mycustomresource?watch=1

Any help gratefully received.
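For context, the watch logic boils down to something like the following sketch (shown with the official Python client purely for illustration; the group and the assumed plural `mycustomresources` are placeholders matching the request above):

from kubernetes import client, config, watch

# Inside the pod this picks up the mounted ServiceAccount token;
# on my laptop it falls back to ~/.kube/config (cluster-admin).
try:
    config.load_incluster_config()
except config.ConfigException:
    config.load_kube_config()

api = client.CustomObjectsApi()
w = watch.Watch()

# Cluster-scoped watch on the custom resource.
for event in w.stream(api.list_cluster_custom_object,
                      group="mydomain.com",
                      version="v1",
                      plural="mycustomresources"):
    print(event["type"], event["object"]["metadata"]["name"])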

Here's the ClusterRole:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1 
metadata:
  name: manage-mycustomresource
  namespace: kube-system
  labels:
    rbac.authorization.k8s.io/aggregate-to-admin: "true" 
    rbac.authorization.k8s.io/aggregate-to-edit: "true" 
rules:
- apiGroups: ["*"] 
  resources: ["*"] 
  verbs: ["*"] 

...and here's the ClusterRoleBinding:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    meta.helm.sh/release-name: mycustomresource-operator
    meta.helm.sh/release-namespace: kube-system
  creationTimestamp: "2020-07-01T13:23:08Z"
  labels:
    app.kubernetes.io/managed-by: Helm
  name: mycustomresource-operator
  resourceVersion: "12976069"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/mycustomresource-operator
  uid: 41e6ef6d-cc96-43ec-a58e-48299290f1bc
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: mycustomresource-operator
  namespace: kube-system

...and the ServiceAccount for the pod:

apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::043180741939:role/k8s-mycustomresource-operator
    meta.helm.sh/release-name: mycustomresource-operator
    meta.helm.sh/release-namespace: kube-system
  creationTimestamp: "2020-07-01T13:23:08Z"
  labels:
    app.kubernetes.io/instance: mycustomresource-operator
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mycustomresource-operator
    app.kubernetes.io/version: 1.16.0
    helm.sh/chart: mycustomresource-operator-0.1.0
  name: mycustomresource-operator
  namespace: kube-system
  resourceVersion: "12976060"
  selfLink: /api/v1/namespaces/kube-system/serviceaccounts/mycustomresource-operator
  uid: 4f30b10b-1deb-429e-95e4-2ff2a91a32c3
secrets:
- name: mycustomresource-operator-token-qz9xz

...and the Deployment that runs the script:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    meta.helm.sh/release-name: mycustomresource-operator
    meta.helm.sh/release-namespace: kube-system
  creationTimestamp: "2020-07-01T13:23:08Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: mycustomresource-operator
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mycustomresource-operator
    app.kubernetes.io/version: 1.16.0
    helm.sh/chart: mycustomresource-operator-0.1.0
  name: mycustomresource-operator
  namespace: kube-system
  resourceVersion: "12992297"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/mycustomresource-operator
  uid: 7b118d47-e467-48f9-b497-f9e4592e6baf
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: mycustomresource-operator
      app.kubernetes.io/name: mycustomresource-operator
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: mycustomresource-operator
        app.kubernetes.io/name: mycustomresource-operator
    spec:
      containers:
      - image: myrepo.com/myrepo/k8s-mycustomresource-operator:master
        imagePullPolicy: Always
        name: mycustomresource-operator
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: mycustomresource-operator
      serviceAccountName: mycustomresource-operator
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2020-07-01T13:23:08Z"
    lastUpdateTime: "2020-07-01T13:23:10Z"
    message: ReplicaSet "mycustomresource-operator-5dc74765cd" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  - lastTransitionTime: "2020-07-01T15:13:31Z"
    lastUpdateTime: "2020-07-01T15:13:31Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1

James

1 Answer


Check the permissions of the service account using:

kubectl auth can-i watch mycustomresource --as=system:serviceaccount:kube-system:mycustomresource-operator -n kube-system
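For a cluster-scoped check against the exact API group (assuming the CRD's plural is mycustomresources; adjust to whatever your CRD actually defines), you can also try:

kubectl auth can-i watch mycustomresources.mydomain.com --as=system:serviceaccount:kube-system:mycustomresource-operator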
Arghya Sadhu
  • I believe the ClusterRole aggregates these permissions into `cluster-admin`, so this shouldn't be the problem: `rbac.authorization.k8s.io/aggregate-to-admin: "true"` and `rbac.authorization.k8s.io/aggregate-to-edit: "true"`. I tried directly setting the roleRef to `manage-mycustomresource` as suggested, but that didn't work either, I'm afraid. – James Jul 02 '20 at 10:34
  • You are right, I overlooked it. Can you check the permissions of the service account using the command above? – Arghya Sadhu Jul 02 '20 at 10:38