70

When deploying a service via a Helm Chart, the installation failed because the tiller serviceaccount was not allowed to create a ServiceMonitor resource.

Note:

  • ServiceMonitor is a CRD defined by the Prometheus Operator to automagically get metrics from running containers in Pods (a minimal sketch follows this note).
  • Helm Tiller is installed in a single namespace and RBAC has been set up using a Role and a RoleBinding.
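
For reference, a ServiceMonitor manifest is quite small. A minimal sketch (the name, labels and port here are made up, not taken from my chart):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-service
  namespace: staging
spec:
  selector:
    matchLabels:
      app: my-service   # must match the labels on the Service whose endpoints should be scraped
  endpoints:
    - port: metrics     # name of the Service port that exposes /metrics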

I wanted to verify the permissions of the tiller serviceaccount.
kubectl has the auth can-i command, but queries like the ones below always return no.

  • kubectl auth can-i list deployment --as=tiller
  • kubectl auth can-i list deployment --as=staging:tiller

What is the proper way to check permissions for a serviceaccount?
How can I enable the tiller account to create a ServiceMonitor resource?

Joost den Boer

4 Answers

119

After trying lots of things and Googling all over the universe, I finally found this blog post about securing your cluster with RBAC and PSP, which gives an example of how to check access for serviceaccounts.

The correct command is:
kubectl auth can-i <verb> <resource> --as=system:serviceaccount:<namespace>:<serviceaccountname> [-n <namespace>]

To check whether the tiller account has the right to create a ServiceMonitor object:
kubectl auth can-i create servicemonitor --as=system:serviceaccount:staging:tiller -n staging

Note: to solve my issue with the tiller account, I had to add rights to the servicemonitors resource in the monitoring.coreos.com apiGroup. After that change, the above command returned yes (finally) and the installation of our Helm Chart succeeded.

Updated tiller-manager role:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-manager
  labels:
    org: ipos
    app: tiller
  annotations:
    description: "Role to give Tiller appropriate access in namespace"
    ref: "https://docs.helm.sh/using_helm/#example-deploy-tiller-in-a-namespace-restricted-to-deploying-resources-only-in-that-namespace"
rules:
# full access to core, batch, extensions and apps resources in this namespace
- apiGroups: ["", "batch", "extensions", "apps"]
  resources: ["*"]
  verbs: ["*"]
# allow Tiller to manage ServiceMonitor objects (Prometheus Operator CRD)
- apiGroups:
    - monitoring.coreos.com
  resources:
    - servicemonitors
  verbs:
    - '*'
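
The Role by itself grants nothing until it is bound to the tiller serviceaccount. For completeness, the RoleBinding looks roughly like this (the binding name here is made up; adjust the names to your setup):

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-binding   # hypothetical name
  namespace: staging
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: staging
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io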
Joost den Boer
  • `kubectl auth can-i` is a really helpful command for these kinds of problems, thanks – Ivan Aracki Jul 18 '19 at 10:14
  • One common mistake when using `kubectl auth can-i` is forgetting to add `-n <namespace>` when checking a `rolebinding`, since a `rolebinding` only grants permissions within a namespace. – B.Z. Jan 30 '23 at 13:40
  • `--list` is also useful to show all permissions for a given account: `kubectl auth can-i --as=system:serviceaccount:default:default --list` – arve0 May 05 '23 at 06:55
16

This displays which permissions the service account prom-stack-grafana has, e.g.:

kubectl -n monitoring auth can-i --list --as=system:serviceaccount:monitoring:prom-stack-grafana
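
The output is a table of resources and the verbs allowed on them, roughly like this (the rows are illustrative; the exact list depends on your cluster and roles):

Resources                                       Non-Resource URLs   Resource Names   Verbs
selfsubjectaccessreviews.authorization.k8s.io   []                  []               [create]
selfsubjectrulesreviews.authorization.k8s.io    []                  []               [create]
configmaps                                      []                  []               [get list watch]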

Sreeni
8

Note: the kubectl auth can-i command has an edge case / gotcha / mistake to avoid that is worth being aware of.
Basically, a User subject can be named with the same syntax as a service account, and that can trick the check.
It had me tripped up for quite a while, so I wanted to share it.

alias k=kubectl
k create ns dev 
k create role devr --resource=pods --verb=get -n=dev 
k create rolebinding devrb --role=devr --user=system:serviceaccount:dev:default -n=dev # wrong syntax 
k auth can-i get pods -n=dev --as=system:serviceaccount:dev:default  # right syntax
# yes 

(The fact that k auth can-i said yes made me think my rolebinding used the correct syntax, but it's wrong.)

This is correct:

k delete ns dev
k create ns dev 
k create role devr --resource=pods --verb=get -n=dev 
k create rolebinding devrb --role=devr --serviceaccount=dev:default -n=dev # right syntax 
k auth can-i get pods -n=dev --as=system:serviceaccount:dev:default  # right syntax
# yes

Here is visual proof that it's wrong:

k create rolebinding devrb1 --role=devr --user=system:serviceaccount:dev:default -n=dev --dry-run=client -o yaml | grep subjects -A 4
# subjects:
# - apiGroup: rbac.authorization.k8s.io
#   kind: User
#   name: system:serviceaccount:dev:default

k create rolebinding devrb2 --role=devr --serviceaccount=dev:default -n=dev --dry-run=client -o yaml | grep subjects -A 4
# subjects:
# - kind: ServiceAccount
#   name: default
#   namespace: dev

If ever in doubt about the syntax for imperative RBAC commands, here's a fast way to look it up (see also the --help sketch after this list):

  1. kubernetes.io/docs
  2. search "rbac"
  3. control+f "kubectl create rolebinding" on the page to get example of correct syntax.
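
Alternatively, the built-in help works offline and documents the flags (a quick check, no cluster needed):

kubectl create rolebinding --help   # the --serviceaccount flag help shows the <namespace>:<name> format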
neoakris
0

If you want to test live, create a kubeconfig with your serviceAccount's secret. I've created the script below to do it automatically:

#!/usr/bin/env bash

set -euo pipefail

function generate_sa() {
  local sa
  local namespace
  local context
  local target_namespace
  local output_file
  local "${@}"
  sa=${sa:?set sa}
  namespace=${namespace:?set namespace of the service account}
  context=${context:?set context}
  target_namespace=${target_namespace:? set target context namespace}
  output_file=${output_file:-/tmp/kube.conf}

  cluster=$(kubectl config view -o yaml | yq '.contexts.[] | select ( .name == "'"${context}"'") | .context.cluster')
  if [ -z "${cluster}" ]; then
    echo "We didn't find the cluster from context ${context}"
    exit 1
  fi

  server=$(kubectl config view -o yaml | yq '.clusters.[] | select ( .name == "'"${cluster}"'") | .cluster.server')

  secret=$(kubectl get sa "${sa}" -o jsonpath='{.secrets[0].name}' -n "${namespace}")
  ca=$(kubectl get secret/"${secret}" -o jsonpath='{.data.ca\.crt}' -n "${namespace}")
  token=$(kubectl get secret/"${secret}" -o jsonpath='{.data.token}' -n "${namespace}" | base64 --decode)

  cat <<EOF > "${output_file}"
---
apiVersion: v1
kind: Config
clusters:
- name: ${cluster}
  cluster:
    certificate-authority-data: ${ca}
    server: ${server}
contexts:
- name: ${cluster}
  context:
    cluster: ${cluster}
    namespace: ${target_namespace}
    user: system:serviceaccount:${namespace}:${sa}
current-context: ${cluster}
users:
- name: system:serviceaccount:${namespace}:${sa}
  user:
    token: ${token}
EOF

echo >&2 "Now run: export KUBECONFIG=${output_file}"
}

generate_sa "${@}"

Then execute it; it will create a kubeconfig file.

generate_sa_config.sh \
  sa=service-account-name \
  namespace=namespace-of-service-account \
  context=kubernetes-cluster-context-name \
  target_namespace=namespace-of-context

Don't forget to export the KUBECONFIG environment variable. Now it's as if you are the real ServiceAccount, and you can play with roles.
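
For example (using the script's default output file):

export KUBECONFIG=/tmp/kube.conf            # default output_file from the script above
kubectl auth can-i create servicemonitor    # now answered as the service account itself
kubectl get pods                            # any other command also runs as that service account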

xbo