Currently on k8s 1.24.13
I have client cert auth enabled and functioning normally (almost) from my extension api-server using the `client-ca-file` option, with clients presenting certs in which the Common Name (CN) maps to a User subject in RBAC rules. This works great for my use case with HTTPS traffic from containers.
More info on this here: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#x509-client-certs
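Roughly, the working setup looks like this (the cert path, CN, and RBAC names below are placeholders, not my real ones):

```
# Inspect the CN the extension api-server sees from the presented client cert
openssl x509 -in client.crt -noout -subject
# subject=CN = example-user

# RBAC binding whose User subject matches that CN (ClusterRole is a placeholder)
kubectl create clusterrolebinding example-user-view \
  --clusterrole=view \
  --user=example-user
```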
However, when I connect a client to the server via a kubeconfig file that includes `certificate-authority-data`, `client-certificate-data`, and `client-key-data` (all base64 encoded), the responses from my extension api-server indicate that instead of reading the CN from the client cert data in the kubeconfig, it ignores it and falls back to a default service account, which of course does not have the proper permissions. I can confirm this happens both from a container that connects using the kubeconfig and when I use the kubeconfig directly from outside the cluster network.
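The kubeconfig is built along these lines (server URL, file paths, and cluster/user names are placeholders):

```
# Embed the CA, client cert, and client key as base64 data in the kubeconfig
kubectl config set-cluster my-cluster \
  --server=https://api.example.internal:6443 \
  --certificate-authority=ca.crt \
  --embed-certs=true \
  --kubeconfig=client.kubeconfig

kubectl config set-credentials example-user \
  --client-certificate=client.crt \
  --client-key=client.key \
  --embed-certs=true \
  --kubeconfig=client.kubeconfig

kubectl config set-context default \
  --cluster=my-cluster \
  --user=example-user \
  --kubeconfig=client.kubeconfig

kubectl config use-context default --kubeconfig=client.kubeconfig
```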
Example error response (despite providing a proper cert in kubeconfig):
User "system:serviceaccount:ops:default" cannot get resource "namespaces" in API group "" at the cluster scope
Context: `ops` is the namespace where this combo kube-api-server + kube-controller-manager pod is running.
I've verified the cert data is correct and that the RBAC rules are set up for the cert CN I expect to be presented, but I don't know where else to look. It doesn't make sense that specifically using the kubeconfig method to connect results in the cert data being effectively ignored.
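For completeness, these are the kinds of checks I've been running (the kubeconfig path and `example-user` CN are placeholders; standard kubectl/openssl tooling assumed):

```
# Confirm the CN actually embedded in the kubeconfig's client-certificate-data
grep client-certificate-data client.kubeconfig | awk '{print $2}' | base64 -d \
  | openssl x509 -noout -subject

# Confirm RBAC would allow that user (requires impersonation rights)
kubectl auth can-i get namespaces --as=example-user

# Run the failing request with verbose logging to see which credentials are sent
kubectl --kubeconfig=client.kubeconfig get namespaces -v=8

# From inside a pod: the system:serviceaccount:ops:default identity in the error
# suggests the mounted service account token may be used instead of the kubeconfig
ls /var/run/secrets/kubernetes.io/serviceaccount/
```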