What I want to achieve is this: I need to generate a kubeconfig based on a service account token.
The cluster is not integrated with Azure AD.
Here is what the generated kubeconfig looks like:
apiVersion: v1
kind: Config
clusters:
- name: <cluster-name>
  cluster:
    certificate-authority-data: <cluster certificate>
    server: https://[aks host]:443
contexts:
- name: <ctx>
  context:
    cluster: <cluster-name>
    namespace: <ns>
    user: <service account name>
current-context: <ctx>
users:
- name: <service account name>
  user:
    token: <service account token>
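For context, a token-based kubeconfig like the one above can be assembled roughly as follows. This is only a sketch (it needs a live cluster, and the namespace/service-account names are placeholders); note that the token in the secret is stored base64-encoded and must be decoded in full, trailing "=" padding included:

```shell
# Placeholders: substitute your own namespace and service account.
NAMESPACE=my-namespace
SA=my-service-account

# On clusters where the service account has an auto-created token secret,
# look up the secret name, then pull and base64-decode the token.
SECRET=$(kubectl -n "$NAMESPACE" get sa "$SA" -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl -n "$NAMESPACE" get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode)

# Wire the decoded token into a kubeconfig entry.
kubectl config set-credentials "$SA" --token="$TOKEN"
kubectl config set-context sa-ctx --cluster=<cluster-name> --namespace="$NAMESPACE" --user="$SA"
```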
However, I end up with this error: error: You must be logged in to the server (Unauthorized)
When I do az aks get-credentials ..., the resulting kubeconfig has client-certificate-data and client-key-data. Of course, if I add those to the generated kubeconfig above, it works fine, but those are still the admin credentials. The service account is bound to a namespace with some RBAC rules in place.
The question is: is the token not enough for Azure AKS to authenticate? If not, what is another way of achieving this?
P.S.: I don't want to use an Azure DevOps Kubernetes Service Connection of type Service Account; I just need a kubeconfig.
Update: here is the same question: How to create a kubectl config file for serviceaccount
I had missed the "=" in the token -.-. That solved my problem.
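The root cause comes down to base64 padding: the token in the secret is base64-encoded, and dropping a trailing "=" when copying it either breaks decoding outright or yields a corrupted token, which the API server then rejects as Unauthorized. A small, cluster-independent illustration (the sample value is made up):

```python
import base64

# A value whose base64 encoding needs one "=" of padding (14 bytes).
encoded = base64.b64encode(b"sample-token-x").decode()
print(encoded)  # ends with "="

# With the padding intact, decoding recovers the original token.
print(base64.b64decode(encoded))  # b'sample-token-x'

# Dropping the trailing "=" leaves a string whose length is not a
# multiple of 4, and decoding fails with an "incorrect padding" error.
try:
    base64.b64decode(encoded[:-1], validate=True)
except Exception as e:
    print("decode failed:", e)
```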