
I'm deploying an AKS Kubernetes cluster with Terraform.

The cluster has RBAC enabled with Azure Active Directory integration.

The cluster creation goes fine, but after that Terraform tries to perform some tasks on the cluster, such as creating Kubernetes roles, storage classes, etc., and fails there with an Unauthorized error message, like this:

module.k8s_cluster.module.infra.kubernetes_storage_class.managed-premium-retain: Creating...
module.k8s_cluster.module.infra.kubernetes_cluster_role.containerlogs: Creating...
module.k8s_cluster.module.infra.kubernetes_namespace.add_pod_identity: Creating...
module.k8s_cluster.module.infra.kubernetes_storage_class.managed-standard-retain: Creating...
module.k8s_cluster.module.infra.kubernetes_storage_class.managed-premium-delete: Creating...
module.k8s_cluster.module.appgw.kubernetes_namespace.agic[0]: Creating...
module.k8s_cluster.module.infra.kubernetes_storage_class.managed-standard-delete: Creating...

Error: Unauthorized

  on .terraform/modules/k8s_cluster/modules/infra/k8s-roles.tf line 1, in resource "kubernetes_cluster_role" "containerlogs":
   1: resource "kubernetes_cluster_role" "containerlogs" {



Error: Unauthorized

  on .terraform/modules/k8s_cluster/modules/infra/k8s-storages-classes.tf line 1, in resource "kubernetes_storage_class" "managed-standard-retain":
   1: resource "kubernetes_storage_class" "managed-standard-retain" {



Error: Unauthorized

  on .terraform/modules/k8s_cluster/modules/infra/k8s-storages-classes.tf line 14, in resource "kubernetes_storage_class" "managed-standard-delete":
  14: resource "kubernetes_storage_class" "managed-standard-delete" {



Error: Unauthorized

  on .terraform/modules/k8s_cluster/modules/infra/k8s-storages-classes.tf line 27, in resource "kubernetes_storage_class" "managed-premium-retain":
  27: resource "kubernetes_storage_class" "managed-premium-retain" {



Error: Unauthorized

  on .terraform/modules/k8s_cluster/modules/infra/k8s-storages-classes.tf line 40, in resource "kubernetes_storage_class" "managed-premium-delete":
  40: resource "kubernetes_storage_class" "managed-premium-delete" {



Error: Unauthorized

  on .terraform/modules/k8s_cluster/modules/infra/r-aad-pod-identity.tf line 5, in resource "kubernetes_namespace" "add_pod_identity":
   5: resource "kubernetes_namespace" "add_pod_identity" {



Error: Unauthorized

  on .terraform/modules/k8s_cluster/modules/tools/agic/helm-agic.tf line 1, in resource "kubernetes_namespace" "agic":
   1: resource "kubernetes_namespace" "agic" {

As you can see, these are not Azure errors but Kubernetes ones.

It seems like I don't have the rights to create the above resources on the newly created cluster. What do I need to do, and where, in order to grant my user account permissions for these Terraform tasks?

nixmind
  • How did you configure the Terraform Kubernetes provider? Have you statically defined TLS certificate credentials, or are you using your current kubectl context? – Jean-Philippe Bond Dec 17 '20 at 02:16
  • Neither; I used Terraform with my Azure account. I don't actually have any context set up for Terraform. – nixmind Dec 17 '20 at 05:27
  • Have you done az aks get-credentials? – Jean-Philippe Bond Dec 17 '20 at 13:14
  • Found this post from kubernet.dev, which fixed my issue in a second. It has to do with Azure AKS AAD: https://www.kubernet.dev/terraform-and-aad-rbac-integration-for-aks/ – Oren Sep 13 '21 at 07:15
  • @Oren this should be posted as an answer; even if it just copies/pastes from the URL, there's no guarantee people won't get a 404 in the future. – Aristu Oct 15 '21 at 17:42

1 Answer


The simplest answer is to change your Kubernetes provider configuration from

provider "kubernetes" {
  load_config_file       = false
  host                   = azurerm_kubernetes_cluster.main.kube_config.0.host
  username               = azurerm_kubernetes_cluster.main.kube_config.0.username
  password               = azurerm_kubernetes_cluster.main.kube_config.0.password
  client_certificate     = base64decode(azurerm_kubernetes_cluster.main.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.main.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.main.kube_config.0.cluster_ca_certificate)
}

to

provider "kubernetes" {
  load_config_file       = false
  host                   = azurerm_kubernetes_cluster.main.kube_admin_config.0.host
  username               = azurerm_kubernetes_cluster.main.kube_admin_config.0.username
  password               = azurerm_kubernetes_cluster.main.kube_admin_config.0.password
  client_certificate     = base64decode(azurerm_kubernetes_cluster.main.kube_admin_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.main.kube_admin_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.main.kube_admin_config.0.cluster_ca_certificate)
}
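
This works because kube_admin_config returns the cluster's local admin credentials, which bypass AAD entirely. If you'd rather keep authenticating through AAD with the regular kube_config, another option is to make your own AAD group a cluster admin at creation time. A minimal sketch, assuming a managed AAD integration on the azurerm 2.x schema and a hypothetical variable admin_group_object_ids holding the object IDs of your AAD admin group(s):

```hcl
resource "azurerm_kubernetes_cluster" "main" {
  # ... name, location, resource_group_name, dns_prefix, default_node_pool, etc. ...

  role_based_access_control {
    enabled = true
    azure_active_directory {
      managed = true
      # Members of these AAD groups get cluster-admin through the
      # non-admin kube_config, so Terraform can manage in-cluster resources
      admin_group_object_ids = var.admin_group_object_ids
    }
  }
}
```

With this in place, the identity running Terraform just needs to be a member of one of those groups.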

If local accounts are disabled, this will not work, but you can use this instead:

provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.this.kube_config.0.host
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.this.kube_config.0.cluster_ca_certificate)
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "./kubelogin"
    args = [
      "get-token",
      "--login", "spn",
      "--environment", "AzurePublicCloud",
      "--tenant-id", var.tenant_id,
      "--server-id", var.aad_server_id,
      "--client-id", var.client_id,
      "--client-secret", var.client_secret
    ]
  }
}

Note that you'll need to include the kubelogin binary in the repo. More details here.
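
The exec block only handles authentication; the service principal still needs authorization inside the cluster. If the cluster is configured with Azure RBAC for Kubernetes authorization, that can be granted with an Azure role assignment. A sketch under that assumption, using a hypothetical variable client_object_id for the service principal's object ID:

```hcl
resource "azurerm_role_assignment" "aks_admin" {
  scope                = azurerm_kubernetes_cluster.main.id
  # Grants full in-cluster admin; prefer a narrower built-in role
  # (e.g. scoped to a namespace) where possible
  role_definition_name = "Azure Kubernetes Service RBAC Cluster Admin"
  principal_id         = var.client_object_id
}
```

If the cluster uses plain AAD integration with Kubernetes RBAC instead, the equivalent is adding the service principal to an AAD group referenced by a ClusterRoleBinding.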

Will