I am trying to use the Terraform Helm provider (https://www.terraform.io/docs/providers/helm/index.html) to deploy a workload to a GKE cluster.

I am more or less following Google's example - https://github.com/GoogleCloudPlatform/terraform-google-examples/blob/master/example-gke-k8s-helm/helm.tf, but I do want to use RBAC by creating the service account manually.

My helm.tf looks like this:

variable "helm_version" {
  default = "v2.13.1"
}

data "google_client_config" "current" {}

provider "helm" {
  tiller_image = "gcr.io/kubernetes-helm/tiller:${var.helm_version}"
  install_tiller = false # Temporary

  kubernetes {
    host                   = "${google_container_cluster.data-dome-cluster.endpoint}"
    token                  = "${data.google_client_config.current.access_token}"

    client_certificate     = "${base64decode(google_container_cluster.data-dome-cluster.master_auth.0.client_certificate)}"
    client_key             = "${base64decode(google_container_cluster.data-dome-cluster.master_auth.0.client_key)}"
    cluster_ca_certificate = "${base64decode(google_container_cluster.data-dome-cluster.master_auth.0.cluster_ca_certificate)}"
  }
}


resource "helm_release" "nginx-ingress" {
  name  = "ingress"
  chart = "stable/nginx-ingress"

  values = [<<EOF
rbac:
  create: false
controller:
  stats:
    enabled: true
  metrics:
    enabled: true
  service:
    annotations:
      cloud.google.com/load-balancer-type: "Internal"
    externalTrafficPolicy: "Local"
EOF
  ]

  depends_on = [
    "google_container_cluster.data-dome-cluster",
  ]
}

I am getting the following error:

Error: Error applying plan:

1 error(s) occurred:

* module.data-dome-cluster.helm_release.nginx-ingress: 1 error(s) occurred:

* helm_release.nginx-ingress: error creating tunnel: "pods is forbidden: User \"client\" cannot list pods in the namespace \"kube-system\""

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

This happens after I manually created Helm RBAC and installed Tiller.
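
For reference, the manual setup was the usual Helm 2 flow, roughly the commands below (the `tiller` service account name here is an assumption; substitute whatever name was passed to `--service-account`):

kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller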

I also tried setting "install_tiller = true" earlier, and got exactly the same error once Tiller was installed.

"kubectl get pods" works without any problems.

What is this user "client", and why is it forbidden from accessing the cluster?

Thanks

  • When you installed Helm on the Cluster (Tiller) did you specify the `--service-account` flag when running `helm init`? If you want to install Tiller via terraform, you also need to add the `service_account` attribute. – Blokje5 Apr 15 '19 at 13:04
  • I did specify `--service-account` – Meir Tseitlin Apr 15 '19 at 17:21
  • Can you describe the service-account, i.e. `kubectl describe clusterrole ` and add it to your post? – Blokje5 Apr 15 '19 at 18:10

1 Answer

Creating resources for the service account and cluster role binding explicitly works for me:

resource "kubernetes_service_account" "helm_account" {
  depends_on = [
    "google_container_cluster.data-dome-cluster",
  ]
  metadata {
    name      = "${var.helm_account_name}"
    namespace = "kube-system"
  }
}

resource "kubernetes_cluster_role_binding" "helm_role_binding" {
  metadata {
    name = "${kubernetes_service_account.helm_account.metadata.0.name}"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }
  subject {
    api_group = ""
    kind      = "ServiceAccount"
    name      = "${kubernetes_service_account.helm_account.metadata.0.name}"
    namespace = "kube-system"
  }
  provisioner "local-exec" {
    command = "sleep 15"
  }
}

provider "helm" {
  service_account = "${kubernetes_service_account.helm_account.metadata.0.name}"
  tiller_image = "gcr.io/kubernetes-helm/tiller:${var.helm_version}"
  #install_tiller = false # Temporary

  kubernetes {
    host                   = "${google_container_cluster.data-dome-cluster.endpoint}"
    token                  = "${data.google_client_config.current.access_token}"

    client_certificate     = "${base64decode(google_container_cluster.data-dome-cluster.master_auth.0.client_certificate)}"
    client_key             = "${base64decode(google_container_cluster.data-dome-cluster.master_auth.0.client_key)}"
    cluster_ca_certificate = "${base64decode(google_container_cluster.data-dome-cluster.master_auth.0.cluster_ca_certificate)}"
  }
}

  • My plan is complaining about the **service_account** and **tiller_image** attributes; my helm version is **2.6.0** – George Udosen Sep 01 '22 at 15:29
  • Clarification needed. Helm 2.6 or Terraform Helm Provider 2.6? Helm before 3.0.0 used Tiller. Helm from 3.0.0 forwards removed it. The Helm Provider hasn't supported tiller_image and service_account for a long time (0.10.6? Last one compatible with Helm 2.x?). Terraform Helm Providers since the 1.0.0 release have used Helm 3.x. Simply remove service_account and tiller_image from your provider "Helm" { } statement. – Eric Schoen Sep 02 '22 at 21:09
  • Thanks for the update. I have been able to deploy to gke using helm! – George Udosen Sep 03 '22 at 08:38
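
Following up on the last comment: with Helm 3.x and a Terraform helm provider from 1.0.0 onward, Tiller no longer exists, so there is no service_account or tiller_image to configure. A minimal sketch of the equivalent provider block for the cluster in the question, assuming Terraform 0.12+ syntax, would be:

provider "helm" {
  # Helm 3: no Tiller, so no service_account / tiller_image settings.
  kubernetes {
    host                   = google_container_cluster.data-dome-cluster.endpoint
    token                  = data.google_client_config.current.access_token
    cluster_ca_certificate = base64decode(google_container_cluster.data-dome-cluster.master_auth[0].cluster_ca_certificate)
  }
}

The helm_release resources themselves stay the same; only the Tiller-related provider settings go away.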