1

I'm trying to install Vault on a Kubernetes cluster by running the Vault Helm chart from Terraform. For some reason the ingress doesn't get created. When I port-forward the pod, the UI comes up fine, so I assume everything is working, but the ingress not being available is tripping me up. Edit: There are no errors while running terraform apply. If there is somewhere else I should look, please tell me. This is my helm_release resource:

resource "helm_release" "vault" {
  name       = "vault"
  repository = "https://helm.releases.hashicorp.com"
  chart      = "vault"

  namespace        = "vault"
  create_namespace = true

  set {
    name  = "ui.enabled"
    value = "true"
  }

  # Set the ingress up to use the cert-manager-provided secret
  set {
    name  = "ingress.enabled"
    value = "true"
  }

  set {
    name  = "ingress.annotations.cert-manager\\.io/cluster-issuer"
    value = "letsencrypt-cluster-prod"
  }

  set {
    name  = "ingress.annotations.kubernetes\\.io/ingress\\.class"
    value = "nginx"
  }

  set {
    name  = "ingress.tls[0].hosts[0]"
    value = var.vault_hostname
  }

  set {
    name  = "ingress.hosts[0].host"
    value = var.vault_hostname
  }

  set {
    name  = "ingress.hosts[0].paths[0]"
    value = "/"
  }
}

I'm relatively new to all of these techs, having worked with puppet before, so if someone could point me in the right direction, I'd be much obliged.

eviscares
  • Are there any errors or it just shows nothing? – Marko E Aug 04 '22 at 07:42
  • There are no errors while running terraform apply. If there is another point where I should look, please tell me ^^ – eviscares Aug 04 '22 at 07:47
  • It seems you provided a wrong value: https://github.com/hashicorp/vault-helm/blob/main/templates/server-ingress.yaml#L4. It should be server.ingress.enabled. – The Fool Aug 04 '22 at 08:02
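Following the last comment, the Vault chart nests all of its ingress settings under the `server` key, so the `set` blocks in the question would need a `server.` prefix. A sketch of the corrected keys (assuming the rest of the resource stays as shown above):

```hcl
  # The Vault chart reads ingress settings from server.ingress.*,
  # not from a top-level ingress.* block.
  set {
    name  = "server.ingress.enabled"
    value = "true"
  }

  set {
    name  = "server.ingress.annotations.cert-manager\\.io/cluster-issuer"
    value = "letsencrypt-cluster-prod"
  }

  set {
    name  = "server.ingress.hosts[0].host"
    value = var.vault_hostname
  }
```

After `terraform apply`, the ingress should then show up in `kubectl get ingress -n vault`.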

1 Answer


I enabled the ingress with a local variable; here is a working example:

locals {
  values = {
    server = {
      ingress = {
        enabled = var.server_enabled
        labels = {
          traffic = "external"
        }
        ingressClassName = "nginx"
        annotations = {
          "kubernetes.io/tls-acme"                   = "true"
          "nginx.ingress.kubernetes.io/ssl-redirect" = "true"
        }
        hosts = [{
          host  = "vault.example.com"
          paths = ["/"]
        }]
        tls = [
          {
            secretName = "vault-tls-secret"
            hosts      = ["vault.example.com"]
          }
        ]
      }
    }
  }
}

resource "helm_release" "vault" {
  name             = "vault"
  namespace        = "vault"
  repository       = "https://helm.releases.hashicorp.com"
  chart            = "vault"
  version          = "0.19.0"
  create_namespace = true

  # other values to set
  #set { 
  #  name = "server.ha.enabled"
  #  value = "true"
  #}

  values = [
    yamlencode(local.values)
  ]
}
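For reference, the `yamlencode(local.values)` call above renders roughly the following values.yaml (assuming `var.server_enabled = true`), which matches the `server.ingress` structure the chart expects:

```yaml
# Approximate rendering of local.values as Helm values
server:
  ingress:
    enabled: true
    labels:
      traffic: external
    ingressClassName: nginx
    annotations:
      kubernetes.io/tls-acme: "true"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
    hosts:
      - host: vault.example.com
        paths:
          - /
    tls:
      - secretName: vault-tls-secret
        hosts:
          - vault.example.com
```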
Adiii
  • That's because you have the root key 'server' in the YAML here. For open-source helm charts, artifacthub helps you to find the correct structure of the values.yaml, e.g: https://artifacthub.io/packages/helm/hashicorp/vault?modal=values-schema – GeertPt Aug 04 '22 at 08:33
  • The culprit is that it's under server; this has nothing to do with local variables. – The Fool Aug 04 '22 at 08:33
  • question should be closed as typo, imo. – The Fool Aug 04 '22 at 08:34
  • I agree, it has nothing to do with the local variable as such, but for dealing with the nested nature of the Helm values this approach seems clearer – Adiii Aug 04 '22 at 08:50