
I am getting this error whenever I try to create a persistent volume claim and volume according to this: kubernetes_persistent_volume_claim

Error: Post "http://localhost/api/v1/namespaces/default/persistentvolumeclaims": dial tcp [::1]:80: connectex: No connection could be made because the target machine actively refused it.

I have also tried spooling up an Azure disk and creating a volume through that, as outlined here: Persistent Volume using Azure Managed Disk

My terraform kubernetes provider looks like this:

provider "kubernetes" {
alias                  = "provider_kubernetes"
host                   = module.kubernetes-service.kube_config.0.host
username               = module.kubernetes-service.kube_config.0.username
password               = module.kubernetes-service.kube_config.0.password
client_certificate     = base64decode(module.kubernetes-service.kube_config.0.client_certificate)
client_key             = base64decode(module.kubernetes-service.kube_config.0.client_key)
cluster_ca_certificate = base64decode(module.kubernetes-service.kube_config.0.cluster_ca_certificate)

}

I don't believe it's even hitting the K8s cluster in my resource group. Is there something I am missing, or am I not understanding how to put this together the right way? I have the resource group and the K8s resource spooled up in the same Terraform configuration, which creates fine, but when it comes to setting up the persistent storage I can't get past this error.

Alam
2 Answers


The provider is aliased, so first make sure that all Kubernetes resources use the correct provider. You have to specify the aliased provider on each resource:

resource "kubernetes_cluster_role_binding" "current" {
  provider = kubernetes.provider_kubernetes

  # [...]
}
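
Applied to the resource from the question, a persistent volume claim would look roughly like this (a minimal sketch only; the claim name, size and storage class are placeholders, and managed-premium assumes one of the storage classes AKS ships with by default):

resource "kubernetes_persistent_volume_claim" "example" {
  # Without this line Terraform uses the default, unconfigured kubernetes
  # provider, which falls back to localhost.
  provider = kubernetes.provider_kubernetes

  metadata {
    name = "example-claim" # placeholder name
  }

  spec {
    access_modes       = ["ReadWriteOnce"]
    storage_class_name = "managed-premium" # assumption: a built-in AKS storage class

    resources {
      requests = {
        storage = "5Gi" # placeholder size
      }
    }
  }
}

The localhost error in the question is the typical symptom of a resource picking up the unconfigured default provider instead of the aliased one.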

Another possibility is that the localhost connection error occurs because there is a pending change to the Kubernetes cluster resource, which leaves its returned attributes in a known-after-apply state.

Try terraform plan --target module.kubernetes-service.kube_config to see if that shows any pending changes to the K8s resource it presumably depends on. Better yet, target the Kubernetes cluster resource directly.

If it does, first apply those changes alone with terraform apply --target module.kubernetes-service.kube_config, then run a second apply without --target: terraform apply.

If there is no pending change to the cluster resource, check that the module returns the correct credentials. Also double-check that the use of base64decode is correct.
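
For instance, if the module is laid out roughly like this (a sketch only; the internal resource name main and the output shape are assumptions about how kubernetes-service is written), the direct target would be module.kubernetes-service.azurerm_kubernetes_cluster.main:

# Inside the kubernetes-service module (hypothetical layout).
resource "azurerm_kubernetes_cluster" "main" {
  # [...]
}

# The root module reads module.kubernetes-service.kube_config.0.*, so the
# output has to pass the kube_config block through unchanged.
output "kube_config" {
  value     = azurerm_kubernetes_cluster.main.kube_config
  sensitive = true
}

Regarding base64decode: the azurerm provider returns client_certificate, client_key and cluster_ca_certificate base64-encoded, while host, username and password are plain, so decoding only those three fields, as in the question's provider block, is correct.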

pst

> Try terraform plan --target module.kubernetes-service.kube_config to see if that shows any pending changes to the K8s resource it presumably depends on. Better yet, target the Kubernetes cluster resource directly.
>
> If it does, first apply those changes alone with terraform apply --target module.kubernetes-service.kube_config, then run a second apply without --target: terraform apply.

In my case it was a conflict in the IAM role definition and assignment that caused the problem. Executing terraform plan --target module.eks (module.eks being the module name used in the Terraform code) followed by terraform apply --target module.eks removed the conflicting role definitions. From the Terraform output I could see which role policy and role were causing the issue.

Pieter