
Does anyone have any cool ideas on how to handle Terraform provider credentials for AWS given these use cases:

  • Distributed environments (prod/pre/qa/test/dev) with individual AWS accounts
  • S3 backend remote state for all environments in a single AWS account
  • Test Kitchen using InSpec.

My current workflow requires changing AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY depending on the operation:

  • terraform init - requires access to S3 backend remote state
  • terraform plan/apply - requires access to specific environment + remote state
    • Non-functional (a single set of credentials doesn't have access to both the env + remote state)
  • kitchen converge - requires access to test environment + remote state
    • Non-functional (same reason as above)
  • kitchen verify - requires access to test environment.

Ideas

  • I wish I could store the S3 remote state in the respective environment accounts, but variables don't seem to be supported in the Terraform backend configuration (a partial-configuration sketch follows below).
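
A partial backend configuration can work around the no-variables limitation: leave the backend block empty and pass the per-environment settings at init time with -backend-config. A minimal sketch (the bucket, key, and file names are illustrative, not from my setup):

# backend.tf - the backend block itself cannot reference variables, so leave it partial
terraform {
  backend "s3" {}
}

# backends/dev.tfbackend - one settings file per environment
bucket = "dev-terraform-state"
key    = "infra/terraform.tfstate"
region = "eu-west-1"

Each environment would then be initialised with terraform init -backend-config=backends/dev.tfbackend.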
XeonFibre

1 Answer


You will need the main account to be able to assume a role in each environment account to perform the changes, while the main account keeps all of the remote state. This works well with Terraform workspaces. Assuming you have two workspaces, prod and dev, you can try something like this:

variable "workspace_roles" {
  default = {
    dev  = "arn:aws:iam::<dev account id>:role/terra_role"
    prod = "arn:aws:iam::<prodaccount id>:role/terra_role"
  }
}

provider "aws" {
 assume_role = var.workspace_roles[terraform.workspace]
}
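
Since the main account keeps all of the remote state, the S3 backend can simply point at a bucket there; per-workspace states are separated by workspace_key_prefix. A minimal sketch (bucket and key names are illustrative):

terraform {
  backend "s3" {
    bucket               = "main-account-terraform-state"  # bucket in the main (state) account
    key                  = "infra/terraform.tfstate"
    region               = "eu-west-1"
    workspace_key_prefix = "envs"                          # states land under envs/<workspace>/
  }
}

With that in place, terraform init only needs credentials for the main account, and terraform workspace select dev followed by terraform plan/apply assumes the dev role for the actual resources.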
Stargazer
  • Thanks, I've seen this recommended elsewhere too. It will be part of the solution. I just need to figure out how to handle Kitchen since it uses either the AWS env vars or ~/.aws/credentials. – XeonFibre Oct 20 '20 at 11:11
  • @XeonFibre you can pass `shared_credentials_profile` in the kitchen config to specify the profile you want to use. – Stargazer Oct 20 '20 at 17:06