
I have an infrastructure I'm deploying using Terraform in AWS. This infrastructure can be deployed to different environments, for which I'm using workspaces.

Most of the components in the deployment should be created separately for each workspace, but I have several key components that I wish to be shared between them, primarily:

  • IAM roles and permissions
  • They should use the same API Gateway, but each workspace should deploy to different paths and methods

For example:

resource "aws_iam_role" "lambda_iam_role" {
  name = "LambdaGeneralRole"
  policy = <...>
}

resource "aws_lambda_function" "my_lambda" {
  function_name = "lambda-${terraform.workspace}"
  role = "${aws_iam_role.lambda_iam_role.arn}"
}

The first resource is an IAM role that should be shared across all instances of that Lambda, and shouldn't be created more than once.

The second resource is a Lambda function whose name depends on the current workspace, so each workspace will deploy and keep track of the state of a different Lambda.

How can I share resources, and their state, between different Terraform workspaces?

Bastian

mittelmania

2 Answers


For the shared resources, I create them in a separate template and then refer to them using terraform_remote_state in the template where I need information about them.

What follows is how I implement this; there are probably other ways to do it. YMMV.

In the shared services template (where you would put your IAM role) I use a Terraform backend to store the state, including outputs, for the shared services template in Consul. You also need to output any information you want to use in other templates.

shared_services template

terraform {
  backend "consul" {
    address = "consul.aa.example.com:8500"
    path    = "terraform/shared_services"
  }
}

resource "aws_iam_role" "lambda_iam_role" {
  name = "LambdaGeneralRole"
  policy = <...>
}

output "lambda_iam_role_arn" {
  value = "${aws_iam_role.lambda_iam_role.arn}"
}

A "backend" in Terraform determines how state is loaded and how an operation such as apply is executed. This abstraction enables non-local file state storage, remote execution, etc.
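Consul is just one choice of backend. If you don't run Consul, any remote backend works the same way; for example, an S3 backend (the bucket name and key here are hypothetical, substitute your own):

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state"                 # hypothetical bucket name
    key    = "shared_services/terraform.tfstate"  # state object for the shared template
    region = "us-east-1"
  }
}
```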

In the individual template you invoke the backend as a data source using terraform_remote_state and can use the data in that template.

terraform_remote_state:

Retrieves state meta data from a remote backend

individual template

data "terraform_remote_state" "shared_services" {
  backend = "consul"
  config {
    address = "consul.aa.example.com:8500"
    path    = "terraform/shared_services"
  }
}

# This is where you use the terraform_remote_state data source
resource "aws_lambda_function" "my_lambda" {
  function_name = "lambda-${terraform.workspace}"
  role = "${data.terraform_remote_state.shared_services.lambda_iam_role_arn}"
}
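Note that the attribute reference above uses the Terraform 0.11 style. On Terraform 0.12 and later, remote state outputs are accessed through the `outputs` attribute instead:

```hcl
# Terraform 0.12+ syntax for the same reference
role = data.terraform_remote_state.shared_services.outputs.lambda_iam_role_arn
```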

References:

https://www.terraform.io/docs/state/remote.html

https://www.terraform.io/docs/backends/

https://www.terraform.io/docs/providers/terraform/d/remote_state.html

kenlukas
  • I have not tested it but I see no reason why it shouldn't, as long as the `terraform.tfstate` file is available. – kenlukas Feb 26 '19 at 15:05
  • I have just tried and I can confirm `terraform_remote_state` works with `backend = "local"`, and even without specifying any backend, which seems to default to `local`. – Bastian Feb 28 '19 at 15:10
  • In my case I need to retrieve a resource that exists only in my production workspace. I have discovered in the docs that you can specify `workspace = "production"` within `terraform_remote_state`. That's handy. – Bastian Feb 28 '19 at 15:14
  • The part that I am not sure I understand is the way to use the `output`. I can output the resource from the workspace where it exists in order to reference it from the workspace where it does not exist. But then the output does not work in the workspace where that resource does not exist. What am I missing? – Bastian Feb 28 '19 at 16:14
  • I don't think this actually answers the question when workspaces are involved. It just describes how to retrieve from separate state. The `lambda_iam_role` is still created in each workspace, which is what the question (and I) are trying to avoid. – Andy Shinn Apr 18 '19 at 02:38
  • If a resource is destroyed in the shared template (which has a different state file), doesn't that cause a failed apply when an individual resource that references the destroyed resource is applied? Splitting the state files means Terraform is not aware that the individual resource should be destroyed as well. – Mario Ishac Jul 15 '21 at 07:43
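As one of the comments notes, the `terraform_remote_state` data source also accepts a `workspace` argument, so a template can read the state of a specific workspace of the shared configuration (the workspace name here is an example):

```hcl
data "terraform_remote_state" "shared_services" {
  backend   = "consul"
  workspace = "production"  # read state from the production workspace
  config {
    address = "consul.aa.example.com:8500"
    path    = "terraform/shared_services"
  }
}
```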

Resources like aws_iam_role that have a name attribute will not create a new instance if the name value matches an already-provisioned resource.

So, the following will create a single aws_iam_role named LambdaGeneralRole.

resource "aws_iam_role" "lambda_iam_role" {
  name = "LambdaGeneralRole"
  policy = <...>
}

...

resource "aws_iam_role" "lambda_iam_role_reuse_existing_if_name_is_LambdaGeneralRole" {
  name = "LambdaGeneralRole"
  policy = <...>
}

Similarly, the aws provider will effectively create one S3 bucket named my-store given the following:

resource "aws_s3_bucket" "store-1" {
  bucket        = "my-store"
  acl           = "public-read"
  force_destroy = true
}

...

resource "aws_s3_bucket" "store-2" {
  bucket        = "my-store"
  acl           = "public-read"
  force_destroy = true
}

This behaviour holds even if the resources are defined in different workspaces, each with its own separate Terraform state.


To get the best of this approach, define the shared resources as a separate configuration. That way, you don't risk destroying a shared resource when running terraform destroy.
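For example, a layout along these lines (directory names are hypothetical) keeps the shared resources out of reach of a per-workspace terraform destroy:

```
infrastructure/
├── shared_services/   # IAM roles, API Gateway – applied once, own state
│   └── main.tf
└── app/               # per-workspace resources (Lambda functions, paths/methods)
    └── main.tf
```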

Igwe Kalu
  • Wouldn't this have some side effects, such as calling "terraform destroy" destroying the shared infrastructure used by multiple workspaces? – Oren Sep 27 '19 at 13:53
  • It would if all your resources are defined in a single configuration. – Igwe Kalu Sep 28 '19 at 20:19