
I have two subscriptions in Azure. Let's call them sub-dev and sub-prod. Under sub-dev I have resources for development (in a resource group rg-dev) and under sub-prod resources for production (in a resource group rg-prod).

Now, I would like to have only one state file for both dev and prod. I can do this because I am using Terraform workspaces (dev and prod). There is a Storage Account under sub-dev (rg-dev) named tfstate, with a container named tfcontainer. The Azure backend is configured like this:

terraform {
  backend "azurerm" {
    resource_group_name  = "rg-dev"
    storage_account_name = "tfstate"
    container_name       = "tfcontainer"
    key                  = "terraform.tfstate" 
  }
}

If I want to apply to the dev environment, I have to switch the Azure CLI to sub-dev. Similarly, for production I have to use sub-prod. I switch the default subscription with the az CLI:

az account set -s sub-prod

The problem is that the state's storage account is under sub-dev, not sub-prod. I get access errors on terraform init (or apply) when the default subscription is set to sub-prod:

Error: Failed to get existing workspaces: Error retrieving keys for Storage Account "tfstate": storage.AccountsClient#ListKeys: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailed" Message="The client 'user@example.com' with object id '<redacted>' does not have authorization to perform action 'Microsoft.Storage/storageAccounts/listKeys/action' over scope '/subscriptions/sub-prod/resourceGroups/rg-dev/providers/Microsoft.Storage/storageAccounts/tfstate' or the scope is invalid. If access was recently granted, please refresh your credentials."

I have tried a couple of things:

  • I added subscription_id = "sub-dev"
  • I generated a SAS token for the tfstate storage account and added the sas_token config value (removed resource_group_name)

but in vain; I keep getting the same error.
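
For reference, the SAS-token variant of the backend looked roughly like this (only a sketch; the actual token value is elided):

terraform {
  backend "azurerm" {
    storage_account_name = "tfstate"
    container_name       = "tfcontainer"
    key                  = "terraform.tfstate"
    # SAS token generated for the tfstate storage account (value elided)
    sas_token            = "..."
  }
}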

I tried az logout, but then Terraform requires me to log in first. Do I have to tune the permissions on the Azure end somehow (this is hard, as the Azure environment is configured by a 3rd party), or does Terraform support keeping the state file under a different subscription at all?

Juho Rutila
  • You don't want only one state file for dev and prod. If you apply these differently (eg `terraform apply dev` and `terraform apply production` or some equivalent) then you absolutely need two different state files or deploying the second one will overwrite the first, destroying everything in the first one. And you also don't want to apply both dev and production at the same time. – ydaetskcoR Jul 31 '19 at 12:40
  • I am using workspaces (dev and prod) so I can use a single state file. To use different state files, should I add some conditional values to the backend definition then? – Juho Rutila Aug 01 '19 at 06:17
  • I don't recommend workspaces for static environments. They add complexity to things and make it harder to see what you have deployed just from a glance at the code/file structure so you miss one of the big benefits of IaC. – ydaetskcoR Aug 01 '19 at 06:49
  • So, do you suggest having two different directories (dev and prod) with identical tf-files (parameterized resource group name) and different backend configurations? – Juho Rutila Aug 01 '19 at 07:49
  • Yep. I'd use modules or symlinks to keep things DRY and only change what you need via different tfvars files and provider configuration files. There are a number of other questions and answers about how to structure this already on SO. – ydaetskcoR Aug 01 '19 at 07:55
  • I ended up doing it like you suggested, @ydaetskcoR. I'll leave the question open in case someone still wants to answer it; I didn't find a way to do it myself. – Juho Rutila Aug 02 '19 at 10:32

2 Answers


For better or worse (I haven't experimented much with other ways of organising Terraform), we use Terraform in exactly the way you are describing: a state file in a remote backend, in a different subscription from my resources. Workspaces are created to handle the environments for the deployment.

Our state files are specified like this:

terraform {
  required_version = ">= 0.12.6"
  
  backend "azurerm" {
    subscription_id      = "<subscription GUID storage account is in>"
    resource_group_name  = "terraform-rg"
    storage_account_name = "myterraform"
    container_name       = "tfstate"
    key                  = "root.terraform.tfstate"
  }
}

We keep our Terraform storage account in a completely different subscription from our deployments, but this isn't necessary.

With the backend configured like this, Terraform authenticates to the remote backend via the az CLI, using the context of the person interacting with the CLI. This person needs the "Reader and Data Access" role on the storage account so that the storage account keys can be retrieved dynamically at runtime.
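
If that role assignment is missing, someone with sufficient rights in the state subscription can grant it. A rough sketch with the az CLI (the user is the one from the error message in the question, and the resource group and storage account names match the example backend above; adjust to your own):

az role assignment create \
  --assignee "user@example.com" \
  --role "Reader and Data Access" \
  --scope "/subscriptions/<subscription GUID storage account is in>/resourceGroups/terraform-rg/providers/Microsoft.Storage/storageAccounts/myterraform"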

With the above backend configured, executing Terraform looks like this:

az login
az account set -s "<name of subscription where you want to create resources>"
terraform init
terraform plan
terraform apply
haodeon
  • Can you update your answer to say which subscription you use (the terraform storage account subscription or the resources subscription) when you log in? – Juho Rutila Aug 06 '19 at 08:00
  • I have to do `az account set --subscription "<>"` instead – severin.julien Jun 18 '20 at 10:14
  • Hi @haodeon. Same scenario here. However I'm running the terraform scripts using a service principal which has permission only on the subscription where I want to create the resources. terraform init is receiving the StatusCode 403, even though I added the access_key for the storage account correctly. – Renato Silva Feb 04 '21 at 19:48
  • This doesn't seem to work for me. I've added the subscription_id property to the azurerm backend block, but it still throws the same error. It's like it is ignoring the subscription_id property. I'm using terraform 0.13 with the azurerm provider version 2.44.0 – Anthony Klotz Feb 10 '21 at 17:48
  • @RenatoSilva Using a service principal that does not have RBAC permissions to the storage account requires the storage access key. I am not sure why it doesn't work in your case. I've tested it myself and it works fine; I use it via environment variables. – haodeon Feb 17 '21 at 07:36

There's another way to do this. You can use the access key of the Storage Account in the other subscription (the one you want to keep the state files in) and export it as an environment variable. Bash:

export ARM_ACCESS_KEY=$(az storage account keys list --resource-group $RESOURCE_GROUP_NAME --account-name $STORAGE_ACCOUNT_NAME --query '[0].value' -o tsv)

PowerShell:

$env:ARM_ACCESS_KEY=$(az storage account keys list --resource-group $RESOURCE_GROUP_NAME --account-name $STORAGE_ACCOUNT_NAME --query '[0].value' -o tsv)

Then switch to the subscription you want to deploy to and deploy.
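
For example, a rough sketch using the subscription and workspace names from the question (the workspace step only applies if you use workspaces):

# ARM_ACCESS_KEY is already exported, so the backend does not need to list keys
az account set -s sub-prod        # the subscription the resources are deployed to
terraform init
terraform workspace select prod   # only if you use workspaces
terraform plan
terraform apply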

ehsan khodadadi