
I am planning to use Postgres as the remote backend instead of S3, per our enterprise standard.

terraform {
  backend "pg" {
    conn_str = "postgres://user:pass@db.example.com/schema_name"
  }
}

With the Postgres remote backend, running terraform init requires a schema specific to that Terraform folder, since the backend supports only one table and creates a new record per workspace name.

I am stuck now: I have 50 projects, each with 2 tiers maintained in separate folders, so we would need to create 100 schemas in Postgres. It is also difficult to handle that many schemas in automated provisioning.

Can we handle this similarly to S3, where one bucket serves all projects and each Terraform script specifies a different key within that bucket? Can we use a single schema for all projects, with multiple tables/records selected by a key provided in the backend configuration of each Terraform folder?

SASI

1 Answer


You can use a single database; the pg backend will automatically create the specified schema.

Something like this:

terraform {
  backend "pg" {
    conn_str = "postgres://user:pass@db.example.com/terraform_backend"
    schema   = "fooapp"
  }
}

This at least keeps the projects unique. You could append a tier to that too, or use Terraform Workspaces.

If you specify the config on the command line (a partial configuration), as the backend documentation recommends, it may be easier to set dynamically for your use case:

terraform init \
  -backend-config="conn_str=postgres://user:pass@db.example.com/terraform_backend" \
  -backend-config="schema=fooapp-prod"

This works pretty well in my scenario, which is similar to yours. Each project has a unique schema in a shared database, and no tasks beyond the initial creation/configuration of the database are needed; the backend creates each schema as specified.
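For many project/tier folders, the per-folder init can be scripted. A minimal sketch, assuming a folder layout of projects/&lt;project&gt;/&lt;tier&gt; and the connection string above (both are assumptions, not part of the question); the init command is echoed as a dry run rather than executed:

```shell
#!/bin/sh
# Dry-run sketch: derive one schema name per project/tier folder and print
# the matching terraform init command. Drop "echo" to actually run it.
CONN="postgres://user:pass@db.example.com/terraform_backend"

schema_for() {
  # e.g. projects/fooapp/prod -> fooapp-prod
  project=$(basename "$(dirname "$1")")
  tier=$(basename "$1")
  echo "${project}-${tier}"
}

for dir in projects/fooapp/prod projects/fooapp/dev; do
  schema=$(schema_for "$dir")
  echo terraform -chdir="$dir" init \
    -backend-config="conn_str=$CONN" \
    -backend-config="schema=$schema"
done
```

Each folder then gets its own schema in the shared database without any manual schema creation.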

Dharman
Josh B