23

My simple Terraform file is:

provider "aws" {
  region = "region"
  access_key = "key" 
  secret_key = "secret_key"
}

terraform {
  backend "s3" {
    # Replace this with your bucket name!
    bucket         = "great-name-terraform-state-2"
    key            = "global/s3/terraform.tfstate"
    region         = "eu-central-1"
    # Replace this with your DynamoDB table name!
    dynamodb_table = "great-name-locks-2"
    encrypt        = true
  }
}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "great-name-terraform-state-2"
  # Enable versioning so we can see the full revision history of our
  # state files
  versioning {
    enabled = true
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "great-name-locks-2"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"
  attribute {
    name = "LockID"
    type = "S"
  }
}

All I am trying to do is migrate my backend from local to S3. I am doing the following:

  1. terraform init (with the terraform {} block commented out)

  2. terraform apply - I can see in AWS that the bucket was created, and the DynamoDB table as well.

  3. Now I uncomment the terraform block, run terraform init again, and I get the following error:

Error loading state:
    AccessDenied: Access Denied
        status code: 403, request id: xxx, host id: xxxx

My IAM user has administrator access. I am using Terraform v0.12.24. As one can observe, I am writing my AWS key and secret directly in the file.

What am I doing wrong?

I appreciate any help!

EricSchaefer
helpper

13 Answers

24

I encountered this before. The following steps should resolve that error:

  1. Delete the .terraform directory
  2. Place the access_key and secret_key under the backend block, as in the code below
  3. Run terraform init

terraform {
  backend "s3" {
    bucket     = "great-name-terraform-state-2"
    key        = "global/s3/terraform.tfstate"
    region     = "eu-central-1"
    access_key = "<access-key>"
    secret_key = "<secret-key>"
  }
}

The error should be gone.
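
If you prefer not to hard-code credentials in the file at all, a minimal alternative is to export them as environment variables, which both the aws provider and the s3 backend read (values below are placeholders):

export AWS_ACCESS_KEY_ID="<access-key>"       # placeholder, not a real key
export AWS_SECRET_ACCESS_KEY="<secret-key>"   # placeholder, not a real secret
terraform init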

Mintu
  • You can also set the AWS profile name instead of the access and secret keys. – Juancho Feb 26 '21 at 20:33
  • Best practices would not advise storing sensitive material like your access and secret keys in your Terraform files. This is especially true if you also use a code repository like GitHub. As @Juancho points out, all you need to do is include a line in the backend like this: `profile = your_profile_name_from_the_aws_credentials_file` Also, deleting your `.terraform` directory is entirely unnecessary. – eatsfood Apr 15 '21 at 22:34
  • Additionally, you can use shared_credentials_file to point to a credentials file in a location other than ~/.aws/credentials if needed. – Juancho Apr 16 '21 at 23:16
  • I confirm that the only thing needed is to add the `profile` property. Don't delete the .terraform dir and ideally don't put the `access_key` or `secret_key` in there, use the profile instead. – Edeph Sep 08 '21 at 10:01
10

I knew that my credentials were fine by running terraform init on other projects that shared the same S3 bucket for their Terraform backend.

What worked for me:

rm -rf .terraform/

Edit

Make sure to run terraform init again after deleting your local .terraform directory, so that the required plugins are reinstalled.
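
Putting the two steps together, the whole fix is just:

rm -rf .terraform/   # throw away the cached backend configuration
terraform init       # re-initialise and reinstall the required plugins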

Blair Nangle
5

I also faced the same issue. I manually removed the state file from my local system (you can find the terraform.tfstate file under the .terraform/ directory) and ran init again. Note that if you have multiple profiles configured in the AWS CLI, omitting the profile in the aws provider configuration makes Terraform use the default profile.
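
A sketch of those two steps, assuming the default layout Terraform uses for its local cache:

rm .terraform/terraform.tfstate   # cached backend/state metadata from the old configuration
terraform init                    # re-initialise against the S3 backend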

RVndra Singh
4

For better security, you may use shared_credentials_file and profile, like so:

provider "aws" {
  region = "region"
  shared_credentials_file = "$HOME/.aws/credentials" # default
  profile = "default" # you may change to desired profile
}

terraform {
  backend "s3" {
    profile = "default" # change to desired profile
    # Replace this with your bucket name!
    bucket         = "great-name-terraform-state-2"
    key            = "global/s3/terraform.tfstate"
    region         = "eu-central-1"
    # Replace this with your DynamoDB table name!
    dynamodb_table = "great-name-locks-2"
    encrypt        = true
  }
}
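
The profile named above has to exist on the machine running Terraform. You can create it with the AWS CLI or by editing the credentials file directly (values are placeholders):

aws configure --profile default
# or add it to ~/.aws/credentials by hand:
# [default]
# aws_access_key_id     = <access-key>
# aws_secret_access_key = <secret-key>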
Mekky_Mayata
3

I googled around but nothing helped. Hope this solves your problem. My case: I was migrating the state from local to an AWS S3 bucket.

  1. Comment out the terraform block:
provider "aws" {
  region = "region"
  access_key = "key" 
  secret_key = "secret_key"
}

#terraform {
#  backend "s3" {
#    # Replace this with your bucket name!
#    bucket         = "great-name-terraform-state-2"
#    key            = "global/s3/terraform.tfstate"
#    region         = "eu-central-1"
#    # Replace this with your DynamoDB table name!
#    dynamodb_table = "great-name-locks-2"
#    encrypt        = true
#  }
#}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "great-name-terraform-state-2"
  # Enable versioning so we can see the full revision history of our
  # state files
  versioning {
    enabled = true
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "great-name-locks-2"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"
  attribute {
    name = "LockID"
    type = "S"
  }
}
  2. Run
terraform init
terraform plan -out test.tfplan
terraform apply "test.tfplan"

to create the resources (S3 bucket and DynamoDB table).

  3. Then uncomment the terraform block and run
AWS_PROFILE=REPLACE_IT_WITH_YOUR  TF_LOG=DEBUG   terraform init

If you get errors, search the debug output for X-Amz-Bucket-Region:

-----------------------------------------------------
2020/08/14 15:54:38 [DEBUG] [aws-sdk-go] DEBUG: Response s3/ListObjects Details:
---[ RESPONSE ]--------------------------------------
HTTP/1.1 403 Forbidden
Connection: close
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Fri, 14 Aug 2020 08:54:37 GMT
Server: AmazonS3
X-Amz-Bucket-Region: eu-central-1
X-Amz-Id-2: REMOVED
X-Amz-Request-Id: REMOVED

Copy the value of X-Amz-Bucket-Region; in my case it is eu-central-1.

  4. Change the region in your terraform backend configuration to the corresponding value:
terraform {
  backend "s3" {
    # Replace this with your bucket name!
    bucket         = "great-name-terraform-state-2"
    key            = "global/s3/terraform.tfstate"
    region         = "eu-central-1"
    # Replace this with your DynamoDB table name!
    dynamodb_table = "great-name-locks-2"
    encrypt        = true
  }
}
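
Instead of digging the region out of the debug log, you can also ask S3 for the bucket's region directly, assuming the AWS CLI is configured:

aws s3api get-bucket-location --bucket great-name-terraform-state-2
# returns the LocationConstraint, e.g. eu-central-1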
2

As Mintu said, we need to include the credentials in the backend configuration. Another way to do that (without including the credentials themselves) is:

  backend "s3" {
    bucket = "great-name-terraform-state-2"
    key    = "global/s3/terraform.tfstate"
    region = "eu-central-1"
    profile = "AWS_PROFILE"
  }
}

Note that the AWS profile needs to be configured on the machine:

aws configure

or

nano ~/.aws/credentials

One thing to watch out for here: when you run terraform apply from inside an EC2 instance, the instance may have an IAM role assigned, and that can conflict with your configured permissions.
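
To see which identity Terraform will actually pick up (useful when an instance role might shadow your configured profile), you can ask STS:

aws sts get-caller-identity                         # identity from the default credential chain
aws sts get-caller-identity --profile AWS_PROFILE   # identity for a specific profile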

Khadjiev
1

I had the same issue: my IAM role didn't have the correct permissions to perform List on the bucket. To check, use:

aws s3 ls

and see if you have access. If not, add the proper permissions to the IAM role.
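
To test List access against the specific state bucket from the question, rather than the account as a whole:

aws s3 ls s3://great-name-terraform-state-2
# an Access Denied here confirms the missing s3:ListBucket permission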

0

It's not possible to create the S3 bucket that you are planning to use as remote state storage within the same Terraform project. You will have to create another Terraform project where you provision your state bucket (plus lock table), or just create the bucket manually.

For a more detailed answer, please read this.
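
A minimal sketch of such a bootstrap project, reusing the resources from the question; it keeps local state (no backend block, on purpose) and only provisions the bucket and the lock table:

provider "aws" {
  region = "eu-central-1"
}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "great-name-terraform-state-2"
  versioning {
    enabled = true
  }
}

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "great-name-locks-2"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"
  attribute {
    name = "LockID"
    type = "S"
  }
}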

DerPauli
  • I created another project to use the previous bucket and the DynamoDB table, and made the folder structure match the key. When I ran ```terraform init``` I got ```Successfully configured the backend "s3"! Terraform will automatically use this backend unless the backend configuration changes. Error refreshing state: AccessDenied: Access Denied status code: 403, request id: xxx, host id: xxx``` – helpper May 18 '20 at 06:08
  • In most cases it is easier to just create it by hand, especially when you don't have to do it often. What I meant by "create another TF project" is: imagine you are working in a DevOps team and you have to create new dynamic Terraform projects on the fly to provide to your team. Then, instead of creating the state bucket manually, you could write a simple Terraform file which has local state and provisions an S3 bucket and a DynamoDB table. Afterwards you take these two components and reference them by name in your `terraform { backend "s3" {} }` block. – DerPauli May 18 '20 at 08:28
  • I would be interested to see what output you get when you create the bucket by hand. – DerPauli May 18 '20 at 08:28
  • Sorry for the late reply; nothing works. I tried to create the bucket and table from a different project, which didn't work, and also tried to create them manually: always the same error. – helpper May 25 '20 at 12:24
  • You can try to debug the terraform init command with: `TF_LOG=DEBUG terraform init`. Maybe it's worth having a look at your ~/.aws/credentials file (or your environment variables `echo $AWS_ACCESS_KEY_ID`, `echo $AWS_SECRET_ACCESS_KEY` and `echo $AWS_SESSION_TOKEN`) to see if there are some different credentials which may override your set credentials. – DerPauli May 25 '20 at 21:03
  • The best bet would be to look at the `TF_LOG=DEBUG`. Maybe also have a look at [this github issue](https://github.com/hashicorp/terraform/issues/18801) for more information. – DerPauli May 25 '20 at 21:06
0

I was getting the same issue after running terraform apply; terraform init worked fine. None of the suggestions here worked, but switching my shell from zsh to bash solved it.

0

This happened to me, and the problem was that I was trying to create a bucket with a name that already exists (S3 bucket names are globally unique across all AWS accounts).
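
You can check whether a name is already taken before applying, assuming the AWS CLI is set up:

aws s3api head-bucket --bucket great-name-terraform-state-2
# 404: the name is free; 403: it exists but another account owns it; no error: you own it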

Simon Crane
0

What worked for me was the answer to the topic "Error refreshing state: state data in S3 does not have the expected content" from @Exequiel Barriero (Case 2).

Link: Answer from @Exequiel Barriero

But a different reason you can get this error, unrelated to the backend, is passing a wrong ARN when creating a Lambda function with a layer. In my case, one extra character in the ARN caused this headache, so please review your ARN carefully.
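
For illustration only, the part worth double-checking is the layers argument; the whole resource below is a hypothetical example, including every ARN:

resource "aws_lambda_function" "example" {
  function_name = "example"                                  # hypothetical
  role          = "arn:aws:iam::123456789012:role/example"   # hypothetical role ARN
  handler       = "index.handler"
  runtime       = "nodejs12.x"
  filename      = "lambda.zip"
  # one wrong character in this ARN is enough to produce the 403
  layers        = ["arn:aws:lambda:eu-central-1:123456789012:layer:my-layer:1"]
}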

Esau Reyes
0

In most cases, the fix is to comment out the S3 and DynamoDB backend configuration, or else to check the bucket and DynamoDB table values; sometimes those values are mismatched, and that also causes this issue.
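
A quick way to verify that the backend values match real resources, assuming the AWS CLI is configured:

aws s3api head-bucket --bucket great-name-terraform-state-2   # checks the bucket name
aws dynamodb describe-table --table-name great-name-locks-2   # checks the lock table name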

0

There might be a wrong aws_access_key_id or aws_secret_access_key in the ~/.aws/config file. When I erased the two lines from the file, it worked!
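
For reference, the AWS CLI normally keeps credentials in ~/.aws/credentials, while ~/.aws/config should only carry settings such as the region; stray keys in the config file can shadow the ones you expect (values below are placeholders):

# ~/.aws/credentials
[default]
aws_access_key_id     = <access-key>
aws_secret_access_key = <secret-key>

# ~/.aws/config
[default]
region = eu-central-1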