14

Here is a Terraform script I lifted from this repo:

provider "aws" {
  region  = "${var.aws_region}"
  profile = "${var.aws_profile}"
}

##----------------------------
#     Get VPC Variables
##----------------------------

#-- Get VPC ID
data "aws_vpc" "selected" {
  tags = {
    Name = "${var.name_tag}"
  }
}

#-- Get Public Subnet List
data "aws_subnet_ids" "selected" {
  vpc_id = "${data.aws_vpc.selected.id}"

  tags = {
    Tier = "public"
  }
}

#--- Gets Security group with tag specified by var.name_tag
data "aws_security_group" "selected" {
  tags = {
    Name = "${var.name_tag}*"
  }
}

#--- Creates SSH key to provision server
module "ssh_key_pair" {
  source                = "git::https://github.com/cloudposse/terraform-aws-key-pair.git?ref=tags/0.3.2"
  namespace             = "example"
  stage                 = "dev"
  name                  = "${var.key_name}"
  ssh_public_key_path   = "${path.module}/secret"
  generate_ssh_key      = "true"
  private_key_extension = ".pem"
  public_key_extension  = ".pub"
}

#-- Grab the latest AMI built with packer - windows2016.json
data "aws_ami" "Windows_2016" {
  owners = [ "amazon", "microsoft" ]
  filter {
    name   = "is-public"
    values = ["false"]
  }

  filter {
    name   = "name"
    values = ["windows2016Server*"]
  }

  most_recent = true
}

#-- sets the user data script
data "template_file" "user_data" {
  template = "/scripts/user_data.ps1"
}


#---- Test Development Server
resource "aws_instance" "this" {
  ami                  = "${data.aws_ami.Windows_2016.image_id}"
  instance_type        = "${var.instance}"
  key_name             = "${module.ssh_key_pair.key_name}"
  subnet_id            = "${data.aws_subnet_ids.selected.ids[01]}"
  security_groups      = ["${data.aws_security_group.selected.id}"]
  user_data            = "${data.template_file.user_data.rendered}"
  iam_instance_profile = "${var.iam_role}"
  get_password_data    = "true"

  root_block_device {
    volume_type           = "${var.volume_type}"
    volume_size           = "${var.volume_size}"
    delete_on_termination = "true"
  }

  tags {
    "Name"    = "NEW_windows2016"
    "Role"    = "Dev"
  }

  #--- Copy ssh keys to S3 Bucket
  provisioner "local-exec" {
    command = "aws s3 cp ${path.module}/secret s3://PATHTOKEYPAIR/ --recursive"
  }

  #--- Deletes keys on destroy
  provisioner "local-exec" {
    when    = "destroy"
    command = "aws s3 rm 3://PATHTOKEYPAIR/${module.ssh_key_pair.key_name}.pem"
  }

  provisioner "local-exec" {
    when    = "destroy"
    command = "aws s3 rm s3://PATHTOKEYPAIR/${module.ssh_key_pair.key_name}.pub"
  }
}

When I run terraform plan I get this error message:

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.template_file.user_data: Refreshing state...

Error: Error refreshing state: 1 error(s) occurred:

* provider.aws: error validating provider credentials: error calling sts:GetCallerIdentity: NoCredentialProviders: no valid providers in chain. Deprecated.
    For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Anthony Kong

4 Answers

9

Double check the format of your ~/.aws/credentials file.

In my case, the credentials used the following format:

[profile]
AWS_ACCESS_KEY_ID=xxxx
AWS_SECRET_ACCESS_KEY=yyyy

Changing it to the following fixed the issue:

[profile]
aws_access_key_id = xxxx
aws_secret_access_key = yyyy
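
The section name in square brackets also has to match whatever profile the provider is told to use. A minimal sketch, assuming the section is called dev (the name itself is just a placeholder):

provider "aws" {
  region  = "${var.aws_region}"
  profile = "dev"   # must match the [dev] section header in ~/.aws/credentials
}

You can also check the profile outside Terraform with aws sts get-caller-identity --profile dev, which exercises the same STS call that fails in the question.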
Orabîg
  • You've got to be kidding me. This fixed it. Good job to whoever wrote the aws module... not... what a joke. – Rob Jun 06 '22 at 23:16
  • I think that the issue comes from the fact that the specifications for this file are not very clear, and thus they were interpreted differently by different implementations. So, sometimes users hit the wall when a valid file is suddenly not accepted. – Orabîg Jun 14 '22 at 20:05
  • This should be the accepted answer. – Esa Jokinen Aug 30 '23 at 05:29
7

I think you are missing the access and secret keys. If you are not passing them in some other way (for example as variables), try something like the below:

provider "aws" {
  region  = "${var.region}"
  profile = "${var.profile}"   
  access_key=********
  secret_key=********
}
asktyagi
  • I use `export AWS_ACCESS_KEY_ID=xxx` and `export AWS_SECRET_ACCESS_KEY=yyy` instead. But it fixes the issue. Thanks! – Anthony Kong Jul 04 '19 at 06:27
  • Don't store your keys in the terraform files. Static Credentials Warning: Hard-coding credentials into any Terraform configuration is not recommended, and risks secret leakage should this file ever be committed to a public version control system. -from Hashicorp documentation. – jorfus Nov 05 '20 at 01:27
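
One way to follow that advice while still setting keys on the provider is to declare them as input variables and supply the values from the environment rather than writing them into the configuration. A rough sketch; the variable names here are illustrative and not from the answer:

# Declared without defaults so the secret values never live in the repository.
variable "aws_access_key" {}
variable "aws_secret_key" {}

provider "aws" {
  region     = "${var.region}"
  access_key = "${var.aws_access_key}"
  secret_key = "${var.aws_secret_key}"
}

The values can then be passed at run time, for example via the TF_VAR_aws_access_key and TF_VAR_aws_secret_key environment variables, so nothing secret needs to be committed.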
1

In my case I forgot to set the session_name attribute while using an assumed role for the Terraform backend.

terraform {
  backend "s3" {
    bucket       = "terraform-bucket-xxxx"
    key          = "state.tfstate"
    region       = "us-east-1"
    role_arn     = "arn:aws:iam::xxxxxx:role/xxxx"
    session_name = "terraform"
  }
}
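
If the role is assumed by the provider rather than only by the backend, the equivalent setting lives in the provider's assume_role block. A sketch with placeholder region and ARN:

provider "aws" {
  region = "us-east-1"

  # Assume the same role for provider API calls, with an explicit session name.
  assume_role {
    role_arn     = "arn:aws:iam::xxxxxx:role/xxxx"
    session_name = "terraform"
  }
}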
1

With the coming change described in "Announcing an update to IAM role trust policy behavior" there is another possibility: you may have already assumed your target role (e.g., via export AWS_PROFILE=...). From the announcement (emphasis mine):

Therefore, beginning today, for any role that has not used the identity-based behavior since June 30, 2022, a role trust policy must explicitly grant permission to all principals, including the role itself, that need to assume it under the specified conditions.

If you have an AWS_PROFILE environment variable set, and aws sts get-caller-identity shows the same role as the role_arn in your backend config, you have two options:

  1. You can unset AWS_PROFILE before running Terraform (assuming that your default IAM role can assume the role_arn in your backend config)
  2. You can update your target IAM role to trust itself (an example of how to do this is included in the article announcing the change)
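
For option 2, the change boils down to adding the role's own ARN as a trusted principal in its trust policy. A minimal sketch of what that could look like if the role is managed in Terraform; the role name terraform is a placeholder, the account ID is looked up, and the article linked above has the authoritative example:

data "aws_caller_identity" "current" {}

resource "aws_iam_role" "terraform" {
  name = "terraform"

  # Trusts the account root as usual, plus the role itself (second principal).
  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Principal": {
        "AWS": [
          "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root",
          "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/terraform"
        ]
      }
    }
  ]
}
POLICY
}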
Gordon Fogus