
I am currently migrating my config management on AWS to Terraform to make it more pluggable. What I like is the possibility to manage rolling updates to an Autoscaling Group, where Terraform waits until the new instances are in service before it destroys the old infrastructure. This works fine with the "bare" infrastructure, but I ran into a problem when updating the actual app instances. The code is deployed via AWS CodeDeploy, and I can tell Terraform to use the generated name of the new Autoscaling Group as the deployment target, but it doesn't deploy the code to the new instances on startup. When I manually select "deploy changes to the deployment group", the deployment starts successfully. Any ideas how to automate this step?
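
For context, the deployment-group wiring described above looks roughly like this (a sketch only; the application name, service role resource, and Autoscaling Group resource name are assumptions, not the actual config):

resource "aws_codedeploy_deployment_group" "my-application" {
  # Hypothetical names/resources for illustration only.
  app_name              = "my-application"
  deployment_group_name = "my-deployment-group"
  service_role_arn      = "${aws_iam_role.codedeploy.arn}"

  # Point CodeDeploy at the ASG that Terraform (re)creates, so the group
  # always targets the instances from the latest rolling update.
  autoscaling_groups = ["${aws_autoscaling_group.my-application.name}"]
}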

kgorskowski

2 Answers


The local-exec provisioner (https://www.terraform.io/docs/provisioners/local-exec.html) might be able to do this, with a couple of assumptions about your environment (most importantly, that the AWS CLI is installed and configured wherever Terraform runs).

Once your code has been posted, you would just add a local-exec provisioner to the relevant resource:

resource "something" "some_name" {
    # Whatever config you've setup for the resource
    provisioner "local-exec" {
        command = "aws deploy create-deployment"
  }
}

FYI, the aws deploy create-deployment command above is not complete, so you'll have to play with it in your environment until you've got the values needed to trigger the rollout, but hopefully this is enough to get you started.
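
As a rough illustration of what a filled-in version could look like (the resource, application, and deployment group names and the region are placeholders, not something this answer prescribes):

resource "aws_autoscaling_group" "my-application" {
    # ... your existing ASG configuration ...

    provisioner "local-exec" {
        # Placeholder names; --update-outdated-instances-only redeploys the last
        # successful revision only to instances that don't have it yet.
        command = "aws deploy create-deployment --application-name my-application --deployment-group-name my-deployment-group --update-outdated-instances-only --region eu-west-1"
    }
}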

Paul
  • Thanks for the input. Yes, I know about the local-exec option in Terraform, and as of now that seems to be the only option to implement this "feature". As this is not really "transportable/stateless" enough for our use case (multiple AWS accounts, the AWS CLI not being mandatory, different roles, et cetera), we changed our workflow (again). – kgorskowski Jun 27 '17 at 12:08

You can trigger the deployment directly from the user data of your launch configuration:

resource "aws_launch_configuration" "my-application" {
   name                 = "my-application"
   ...
   user_data            = "${data.template_file.node-init.rendered}"
}

data "template_file" "node-init" {
   template = "${file("${path.module}/node-init.yaml")}"
}
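
If you prefer not to hardcode the application and deployment group inside the script, the template can be parameterised; a sketch (the variable names and values are assumptions, not part of the original setup, and are referenced inside node-init.yaml as ${application_name} and so on):

data "template_file" "node-init" {
   template = "${file("${path.module}/node-init.yaml")}"

   # Hypothetical vars, interpolated into the rendered user data.
   vars = {
     application_name = "my-application"
     deployment_group = "my-deployment-group"
     aws_region       = "ap-southeast-2"
   }
}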

Content of my node-init.yaml, following the recommendations of this documentation: https://aws.amazon.com/premiumsupport/knowledge-center/codedeploy-agent-launch-configuration/

#cloud-config
write_files:
  - path: /root/configure.sh
    content: |
      #!/usr/bin/env bash
      REGION=$(curl 169.254.169.254/latest/meta-data/placement/availability-zone/ | sed 's/[a-z]$//')
      yum update -y
      yum install ruby wget -y
      cd /home/ec2-user
      wget https://aws-codedeploy-$REGION.s3.amazonaws.com/latest/install
      chmod +x ./install
      ./install auto
      # Add the following line for your node to update itself
      aws deploy create-deployment --application-name=<my-application> --region=ap-southeast-2 --deployment-group-name=<my-deployment-group> --update-outdated-instances-only
runcmd:
  - bash /root/configure.sh

In this implementation the node is responsible for triggering the deployment itself. This has been working perfectly so far for me, but it can result in deployment failures if the ASG is creating several instances at the same time (in that case the failed instances will be terminated quickly because they are not healthy).

Of course, you need to add sufficient permissions to the IAM role associated with your nodes to let them trigger the deployment.
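
A rough sketch of what that could look like (the role resource name is an assumption, and the exact set of CodeDeploy actions may need tuning for your account):

resource "aws_iam_role_policy" "allow-self-deploy" {
   # Attach to the instance role used by the launch configuration
   # (hypothetical resource name). Scope Resource down where possible.
   name = "allow-codedeploy-self-deploy"
   role = "${aws_iam_role.my-application-node.id}"

   policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "codedeploy:CreateDeployment",
        "codedeploy:GetDeploymentConfig",
        "codedeploy:GetDeploymentGroup",
        "codedeploy:GetApplicationRevision",
        "codedeploy:RegisterApplicationRevision"
      ],
      "Resource": "*"
    }
  ]
}
EOF
}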

This is still a workaround, and if someone knows a solution that behaves the same way as cfn-init, I am interested.