
I'm using ECS containers with CodeBuild and CodeDeploy (for blue/green deployment) stages within a CodePipeline. I have an appspec.yml file in the root of my application code containing the task definition ARN and the container name. All of this works well in a single-environment scenario. In my case, with separate AWS accounts for Dev, Test, and Prod, I need CodeDeploy to swap the task definition ARN based on the environment context. Is there a way to pass parameters and modify the appspec.yml file, similar to what buildspec.yml offers with custom environment variables? If not, what would be the best solution for a cross-account deployment using an appspec.yml file?

UPDATE

Kudos to Ronan Cunningham for the Python script (see his code example below), which generates the appspec.json file as an artifact of the CodeBuild stage and passes it down as input to the CodeDeploy stage. You can call the script from the buildspec.yml file and pass custom environment variables as script parameters, which will define your appspec.json based on the AWS environment context. The script should be placed in the root of the app alongside the buildspec.yml and Dockerfile.

create_appspec_json.py script:

#!/usr/bin/env python3
import json
from sys import argv


def print_obj_to_disk(obj, file_type):
    if file_type == 'app_spec':
        file_name = 'appspec.json'
    elif file_type == 'task_def':
        file_name = 'taskdef.json'
    else:
        raise ValueError('Unknown file_type: {}'.format(file_type))

    print('Writing {}:'.format(file_name))
    print(json.dumps(obj, indent=2))
    with open(file_name, 'w') as outfile:
        json.dump(obj, outfile, indent=2)


def return_appspec(container_name, container_port, task_definition_arn):
    appspec_obj = {
        "version": 0.0,
        "Resources": [
            {
                "TargetService": {
                    "Type": "AWS::ECS::Service",
                    "Properties": {
                        "TaskDefinition": task_definition_arn,
                        "LoadBalancerInfo": {
                            "ContainerName": container_name,
                            # argv values are strings; the port must be numeric
                            "ContainerPort": int(container_port)
                        }
                    }
                }
            }
        ]
    }
    return appspec_obj


# Usage: python create_appspec_json.py <container_name> <container_port> <task_definition_arn>
appspec_obj = return_appspec(argv[1], argv[2], argv[3])

print_obj_to_disk(appspec_obj, 'app_spec')

Call the script from buildspec.yml and pass the environment variables as parameters.

version: 0.2
phases:
  install:
    commands:
      - pip install boto3
  pre_build:
    commands:
      ...
  build:
    commands:
      - python create_appspec_json.py $CONTAINER_NAME $CONTAINER_PORT $TASK_DEFINITION_ARN
      ...
  post_build:
    commands:
      ...
artifacts:
  files:
    - appspec.json
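The environment variables above ($CONTAINER_NAME, $CONTAINER_PORT, $TASK_DEFINITION_ARN) are set per environment on the CodeBuild project or the pipeline action. As a minimal sketch of how the environment context could select the right task definition ARN (all ARN values and the ENVIRONMENT variable name here are hypothetical placeholders, not real accounts):

```python
import os

# Hypothetical per-environment task definition ARNs; in practice these
# would come from the CodeBuild project's environment variables or a lookup.
TASK_DEF_ARNS = {
    "dev":  "arn:aws:ecs:eu-west-1:111111111111:task-definition/app-dev:1",
    "test": "arn:aws:ecs:eu-west-1:222222222222:task-definition/app-test:1",
    "prod": "arn:aws:ecs:eu-west-1:333333333333:task-definition/app-prod:1",
}


def arn_for_environment(environment):
    """Return the task definition ARN for the given environment name."""
    try:
        return TASK_DEF_ARNS[environment]
    except KeyError:
        raise ValueError("Unknown environment: {!r}".format(environment))


# ENVIRONMENT would be set on the CodeBuild project / pipeline action.
task_definition_arn = arn_for_environment(os.environ.get("ENVIRONMENT", "dev"))
```

The same pattern extends to container names, ports, or any other value that differs between the Dev, Test, and Prod accounts.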
1 Answer

My approach (on what sounds like the same pipeline setup) was to have a stage with a CodeBuild action before the stage with the CodeDeployToECS action. The job of the former was to generate the appspec and task definition programmatically, with its OutputArtifact serving as an InputArtifact to the latter.
Any required parameters are passed down to the CodeBuild project via the pipeline action Configuration.

Updated:

The basic approach is something like the following:

The CodeBuild project that runs before CodeDeploy executes a Python script to generate the appspec and task definition, looking up values as required and writing them to disk as JSON files. This is not the full script, just the app_spec portion as an example.

import json


def print_obj_to_disk(obj, file_type):
    if file_type == 'app_spec':
        file_name = 'appspec.json'
    if file_type == 'task_def':
        file_name = 'taskdef.json'

    print('Writing {}:'.format(file_name))
    print(json.dumps(obj, indent=2))
    with open(file_name, 'w') as outfile:
        json.dump(obj, outfile, indent=2)


def return_appspec(container_name, container_port, AfterAllowTestTrafficHookFunctionName):
    appspec_obj = {
        "version": 0.0,
        "Resources": [
            {
                "TargetService": {
                    "Type": "AWS::ECS::Service",
                    "Properties": {
                        # Placeholder: the CodeDeployToECS action substitutes
                        # the real task definition ARN at deploy time.
                        "TaskDefinition": "<TASK_DEFINITION>",
                        "LoadBalancerInfo": {
                            "ContainerName": container_name,
                            "ContainerPort": container_port
                        }
                    }
                }
            }
        ],
        "Hooks": [
            {
                "AfterAllowTestTraffic": AfterAllowTestTrafficHookFunctionName
            }
        ]
    }
    return appspec_obj


appspec_obj = return_appspec('my_container', 8080, 'FunctionName')

print_obj_to_disk(appspec_obj, 'app_spec')

This project is in the 'PrepareCodeDeploy' stage in the sample section of the pipeline definition below. The json files are stored as pipeline artifacts and then become inputs to the next stage.


            - Name: PrepareCodeDeploy
              ActionTypeId:
                Category: Build
                Owner: AWS
                Provider: CodeBuild
                Version: "1"
              Configuration:
                ProjectName: !ImportValue CodeDeployPrepareProjectName
                EnvironmentVariables: !Sub "[{\"name\":\"Environment\",\"value\":\"Testing\",\"type\":\"PLAINTEXT\"},{\"name\":\"Account_Id\",\"value\":\"${TestingAccountId}\",\"type\":\"PLAINTEXT\"},{\"name\":\"Region\",\"value\":\"${DeployRegion1}\",\"type\":\"PLAINTEXT\"},{\"name\":\"BucketKmsKey\",\"value\":\"${TestingAccountKmsKeyId}\",\"type\":\"PLAINTEXT\"}]"
              InputArtifacts:
                - Name: !Ref SourceArtifactName
              OutputArtifacts:
                - Name: PrepareCodeDeployOutputTesting
              RunOrder: 3

            - Name: BlueGreenDeploy
              InputArtifacts:
              - Name: BuildDockerOutput
              - Name: PrepareCodeDeployOutputTesting
              Region: !Ref DeployRegion1
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Version: '1'
                Provider: CodeDeployToECS
              RoleArn: !Sub arn:aws:iam::${TestingAccountId}:role/whatever/CrossAccountsDeploymentRole
              Configuration:

                AppSpecTemplateArtifact: PrepareCodeDeployOutputTesting
                AppSpecTemplatePath: appspec.json

                ApplicationName: !Ref ApplicationName
                DeploymentGroupName: !Ref ApplicationName
                
                TaskDefinitionTemplateArtifact: PrepareCodeDeployOutputTesting
                TaskDefinitionTemplatePath: taskdef.json

                Image1ArtifactName: BuildDockerOutput
                Image1ContainerName: "IMAGE1_NAME"
              RunOrder: 4
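For reference, the Image1ContainerName value "IMAGE1_NAME" above corresponds to a `<IMAGE1_NAME>` placeholder inside the generated taskdef.json, which the CodeDeployToECS action replaces with the image URI from the BuildDockerOutput artifact. A minimal fragment (family, container name, and port here are hypothetical) might look like:

```json
{
  "family": "my-task",
  "containerDefinitions": [
    {
      "name": "my_container",
      "image": "<IMAGE1_NAME>",
      "portMappings": [{ "containerPort": 8080 }]
    }
  ]
}
```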

Hopefully that will give you some ideas as to how you might get it working in a way that suits you. I've just decided to write a blog post on my approach that while not perfect, does work and follows a path to create a pipeline and all the required components in a linear fashion. The official documentation seems to go off in a lot of directions and I've seen a lot of people struggle with this.

Any further questions, just ask.

  • Mine is also deploying in a cross-account setup - you don't say how, or if, that particular aspect is causing issues. – Ronan Cunningham Jun 08 '21 at 15:11
  • I was also thinking about producing the appspec.yml inside the CodeBuild stage. Something like `- printf '[{"name":"imageName","imageUri":"%s"}]' $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG > imagedefinitions.json` Only this example is for JSON format and a different content type. Then converting it into a .yml file. Maybe there is a better way, though. By any chance, do you have an example of how you did it? – GGG Jun 08 '21 at 23:54
  • 1
    Just saw this - of course - I'll dig it out and send you a link. If I recall I use python with the taskdef and appspec initially as dictionaries. I update them with the relevant values, convert to json, and write to disk. No need to convert to yaml. – Ronan Cunningham Jun 09 '21 at 15:22
  • Ronan, thanks a lot for this example! It would be great to read your blog post on this subject. Thanks once more. – GGG Jun 09 '21 at 17:43