
The desired behavior is as follows:

  • Push code change
  • Run unit tests for each Serverless component
  • Provided all tests pass, deploy the components into the Staging environment and mark the build as successful
  • Listen for this change and run the acceptance test suite using Gherkin
  • Provided all tests pass, deploy the components into the UAT/Prod environment and mark the build as successful

The desired solution would have two pipelines, the second one triggered by the first one's success.

If you have any other ideas, I'd be delighted to hear!

Thanks in advance

Gena Verdel

3 Answers


Assuming both CodePipelines run in the same account, you can add a post_build phase to your buildspec.yml.

In the post_build phase, trigger the second CodePipeline with an AWS CLI command (a boto3 equivalent is sketched after the buildspec):

phases:
  build:
    commands:
      # npm pack --dry-run is not needed but helps show what is going to be published
      - npm publish
  post_build:
    commands:
      - aws codepipeline start-pipeline-execution --name <codepipeline_name>
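
If you would rather call the SDK directly (for example from a script instead of the CLI), a minimal boto3 equivalent of that command could look like the sketch below; the pipeline name is a placeholder:

import boto3

codepipeline = boto3.client('codepipeline')

# Same effect as `aws codepipeline start-pipeline-execution`;
# 'second-pipeline' is a placeholder for your pipeline's name.
response = codepipeline.start_pipeline_execution(name='second-pipeline')
print(response['pipelineExecutionId'])
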
Amin

The solution I propose for a second pipeline trigger would be the following:

  • Have the second pipeline's source be S3 (not CodeCommit). This ensures the pipeline starts only when a specifically named file (object key) is pushed to Amazon S3.
  • At the end of the first CodePipeline, add a Lambda function; by the time it runs, every earlier stage must have succeeded (a sketch of such a function follows below).
  • Have that Lambda copy the artifact built by the first pipeline into the bucket, under the object key referenced by the second pipeline's source.

To keep things clean, use a separate bucket for each pipeline.
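
For reference, here is a minimal Python (boto3) sketch of such a Lambda. The destination bucket name and object key are assumptions, and the function reports success or failure back to the pipeline job that invoked it:

import boto3

s3 = boto3.client('s3')
codepipeline = boto3.client('codepipeline')

def handler(event, context):
    # A CodePipeline Lambda invoke action passes the job in the event.
    job = event['CodePipeline.job']
    try:
        # S3 location of the artifact produced by the first pipeline,
        # taken from the action's input artifacts.
        artifact = job['data']['inputArtifacts'][0]['location']['s3Location']
        # Copy it to the fixed key the second pipeline's S3 source watches.
        # 'second-pipeline-source-bucket' and 'source.zip' are placeholders.
        s3.copy_object(
            Bucket='second-pipeline-source-bucket',
            Key='source.zip',
            CopySource={'Bucket': artifact['bucketName'],
                        'Key': artifact['objectKey']},
        )
        codepipeline.put_job_success_result(jobId=job['id'])
    except Exception as exc:
        codepipeline.put_job_failure_result(
            jobId=job['id'],
            failureDetails={'type': 'JobFailed', 'message': str(exc)},
        )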

Chris Williams
  • I will try this approach and accept the answer if it works. Thanks! – Gena Verdel Jul 12 '20 at 07:39
  • OK great, let me know if you need any clarification :) – Chris Williams Jul 12 '20 at 07:41
  • @GenaVerdel so, how did it turn out? – blahblah Nov 04 '20 at 11:09
  • Why can't you use separate stages of the same pipeline to achieve the same result? – berimbolo Apr 02 '21 at 07:48
  • @berimbolo Probably to fully isolate production from developer/preprod environments. A lot of people might have access to your tools account and trigger the potentially malicious pipeline there. – blahblah Aug 30 '21 at 06:12
  • What do you mean by tools account? A management/control plane where all your pipelines run? All the AWS projects I worked on used this account to run all pipelines to dev or prod but different roles on codebuilds to determine which account they could deploy to and only admins could manually start a pipeline in that account. – berimbolo Aug 31 '21 at 19:30
  • I thought this was a common practice but seems it could lead to bad practice as I have seen many pipelines that deploy to multiple accounts, usually with manual approval between stages. – berimbolo Aug 31 '21 at 19:43

I used Amin's answer in this thread as it is a very simple solution for specific use cases.

- aws codepipeline start-pipeline-execution --name <codepipeline_name>

Adding to that answer: you may have to grant pipeline-execution permission in IAM to the CodeBuild role that is trying to trigger the desired pipeline.

Sample IAM Policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "codepipeline:StartPipelineExecution"
            ],
            "Resource": "arn:aws:codepipeline:<region>:<account-id>:<pipeline-name>"
        }
    ]
}
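
One way to attach that policy inline to the CodeBuild role is sketched below with boto3; the role and policy names are placeholders:

import json
import boto3

iam = boto3.client('iam')

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["codepipeline:StartPipelineExecution"],
        "Resource": "arn:aws:codepipeline:<region>:<account-id>:<pipeline-name>",
    }],
}

# 'my-codebuild-role' and 'StartSecondPipeline' are placeholder names.
iam.put_role_policy(
    RoleName='my-codebuild-role',
    PolicyName='StartSecondPipeline',
    PolicyDocument=json.dumps(policy),
)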