
I have a cross-account pipeline running in a CI account, deploying resources via CloudFormation into another account, DEV. After deploying, I save the artifact outputs as a JSON file and want to access it in another pipeline action via CodeBuild. CodeBuild fails in the DOWNLOAD_SOURCE phase with the following message:

CLIENT_ERROR: AccessDenied: Access Denied status code: 403, request id: 123456789, host id: xxxxx/yyyy/zzzz/xxxx= for primary source and source version arn:aws:s3:::my-bucket/my-pipeline/DeployArti/XcUNqOP

The problem is likely that CloudFormation, when executed in a different account, encrypts the artifacts with a different key than the pipeline itself uses.

Is it possible to give CloudFormation an explicit KMS key to encrypt the artifacts with, or is there any other way to access those artifacts back in the pipeline?

Everything works when executed from within a single account.

Here is my code snippet (deployed in the CI account):

  MyCodeBuild:
    Type: AWS::CodeBuild::Project
    Properties:
      Artifacts:
        Type: CODEPIPELINE
      Environment: ...
      Name: !Sub "my-codebuild"
      ServiceRole: !Ref CodeBuildRole
      EncryptionKey: !GetAtt KMSKey.Arn
      Source:
        Type: CODEPIPELINE
        BuildSpec: ...

  CrossAccountCodePipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      Name: "my-pipeline"
      RoleArn: !GetAtt CodePipelineRole.Arn
      Stages:
      - Name: Source
        ...
      - Name: StagingDev
        Actions:
        - Name: create-stack-in-DEV-account
          InputArtifacts:
          - Name: SourceArtifact
          OutputArtifacts:
          - Name: DeployArtifact
          ActionTypeId:
            Category: Deploy
            Owner: AWS
            Version: "1"
            Provider: CloudFormation
          Configuration:
            StackName: "my-dev-stack"
            ChangeSetName: !Sub "my-changeset"
            ActionMode: CREATE_UPDATE
            Capabilities: CAPABILITY_NAMED_IAM
            # this is the artifact I want to access from the next action 
            # within this CI account pipeline
            OutputFileName: "my-DEV-output.json"   
            TemplatePath: !Sub "SourceArtifact::stack/my-stack.yml"
            RoleArn: !Sub "arn:aws:iam::${DevAccountId}:role/dev-cloudformation-role"
          RoleArn: !Sub "arn:aws:iam::${DevAccountId}:role/dev-cross-account-role"
          RunOrder: 1
        - Name: process-DEV-outputs
          InputArtifacts:
          - Name: DeployArtifact
          ActionTypeId:
            Category: Build
            Owner: AWS
            Version: "1"
            Provider: CodeBuild
          Configuration:
            ProjectName: !Ref MyCodeBuild
          RunOrder: 2
      ArtifactStore:
        Type: S3
        Location: !Ref S3ArtifactBucket
        EncryptionKey:
          Id: !GetAtt KMSKey.Arn
          Type: KMS
ttulka

6 Answers

2

CloudFormation generates the output artifact, zips it, and then uploads the file to S3. It does not add an ACL that grants access to the bucket owner, so you get a 403 when you try to use the CloudFormation output artifact further down the pipeline.

A workaround is to add one more action to your pipeline immediately after the CloudFormation action, for example a Lambda function that can assume the target account role and update the object ACL, e.g. to bucket-owner-full-control.

mockora
0

mockora's answer is correct. Here is an example Lambda function in Python that fixes the issue, which you can configure as an Invoke action immediately after your cross-account CloudFormation deployment.

In this example, you set the Lambda Invoke action's UserParameters to the ARN of the role you want the Lambda function to assume in the remote account to fix the S3 object ACLs. Obviously your Lambda function will need sts:AssumeRole permissions for that role, and the remote account role will need s3:PutObjectAcl permissions on the pipeline artifact bucket objects. A sketch of how the Invoke action fits into the pipeline follows the function code.

import os
import logging, datetime, json
import boto3
from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.core import patch_all

# X-Ray
patch_all()

# Configure logging
logging.basicConfig()
log = logging.getLogger()
log.setLevel(os.environ.get('LOG_LEVEL','INFO'))
def format_json(data):
  return json.dumps(data, default=lambda d: d.isoformat() if isinstance(d, datetime.datetime) else str(d))

# Boto3 Client
client = boto3.client
codepipeline = client('codepipeline')
sts = client('sts')

# S3 Object ACLs Handler
def s3_acl_handler(event, context):
  log.info(f'Received event: {format_json(event)}')
  # Get Job
  jobId = event['CodePipeline.job']['id']
  jobData = event['CodePipeline.job']['data']
  # Ensure we return a success or failure result
  try:
    # Assume IAM role from user parameters
    credentials = sts.assume_role(
      RoleArn=jobData['actionConfiguration']['configuration']['UserParameters'],
      RoleSessionName='codepipeline',
      DurationSeconds=900
    )['Credentials']
    # Create S3 client from assumed role credentials
    s3 = client('s3',
      aws_access_key_id=credentials['AccessKeyId'],
      aws_secret_access_key=credentials['SecretAccessKey'],
      aws_session_token=credentials['SessionToken']
    )
    # Set S3 object ACL for each input artifact
    for inputArtifact in jobData['inputArtifacts']:
      s3.put_object_acl(
        ACL='bucket-owner-full-control',
        Bucket=inputArtifact['location']['s3Location']['bucketName'],
        Key=inputArtifact['location']['s3Location']['objectKey']
      )
    codepipeline.put_job_success_result(jobId=jobId)
  except Exception as e:
    logging.exception('An exception occurred')
    codepipeline.put_job_failure_result(
      jobId=jobId,
      failureDetails={'type': 'JobFailed','message': getattr(e, 'message', repr(e))}
    )
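
For context, the Invoke action can be wired into the pipeline from the question roughly like this (a sketch only: the function name is a placeholder, the role ARN reuses the cross-account role from the question, and the existing process-DEV-outputs action would then move to RunOrder 3):

- Name: fix-DEV-artifact-acl
  InputArtifacts:
  - Name: DeployArtifact
  ActionTypeId:
    Category: Invoke
    Owner: AWS
    Version: "1"
    Provider: Lambda
  Configuration:
    # Placeholder name for the Lambda function deployed from the code above
    FunctionName: "codepipeline-fix-artifact-acl"
    # Delivered to the function as UserParameters: the role it assumes in DEV
    UserParameters: !Sub "arn:aws:iam::${DevAccountId}:role/dev-cross-account-role"
  RunOrder: 2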
mixja
0

I've been using CodePipeline for cross-account deployments for a couple of years now. I even have a GitHub project around simplifying the process using AWS Organizations. There are a couple of key elements to it.

  1. Make sure your S3 bucket is using a CMK, not the default encryption key.
  2. Make sure you grant access to that key to the accounts to which you are deploying. When you have a CloudFormation template, for example, that runs in a different account than the one where the template lives, the role being used in that account needs permission to access the key (and the S3 bucket); a sketch is included at the end of this answer.

It's certainly more complex than that, but at no point do I run a Lambda to change the object owner of the artifacts. The AWS guide "Create a pipeline in CodePipeline that uses resources from another AWS account" has detailed information on what you need to do to make it work.
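
As a minimal sketch of point 2, the CMK's key policy needs a statement along these lines (only the cross-account statement is shown; the usual statements for the CI account's own roles are omitted, and the DevAccountId parameter is reused from the question):

  KMSKey:
    Type: AWS::KMS::Key
    Properties:
      KeyPolicy:
        Version: "2012-10-17"
        Statement:
        # ... statements for the CI account root, pipeline and CodeBuild roles ...
        - Sid: AllowUseOfTheKeyFromTheDevAccount
          Effect: Allow
          Principal:
            AWS: !Sub "arn:aws:iam::${DevAccountId}:root"
          Action:
          - kms:Encrypt
          - kms:Decrypt
          - kms:ReEncrypt*
          - kms:GenerateDataKey*
          - kms:DescribeKey
          Resource: "*"

The artifact bucket policy needs a matching cross-account statement (e.g. s3:Get* and s3:Put* on the artifact objects for the DEV account roles), as described in the linked guide.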

Jason Wadsworth
0

This question is very old, but today I faced the exact same issue and spent hours trying to fix it.

As @mockora mentioned, CloudFormation generates the output artifact, zips it, and uploads the file to S3. It does not add an ACL that grants access to the bucket owner, so you get a 403 when you try to use the CloudFormation output artifact further down the pipeline.

To solve it, all you need to do is enforce object ownership on your S3 bucket (the one where CloudFormation saves the artifacts).

A CloudFormation example of enforcing bucket ownership follows.
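
A minimal sketch, assuming the bucket in question is the S3ArtifactBucket referenced by the pipeline in the question:

  S3ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      OwnershipControls:
        Rules:
        # With BucketOwnerEnforced, ACLs are disabled and the bucket owner
        # automatically owns every object, regardless of which account wrote it.
        - ObjectOwnership: BucketOwnerEnforced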

0

We are using a CloudFormation template (YAML) and needed to add the following to resolve it:

[The relevant template snippet was posted as an image and is not preserved here.]

user1653042
-1

CloudFormation should use the KMS encryption key provided in the artifact store definition of your pipeline: https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_ArtifactStore.html#CodePipeline-Type-ArtifactStore-encryptionKey

Therefore, as long as you give it a custom key there and allow the other account to use that key too, it should work.

This is mostly covered in this doc: https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create-cross-account.html

TimB
  • This is already implemented in the code I posted, and doesn't work in this case... CF doesn't use the pipeline's KMS key and there's no property to force it to do so. – ttulka Jan 15 '19 at 14:51
  • I checked the actual implementation of the CloudFormation action and it looks like it should use the CodePipeline key. I can't tell from the snippet you posted what KMS permissions are granted on the KMSKey policy. I believe S3 will return an access denied message like you're seeing if it doesn't have permissions to access the KMS key. – TimB Jan 15 '19 at 21:42
  • All permissions are granted. When I look at the S3 object (the CF output JSON) I don't see the expected KMS key (as I do on all the other artifacts). It seems only the CF output artifacts don't work in the expected way... :-( – ttulka Jan 16 '19 at 08:40
  • Just to be clear: all the other artifacts work. For example, CodeBuild can access the SourceArtifact without trouble. Only the CF deploy output artifact doesn't work, because the pipeline's KMS key is obviously not used for it as it is for the others... – ttulka Jan 16 '19 at 08:44
  • 1
    Your use-case should be supported. I filed an internal ticket with the CloudFormation team to investigate for you. – TimB Jan 16 '19 at 17:34
  • Thank you! For now I use a workaround: CodeBuild runs in the DEV account. It's conceptually wrong, but... – ttulka Jan 16 '19 at 19:49
  • I have encountered the exact same problem. CloudFormation output from a different account is saved to the artifact bucket with no owner information. So the pipeline is not able to read the output file. The output is also saved with no server side encryption but that's not what caused the access denied here. – Patrick Li Feb 06 '19 at 01:12