
I had quite a hard time setting up an automation with Elastic Beanstalk and CodePipeline...

I finally got it running; the main issue was getting the S3 CloudWatch Event to trigger the start of the CodePipeline. I had missed the CloudTrail part, which is necessary, and I couldn't find that in any documentation.

So the current setup is: a file gets uploaded to S3 -> a CloudWatch Event triggers the CodePipeline -> CodePipeline deploys to the Elastic Beanstalk environment.

As I said, to get the CloudWatch Event trigger you need a CloudTrail trail like:

resource "aws_cloudtrail" "example" {
  # ... other configuration ...
  name = "codepipeline-source-trail" #"codepipeline-${var.project_name}-trail"
  is_multi_region_trail = true
  s3_bucket_name = "codepipeline-cloudtrail-placeholder-bucket-eu-west-1"
  event_selector {
    read_write_type           = "WriteOnly"
    include_management_events = true

    data_resource {
      type = "AWS::S3::Object"

      values = ["${data.aws_s3_bucket.bamboo-deploy-bucket.arn}/${var.project_name}/file.zip"]
    }
  }
}

But this only creates a new trail. The problem is that AWS allows a maximum of 5 trails per region. In the AWS console you can add multiple data events to one trail, but I couldn't manage to do this in Terraform. I tried to use the same name, but that just raises an error:

"Error creating CloudTrail: TrailAlreadyExistsException: Trail codepipeline-source-trail already exists for customer: XXXX"

I tried my best to explain my problem; I'm not sure it is understandable. In a nutshell: I want to add an S3 data event to an existing CloudTrail trail with Terraform.

Thanks for any help, Daniel


2 Answers


As I said, to get the CloudWatch Event trigger you need a CloudTrail trail like:

You do not need multiple CloudTrail trails to invoke a CloudWatch Event. You can create service-specific rules as well.

Create a CloudWatch Events rule for an Amazon S3 source (console)

Then use a CloudWatch Events rule to invoke CodePipeline as a target. Let's say you created this event rule:

{
  "source": [
    "aws.s3"
  ],
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "detail": {
    "eventSource": [
      "s3.amazonaws.com"
    ],
    "eventName": [
      "PutObject"
    ]
  }
}

You add CodePipeline as a target for this rule, and eventually CodePipeline deploys to the Elastic Beanstalk environment.
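
In Terraform, that wiring could look roughly like the sketch below. This is not part of the original answer: the rule mirrors the JSON pattern above, while the pipeline resource (aws_codepipeline.example) and the IAM role that lets Events start it are hypothetical names.

resource "aws_cloudwatch_event_rule" "s3_source" {
  name        = "codepipeline-s3-source-rule"
  description = "Start the pipeline when the source object is uploaded"

  # Mirrors the JSON event pattern shown above
  event_pattern = jsonencode({
    source        = ["aws.s3"]
    "detail-type" = ["AWS API Call via CloudTrail"]
    detail = {
      eventSource = ["s3.amazonaws.com"]
      eventName   = ["PutObject"]
    }
  })
}

resource "aws_cloudwatch_event_target" "start_pipeline" {
  rule     = aws_cloudwatch_event_rule.s3_source.name
  arn      = aws_codepipeline.example.arn        # hypothetical pipeline resource
  role_arn = aws_iam_role.events_to_pipeline.arn # hypothetical role allowed to call codepipeline:StartPipelineExecution
}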

– samtoddler
  • Thanks for your fast response, that was exactly the problem I faced :) Without the CloudTrail part the event will not get triggered, as it relies on the logging for S3. So you need both to make the final trigger work! – can-I-do Feb 03 '21 at 07:19
  • Maybe a workaround would be to have only one CloudTrail trail on the entire S3 bucket and the CloudWatch Event on the specific file... – can-I-do Feb 03 '21 at 07:32
  • @can-I-do you just need one CloudTrail trail enabled, not multiple. In general, always have one trail and then use it for catching API calls via CloudWatch. – samtoddler Feb 03 '21 at 08:00

Have you tried adding multiple data_resource blocks to your current trail, instead of adding a new trail with the same name:

resource "aws_cloudtrail" "example" {
  # ... other configuration ...
  name = "codepipeline-source-trail" #"codepipeline-${var.project_name}-trail"
  is_multi_region_trail = true
  s3_bucket_name = "codepipeline-cloudtrail-placeholder-bucket-eu-west-1"
  event_selector {
    read_write_type           = "WriteOnly"
    include_management_events = true

    data_resource {
      type = "AWS::S3::Object"

      values = ["${data.aws_s3_bucket.bamboo-deploy-bucket.arn}/${var.project_A}/file.zip"]
    }

    data_resource {
      type = "AWS::S3::Object"

      values = ["${data.aws_s3_bucket.bamboo-deploy-bucket.arn}/${var.project_B}/fileB.zip"]
    }
  }
}

You should be able to add up to 250 data resources (across all event selectors in a trail), and up to 5 event selectors, to your current trail (see the CloudTrail quota limits).

– Nick
  • Thanks for the quick answer. Yes, I guess that would be possible. The problem here is that you still create a new trail every time you deploy a new stack. I could modify the Terraform files to add multiple Beanstalks and CodePipelines and then just add a data resource every time, but if you deploy one stack per project at a time you cannot add another data resource to an existing trail... – can-I-do Feb 03 '21 at 07:35
  • As I wrote in the other comment, maybe the only way is to create a CloudTrail trail for the entire bucket and then get more specific with multiple CloudWatch Events for each file. – can-I-do Feb 03 '21 at 07:37
  • Yeah, explore the second option, where you have a dedicated trail for your Beanstalks and then play around with the filters on the event rule. If there is no way to filter down to the data you need (as you go quite deep, to the exact file that you upload), you may want to look at an option where you have: dedicated trail -> CW Event -> triggers a Lambda with some logic to parse which file it was -> Lambda triggers the pipeline in question based on the filename key. – Nick Feb 03 '21 at 07:55
  • Hm, it still feels so wrong :) But I guess I'll just create one trail ("log all S3 buckets and all events") and then create a CloudWatch Event scoped to each file, which will trigger the different CodePipelines... thanks for your help! – can-I-do Feb 03 '21 at 08:52
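
For reference, the approach the comments converge on (one broad trail plus a narrowly filtered event rule per project) could look roughly like the sketch below. This is a minimal sketch, not code from either answer; the resource names, the exact object key, and the requestParameters filter are assumptions.

resource "aws_cloudtrail" "shared" {
  name           = "shared-s3-source-trail"
  s3_bucket_name = "codepipeline-cloudtrail-placeholder-bucket-eu-west-1"

  event_selector {
    read_write_type           = "WriteOnly"
    include_management_events = true

    data_resource {
      type = "AWS::S3::Object"
      # Trailing "/" selects every object in the deploy bucket
      values = ["${data.aws_s3_bucket.bamboo-deploy-bucket.arn}/"]
    }
  }
}

# One filtered rule per project, matching the exact object key
resource "aws_cloudwatch_event_rule" "project_file" {
  name = "codepipeline-${var.project_name}-source-rule"

  event_pattern = jsonencode({
    source        = ["aws.s3"]
    "detail-type" = ["AWS API Call via CloudTrail"]
    detail = {
      eventSource = ["s3.amazonaws.com"]
      eventName   = ["PutObject"]
      requestParameters = {
        bucketName = [data.aws_s3_bucket.bamboo-deploy-bucket.id] # the bucket name
        key        = ["${var.project_name}/file.zip"]
      }
    }
  })
}

Each new stack then only adds its own rule and aws_cloudwatch_event_target, while the single shared trail stays untouched across deployments.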