
We have blue/green deployment configured on AWS so that when we release to production, the old instance keeps running while a replacement instance is being set up.

Each release creates a new Auto Scaling group and therefore new instances. If a release goes well, replacement instances are created, code is deployed to them, and traffic is shifted over.

If a release fails, however, the newly created instances remain running, so after multiple failures we can end up with a large number of heavy instances running for no reason. Is there a way to easily clean up these instances, or perhaps to prevent this altogether? Am I doing something wrong?

  • I think you need to provide more information on how you are doing your deployment, including code/scripts/templates. – Tim Bassett Feb 24 '22 at 13:33
  • @TimBassett So we're just uploading bash scripts (using S3) to the instances and executing them using user_data. For everything else, we're using AWS CodePipeline. – Dhaval Anjaria Mar 11 '22 at 08:08

1 Answer


CodeDeploy leaves the green fleet in place in case you need to troubleshoot why the deployment to green failed. CodeDeploy does not currently have an out-of-the-box way to terminate the green fleet automatically. However, you can listen for fleet-wide deployment failures via CloudWatch Events (EventBridge) or SNS event notifications, and take action to terminate the green fleet when you are notified. Make sure your event listener checks the deployment creator and the deployment status.
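
For example, here is a rough sketch of such a listener as a Python Lambda function. It assumes an EventBridge rule matching CodeDeploy deployment state-change notifications (source aws.codedeploy) triggers the function, and that the green fleet lives in its own copied Auto Scaling group, as in your setup. The event field names, the creator/status checks, and the cleanup strategy are illustrative assumptions, not the only way to do this:

    import boto3

    codedeploy = boto3.client("codedeploy")
    autoscaling = boto3.client("autoscaling")


    def handler(event, context):
        """Triggered by an EventBridge rule on CodeDeploy deployment
        state-change notifications. Tears down the green fleet's Auto
        Scaling group(s) when a blue/green deployment fails."""
        detail = event.get("detail", {})

        # Only act on failed deployments.
        if detail.get("state") != "FAILURE":
            return

        deployment_id = detail["deploymentId"]

        # Double-check the deployment status and creator via the API so we
        # don't tear down fleets for unrelated deployments (e.g. rollbacks
        # or auto-scaling-initiated ones). Adjust the expected creator to
        # match how your deployments are actually created.
        info = codedeploy.get_deployment(deploymentId=deployment_id)["deploymentInfo"]
        if info.get("status") != "Failed" or info.get("creator") != "user":
            return

        # Collect the instance IDs in the replacement (green) environment.
        green_instance_ids = codedeploy.list_deployment_instances(
            deploymentId=deployment_id,
            instanceTypeFilter=["Green"],
        )["instancesList"]
        if not green_instance_ids:
            return

        # Each release creates its own Auto Scaling group, so delete the
        # group(s) those instances belong to; ForceDelete also terminates
        # the instances in them.
        asg_info = autoscaling.describe_auto_scaling_instances(
            InstanceIds=green_instance_ids
        )["AutoScalingInstances"]
        for group_name in {i["AutoScalingGroupName"] for i in asg_info}:
            autoscaling.delete_auto_scaling_group(
                AutoScalingGroupName=group_name, ForceDelete=True
            )

Deleting the copied Auto Scaling group with ForceDelete, rather than terminating the instances one by one, prevents the group from simply relaunching replacements for the instances you kill.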

Kaiwen Sun