In a nutshell, we have a platform comprising several applications/servers. Terraform manages both the AWS infrastructure (VPC, subnets, IGW, security groups, ...) and application deployment (using Ansible as a provisioner from Terraform). For each deployment, Packer builds all the AMIs and tags them with an appropriate name, so that Terraform picks up the latest AMIs.
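For context, the "pick up the latest AMI by tag" step might look roughly like this in Terraform (a minimal sketch; the tag key `Application` and value `my-app` are assumptions about how Packer tags the images):

```hcl
# Hypothetical lookup of the newest Packer-built AMI for one application.
data "aws_ami" "app" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "tag:Application" # assumed tag key written by the Packer build
    values = ["my-app"]        # assumed tag value per application
  }
}

resource "aws_instance" "app" {
  ami           = data.aws_ami.app.id
  instance_type = "t3.medium"
  # ...
}
```

With `most_recent = true`, every `terraform plan` after a new Packer build will propose replacing the instance with the new AMI.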
The process generally works, but we face a dilemma when we want to deploy small hotfixes, which can happen quite frequently, since regressions may surface after each deployment and QA testing. For each application that needs to be hot-fixed (not necessarily all of them), we create a hotfix branch and build the artifact (a jar or deb package). Then there are two options:
- Either trigger Packer to build a new image, tag it with the appropriate hotfix version, and run `terraform apply`.
- Or run an Ansible job to hot-deploy the application package and restart the service/application if needed.
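The second option could be a small standalone playbook along these lines (a sketch only; the paths, package name, and service name are assumptions):

```yaml
# Hypothetical hotfix playbook: push the rebuilt deb and restart the service.
- hosts: app_servers
  become: true
  vars:
    pkg_path: /tmp/my-app_1.4.2-hotfix1_amd64.deb # assumed artifact location

  tasks:
    - name: Copy hotfix package to the host
      copy:
        src: "{{ pkg_path }}"
        dest: "{{ pkg_path }}"

    - name: Install the hotfix package
      apt:
        deb: "{{ pkg_path }}"
      notify: restart app

  handlers:
    - name: restart app
      service:
        name: my-app # assumed service name
        state: restarted
```

The handler ensures the service is restarted only when the package actually changed.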
With the first approach we stay true to the Immutable Infrastructure idea, but it has downsides: any small change in the Terraform configuration or the infrastructure itself shows up in the Terraform plan. For example, a security group may have drifted outside of Terraform state (e.g. some IPs were whitelisted as part of a feature), and running `terraform apply` would revert those out-of-band changes. The whole cycle of building AMIs and running `terraform apply` is also quite heavy.
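One way to keep `terraform apply` from reverting those out-of-band security-group edits is a `lifecycle` block (a sketch under the assumption that the rules really are managed elsewhere; the resource name and `var.vpc_id` are placeholders):

```hcl
# Hypothetical: tolerate rule edits made outside Terraform (e.g. manual
# IP whitelisting) instead of reverting them on the next apply.
resource "aws_security_group" "app" {
  name   = "app-sg"
  vpc_id = var.vpc_id

  lifecycle {
    # Assumption: ingress/egress rules are owned by another process.
    ignore_changes = [ingress, egress]
  }
}
```

This only masks the drift for that resource, though; it does not address the heavyweight AMI-rebuild cycle itself.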
We're leaning toward the second approach, which is easier, but we still wonder: is it good practice?