We are running into an interesting argument and are falling into two camps. I'm interested in any particular problems with either idea or gotchas we might be missing - really, anything that can help us make a decision or point out things we are not accounting for. I know this skirts the "no opinion" rule a bit closely, but I hope it's still an acceptable question. Sorry for the length as well; there is a fair bit of nuance.
1) One side (mine - I am not without bias) finds the immutable server model very interesting for cloud systems. To that end, we prototyped moving all components of our infrastructure into Docker. Our custom applications are built by Jenkins directly into Docker images that are pushed to a local Docker Registry. We then created a large set of Ansible roles and a playbook capable of reaching out to an empty server, installing Docker, and then telling Docker to run all the containers as needed. After a couple of minutes, the entire app and all of its supporting infrastructure is wired up and working - logging, monitoring, database creation/population, etc. The finished machine is a self-contained QA or dev environment with an exact copy of the application. Our plan to scale this out would be to write new playbooks that build new AWS servers from a trusted base AMI (probably a very bare image), do rolling deploys of the production application to handle configuration management and releases, and generally never edit servers again - just build them anew. I'm not concerned about getting what I described working in practice - just whether it's a reasonable model.
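For concreteness, the bootstrap playbook we prototyped looks roughly like this. This is a sketch, not our exact playbook - the host group, registry address, image name, and ports are all placeholders, and the `docker_container` module is just one way to express the container step:

```yaml
# Sketch: bootstrap an empty server into a running app host.
- hosts: fresh_servers
  become: true
  roles:
    - docker_engine            # illustrative role: installs and starts Docker
  tasks:
    - name: Pull and run the application container from our local registry
      docker_container:
        name: myapp
        image: "registry.internal:5000/myapp:{{ build_tag }}"
        state: started
        restart_policy: always
        published_ports:
          - "80:8080"
    # ...further tasks wire up logging, monitoring, DB creation/population, etc.
```

The same playbook runs against a dev box, a QA clone, or a fresh production node; only the inventory and `build_tag` change.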
2) The other camp wants to use Puppet to handle configuration management, Ansible to deploy our custom applications (tarballs generated from our build process), Foreman to handle the triggering and management of the process as a whole, and Katello to do some amount of base image management. Releases would involve Puppet changing configuration as needed and Ansible deploying updated components, with some amount of Foreman coordination. Servers could be built reasonably quickly if we needed new ones, but the intent is not to make them disposable as part of the standard process. This is closer to the phoenix server model, though with long-lived servers.
So my question really comes down to this: is the immutable server model, with the tools as I've described them above, actually as realistic as it appears? I love the idea that our staging process can literally build an entire clone of the live application, let QA hammer it, then just flip the database storage and some DNS settings to make it live.
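The go-live flip itself could be a one-task play - a sketch assuming Route 53 and Ansible's `route53` module, with every name below a placeholder for whatever the staging stack actually exposes:

```yaml
# Sketch: promote the QA-approved stack by repointing the live DNS name.
- hosts: localhost
  connection: local
  tasks:
    - name: Point app.example.com at the freshly built stack
      route53:
        state: present
        zone: example.com
        record: app.example.com
        type: CNAME
        ttl: 60
        value: staging-elb.example.com   # the stack QA just signed off on
        overwrite: true
```

A low TTL keeps the cutover (and any rollback, which is just the same task with the old value) fast.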
Or does the immutable server model fail in practice? We have a good deal of experience with both AWS and cloud environments so that's not really the concern - more a matter of how to get a reasonably sophisticated app deployed reliably going forward. This is of particular interest as we release quite frequently.
We have Ansible doing most things needed except actually creating EC2 servers for us, and that's not hard. I'm having trouble understanding why you actually NEED Puppet/Foreman/Katello in this model at all. Docker is vastly cleaner and simpler than custom deploy scripts in any tool I've seen. Ansible seems far simpler to use than Puppet once you stop worrying about configuring servers in-situ and simply build them again with the new configuration. I'm a fan of the KISS principle - particularly in automation, where Murphy's Law runs rampant. The less machinery the better, IMO.
Any thoughts/comments or suggestions on the approach would be greatly appreciated!