I'm a young systems engineer / contract sysadmin with a bandwidth-heavy workload, and I just moved to an area where my internet connection gets 2 Mbps down and 20 Mbps up. Because of that, I'm moving my Debian 10 workstation to AWS, and I need some advice.
I want to manage the workstation setup with Terraform (99% done), Packer, and Ansible. The plan is a script I run every time I change my Ansible setup: it builds a new AMI with Packer + Ansible and deploys it with Terraform.
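For context, the rebuild script I have in mind is roughly this (a sketch only — `workstation.pkr.hcl`, `packer.log`, and the `workstation_ami` variable are my placeholder names, and the template is assumed to use the amazon-ebs builder with the Ansible provisioner):

```shell
#!/usr/bin/env bash
set -euo pipefail

# 1. Bake a fresh AMI from the base Debian 10 image, provisioning with Ansible.
packer build -machine-readable workstation.pkr.hcl | tee packer.log

# 2. Pull the resulting AMI ID out of Packer's machine-readable output.
#    The artifact line looks like: <ts>,<builder>,artifact,0,id,<region>:ami-xxxx
AMI_ID=$(awk -F, '/artifact,0,id/ {print $6}' packer.log | cut -d: -f2)

# 3. Roll the new AMI out with Terraform.
terraform apply -auto-approve -var "workstation_ami=${AMI_ID}"
```

The only wiring this needs on the Terraform side is a variable for the AMI ID that the instance resource reads instead of a hard-coded image.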
I want my home directory to stay the same across all versions of the infrastructure, since I have a ton of Vagrant boxes and pip venvs full of packages that I want consistent no matter what the base system configuration is. The home directory will be lightly managed as I add SSH keys, create a hotwired Vagrant setup (worthy of its own post), etc. That's the main roadblock right now.
I was thinking about using a second EBS volume mounted at /home. The issue is that when you create an AMI from an instance, every attached EBS volume gets snapshotted into it, and instances launched from the AMI get their own copies with new volume IDs. So the next time I build an AMI with Packer, I'd need to snapshot, attach, and mount the EBS volume on the running instance, and I don't see a clean way to do that with Packer or Ansible. EFS is too slow to consider.
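What I've sketched so far for the second-volume idea looks like this in Terraform (resource names are mine; the key point is that the volume's lifecycle is independent of the instance's, so Terraform re-attaches the same volume to each replacement instance rather than it being baked into any AMI):

```hcl
# Persistent /home volume, never part of the AMI build.
resource "aws_ebs_volume" "home" {
  availability_zone = aws_instance.workstation.availability_zone
  size              = 100
  type              = "gp3"

  lifecycle {
    prevent_destroy = true # survive instance replacement
  }
}

resource "aws_volume_attachment" "home" {
  device_name = "/dev/xvdf"
  volume_id   = aws_ebs_volume.home.id
  instance_id = aws_instance.workstation.id
}
```

Since Packer launches its own temporary build instance from the base image, this volume would never be attached during the build, so it shouldn't end up in the AMI at all — but I'd still need something (cloud-init? an Ansible task?) to mount it at /home on first boot, which is the part I'm unsure about.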
So, basically: what's a good way to carry data from the production instance to a new instance in AWS without creating an AMI from the old instance, while avoiding slow or expensive options like EFS? I can't just create an AMI from the running instance, because I want my AMIs generated from the base Debian 10 image to prevent configuration drift.