
I am launching Apache, MySQL, and memcached Docker containers from AWS ECR into an ECS instance. Engineers are able to browse around and make changes as they see fit. These containers expire after a set period of time, but the engineers want to save their database changes for use in future containers.

I am looking for a way to automate this process so it runs before the containers terminate, whether with Lambda, the aws-cli, or some other utility.

I am looking for a solution that takes the MySQL container and creates a new image from it. I saw this question and it's mostly what I want: How to create a new docker image from a running container on Amazon?

But you have to run docker commit from the ECS instance, and perform the login and push from there as well. There doesn't appear to be a way to push the committed image to ECR without logging in with aws ecr get-login --no-include-email and running its output so docker gets the token.
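For reference, that manual flow is roughly the following; the account ID, region, repository name, and container filter are placeholders:

```sh
# Run on the EC2 instance backing the ECS cluster; all names are placeholders.
REPO=123456789012.dkr.ecr.us-east-1.amazonaws.com/engineer-mysql
TAG=$(date +%Y%m%d%H%M)

# Find the engineer's MySQL container and commit it straight to the repository
# URI, which also covers the separate docker tag step.
CONTAINER_ID=$(docker ps --filter "name=mysql" --format '{{.ID}}' | head -n 1)
docker commit "$CONTAINER_ID" "$REPO:$TAG"

# Log docker in to ECR (aws-cli v1 style, as above) and push the new image.
$(aws ecr get-login --no-include-email --region us-east-1)
docker push "$REPO:$TAG"
```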

The issue I have with that is that once we have multiple ECS instances running, it would be difficult to figure out which instance the engineer's container is running on, SSH into that server, and run the docker commit, docker tag, aws ecr get-login, and docker push commands. To me, that seems hacky and prone to error.
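That lookup can at least be scripted instead of guessed; a rough aws-cli sketch, assuming a placeholder cluster name and task family:

```sh
# Placeholder cluster and task-family names; adjust to the real environment.
CLUSTER=engineer-envs

# Walk task -> container instance -> EC2 instance to find the host to SSH into.
TASK_ARN=$(aws ecs list-tasks --cluster "$CLUSTER" --family engineer-mysql \
  --query 'taskArns[0]' --output text)
CI_ARN=$(aws ecs describe-tasks --cluster "$CLUSTER" --tasks "$TASK_ARN" \
  --query 'tasks[0].containerInstanceArn' --output text)
EC2_ID=$(aws ecs describe-container-instances --cluster "$CLUSTER" \
  --container-instances "$CI_ARN" \
  --query 'containerInstances[0].ec2InstanceId' --output text)

# Private IP of the instance hosting the container.
aws ec2 describe-instances --instance-ids "$EC2_ID" \
  --query 'Reservations[0].Instances[0].PrivateIpAddress' --output text
```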

I have the MySQL containers rebuilt and pushed back to ECR every hour so that they have the latest content updates. To launch the containers I use a combination of ecs-cli and aws-cli with a docker-compose.yml file to create a task in ECS.
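The launch side looks roughly like this; the project name and compose file are placeholders, and the target cluster comes from a previously saved ecs-cli configuration:

```sh
# Bring the three-container environment up as an ECS task from the compose file.
# Cluster/region were saved earlier with:
#   ecs-cli configure --cluster engineer-envs --region us-east-1 --config-name engineer-envs
ecs-cli compose --project-name engineer-env-42 --file docker-compose.yml up
```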

Is there some functionality I can use to commit a running container to ECR with a new name/tag?

The other option I was looking into is starting the MySQL container with persistent storage (EBS/EFS), but I'm still trying to work out whether that's doable, since I would have to somehow tag the persistent storage so that it is only used when the engineer launches the environment that way. Essentially, I would have a separate docker-compose.yml file specific to persistent volumes, and it would either launch a new container with fresh MySQL data or, given a specific name, reuse an existing volume.
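As a sketch of that idea, the volume-specific compose file could key a named volume to the engineer; the ENGINEER variable, image URI, and volume name below are placeholders:

```sh
# Generate a per-engineer compose file that pins the MySQL data directory
# to a named volume; all names here are placeholders.
ENGINEER=jdoe
cat > docker-compose.yml <<EOF
version: '2'
services:
  mysql:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/mysql:latest
    volumes:
      - mysql-data-${ENGINEER}:/var/lib/mysql
volumes:
  mysql-data-${ENGINEER}: {}
EOF
```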

  • When I define the volume in the docker-compose.yml file it doesn't seem to persist, and it also adds some extra data to the name of the volume. I guess I'll try to use volumes in the ecs-params.yml, but I wanted to avoid putting all 3 containers on the same storage since it'd be a waste. – Kurt Knudsen May 02 '19 at 20:11
  • You might see if RDS (and "database snapshot" as an object in the AWS API) better meets your needs here. [It's difficult to create a MySQL image that contains data already](https://stackoverflow.com/questions/27572453/mysql-docker-container-is-not-saving-data-to-new-image) and `docker commit` in general tends to be disrecommended (what exactly did your developers do via `docker exec` in those containers and how can we reproduce it?). – David Maze May 02 '19 at 22:09
  • I don't know if RDS is viable if we have several dozen engineers spinning up environments. The engineers don't have direct access to the containers; it's all accessed via web browsers, basically functionality testing. They just want to be able to retain the test data over longer periods of time without having to redo it each time they spin up an instance. Moving to RDS might be way out of scope for the project and, in the end, may never be approved since we're already containerized and functional. I appreciate the feedback and will continue to search for solutions. – Kurt Knudsen May 02 '19 at 23:26
  • What I ended up doing for this is creating an empty MySQL Docker container; when the main instance spins up, it gets the database from S3 and imports it. If it's scheduled to save the database, it does a dump, gzips it, and uploads it to S3 before terminating the instance (roughly as sketched below). This is done by the cleanup script SSHing directly into the Docker container, which has its SSH port mapped so it's accessible from our network. It's not ideal, but it works and seems stable enough for our use. – Kurt Knudsen May 23 '19 at 13:59
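A rough sketch of that save/restore flow, with placeholder host, port, database, bucket, and key names (credential handling elided):

```sh
# --- Save side: run by the cleanup script before the instance terminates ---
# The MySQL container maps its SSH port to the host, so the dump runs over SSH.
ssh -p 2222 root@container-host "mysqldump --single-transaction appdb | gzip" \
  > appdb.sql.gz
aws s3 cp appdb.sql.gz "s3://engineer-db-backups/jdoe/appdb.sql.gz"

# --- Restore side: run when a new environment spins up ---
# Pull the saved dump, if one exists, and import it into the fresh container.
if aws s3 cp "s3://engineer-db-backups/jdoe/appdb.sql.gz" appdb.sql.gz; then
  ssh -p 2222 root@container-host "mysql -e 'CREATE DATABASE IF NOT EXISTS appdb'"
  gunzip -c appdb.sql.gz | ssh -p 2222 root@container-host "mysql appdb"
fi
```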
