I require a fair bit of RAM and disk for a Docker container that will run infrequently as a task on ECS. My workflow:
- Start EC2 instance for ECS
- Run task on ECS
- Terminate EC2 instance
I terminate the instance between runs because these resources are relatively expensive, and I don't want them running when not in use. Fargate is not appropriate due to its resource limitations, so I'm running ECS on EC2.
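Roughly, in boto3 terms (the AMI, instance type, role, and names below are all placeholders):

```python
import boto3

ec2 = boto3.client("ec2")
ecs = boto3.client("ecs")

CLUSTER = "my-cluster"             # placeholder cluster name
TASK_DEF = "my-task:1"             # placeholder task definition
ECS_AMI = "ami-0123456789abcdef0"  # placeholder ECS-optimized AMI

# 1. Launch a fresh instance whose ECS agent joins the cluster
resp = ec2.run_instances(
    ImageId=ECS_AMI,
    InstanceType="r5.4xlarge",  # placeholder; something with plenty of RAM
    MinCount=1,
    MaxCount=1,
    IamInstanceProfile={"Name": "ecsInstanceRole"},
    UserData=f"#!/bin/bash\necho ECS_CLUSTER={CLUSTER} >> /etc/ecs/ecs.config",
)
instance_id = resp["Instances"][0]["InstanceId"]

# 2. Run the task (polling until the instance registers with the
#    cluster and until the task finishes is omitted for brevity)
ecs.run_task(cluster=CLUSTER, taskDefinition=TASK_DEF, launchType="EC2")

# 3. Terminate the instance so it costs nothing between runs
ec2.terminate_instances(InstanceIds=[instance_id])
```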
I cannot get more than 30GB of disk for the image without a lot of human intervention. I can attach arbitrary EBS data volumes (`/dev/xvdcz`), but AWS still always creates a 30GB root volume (`/dev/xvda`), which is what gets used for the container itself.
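Attaching the data volume itself is easy enough, e.g. (placeholder values):

```python
import boto3

ec2 = boto3.client("ec2")

# Create and attach an arbitrarily large data volume...
vol = ec2.create_volume(AvailabilityZone="us-east-1a", Size=200, VolumeType="gp2")
ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
ec2.attach_volume(
    VolumeId=vol["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Device="/dev/xvdcz",
)
# ...but the 30GB root volume /dev/xvda is created regardless, and that is
# what is used for the container.
```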
How do I use a volume larger than 30GB for the Docker container itself?
What I've tried:
- Creating an Auto Scaling Group with a launch configuration that specifies a larger root volume (see the first sketch after this list). This does create an instance with a larger root volume, but there is no way to attach the group to a cluster, or to link the EC2 instance it creates with the cluster. Cluster creation seems to be tied to its own Auto Scaling Group and instance.
- Using an instance with a large dedicated SSD rather than an EBS volume; again, the 30GB partition is created for the container.
- Mounting `/dev/xvdcz` into the container (see the task-definition sketch after this list). This does add the space, but requires me to rewrite my code to use only that folder.
- Using the AWS ECS CLI to modify the disk after creation, as described in a similar issue. However, since my EC2 instance terminates after task completion, its ID does not persist between runs, and `aws ecs describe-clusters` does not report the underlying EC2 instance, so this cannot be automated. A human needs to boot up the instance, look up its ID, and only then can the volume size be modified via the CLI.
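For reference, the Auto Scaling Group attempt amounted to roughly the following in boto3 (the names, AMI, and sizes are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Launch configuration with an enlarged root volume; /dev/xvda is the
# root device on the ECS-optimized AMI. Names and values are placeholders.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="ecs-big-root",
    ImageId="ami-0123456789abcdef0",  # placeholder ECS-optimized AMI
    InstanceType="r5.4xlarge",
    IamInstanceProfile="ecsInstanceRole",
    BlockDeviceMappings=[
        {"DeviceName": "/dev/xvda", "Ebs": {"VolumeSize": 100, "VolumeType": "gp2"}}
    ],
)

# The group happily launches instances with the 100GB root volume...
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="ecs-big-root-asg",
    LaunchConfigurationName="ecs-big-root",
    MinSize=0,
    MaxSize=1,
    AvailabilityZones=["us-east-1a"],
)
# ...but I found no way to tie this group, or the instances it creates,
# to an ECS cluster.
```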
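And the `/dev/xvdcz` mount workaround corresponds to a task definition along these lines, assuming the volume is formatted and mounted on the host at a path like `/data` (all names are placeholders):

```python
import boto3

ecs = boto3.client("ecs")

# Host volume backed by the extra EBS volume (assumed mounted at /data),
# exposed to the container at /scratch. Names are placeholders.
ecs.register_task_definition(
    family="my-task",
    volumes=[{"name": "scratch", "host": {"sourcePath": "/data"}}],
    containerDefinitions=[
        {
            "name": "worker",
            "image": "my-image:latest",
            "memory": 120000,
            "mountPoints": [
                {"sourceVolume": "scratch", "containerPath": "/scratch"}
            ],
        }
    ],
)
```

This works, but everything then has to read and write under `/scratch`, which is exactly the rewrite of my code I'm trying to avoid.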
This issue was raised on GitHub back in 2016 but was marked as not important and closed; the discussion there is not very helpful.