
I'm working on a VoIP project using Asterisk on Linux. Our current goal is to have several EC2 machines, each running an Asterisk container, with separate development, staging and production environments. To do this, I'm writing a CloudFormation template that uses AWS ECS. My problem is that I can't find the proper way to map AWS S3 buckets into container volumes. I want to use two different buckets: one for injecting Asterisk config files into all containers, and another for saving the voice messages and logs of all containers.

Thanks,

P.S. I've pushed my Asterisk image to AWS ECR and referenced it in a TaskDefinition. I see MountPoints and Volumes there, but they don't seem to be my solution.

2 Answers


Could you try using environment variables in your task definitions?

In a CF template it would look like this:

"DefJob":{
   "Type":"AWS::ECS::TaskDefinition",
   "Properties":{
      "ContainerDefinitions":[
         {
            "Name":"integration-jobs",
            "Cpu":"3096",
            "Essential":"true",
            "Image":"828387064194.dkr.ecr.us-east-1.amazonaws.com/poblano:integration",
            "Memory":"6483",
            "Environment":[
               {
                  "Name":"S3_REGION",
                  "Value":"us-east-1"
               },
               {
                  "Name":"S3_BUCKET",
                  "Value":"Name-of-S3"
               }
               ........

And then reference these environment variables in your containers to use the S3 buckets. You'll also have to make sure that your instance's IAM role grants access to these S3 buckets.
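As a sketch of what "referencing the environment variables in the container" could look like, here is a hypothetical entrypoint script. The function names (`s3_uri`, `pull_config`, `push_data`), the bucket prefixes (`asterisk-config`, `voicemail/`, `logs/`) and the assumption that the aws CLI is installed in the image are all illustrative, not from the question's actual setup:

```shell
#!/bin/sh
# S3_REGION and S3_BUCKET come from the task definition's Environment block.

s3_uri() {
  # Build an s3:// URI for a prefix inside the configured bucket.
  printf 's3://%s/%s' "${S3_BUCKET}" "$1"
}

pull_config() {
  # Pull the shared Asterisk config before the daemon starts.
  aws s3 sync "$(s3_uri asterisk-config)" /etc/asterisk --region "${S3_REGION}"
}

push_data() {
  # Push voicemail and logs to the bucket, namespaced per container host.
  aws s3 sync /var/spool/asterisk/voicemail \
      "$(s3_uri "voicemail/${HOSTNAME}")" --region "${S3_REGION}"
  aws s3 sync /var/log/asterisk \
      "$(s3_uri "logs/${HOSTNAME}")" --region "${S3_REGION}"
}
```

The entrypoint would call `pull_config` once at startup, then run `push_data` on a timer (or from a cron-like loop) while Asterisk runs in the foreground.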

Thanks, Manish

Manish Joshi
  • Hi, I'm going to mount the ECS volume to an S3 bucket in order to access the logs without logging in to the server. I'm wondering how to reference these environment variables in the containers? – Matrix Apr 30 '18 at 11:00

I know this doesn't exactly answer the issue, and it's more basic than Manish's solution, but a straightforward way to achieve shared storage between the ECS containers is to rely on Elastic File System (EFS).

By putting such a script in the User Data of the Docker instances (or in the auto scaling group's launch configuration), the EFS can be mounted at /mnt/efs on every Docker instance, which lets you share volumes set to paths like /mnt/efs/something.

For this, the following User Data block does the job (we use it with the Amazon ECS-Optimized AMI).

Content-Type: multipart/mixed; boundary="==BOUNDARY=="
MIME-Version: 1.0

--==BOUNDARY==
MIME-Version: 1.0
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
yum install -y nfs-utils
mkdir -p /mnt/efs
echo "us-east-1a.fs-1234567.efs.us-east-1.amazonaws.com:/ /mnt/efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0" >> /etc/fstab
mount -a
/etc/init.d/docker restart
docker start ecs-agent
--==BOUNDARY==--

Docker is restarted at the end, otherwise it doesn't see the mounted volume (necessary only on instance creation).
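Once /mnt/efs is mounted on every instance, the Volumes and MountPoints you saw in the TaskDefinition are how you expose it to the containers. A hypothetical fragment (the volume name `efs-shared`, the host path and the container path are placeholders, not from the question's template) might look like:

```json
"Volumes":[
   {
      "Name":"efs-shared",
      "Host":{ "SourcePath":"/mnt/efs/asterisk" }
   }
],
"ContainerDefinitions":[
   {
      "MountPoints":[
         {
            "SourceVolume":"efs-shared",
            "ContainerPath":"/var/spool/asterisk"
         }
      ]
   }
]
```

Because every instance mounts the same EFS target, every container sees the same files under that ContainerPath.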

Of course, for this to work, the security groups must be set to allow the instances and the EFS mount targets to communicate over the NFS port (2049).
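In CloudFormation terms, that rule could be expressed like this (the resource names `EFSSecurityGroup` and `InstanceSecurityGroup` are hypothetical; substitute whatever your template actually defines):

```json
"EFSIngressNFS":{
   "Type":"AWS::EC2::SecurityGroupIngress",
   "Properties":{
      "GroupId":{ "Ref":"EFSSecurityGroup" },
      "IpProtocol":"tcp",
      "FromPort":"2049",
      "ToPort":"2049",
      "SourceSecurityGroupId":{ "Ref":"InstanceSecurityGroup" }
   }
}
```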

arvymetal