
I have created an ECS cluster backed by an EC2 auto-scaling group and launched a service in it that uses EFS for NFS storage. The service runs in awsvpc network mode so that I can control traffic to and from it. There is a security group that allows inbound TCP 2049 (NFSv4) from itself and (for troubleshooting) from 0.0.0.0/0, and it is attached to both the EFS mount target and the ECS service. EFS and the ECS/EC2 instances are all in the same VPC and the same three subnets.

However, the service fails to deploy its tasks in ECS; the tasks crash with this error:

Error response from daemon: create ecs-service-1-images-d6b491fbece8ddc34b00: VolumeDriver.Create: mounting volume failed:
Mount attempt 1/3 failed due to timeout after 15 sec, wait 0 sec before next attempt.
Mount attempt 2/3 failed due to timeout after 15 sec, wait 0 sec before next attempt.
'mount.nfs4: Connection reset by peer'

Mounting the EFS volume directly on the EC2 container host works, though:

[ec2-user@ip-100-xxx ~]$ sudo mount -t efs fs-05dexxxxxxxx /mnt/efs
[ec2-user@ip-100-xxx ~]$ mount | grep fs-05de
fs-05dexxxxxxx.efs.eu-central-1.amazonaws.com:/ on /mnt/efs type nfs4

What causes this behavior? All of the resources are in Terraform:

resource "aws_autoscaling_group" "ecs-infrastructure-asg" {
  name = "ecs-infrastructure-asg"
  vpc_zone_identifier = [
    data.aws_subnet.PrivateA.id,
    data.aws_subnet.PrivateB.id,
    data.aws_subnet.PrivateC.id
  ]
}
resource "aws_ecs_capacity_provider" "ecs-infrastructure-cp" {
  name = "infrastructure-cp"
  auto_scaling_group_provider {
    auto_scaling_group_arn         = aws_autoscaling_group.ecs-infrastructure-asg.arn

  }
}
resource "aws_ecs_cluster" "infrastructure" {
  name = "infrastructure"
}
resource "aws_ecs_cluster_capacity_providers" "infrastructure-ccp" {
  cluster_name = aws_ecs_cluster.infrastructure.name
  capacity_providers = [aws_ecs_capacity_provider.ecs-infrastructure-cp.name]
  default_capacity_provider_strategy {
    capacity_provider = aws_ecs_capacity_provider.ecs-infrastructure-cp.name
  }
}

resource "aws_security_group" "passbolt-allow-nfs-inbound" {
  name        = "passbolt-allow-nfs-inbound"
  vpc_id      = data.aws_vpc.VPC01.id
  ingress {
    from_port        = 2049
    to_port          = 2049
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
  ingress {
    from_port = 2049
    to_port   = 2049
    protocol  = "tcp"
    self      = true
  }
}
resource "aws_efs_file_system" "passbolt-efs-fs" {
}
resource "aws_efs_mount_target" "passbolt-efs-mt-priva" {
  file_system_id  = aws_efs_file_system.passbolt-efs-fs.id
  subnet_id       = data.aws_subnet.PrivateA.id
  security_groups = [aws_security_group.passbolt-allow-nfs-inbound.id]
}
resource "aws_ecs_task_definition" "passbolt-task" {
  family       = "service"
  network_mode = "awsvpc"
  container_definitions = jsonencode([
    {
      name      = "passbolt-app"
      mountPoints = [
        {
          sourceVolume  = "images"
          containerPath = "/usr/share/php/passbolt/webroot/img/public"
          readOnly      = false
        }
      ]
    },
  ])
  volume {
    name = "images"
    efs_volume_configuration {
      file_system_id     = aws_efs_file_system.passbolt-efs-fs.id
      root_directory     = "/images"
    }
  }
}

resource "aws_ecs_service" "infrastructure-passbolt" {
  name            = "infrastructure-passbolt"
  cluster         = aws_ecs_cluster.infrastructure.id
  task_definition = aws_ecs_task_definition.passbolt-task.arn
  desired_count   = 1
  capacity_provider_strategy {
    capacity_provider = aws_ecs_capacity_provider.ecs-infrastructure-cp.name
    weight            = 100
  }
  network_configuration {
    subnets = [
      data.aws_subnet.PrivateA.id,
      data.aws_subnet.PrivateB.id,
      data.aws_subnet.PrivateC.id
    ]
    security_groups = [
      aws_security_group.passbolt-allow-nfs-inbound.id,
    ]
  }
}
  • Is there a route to the public internet, particularly the EFS endpoint? If not try adding an EFS VPC endpoint. – Tim Nov 16 '22 at 08:44

1 Answer


Found the cause: because the security group was created by Terraform, it had no egress rule. Unlike security groups created in the AWS console, Terraform-managed security groups do not get the default allow-all egress rule, so all outbound traffic (including the NFS client's side of the connection) was blocked. Adding an egress rule that allows traffic to 0.0.0.0/0 fixed the connectivity.
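For reference, a minimal sketch of the fix on the security group from the question (the ingress rules stay as they were; adjust the egress CIDR to something tighter than 0.0.0.0/0 once troubleshooting is done):

```hcl
resource "aws_security_group" "passbolt-allow-nfs-inbound" {
  name   = "passbolt-allow-nfs-inbound"
  vpc_id = data.aws_vpc.VPC01.id

  # ... existing ingress blocks unchanged ...

  # Terraform-managed security groups drop AWS's implicit
  # allow-all egress rule, so it must be declared explicitly.
  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1" # all protocols
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
}
```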
