
I've followed https://docs.gitlab.com/runner/configuration/runner_autoscale_aws_fargate/ to create a custom runner which has a public IP attached and sits in a VPC alongside "private" resources. The runner is used to apply migrations using GitLab CI/CD.

An `ALLOW 22 0.0.0.0/0` rule has been applied within the security group, but it's wide open to attacks. What IP range do I need to add to only allow GitLab CI/CD runners access via SSH? I've removed that rule for the moment, so we're getting connection errors, but the IPs connecting on port 22 all come from AWS (assuming GitLab runners are also on AWS).
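
For reference, removing that wide-open rule looks roughly like this (a minimal boto3 sketch; the security group ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Revoke the overly broad SSH rule (port 22 open to the world).
ec2.revoke_security_group_ingress(
    GroupId="sg-0runner00000000000",  # placeholder: the runner's security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }
    ],
)
```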

Is there something I'm missing or not understanding?

steadweb
  • If the whole setup is within a VPC, you can allow all traffic from the VPC CIDR for a particular port, in this case SSH (22). If you want to be more specific, you can narrow it down further to the subnets as well. – samtoddler Mar 19 '21 at 10:28

2 Answers


I had a look at the tutorial. You should only allow the EC2 instance to SSH into the Fargate tasks.

One way to do that is to define the EC2 instance's security group as the source in the Fargate task's security group, instead of using an IP address (or CIDR block). You don't have to explicitly specify any IP ranges. This is my preferred approach.

From the AWS docs on specifying a security group as the source:

> When you specify a security group as the source for a rule, traffic is allowed from the network interfaces that are associated with the source security group for the specified protocol and port. Incoming traffic is allowed based on the private IP addresses of the network interfaces that are associated with the source security group (and not the public IP or Elastic IP addresses).
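
A minimal sketch of that rule with boto3 (both security group IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Allow SSH (22) into the Fargate task's security group, with the EC2
# runner instance's security group as the source instead of a CIDR block.
ec2.authorize_security_group_ingress(
    GroupId="sg-0fargate0000000000",  # placeholder: Fargate task SG
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "UserIdGroupPairs": [
                {
                    "GroupId": "sg-0ec2runner00000000",  # placeholder: EC2 runner SG
                    "Description": "SSH from the GitLab runner EC2 instance only",
                }
            ],
        }
    ],
)
```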

The second approach, as @samtoddler mentioned, is to allow the entire VPC network, or restrict it further to a subnet.
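
The CIDR-based variant looks like this (again a sketch; `10.0.0.0/16` stands in for your VPC or subnet CIDR):

```python
import boto3

ec2 = boto3.client("ec2")

# Allow SSH (22) from everything inside the VPC (or narrow to a subnet CIDR).
ec2.authorize_security_group_ingress(
    GroupId="sg-0fargate0000000000",  # placeholder: Fargate task SG
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [
                {"CidrIp": "10.0.0.0/16", "Description": "SSH from within the VPC"}
            ],
        }
    ],
)
```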

Arun Kamalanathan
  • Thanks. I've configured this as well. It doesn't directly answer my question, as my runner was publicly accessible over SSH, which I needed to lock down. It turns out GitLab doesn't need to talk to the runner over SSH; that was my misunderstanding. So I moved the runner into a private subnet, which no longer has SSH access. Applying these rules (above) only allows EC2 to talk over SSH to the ECS Fargate tasks. – steadweb Mar 19 '21 at 11:44
  • Ya, if it helps in any way, that's good. Also I can clearly see that 10,000 - 9920 = 80. So I don't have any issues :). Good luck – Arun Kamalanathan Mar 19 '21 at 12:22
  • This now meets my security requirements for my client, thank you Arun. After moving the instance into a private subnet, disabling "assign public IP", and only allowing the EC2 instance to SSH into the Fargate tasks, it's working as expected without a gaping SSH hole :D – steadweb Mar 19 '21 at 16:30

I had misunderstood: gitlab-runner talks to GitLab, not the other way round. My assumption was that GitLab talks to runners over SSH.

My immediate solution was two things:

  • Move the EC2 instance into a private subnet
  • As per @Arun K's answer, only allow the EC2 instance to communicate over SSH with the ECS Fargate tasks

This answered my question as well: https://forum.gitlab.com/t/gitlab-runner-on-private-ip/19673

steadweb
  • Thanks, I also mistakenly thought the SSH connection went directly from 'gitlab' to the executor task. With the Fargate executor, you need `EnablePublicIP = false` in the TOML config so that it doesn't attach an EIP, and then a NAT gateway etc. so that the task's subnet can reach the internet gateway in your public subnet. – OJFord Jul 07 '22 at 19:12
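
For reference, the relevant part of the Fargate driver's config.toml looks roughly like this (a sketch based on the linked tutorial; all values are placeholders):

```toml
[Fargate]
  Cluster = "gitlab-runner-cluster"        # placeholder ECS cluster name
  Region = "eu-west-1"                     # placeholder region
  Subnet = "subnet-0private00000000"       # a private subnet, per the answers above
  SecurityGroup = "sg-0fargate0000000000"  # placeholder: Fargate task SG
  TaskDefinition = "gitlab-runner-task:1"  # placeholder task definition
  EnablePublicIP = false                   # don't attach a public IP to the task
```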