
It is considered bad practice to place machines that shouldn't be accessible from the internet in a public subnet: besides being logically wrong (a private instance in an internet-facing subnet), such a topology also exposes the machines to the public internet.

If we were talking about an on-premises environment, this would fit perfectly: public-facing machines belong in a DMZ, while private machines belong in a separate private network protected by a firewall.

But in a cloud environment such as AWS, we have Security Groups. So, while it is of course logically wrong to have private instances in a public subnet, are there any objective reasons not to do so in a cloud like AWS? Is it somehow more insecure to use Security Groups to isolate a machine?

I am asking this because there are various advantages to having all the machines in public subnets in a smaller environment, such as:

  • Lower costs: you can save money on the NAT gateway a private subnet requires by using the already existing Internet Gateway.
  • It is far easier to allow traffic from the internet to specific instances. For example, to allow an external vendor access to only one machine, you could simply add a rule for a specific IP in the Security Group and then point a CNAME at the public DNS of the machine through some automation, or use dynamic DNS.
  • Machine-to-machine traffic is still inside AWS, so I still get the advantage of lower costs.
  • I can also still do VPC peering, so if I wanted to connect to a VPC in another account, it would still be possible to do without going through the public internet.
  • I can have more specific per-instance security.
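As a sketch of the vendor-access point above, a single-IP rule in Terraform might look like this (the resource names and the address are placeholders, not from any real setup):

```hcl
# Hypothetical sketch: allow one external vendor IP to reach a single
# instance over HTTPS by adding a rule to that instance's security group.
resource "aws_security_group_rule" "vendor_https" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["203.0.113.10/32"]       # example vendor IP
  security_group_id = aws_security_group.app.id # assumed SG for the instance
}
```

Swapping the vendor later means changing one CIDR, not re-plumbing subnets or the NAT.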

So, while it is logically wrong, are there any objective reasons not to do so?

I was having this conversation with a colleague and, when asked the actual reason why an EC2 instance shouldn't be public, I simply replied that it is logically wrong and tends to cause more confusion; but I was left wondering whether there are any objective reasons why this shouldn't be done and is actually bad practice.

Here are a few downsides that I was able to find:

  • Security groups are limited to 5 per network interface
  • Public IPs are free as long as they are non-static, but their number is bound to the number of machines you have (only 1 IP per machine). This limit can be increased, though.
  • IP remaps have a cost. It seems the first 100 IPs are free, but once you go above that limit it's $0.20 per IP refresh.

Some correlated threads:

AWS VPC - why have a private subnet at all?

Is it worth setting up a private subnet in Amazon EC2 (VPC)

AWS EC2 instance: security groups and firewalls

EC2 - should security groups be specialized and stacked?

The above threads elaborate on the topic but don't provide enough objective reasons why it is wrong, aside from the security implication (and again, ignoring the fact that you would configure security groups with Terraform to further guard against misconfigurations).

  • One possible reason: the limited IPv4 address space. Why would you assign relatively scarce and increasingly more expensive globally unique and routable IPv4 addresses to resources that are not intended to be publicly accessible in the first place and/or which will only be accessed via a load balancer? – HBruijn Jun 11 '23 at 13:04
  • Good point @HBruijn . It is a waste and would make the IPv4 shortages worse. At the same time, it's also true we could (potentially) only use IPv6 addresses (AFAIK Amazon doesn't have this option now but could in the future). – F. Alessandro Jun 12 '23 at 20:13

1 Answer


One reason to use private subnets is defense in depth.

For example, say you have an EC2 server sitting in a public subnet. A junior administrator tries to give the instance outgoing access on port 22 so it can pull code from GitHub, but accidentally opens up port 22 incoming. Now you have a private server on the internet, protected only by a certificate or a username and password. If the instance were in a private subnet, it wouldn't matter, because it's not reachable from the internet. If this were port 443 instead of port 22, an internal application could now be exposed to the internet.
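The mistake described above can be sketched in Terraform (hypothetical resource names); the only difference between the intended rule and the dangerous one is a single word:

```hcl
# Intended: outbound-only SSH so the instance can pull code from GitHub.
resource "aws_security_group_rule" "git_pull" {
  type              = "egress" # typing "ingress" here instead would
                               # open SSH to the whole internet
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.app.id # assumed instance SG
}
```

In a private subnet the same typo is contained: with no route from an Internet Gateway to the instance, the accidental ingress rule has nothing to let in.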

In summary, humans make mistakes, and having multiple protections against misconfigurations is good practice.

The role of security groups

On-premises servers were often isolated using subnets; an enterprise could have hundreds or thousands of them. In AWS I don't believe that's necessary, because security groups can fulfill part of that role. I still believe public and private subnets are important, but I tend to use security groups for tiering / roles in the cloud.

For example, for a three tier system I will have something like the following:

  • Load balancer is in a public subnet, with its own security group. It only has access to the web server SG.
  • Web server is in a private subnet, with its own security group. It allows access from the load balancer SG, and it has access to the application server SG.
  • App server is in a private subnet, with its own security group. It allows access from the web server SG, and it has access to the DB SG.
  • DB server is in a private subnet, with its own security group. It allows access from the app server SG, and has no outgoing access.

In this setup I would have six subnets - three public and three private.
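A minimal sketch of the SG chaining described above, showing just the load balancer and web tiers (the names and the VPC variable are assumptions):

```hcl
resource "aws_security_group" "lb" {
  name   = "lb-sg"
  vpc_id = var.vpc_id # assumed variable
}

resource "aws_security_group" "web" {
  name   = "web-sg"
  vpc_id = var.vpc_id

  # Accept traffic only from the load balancer's SG, not from any CIDR.
  ingress {
    from_port       = 443
    to_port         = 443
    protocol        = "tcp"
    security_groups = [aws_security_group.lb.id]
  }
}
```

The app and DB tiers follow the same pattern, each SG referencing the one in front of it, so reachability is defined by role rather than by IP ranges.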

Tim
  • Hi @Tim, thank you for your answer. Essentially, as you said, "having multiple protections against misconfiguration is good practice". I like the example of the "tiering system". In a smaller environment (easier to introspect and secure), you could save money on a NAT Gateway as well as avoid messing with multiple subnets and their own configurations. It is a cloud environment after all, so keeping it [KISS](https://en.wikipedia.org/wiki/KISS_principle) makes sense. As of now this seems the best/most objective reason, but I'll wait a bit more to see if someone else can think of other points :) – F. Alessandro Jun 12 '23 at 20:42
  • You need to consider your workload. My personal web server that runs 6 websites sits in an EC2 public subnet, no NAT. If it goes down, oh well, it's backed up. On the other hand some corporate sites have RTO of 99.9% and have health or financial information in them, for those $30 a month plus the bandwidth charges for a NAT gateway is the tip of the iceberg. Enterprise AWS bills can have a breathtaking number of zeros. – Tim Jun 12 '23 at 21:00