
I set up the production environment for my service in an Amazon VPC in Oregon:

  • 2 availability zones
  • 1 public subnet (containing the bastion, NAT, and ELBs) and 3 private subnets (database, web servers, and configuration/supervision) in each availability zone.
  • 11 security groups

There are about 25 VMs for now, and hopefully that number will grow.

Now I'm going to set up the staging environment, but I'm not sure where to put it:

  1. Should I simply put the staging instances next to the production instances? Basically, reuse the same Amazon region, availability zones, subnets, and security groups? I would just need to create new ELBs pointing to the staging instances, and that's it. Simple.
  2. Or should I put the staging instances in their own subnets, but still in the same region/availability zones? The public subnets would have to be shared though, because you cannot have two public subnets in a single availability zone. Having separate subnets might make it easier to manage things, and I could have dedicated routing rules to go through different NAT instances, and possibly a different bastion as well. More complex, but tighter security. I don't think I would need to double the security groups though, because a single overall network ACL could forbid traffic between the production and staging subnets.
  3. Or should I duplicate the whole setup in a different VPC? Since I can only have one VPC per Amazon region, I would have to do this in a separate region.
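To make option 2 concrete, here is a minimal sketch of how one VPC CIDR could be carved into per-environment, per-AZ, per-tier private subnets, using only Python's standard library. The CIDR, AZ names, and tier names are illustrative assumptions, not my actual values:

```python
import ipaddress

# Hypothetical subnet plan for option 2: production and staging share one
# VPC but each gets its own non-overlapping subnets. All values below are
# illustrative assumptions.
VPC_CIDR = "10.0.0.0/16"
ENVIRONMENTS = ["production", "staging"]
AZS = ["us-west-2a", "us-west-2b"]
TIERS = ["database", "web", "supervision"]  # the three private tiers

def subnet_plan(vpc_cidr=VPC_CIDR):
    """Return {(env, az, tier): cidr} with non-overlapping /24 subnets."""
    subnets = ipaddress.ip_network(vpc_cidr).subnets(new_prefix=24)
    plan = {}
    for env in ENVIRONMENTS:
        for az in AZS:
            for tier in TIERS:
                plan[(env, az, tier)] = str(next(subnets))
    return plan
```

With clean, predictable per-environment CIDR ranges like these, the network ACL rules separating production from staging become simple to express.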

The whole point of the staging environment is to be identical to the production environment (or as close as possible). So setting up staging environment in a different Amazon Region just feels wrong: this rules out option 3, doesn't it?

Option 1 is closest to the target of being as close to production as possible. But having staging and production environments in the same subnets feels a bit like a potential security issue, right? So I'm somewhat leaning towards option 2, but I wonder if the potential security issues are serious enough to justify having twice as many subnets to manage?
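The single network ACL mentioned in option 2 can be sketched as follows. This is a hedged sketch, not a tested setup: it only builds the keyword arguments one would pass to boto3's `ec2_client.create_network_acl_entry()`, and the ACL id, CIDR, and rule numbers are made-up placeholders:

```python
# Hypothetical sketch of option 2's network ACL forbidding traffic between
# the production and staging subnets. Attach the resulting ACL to the
# staging subnets with peer_cidr set to the production range (and vice
# versa for an ACL on the production subnets).
def deny_peer_env(acl_id, peer_cidr, first_rule=100):
    """Build deny-all entries (ingress and egress) for the other environment's CIDR."""
    entries = []
    for offset, egress in enumerate((False, True)):  # ingress first, then egress
        entries.append({
            "NetworkAclId": acl_id,
            "RuleNumber": first_rule + offset,
            "Protocol": "-1",        # -1 means all protocols
            "RuleAction": "deny",
            "Egress": egress,
            "CidrBlock": peer_cidr,
        })
    return entries

# Example usage (IDs are made up):
# for kwargs in deny_peer_env("acl-12345678", "10.0.0.0/20"):
#     ec2_client.create_network_acl_entry(**kwargs)
```

Since network ACL rules are evaluated in ascending rule-number order, these deny rules would need lower numbers than any broader allow rules on the same ACL.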

And what about the testing environment? It should resemble production as well, but it does not need to match it as closely: everything can fit on a few instances, with no need for ELBs and the rest. Perhaps this environment could fit in a single dedicated subnet in the same VPC? Being in the same VPC, it would have easy access to the git repository, the Chef server, supervision tools, OpenVPN access, etc.

I'm sure many people have been through these considerations. What's your take on this?

Thanks.

MiniQuark

3 Answers


I would say option 3 gives the best isolation, and prevents your production environment from being affected by changes in the staging environment. Also, your assumption here seems to be wrong:

"Since I can only have one VPC per Amazon Region, I would have to do this in a separate region."

As far as I know, you can create multiple VPCs within a single region.

antimatter
  • Oh you're right, I don't know where I got this wrong assumption. Thanks. – MiniQuark Apr 21 '13 at 17:12
  • Ok, I get it: there used to be a limit of 1 VPC per region per AWS account, but that was during beta, and this limit was lifted. Here are the current VPC limits: http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_vpc – MiniQuark Apr 21 '13 at 17:28
  • Hi, there are a few services that are currently common to prod, staging and testing: DNS, git, Chef server, NTP. If I set up different VPCs, then I will have to either duplicate them (for git and the Chef server, that would be a real pain), or set them up in one VPC and make them available to the other VPCs through the Internet. Configuring everything in one VPC avoids this issue. I'm still hesitating. I guess there's no absolute answer. I wish Amazon would enable secure cross-VPC communications (or could I set up an OpenVPN connection between VPCs?). – MiniQuark Apr 23 '13 at 08:08
  • DNS - for local resolution? Better to have one in each environment? – antimatter Dec 12 '13 at 14:51
  • Git - you could use GitHub and save the trouble of setting up your own git server. Chef - I do not use it and cannot comment. NTP - if you have a NAT server in each VPC, you can route NTP traffic to the Internet and sync time from the ntp.org server pool; that also applies to servers in the private subnets. – antimatter Dec 12 '13 at 15:03

I would suggest going with option 2. As your AWS infrastructure grows, you will need directory services (name servers, user directory, VM directory, lookup services, etc.). If you have two VPCs, sharing the directory services will not be easy. Also, if you need a code repository (e.g. GitHub) or build tools (e.g. Jenkins), having three separate VPCs for dev, staging, and production will make things really complicated.

Saqib Ali

I would suggest a separate VPC for each environment. All the shared resources can be placed in a shared VPC, with VPC peering between each environment's VPC and the shared VPC.
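As a minimal sketch of that layout: each environment's route table needs a route to the shared VPC's CIDR via its peering connection (plus the reverse routes in the shared VPC). This only builds the kwargs one would pass to boto3's `ec2_client.create_route()`; all IDs and the CIDR are illustrative assumptions:

```python
# Hypothetical sketch of the shared-VPC peering layout: each environment
# VPC routes the shared VPC's CIDR through its own peering connection.
# IDs and CIDR below are made-up placeholders.
SHARED_CIDR = "10.100.0.0/16"  # assumed shared-services VPC CIDR

def peering_routes(env_route_tables, peering_ids, shared_cidr=SHARED_CIDR):
    """Build one route per environment toward the shared VPC.

    env_route_tables: {env_name: route-table-id}
    peering_ids:      {env_name: vpc-peering-connection-id}
    """
    routes = []
    for env, rtb in env_route_tables.items():
        routes.append({
            "RouteTableId": rtb,
            "DestinationCidrBlock": shared_cidr,
            "VpcPeeringConnectionId": peering_ids[env],
        })
    return routes

# Example usage (IDs are made up):
# for kwargs in peering_routes({"production": "rtb-aaa111", "staging": "rtb-bbb222"},
#                              {"production": "pcx-111", "staging": "pcx-222"}):
#     ec2_client.create_route(**kwargs)
```

Note that VPC peering is not transitive, so the environments can each reach the shared VPC without being able to reach each other through it.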

Nitin AB