
I asked this on Stack Overflow when Server Fault was in beta. Giving it a second shot.

I have 4 servers in two physically separate locations. Each server is running Windows Server 2003 Standard and Microsoft SQL Server 2005 Standard Edition.

  • Server A1 and A2 in datacenter A
  • Server B1 and B2 in datacenter B

These servers are connected to each other with a VPN over the internet.

What would be the best way to provide High Availability between both datacenters?

I have considered the following ideas.

Scenario A

  • DNS Round robin between Datacenter A and B
  • Mirroring Databases on Servers A1 and B1 over the VPN
  • Servers A2 and B2 will have logs shipped to them as Warm Standbys for disaster recovery.
  • All web servers will point database traffic over the VPN to Server A1 until failover, at which point they will direct requests to B1 (see the connection-string sketch after this list)
  • The servers will be running Microsoft Load Balancer software.
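
For the mirroring half of this scenario, ADO.NET already supports naming both mirror partners in the connection string, so the web servers can ride out a failover from A1 to B1 without a code change. A minimal C# sketch, assuming placeholder server and database names (MyAppDb is hypothetical):

    using System;
    using System.Data.SqlClient;

    class MirroredConnectionDemo
    {
        static void Main()
        {
            // "Failover Partner" is the SqlClient keyword for SQL Server 2005 database mirroring.
            // The client tries A1 first and falls back to B1 after a mirroring failover.
            string connectionString =
                "Data Source=A1;Failover Partner=B1;" +
                "Initial Catalog=MyAppDb;Integrated Security=True;Connect Timeout=10";

            using (SqlConnection conn = new SqlConnection(connectionString))
            {
                conn.Open();
                Console.WriteLine("Connected to: " + conn.DataSource);
            }
        }
    }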

Scenario B

  • Mirror Databases on Servers A1 and A2
  • A1 and A2 will be running Microsoft Load Balancer software
  • B1 and B2 will be running Microsoft Load Balancer software.
  • Log ship to servers B1 and B2 as warm standbys for manual switchover between datacenters. If Datacenter A goes down, change the DNS record to point to Datacenter B (a reachability probe for that decision is sketched below).
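
Because the switchover here is manual, something still has to tell you that Datacenter A is genuinely unreachable before you repoint DNS. A minimal sketch of such a probe, with placeholder names and timeouts (in practice you would alert an operator rather than just print):

    using System;
    using System.Data.SqlClient;

    class PrimaryProbe
    {
        static void Main()
        {
            // Short timeout so a dead VPN link or downed server fails fast.
            string primary = "Data Source=A1;Initial Catalog=master;" +
                             "Integrated Security=True;Connect Timeout=5";

            try
            {
                using (SqlConnection conn = new SqlConnection(primary))
                {
                    conn.Open();
                    Console.WriteLine("A1 reachable - no action needed.");
                }
            }
            catch (SqlException ex)
            {
                Console.WriteLine("A1 unreachable (" + ex.Message + ").");
                Console.WriteLine("Consider bringing B1 online from log shipping and updating DNS to Datacenter B.");
            }
        }
    }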

I am a C# programmer, not a system designer, so I'm at a loss for ideas. I'm trying to work with the hardware and data center configuration we have. If necessary, we can purchase hardware load balancers.

Dennis Williamson

3 Answers


I think if you want to do something this reliable without a massive transit bill, you need a dedicated connection between the two data centres (we do this, and have a backup VPN route over the internet with a higher routing cost in case there's a problem with the dedicated link).

Really, the starting point for this is to lay out what the business cost of downtime will be - you need to consider whether the increase in cost for cold vs. warm vs. hot failover is warranted. Without knowing more about that, it's hard to give good guidance on the different options available.
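
One way to make that comparison concrete is to put rough numbers on both sides: expected annual downtime loss versus the yearly cost of the warmer option. A toy C# calculation with entirely made-up figures:

    using System;

    class DowntimeCost
    {
        static void Main()
        {
            // Hypothetical inputs - replace with your own business figures.
            double outagesPerYear = 2;      // expected major outages per year
            double hoursPerOutage = 4;      // time to recover with the current (cold) setup
            double costPerHour    = 1500;   // lost revenue plus staff time per hour down

            double expectedAnnualLoss = outagesPerYear * hoursPerOutage * costPerHour; // 12,000

            // Compare against the yearly cost of a warmer option, e.g. a dedicated link
            // plus hardware load balancers amortized over three years.
            double warmFailoverAnnualCost = 9000;

            Console.WriteLine("Expected annual downtime loss: " + expectedAnnualLoss);
            Console.WriteLine("Warm failover annual cost:     " + warmFailoverAnnualCost);
            Console.WriteLine(expectedAnnualLoss > warmFailoverAnnualCost
                ? "Warmer failover likely pays for itself."
                : "A cheaper recovery option may be enough.");
        }
    }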

Whisk
  • If you take that option then you also need to consider how it breaks when the interconnecting link goes down but each net link is up. – JamesRyan Jun 23 '10 at 11:54
  • Yep - that's why you setup a tunnel through the internet with a higher cost that'll get used if the main connection goes down. – Whisk Jun 23 '10 at 13:44
  • +1 for calculating the business cost of downtime before planning a solution. – Element Oct 25 '10 at 02:47

The cheapest solution we set up for our customers is multiple read-only databases (log shipping is often sufficient). Any updates to the database are then sent to the primary server, with a "flag" in the code indicating that the primary server is down (set automatically on an outage). If that flag is set, we disallow writes in some user-friendly way.
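
A minimal sketch of that write-gating pattern in C#, assuming ADO.NET; the class, flag, and connection details here are hypothetical rather than taken from the answer:

    using System;
    using System.Data.SqlClient;

    static class WriteGate
    {
        // Set by a monitoring job when the primary stops answering; cleared when it recovers.
        public static volatile bool PrimaryIsDown = false;

        public static void ExecuteWrite(string connectionString, string sql)
        {
            if (PrimaryIsDown)
            {
                // "User friendly" refusal instead of a timeout or exception deep in the stack.
                throw new InvalidOperationException(
                    "Updates are temporarily unavailable; please try again shortly.");
            }

            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlCommand cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }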

Sometimes a setup like "admin.example.com" on a single web server / single SQL Server can be used for all updates - especially in a scenario where you have a customer base who make updates and a "user" base who basically just view what the customers have published (real estate listings, for example, fit this model fairly well).

As to a dedicated link instead of VPN - if you are in two well-connected data centers there's likely little concern over the traffic between them, and paying for that traffic will likely cost less than a dedicated link. But you will see a significant decrease in performance from using the VPN link instead of a LAN, especially if you have a "chatty" SQL application (lots of small queries). Dedicated or VPN will both have this decrease, although VPNs tend to use TCP instead of UDP, which roughly doubles latency.

As to DNS: set up DNS servers at each location so you have redundant DNS. You can have each DNS server point only to its local web servers instead of using round robin (that way, if one DNS server can't be reached, you assume the web server at that location is also down). Otherwise, with plain round robin, you could still be handing out the "A" record for the down location.

These things aren't a simple setup, though; you need to spend some $ on planning to have it all work properly. For a second location I usually tell people to budget at least 3x the cost of a single location - especially in the planning phase - plus programmers' time to implement the necessary changes.

We have some better options where we split a customer between two buildings in the same vicinity (in Equinix Ashburn (DC) there are several buildings). We run our backbone connections split between the buildings - that gives us failover between buildings yet all the advantages of a LAN for planning. If you just need 2-8 servers, finding someone like us to set it up and assist in the planning is going to be much less costly than trying to do it yourself.

Splitting East Coast / West Coast data centers is almost always beyond the budget and complexity that people with 4-8 servers can take on.

Good luck.

sysadmin1138

What about load balancing to EC2 - would that cost less than a second datacenter?

Alex