
I have several branch offices with a single server at each site. These servers each function as domain controller, DNS, DHCP, file and print server, and ConfigMgr distribution point. They currently run Windows Server 2003. These are all small 5-15 user offices with crappy cable internet, so brief WAN and power outages are pretty frequent.

I am replacing the servers with new hardware and upgrading to Windows Server 2012 R2.

I intend to use Hyper-V to make future migrations and hardware changes less painful, so hardware upgrades can be as simple as live migrating the VM (see the sketch after the options below). But I am not sure what the best setup would be. There are a few options I have come up with:

Option 1

  • Physical machine: Server 2012 R2 Core, domain joined, only the Hyper-V role.

  • VM1: 2012 R2 Core, domain controller (possibly an RODC), DHCP, DNS.

  • VM2: 2012 R2 full, file, print, and ConfigMgr.

Option 2

  • Same as Option 1, but with the host not joined to the domain.

Option 3

  • Physical machine: Server 2012 R2 Core with Hyper-V, domain controller (again, possibly an RODC), DNS, DHCP.

  • VM: 2012 R2 full, file and print server, and ConfigMgr.
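To make the "hardware upgrades via live migration" idea concrete, this is roughly the PowerShell I have in mind. A minimal sketch only: the host name, VM name, and storage path are placeholders, and Kerberos-mode migration assumes domain-joined hosts, which is itself part of the question.

    # One-time setup on each host: allow incoming/outgoing live migrations.
    # Note: live migration requires domain-joined hosts either way (CredSSP
    # is the alternative auth mode, but it means logging on to the source
    # host first) - which is part of why Option 2 makes this harder.
    Enable-VMMigration
    Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos

    # Shared-nothing live migration: move the running VM and its storage
    # across the network to the replacement hardware.
    Move-VM -Name "BRANCH-FS1" -DestinationHost "NEWHOST01" `
            -IncludeStorage -DestinationStoragePath "D:\Hyper-V\BRANCH-FS1"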

My concern here is that after extended power outages (which happen every few months) the host server will start back up before the site's VPN link is ready and won't be able to talk to any domain controllers. This has caused us problems at larger offices, though that was on a Hyper-V cluster with CSV storage, which I think has more authentication issues to worry about.
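For the guest side I can at least stagger automatic start so the local DC VM is running before the member server boots. A minimal sketch, with placeholder VM names and guessed delay values:

    # VM1 is the (RO)DC: bring it up immediately at host boot.
    Set-VM -Name "BRANCH-DC1" -AutomaticStartAction Start -AutomaticStartDelay 0

    # VM2 is a domain member: give the DC (and ideally the WAN) a head start.
    Set-VM -Name "BRANCH-FS1" -AutomaticStartAction Start -AutomaticStartDelay 300

What that doesn't control is the host's own behavior when it boots with no reachable DC, which is the part I'm unsure about.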

Option 2 eliminates the host's reliance on AD, at the cost of more management work, and it makes things like VM migrations to other hosts harder.

Option 3 looks like it would be the most resilient to WAN failures, but it violates the best practice of having Hyper-V hosts do nothing but Hyper-V. This is the one I am leaning towards, though I would prefer Option 1 if I could be fairly certain there wouldn't be issues with the server booting while the WAN is down.

Any concerns or other options I'm missing? What would be the best route to take here?

Grant

1 Answer


I'd use Option #1; it gives you more host manageability.

I'd focus other efforts on making your remote sites more resilient...

  • Power is not difficult to protect in the branch-office scenario. Buy UPS batteries with extended-run capacity. For example, my customers dealing with volatile Los Angeles power conditions have between 60 and 240 minutes of battery runtime.

  • In the case of a site-to-site VPN outage, the tunnel (and networking equipment) would likely be restored before a server boots. If you want insurance, you can also gate VM startup on domain controller reachability; see the sketch after this list.

  • For ISP redundancy, consider a link balancer like the Elfiq and bring a diverse connection to each remote site.
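As a further safety net, a boot-time scheduled task can hold the guests until a domain controller answers. A rough PowerShell sketch; the DC name, port, VM names, and timeout are placeholders, not values from your environment:

    # Startup script (scheduled task at host boot): wait until a hub-site
    # DC answers on LDAP, then start the guests. Give up after 30 minutes
    # so an extended WAN outage doesn't keep the VMs down forever.
    $dc      = 'DC01.corp.example.com'
    $timeout = (Get-Date).AddMinutes(30)

    while ((Get-Date) -lt $timeout) {
        if ((Test-NetConnection -ComputerName $dc -Port 389).TcpTestSucceeded) { break }
        Start-Sleep -Seconds 30
    }

    Start-VM -Name 'BRANCH-DC1', 'BRANCH-FS1'

If you do this, set the VMs' AutomaticStartAction to Nothing so the script is the only thing starting them.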

ewwhite
  • Excellent points, thanks. They do have decent UPS units that last about 30-45 minutes, which takes care of most outages. Unfortunately, some are in almost-rural locations that get 3-4 hour outages every few months (seemingly always on a holiday weekend when I'm on call). Extending their runtime may help. The network gear does come up first, but their crappy cable modems sometimes take 20 minutes to reconnect. Redundant ISPs would mean falling back on dialup (no other high-speed options, yay monopolies), but that might be an option. – Grant Jan 04 '15 at 04:42
  • You don't need high-speed failover. T1 or DSL are adequate. The Elfiq units I linked can also take 3G/4G USB cards. – ewwhite Jan 04 '15 at 04:45