
The company owner wants high availability, as even an hour of downtime kills billable revenue for 20 accountants.

I've got a one-year-old Dell PE T110-II running 2012 Standard and an ancient PE 840 running 2003 Standard. Loads on these servers are absolutely minimal... DNS, AD, remote access (not Terminal Services), file and print services... no apps, no SQL, no web.

My plan was to dump the 2003 server, buy a second PE T110-II, then create a virtualized failover cluster with Hyper-V. Shared storage would be provided by a pair of redundant RAID 1 Synology boxes.
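
For the record, here is roughly what I understand the cluster build to look like in PowerShell. This is only a sketch: the host names (HV1, HV2), cluster name, VM name, and IP address are made-up placeholders, and it assumes both hosts are already domain-joined with the Synology LUNs presented to each.

```powershell
# Sketch only: HV1/HV2, the cluster name, VM name, and IP are placeholders.

# Install failover clustering on both Hyper-V hosts
Invoke-Command -ComputerName HV1, HV2 -ScriptBlock {
    Install-WindowsFeature Failover-Clustering -IncludeManagementTools
}

# Validate the configuration first; Microsoft only supports validated clusters
Test-Cluster -Node HV1, HV2

# Create the two-node cluster with a static management address
New-Cluster -Name ACCT-CLU1 -Node HV1, HV2 -StaticAddress 192.168.1.50

# Make an existing VM highly available (its files must sit on the shared storage)
Add-ClusterVirtualMachineRole -Cluster ACCT-CLU1 -VMName "FILESRV"
```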

I'm getting so much chaff from the migration consultants I've talked to. One guy says I need a PE T430 for dual power supplies and other redundant goodies, but it's total overkill power-wise, and anyway the whole point of the cluster is to be able to keep one server up.

Another guy says "it's too complicated", but from the many articles I have read, setting up clustering and virtualization is pretty straightforward, especially for someone with Server know-how.

And a third guy says, "Well, maybe a good idea, but the T110's are too puny."

Seems to me that two clustered servers plus two redundant Synology boxes gives me high availability for both the servers and storage.

Am I missing something here? TIA.

  • RTO/RPO, what homemade recipe can you whip up that meets both? – jscott Nov 24 '15 at 03:06
  • ... how often do you get 1 hour downtime with your current setup? What's the nature of the downtime that kills billable revenue? It can't really be AD access because you have two servers (both domain controllers?) and credentials are cached. If it's printing, you can share printers from both servers or from a desktop in a pinch. Both servers can be DNS servers, if it's file access - Offline Files or a replica on both servers might help. I'm not saying Hyper-V is wrong, but it's a big jump in complexity for ... what specific gain? – TessellatingHeckler Nov 24 '15 at 03:26
  • Can you wait? Windows Server 2016 will add replication to disk storage, taking the Synology out of the price equation. – TomTom Nov 24 '15 at 18:22
  • The failure rate for my old PE 840 has been increasing. It failed three times in two weeks, was stable for more than two months, then failed again last week. Each time, I have avoided downtime for the staff because I am the guy that goes in at 11 at night or 4 in the morning to get the thing back up. Boss actually wants me to stop doing that by buying new equipment. – StrongEagle Nov 25 '15 at 18:35
  • @TessellatingHeckler - I have two servers but set up in the most gummed-up way possible. The 2003 box handles AD, DNS, and remote access... DHCP is handled by the firewall (which I will change). Most printers in the place are local to the workstation. What I don't understand (my knowledge of Windows Server operation has many gaps) is what the users would have to do in case of a server failure if I set up two AD servers, two DNS servers, etc. With clustering, I understand they do nothing. If two servers without clustering work, I'm all ears. – StrongEagle Nov 25 '15 at 18:58
  • @TessellatingHeckler - what kills billable revenue is the accountants' inability to access files from the file server... Excel transaction files, tax returns, compliance documents. All the actual work is performed on each user's workstation... no server-based apps. From my earlier description you can see that if the file server (2012) dies, no files. If the domain controller dies, no access, especially remotely. – StrongEagle Nov 25 '15 at 19:03
  • Each service handles it differently. Two domain controllers - it's very transparent, they replicate to each other, and computers will talk to one; if it doesn't work, they look for another. As long as they have DNS. Two DNS servers is quite easy - they need to both be AD integrated DNS, configured to serve the same records. But then in DHCP you can give out 2x DNS server addresses to all clients, and they try the first one, then the second one. File shares are more involved: you need Distributed File System (DFS) to present an SMB share with a server-independent name, and replicate data. (A PowerShell sketch of the DFS piece follows these comments.) – TessellatingHeckler Nov 25 '15 at 19:07
  • Redundant printing, I've never tried, and remote access would be interesting. Either way, while an HA cluster is a fairly simple concept - the implementation is still intricate - especially if you follow the full recommendations for isolated, redundant networks, multipath storage links and so on. You're going to have to learn and set up some new things whichever way you choose, and you kinda need solid DC and DNS servers to build a cluster on top of, anyway. – TessellatingHeckler Nov 25 '15 at 19:11
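
To make the two-server, no-cluster approach from the comments concrete, here is a rough PowerShell sketch of the DFS piece. Everything in it is a placeholder assumption: the server names (FS1, FS2), the domain (corp.example.com), and the paths. It also assumes the Server 2012 R2 DFS management tools, which is where the DFSR cmdlets shipped.

```powershell
# Sketch only: FS1/FS2, corp.example.com, and all paths are placeholders.

# Domain-based DFS namespace: clients open \\corp.example.com\Files,
# not a specific server name, so either server can satisfy the request
New-DfsnRoot -Path "\\corp.example.com\Files" -Type DomainV2 `
    -TargetPath "\\FS1\Files"
New-DfsnRootTarget -Path "\\corp.example.com\Files" -TargetPath "\\FS2\Files"

# Replicate the underlying folder between the two servers with DFSR
New-DfsReplicationGroup -GroupName "Files" |
    New-DfsReplicatedFolder -FolderName "Files"
Add-DfsrMember -GroupName "Files" -ComputerName FS1, FS2
Add-DfsrConnection -GroupName "Files" `
    -SourceComputerName FS1 -DestinationComputerName FS2
Set-DfsrMembership -GroupName "Files" -FolderName "Files" `
    -ComputerName FS1 -ContentPath "D:\Files" -PrimaryMember $true -Force
Set-DfsrMembership -GroupName "Files" -FolderName "Files" `
    -ComputerName FS2 -ContentPath "D:\Files" -Force
```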

2 Answers


I've got two PE T110 II servers running my home virtualization lab, and while I might ordinarily suggest using different servers for an Enterprise implementation, I think in this scenario that the T110 II is probably OK if you're ultimately not going to deploy more than a handful of virtual machines. As far as the Synology storage is concerned, I'd take a look at the Synology website and see if the particular model you want to use has been "certified" for Hyper-V or vSphere before you move forward with using them.

Take note that, at its most basic, a high availability cluster for virtual machines offers high availability at the virtual machine level, meaning that if one of the virtualization hosts goes down the virtual machines can be restarted on the remaining host. So you're protected against host failures, BUT you have no application or guest OS level high availability, which means that if the OS or applications crash inside the virtual machines you'll have no high availability for them. You need to account for that in your proposed solution.
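
One partial mitigation, if you end up on Server 2012 for both the hosts and the guests, is the failover cluster's VM Monitoring feature, which watches a service inside a guest and can restart or fail over the VM when that service keeps failing. A minimal sketch, where the VM name (FILESRV) and monitored service are placeholder assumptions:

```powershell
# Inside the guest OS: allow the cluster to monitor this VM
Enable-NetFirewallRule -DisplayGroup "Virtual Machine Monitoring"

# On a cluster node: watch the Print Spooler service in the FILESRV guest;
# if it fails repeatedly, the cluster restarts or fails over the VM
Add-ClusterVMMonitoredItem -VirtualMachine "FILESRV" -Service "Spooler"
```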

Additionally, the question "Is Server Clustering Right for a 25 Person Business?" really isn't about the number of employees; it's about the financial cost to the business if your line-of-business (LOB) applications and services are unavailable. How much revenue would be lost if those LOB applications and services were unavailable for an hour/day/week, and how does a high availability virtual machine cluster address that potential loss?

joeqwerty

I would say sure, why not - downtime is a productivity killer. I think you are on the right path using virtualization. However, keep in mind that your shared storage is still a single point of failure, and you are adding two more - your switch and your network. So you need to take this into consideration.
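
For the network piece, the usual mitigation is two NICs per host cabled to two switches, with MPIO across the iSCSI paths. A rough sketch of the Windows side, assuming the Synology exposes iSCSI on two interfaces (both IP addresses are placeholders):

```powershell
# On each Hyper-V host: enable multipath I/O for iSCSI (a reboot is needed)
Install-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Register one portal per NAS interface, each reached via a different switch
New-IscsiTargetPortal -TargetPortalAddress 192.168.10.10
New-IscsiTargetPortal -TargetPortalAddress 192.168.20.10

# Connect to the target once over each path; MPIO combines the sessions
$t = Get-IscsiTarget
Connect-IscsiTarget -NodeAddress $t.NodeAddress -TargetPortalAddress 192.168.10.10 `
    -IsMultipathEnabled $true -IsPersistent $true
Connect-IscsiTarget -NodeAddress $t.NodeAddress -TargetPortalAddress 192.168.20.10 `
    -IsMultipathEnabled $true -IsPersistent $true
```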

I have a similar setup in my company, using 3 VMware ESX nodes sharing NFS storage. I'm now exploring the option of using the local storage of each ESX node with VMware's storage clustering feature, but the license cost is substantial.

Essentially, if you want to do it right you need to have extra money in your budget allocated specifically for high availability. Doing it ad hoc won't take you far.

dtoubelis