This is one of those thoughts that has been tickling the back of my mind.
I'm working on a home testbed for a high-availability cluster built from plain computers, with no SAN or NAS for storage. The idea is: if I wanted a server or two that stayed available even when hardware failed, and I had some old machines lying around to do it on, could I build it? Think RAID-1 at the whole-system level.
I was thinking of doing it by installing a Linux distro, setting up DRBD in primary/primary mode with Pacemaker and STONITH for fencing, then installing Xen to virtualize the server(s) that would actually provide the services being replicated.
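For reference, here's roughly the DRBD piece I have in mind (DRBD 8.4-style syntax; the hostnames, backing disks, and addresses are all made-up placeholders):

    resource r0 {
        net {
            protocol C;                  # synchronous replication, needed for dual-primary
            allow-two-primaries yes;     # lets both nodes hold Primary at once
        }
        on node1 {
            device    /dev/drbd0;
            disk      /dev/sdb1;         # backing partition on each box
            address   10.0.0.1:7789;     # replication over the dedicated crossover link
            meta-disk internal;
        }
        on node2 {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.2:7789;
            meta-disk internal;
        }
    }

My understanding is that primary/primary also means putting a cluster-aware filesystem (OCFS2 or GFS2) on the DRBD device, since a normal filesystem mounted on both nodes at once would corrupt itself.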
Recent setups at work with VMware ESXi had me wondering whether there might be some advantage to instead installing Linux VMs on a couple of ESXi machines, then using DRBD and Pacemaker/STONITH to replicate the server services between virtual machines on the two ESXi hosts (removing Xen from the equation, since I could spin up other VMs on ESXi).
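If I went the ESXi route, I assume fencing would go through the hypervisor rather than a power switch. Something like this is the sketch in my head, using the fence_vmware_soap agent from the fence-agents package (all names, addresses, and credentials here are invented, and I'm pointing both nodes at a single host purely to keep the example short; with two standalone ESXi boxes I'd presumably need one fencing resource per host, or to go through vCenter):

    crm configure primitive st-esxi stonith:fence_vmware_soap \
        params ipaddr=esxi1.example.com login=fencer passwd=secret ssl=1 \
               pcmk_host_map="node1:node1-vm;node2:node2-vm" \
        op monitor interval=60s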
At the time, I liked how the vSphere client gives a more or less straightforward view of performance stats, disk use, and so on for the VMs, whereas I've seen nothing for managing Xen or DRBD beyond the command line (although I hate needing a Windows system to monitor the VMware server).
On second thought, it would add a layer of complexity and probably networking headaches. I could more easily run Linux/DRBD replication on dedicated hardware: each machine would have one NIC facing the switch and one NIC crossed over to the other node for disk replication I/O. I also want to see what I can do to build such a cluster for "free", and VMware's solutions beyond ESXi are definitely not cheap.
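To make the dedicated-hardware version concrete, the network side on each node would be something like this (Debian-style /etc/network/interfaces, addresses invented; the second stanza matches the replication addresses in the DRBD sketch above):

    # eth0: faces the switch, carries client/service traffic
    auto eth0
    iface eth0 inet static
        address 192.168.1.11
        netmask 255.255.255.0
        gateway 192.168.1.1

    # eth1: crossover cable to the other node, DRBD replication only
    auto eth1
    iface eth1 inet static
        address 10.0.0.1
        netmask 255.255.255.252   # /30, just the two peers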
Has anyone else tried a configuration like this, running DRBD inside VMs instead of on bare metal? Are there configuration advantages beyond the performance/management monitoring you get with the free vSphere client (or the "free" virtualization platform of your choice)?