4

So we have a live server farm, conventional hardware, not virtualized. This is to stay that way for the time being. The servers we are talking about are: load balancer, DB master, DB slaves (2x), web servers (2x), CMS server. All in all 7 servers per farm.

We want to have one or more rather exact "copies" of the live setup for in-house purposes.

  • Testing/Staging: to find bugs before they hit live, bugs that might slip through if we did not use the same software configuration
  • Debugging/Benchmarking: same as testing, but with additional tools and debug flags, and isolated from Testing/Staging so as not to interfere with QA. Intended to get a deeper insight into how things behave on our live systems
  • Development/Experimental: changing system components, software versions, libraries and configuration to improve performance, developer effectiveness, future-proofing of our systems, etc.

All in all 3 virtual "farms" of 7 virtual servers each, neatly isolated.

Now I know this can be achieved with many different flavors of virtualization. The question is: which is the best? VMware, not being open source, is something we would rather not use for strategic reasons. Looking at Xen and KVM, the two biggest players in open-source virtualization, I would love to hear some advice on what to select and how. The web seems undecided.

Additional Info:

  • The lifetime of the server hardware that we purchase will be 2-3 years
  • There is the possibility that we might migrate systems into cloud environments later, if that is a factor to consider in the selection of the virtualization technology
  • Isolation of server farms/servers under high load is important. QA should not suffer when someone wreaks havoc in the experimental farm
  • Efficient resource usage is of course appreciated (memory overcommitment/shared pages? Automatically shared objects on disk, like the Linux-VServer hashify feature?)
  • Regular maintenance requirements and ease of management

I know things are in flux, but I would appreciate your opinion on what to choose right now, seeing that we want to live with the technology for at least 3 years and build up and reuse know-how. Also, maybe in the end we will all realize that neither Xen nor KVM decides the battle, but other factors do. Enlightenment in that regard is even more valuable to us right now.

p.s.: and let's not flame :D

4 Answers

8

We use Xen currently, but I think that in 2011 we will be migrating to KVM. There are some reasons why:

  • KVM development is more integrated with the Linux kernel than Xen's development.
  • KVM VMs run as processes under the Linux kernel. That has many implications for scheduling, memory management, and so on. It also permits KVM to overcommit memory (it simply swaps out the VM/process memory), all of it using tested and proven code from the kernel. Xen uses its own code for all of that, and while it's not bad, it's not as tested and proven as the Linux kernel's.
  • Full virtualization seems to work better on KVM.

The main point against KVM is performance, but recent reports using the VirtIO drivers on Linux and Windows VMs make that point less and less prominent.
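As a concrete illustration of what enabling VirtIO looks like in practice (the image path, memory size, and tap device below are placeholders, not details from this answer), a KVM guest can be started with paravirtualized disk and network devices roughly like this:

```shell
# Sketch only: boot a KVM guest with VirtIO block and network devices.
# /var/lib/vms/web01.img and tap0 are hypothetical; adjust for your setup.
qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
  -drive file=/var/lib/vms/web01.img,if=virtio,cache=none \
  -net nic,model=virtio -net tap,ifname=tap0,script=no
```

The guest then needs the virtio_blk and virtio_net drivers (built into modern Linux kernels; a separate driver package on Windows) to actually use these devices.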

As for managing the machines, I use Ganeti. Ganeti is a cluster virtual server management system: you add your nodes and can then perform all the operations on the VMs on those nodes, like creating, starting, rebooting, migrating, etc. It also supports the creation of DRBD-backed instances that keep mirrored disk images on two nodes, providing failover/migration if one of the nodes fails. It supports KVM or Xen, but not mixed clusters. Ganeti is text-based, but there is a web interface project that is doing pretty well. Using Ganeti + debootstrap we can deploy/clone various types of VMs very fast, take LVM snapshots of disks for tests, and so on, so I think it will have you covered there.
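To give a feel for that workflow (the instance name, node names, and disk size below are illustrative, and the exact flags vary between Ganeti versions, so check your local man pages):

```shell
# Sketch of day-to-day Ganeti usage; names and sizes are made up.
gnt-instance add -t drbd -o debootstrap \
  -s 10G -n node1:node2 staging-web01    # DRBD-mirrored instance
gnt-instance list                        # overview of all instances
gnt-instance migrate staging-web01       # move to the DRBD secondary
gnt-instance failover staging-web01      # recover if the primary node died
```

The `-t drbd` template is what gives you the mirrored disks mentioned above; plain LVM-backed instances (`-t plain`) skip the replication.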

Just remember that no matter which one you choose, you should never virtualize high-I/O machines like file servers or DBs and expect them to perform the same. In some cases, high-I/O VMs can even degrade all the other VMs on a host. Not everything can be safely virtualized.

EDIT: Since you mentioned testing and development, read this article. I am thinking of assembling something very close to that.

coredump
  • 12,713
  • 2
  • 36
  • 56
  • I think I just hallucinated... +1 and a unicorn just gave me a thumbs-up. – nedm Apr 01 '11 at 03:44
  • April's fool! :) – coredump Apr 01 '11 at 03:44
  • AS I WAS SAYING, +1 for KVM -- dead simple to get going and performance with virtio is much better than it used to be without. To the extent that we do virtualize some things (email, for instance) that we didn't before. That said, "not everything can safely be virtualized" is absolutely true. – nedm Apr 01 '11 at 03:49
  • Now I have the urge to dole out a downvote just to see what happens. – nedm Apr 01 '11 at 03:51
  • Just noticed that this question was asked in August. Even got a badge for that answer. – coredump Apr 01 '11 at 04:11
  • Thank you Coredump. You earned it :). Next would be Necromancer :D – Christoph Strasen Apr 12 '11 at 10:46
0

Putting a love for open source before the business's need for a reliable and highly available solution is dangerous. There is a reason the free ESXi and Hyper-V Server are so popular. In addition to paid support being relatively cheaper, there are a lot more experts walking around.

That said, XenServer is supported by Citrix, and KVM/QEMU is supported by Red Hat (and possibly others, I'm not that up on it), but only in certain configurations, so that's something to watch. Xen supports the VHD format natively, and a VHD file can be transferred between Hyper-V, Virtual Server, Virtual PC, and Xen without modification (size limitations apply to VPC). KVM uses the qcow2 format, which nobody else uses, though it can be converted to other formats with third-party utilities. There are also fewer management tools available for these servers.
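For completeness, that conversion does not even need a third-party tool: QEMU's own qemu-img can do it, with "vpc" being qemu-img's name for the VHD format (the file names below are just examples):

```shell
# Convert a qcow2 disk image to VHD; file names are placeholders.
qemu-img convert -f qcow2 -O vpc disk.qcow2 disk.vhd
qemu-img info disk.vhd    # verify the format of the resulting image
```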

Chris S
  • 77,945
  • 11
  • 124
  • 216
  • nitpicking: KVM, like Xen, uses any format supported by qemu. qcow2 is only popular for test VMs on desktop machines. for anything else you use block devices (LVM, iSCSI/FC LUNs, etc). that's true on either system – Javier Aug 26 '10 at 16:28
  • @Javier, very true, and thank you for pointing that out. All of them support direct block devices and you'll get the highest performance from those. And QEmu supports several file formats, though you'd rarely use anything except block devices and qcow2. One other possible benefit of KVM/QEmu is that it runs on FreeBSD (and NetBSD, IIRC) in addition to Linux; and QEmu without kernel virtualization runs on Windows too. – Chris S Aug 26 '10 at 17:38
  • We use qcow2 images for (some) VMs with low IO requirements under KVM just for the flexibility and portability -- it's often handy to be able to shut it down, move to different server or storage, and fire back up quickly. – nedm Apr 01 '11 at 03:58
0

If you are using only Linux, and are sure that it will stay that way for a long time (i.e., for as long as you're working there, or until another strategic decision migrates your servers to the cloud or wherever), you can try KVM or open-source Xen. Further, if you use a distribution that ships a Xen-ready (i.e., paravirtualized) kernel (Debian/Ubuntu, at least), Xen is a great choice. Probably KVM too, but I recommend Xen because that's what I'm more familiar with. YMMV. Combining Xen and Pacemaker for clusters of VMs is something that we've done a few times with good results.

Now, if you will have several/many Windows systems, I'd spend some bucks and go for XenServer. The paid version gives you (IIRC; please check to confirm) high availability and XenCenter, the Windows-based console that makes everything easier. You could run Windows on open-source Xen too, but it's a bit trickier. Also, there's the free version of XenServer, but I find its command line obscure and difficult. You could master it, but it will take some time.

Remember that when using XenServer you'll need the "guest utilities", or "PV drivers". Without them, the performance will suffer.

rsuarez
  • 384
  • 5
  • 11
0

Xen: try it out. We have been using it for the last 2 years with no issues so far, though we are still on an old version.

Rajat
  • 3,349
  • 22
  • 29