4

I have been researching how to implement virtualization for a server running 3 guests: two Linux-based and one Windows. After trying my hand at XenServer, I am impressed with the architecture and wanted to use the open-source Xen, which is when I started hearing a lot more about KVM: how good it is, how it's the future, etc. So, could anyone here please help me answer some of my queries about KVM versus Xen?

  1. Based on my requirement of three VMs on one server, which is better for performance, KVM or Xen, considering that one of the Linux VMs will work as a file server, one as a mail server, and the third will be a Windows server?

  2. Is KVM stable? What about upgrades? And what about Xen? I cannot find support for it in Ubuntu.

  3. Are there any published benchmarks on both Xen and KVM? I cannot seem to find any.

  4. If I go with Xen, will it be possible to move to KVM later, or vice versa?

In summary, I am looking for real answers on which one I should use: Xen or KVM?

  • 1
    Why did you discard VMware? There is a free version of the ESXi hypervisor, and VMware is more mature and widely used than KVM. If the choice is based on principle and you only want to use open-source software, then consider that I didn't say anything :) – Prof. Moriarty Apr 05 '10 at 17:33

4 Answers

5

Red Hat is moving from Xen to KVM, and that's certainly swaying my choice toward KVM for running guests under an existing Linux install. On the other hand, there isn't anything like XenServer for KVM.

Converting between the two is possible but not easy.
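
If you do need to convert later, virt-v2v is the usual tool for importing Xen guests into a KVM/libvirt setup. A minimal sketch, assuming a remote Xen host reachable over SSH (the hostname and guest name here are placeholders):

    # Pull a guest from a Xen host and register it with the local libvirt;
    # 'xenhost.example.com' and 'myguest' are hypothetical names.
    virt-v2v -ic xen+ssh://root@xenhost.example.com myguest \
        -o libvirt -os default

Expect to touch up drivers (especially for Windows guests) and network configuration afterwards; the conversion is workable but rarely hands-off.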

Bill Weiss
  • It has already moved, and has released two versions of RHEV. RHEV is a full-scale management and virtualization suite for KVM. – dyasny Apr 05 '10 at 19:07
  • 1
    Red Hat owns KVM. They bought the company that developed it (Qumranet) in 2008 and have put all their effort into KVM since at least then. Red Hat also sponsors the development of Libvirt (the management library for virtualization infrastructure) and related projects like Virtual Machine Manager (virt-manager). Combine that with the fact that Ubuntu (another extremely popular distro) does virtualization based on KVM, and the question "KVM or Xen?" becomes moot. At least if you run any kind of productive virtualization environment and cannot afford to roll your own kernels or apply unofficial patches. – daff Apr 09 '10 at 01:39
3

Xen is a technological dead end, a point that has been discussed many times in all sorts of forums. That is why all the major players are leaving it behind.

If you want a supported and manageable setup with KVM under the hood, look at RHEV. There are also alternatives: libvirt, Proxmox, etc.
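
For a plain libvirt setup without RHEV, getting a first KVM guest running is short. A minimal sketch, assuming a Debian/Ubuntu-style host (package names, and the guest's name, memory, disk size, and ISO path, are examples only):

    # Install KVM, the libvirt daemon and the virt-install tool
    sudo apt-get install qemu-kvm libvirt-bin virtinst

    # Define and start a guest from an install ISO
    sudo virt-install --name mailserver --ram 2048 \
        --disk path=/var/lib/libvirt/images/mailserver.img,size=20 \
        --cdrom /srv/iso/install.iso

    virsh list --all    # confirm the guest was defined

The same guests can later be managed graphically through virt-manager.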

dyasny
  • Oracle VM is a Red Hat clone, coupled with Xen 4.0. SLES 10 and 11 use Xen (11 SP2 uses Xen 4.1). Even Debian still supports Xen. Red Hat is a major player, but others stayed with Xen. – Nils Sep 01 '12 at 21:45
  • It's not really about who stayed with what; it's that with the extra complexity, as a system evolves it gets more and more rigid and harder to move forward. Basically, an extra kernel codebase has to be maintained, kept up to date, and kept compatible with the rest of the host. – dyasny Sep 01 '12 at 21:59
2

I find Xen's handling of mapping block devices to domU VMs far easier to manage, and far more flexible, than KVM's. Specifically, I create and manage LVs (with LVM2) in the dom0, and map them directly to /dev/sda1 in the domU.
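
For illustration, the Xen side of that looks roughly like this (the volume group, LV, and device names are hypothetical):

    # In the dom0: create the backing LV
    lvcreate -L 10G -n guest-root vg0

    # In the guest's config (e.g. /etc/xen/guest.cfg): map it straight
    # to a device node inside the domU
    disk = [ 'phy:/dev/vg0/guest-root,sda1,w' ]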

With KVM (as far as I know), I have to export whole partitioned disks, which means I have to use partx on the dom0 to 'attach' and 'detach' them.

I also like that, for lower performance requirements, Xen works on older hardware that doesn't have the VT extensions. As far as I know, KVM requires hardware virtualization support in the processor (Intel VT-x or AMD-V).
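
A quick way to check whether a given host CPU advertises those extensions (a count greater than 0 means they are present):

    # vmx = Intel VT-x, svm = AMD-V
    egrep -c '(vmx|svm)' /proc/cpuinfo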

Unfortunately, I have seen the writing on the wall: Red Hat and Ubuntu seem to favor KVM at this point. Without Xen in the mainline kernel tree, and with Citrix shipping their own XenServer product, there doesn't seem to be much momentum behind getting it back into the tree.

Jason
  • Oh. After your response, I searched for some solutions and found this: http://www.linux-kvm.com/content/xen-kvm-disk-management-issues. I am not sure if that works or not. What about using DRBD with KVM? –  Apr 06 '10 at 04:39
  • I don't think there's anything preventing you from using DRBD either in the VM or on the bare-metal host. Xen is currently my forte, but this is how migration works without a fancy SAN: you back the block device with DRBD, and when you have to take out the primary node, the secondary node can pick right up without any service interruption (see the resource sketch after these comments). I'm sure it's on KVM's drawing board, but I'm not sure if it does migration yet. – Jason Apr 06 '10 at 17:32
  • I don't understand what you are talking about. Management of block devices is extremely easy with KVM, especially if you are using Libvirt for management. I simply create an LV (LVM) on the virtualization host (there is no "dom0" in KVM-land) and pass it on to the guest that needs it (add three lines to the guest's XML definition file; roughly the disk stanza sketched after these comments). Using virt-manager I can even use a pointy-clicky interface to do that. Restart the guest and be done with it. The guest sees the new block device as /dev/vdb, /dev/vdc, etc. Couldn't be easier. Or did I misunderstand your post? – daff Apr 09 '10 at 01:30
  • In Xen, I can map an LV to just about any device name I want in the domU. If I wanted to make /dev/chicken, I could define it in my dom.cfg file and map it to a block device on the dom0 side. (I don't make /dev/chicken; I usually make /dev/sd[abc][123].) I tend to lvcreate -n foo ...; mount /dev/vg/foo /mnt/t; debootstrap lenny /mnt/t; and *poof*, I have a more or less complete host to boot up in the VM. With KVM (at least in my understanding of it) I'd have to create a partition table and MBR on /dev/vg/foo, right? (Or have they made that easier to work with?) – Jason Apr 09 '10 at 03:26
  • There has never been any requirement for a partition. When you define a virtual disk, you can point it at a file, an LV, a disk, a LUN, whatever. The VM will get a block device that looks like /dev/vdX, which you can format however you like. – dyasny Dec 07 '14 at 01:55
  • @dyasny the boot block device with QEMU/KVM is usually partitioned, and there's apparently nothing like PyGrub or PV-GRUB, just direct kernel boot where the kernel and initrd are accessed from inside the host machine, cf. https://libvirt.org/formatdomain.html#elementsOSKernel – Josip Rodin Sep 05 '16 at 12:11
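
On the DRBD point in the comments above: the usual pattern is to define a DRBD resource on top of the LV on each node and hand the resulting /dev/drbdX device to the VM instead of the raw LV. A minimal sketch of such a resource, with hypothetical node names, addresses, and paths:

    # /etc/drbd.d/vm0.res -- example resource backing one VM's disk
    resource vm0 {
        protocol C;                # synchronous replication
        device    /dev/drbd0;      # the device the VM is pointed at
        disk      /dev/vg0/vm0;    # backing LV, present on both nodes
        meta-disk internal;
        on node1 { address 10.0.0.1:7789; }
        on node2 { address 10.0.0.2:7789; }
    }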
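As for the "three lines" of libvirt XML daff mentions, the disk stanza looks roughly like this (the LV path, target name, and guest name are examples; the virsh one-liner achieves the same thing):

    <disk type='block' device='disk'>
      <source dev='/dev/vg0/data'/>
      <target dev='vdb' bus='virtio'/>
    </disk>

    # equivalent command line, persisted into the guest definition:
    virsh attach-disk myguest /dev/vg0/data vdb --persistent
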
0

Xen is better for performance (modifying the guest kernel for paravirtualization avoids all the instruction traps that must be taken to make hardware virtualization work), but it requires a kernel that can be modified. Windows cannot be modified that way, so if you need to run it as a guest then you'll have to go KVM.

Distro support for Xen is dwindling, since the patches cannot keep up with the pace of Linux kernel development, whereas the kernel bits of KVM are already fully integrated into the mainline Linux kernel and the userspace bits can evolve at their own pace.

Ignacio Vazquez-Abrams
  • Xen domU code is in the kernel as well. The dom0 support was supposed to be in the kernel by now, but Xen is not Linux... – Dan Andreatta Apr 06 '10 at 10:27
  • KVM has paravirtualized network and block I/O drivers (virtio) which perform extremely well. I did some non-scientific tests (iperf, dd; along the lines of the sketch below) on Xen and KVM systems running on identical hardware (HP DL380 G6), and if anything KVM performed a little better, network-I/O-wise. The most compelling argument is that, at the current state of affairs, there is simply no way we will ever see Xen integrated into mainline Linux. SUSE is the last big distro to build their virtualization infrastructure on Xen, and they struggle to forward-port the Xen patches to "current" (2.6.27) Linux versions. – daff Apr 09 '10 at 01:24
  • The part about distro patches is long-obsolete information; Xen support has been merged into the upstream Linux kernel since versions 2.6.37 and 3.0, released in 2011, cf. https://wiki.xenproject.org/wiki/XenParavirtOps#Current_state_in_Linux_kernel_and_in_distributions – Josip Rodin Sep 05 '16 at 12:08
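
For reference, quick non-scientific checks like the ones daff describes can be reproduced with stock tools (hostnames and sizes are placeholders):

    # Network throughput between two guests:
    iperf -s                        # on guest A (server side)
    iperf -c guestA.example.com     # on guest B (client side)

    # Crude sequential-write test inside a guest; fdatasync makes dd
    # wait for the data to actually hit the (virtual) disk:
    dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 conv=fdatasync
    rm /tmp/ddtest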