Ubuntu Picks KVM Over Xen for Virtualization

Heading in a different direction from its main rivals, Ubuntu Linux will use KVM as its primary virtualization software. Red Hat Enterprise Linux and Novell’s Suse Linux Enterprise Server both use the Xen virtualization software, a ‘hypervisor’ layer that lets multiple operating systems run on the same computer. In contrast, the KVM software runs on top of a version of Linux, the ‘host’ operating system that provides a foundation for other ‘guest’ operating systems to run in a virtual mode.

I will guess — and that’s all it is, an educated guess — that it has something to do with Xen’s close relationship with Microsoft.

An educated guess? What do you know about Open Source, Almafeta?

I think a better guess would be that Ubuntu considers KVM easier and less cumbersome for their users. Ubuntu values ease of use a lot. That's why they chose AppArmor instead of SELinux, for example.

If you’re trying to use it to run unmodified operating systems (such as Windows), Xen has no advantages over KVM at all. For Ubuntu, this is the most likely use case. Both KVM and Xen require hardware virtualization in this case, and they both use a modified version of Qemu for hardware emulation. In both cases, the resulting VM feels slow, because neither one has paravirtualized video drivers like VMware does.

For paravirtualized OSes, Xen has a reasonable performance advantage (if you have an OS that’s been ported to Xen). So it’s faster at running other free OSes on Linux. Not something Ubuntu is going to be interested in.

For running Linux on Linux, something like OpenVZ (similar to FreeBSD’s jails or Solaris’ zones/containers) is a much better bet than Xen. Again, not something that Ubuntu is going to be interested in.

I think the Ubuntu developers want to focus on something their users are actually going to use. The fact that KVM is already included in the kernel, and doesn’t require any system modifications to work is an added bonus, since it means they can focus on making it work nicely.

Red Hat supports Xen, but AFAIK they are only using it until KVM is ready… they like it, the kernel people like it, and the KVM developers want to be the Linux kernel virtualization solution, unlike Xen, which isn’t planning to submit much of its code to mainline.

Eventually KVM will be the preferred Linux virtualization solution, just because there are more people behind it and it will be more integrated.

It makes sense – the XenSource people are apparently not being that forthcoming in helping distros iron out all the issues (especially in SLES/RHEL). Most of the support is done by Red Hat/SuSE guys in-house, and it looks like they’re getting sick of it. Take a look at the Fedora 9 roadmap that mentions a switch to the pv_ops port of Xen: http://fedoraproject.org/wiki/Features/XenPvops . It seems apparent from the tone that they really are not happy with it and would love to switch to a KVM-only solution.

All in all, good news for everyone: fewer kernels to test, less backporting (try getting the XenSource people to care about modern developments in NUMA), and a more visible development process.

The same article summary, sans-spin. If people enjoy this, I may do this for other articles.

“Ubuntu integrates KVM virtualization”

“Ubuntu Linux will use KVM as its primary virtualization software. Red Hat Enterprise Linux and Novell’s Suse Linux Enterprise Server both use the Xen virtualization software, a ‘hypervisor’ layer that lets multiple operating systems run on the same computer. The KVM software runs on top of a version of Linux, the ‘host’ operating system that provides a foundation for other ‘guest’ operating systems to run in a virtual mode.”

I understand that KVM relies on hardware virtualisation capabilities in the processor (AMD’s SVM or Intel’s VT)

Without such support, it falls back to the much slower Qemu-based software virtualisation. The Debian KVM maintainers even recommend using Qemu over KVM if you don’t have a processor with virtualisation support built in; see http://packages.debian.org/sid/kvm
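For the record, the way to check for those capabilities is to look for the `vmx` (Intel VT) or `svm` (AMD SVM) flag in /proc/cpuinfo. A minimal sketch (the flags line below is a made-up sample so the snippet is self-contained; on a real machine you would read /proc/cpuinfo instead):

```shell
# KVM needs hardware virtualization: 'vmx' = Intel VT, 'svm' = AMD SVM.
# Sample flags line for illustration; on a real system you would use:
#   grep -E 'vmx|svm' /proc/cpuinfo
flags="fpu vme de pse tsc msr pae mce vmx ssse3"
if echo "$flags" | grep -Eq 'vmx|svm'; then
    echo "hardware virtualization supported"
else
    echo "no vmx/svm flag: KVM will not work, plain Qemu will"
fi
```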

I understand that, by contrast, Xen’s near-native performance does not depend on specific processor-side virtualisation capabilities.

Since processors on end-user machines are far less likely to have virtualisation hard-wired into them than server hardware, I would conclude that KVM is the better solution for servers, not for end-users.

Whatever happened to Libvirt making it easy to choose between different virtualisation methods – I thought this was being added to Hardy?
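For reference, libvirt’s abstraction works by putting the hypervisor choice in the connection URI (e.g. qemu:///system vs. xen:///) while the guest is described in a common domain XML format. A minimal hypothetical sketch (the guest name and image path are made up, and the exact schema varies by libvirt version):

```xml
<!-- Hypothetical minimal libvirt domain; type="kvm" could be "xen"
     with the rest of the definition largely unchanged. -->
<domain type="kvm">
  <name>demo-guest</name>
  <memory>524288</memory>  <!-- in KiB, i.e. 512 MB -->
  <vcpu>1</vcpu>
  <os><type>hvm</type></os>
  <devices>
    <disk type="file" device="disk">
      <source file="/var/lib/libvirt/images/demo.img"/>
      <target dev="hda"/>
    </disk>
  </devices>
</domain>
```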

Yet again OpenVZ gets ignored in these articles and discussions. OpenVZ is making it into the mainline kernel slowly but surely, and gives native performance without the modern hardware requirement. This is my bet for the best desktop virtualisation package.

OpenVZ doesn’t provide you with virtual machines where you can run different operating systems. All the containers use the same kernel (from the host). This is closer to FreeBSD’s jails or Solaris’ zones.

Since processors on end-user machines are far less likely to have virtualisation hard-wired into them than server hardware, I would conclude that KVM is the better solution for servers, not for end-users.

Even my one-year-old cheap-ass HP laptop has hardware virtualization support so I wouldn’t think that’s a problem.

My less than half a year old Lenovo laptop does not. So I would think that there /is/ a problem.

You can still buy new computers without hardware virtualization. The few Celeron-Ds out there don’t have it, nor do the Core-based Celerons and Pentiums, nor the E4xxx-series C2Ds. Apparently Intel sees it as one of those market-segmentation things.

FWIW, hardware-virt is why I paid extra to get an E6300 C2D in April last year.

Using KVM over Xen is a good choice. The Xen hypervisor is basically a kernel itself (and has its own drivers, which are mostly taken from the Linux kernel source).

The upside of this is that other OSes can use the Xen hypervisor as well. It is not tied to the host OS. OpenSolaris and FreeBSD (iirc) have host support for the Xen hypervisor as well.

The downside is that it has to be maintained out of the Linux kernel tree, which duplicates a lot of work.

And from a commercial standpoint: XenSource is now owned by Citrix, which is a direct competitor of Canonical. Both Red Hat and Novell started using Xen long before the Citrix acquisition; if they had to choose today, they would probably pick KVM as well.

Back to the technical reasons: in contrast, KVM can take direct advantage of advances in the Linux kernel. Each virtual machine is basically just another process. A much cleaner solution, and easier to maintain.

Yes, it requires hardware virtualization support. But by the time KVM is mature, everyone will have a PC with hardware virtualization support anyway. If you don’t have it now, you can just use Qemu. For good Qemu performance you can use the Qemu kernel accelerator module (KQEMU), which was closed source until recently but has now been released under the GPL, and which makes things run a lot faster than plain Qemu.

KVM basically reuses all Qemu stuff (the I/O layer, USB virtualization and so on). This is a good thing, it consolidates the work and thus decreases duplication of effort.

Regarding performance: most desktop users will want to virtualize Windows, so they can do the things they can’t with Linux (like updating or backing up the firmware on an iPhone or Nokia phone). Both Xen and KVM (and Qemu, of course) are still miles behind VMware in performance, because they don’t provide paravirtualized drivers for disk I/O. And there are still a lot of problems with USB virtualization as well.

I tried out both KVM and Xen recently and I wasn’t really impressed by the I/O performance. VMware runs helluva lot faster, even without hardware virtualization, because it offers I/O paravirtualization drivers which speed up things a LOT.

So I still use VMware player for all my virtualization needs. KVM has a bright future, but is not there yet for desktop virtualization. But if you look further into the future, you can see that it is the right way to go.

Is that a smart decision? Windows 2003 does not work properly (crashes KVM on every Windows boot) and Ubuntu 7.04 and 7.10 cannot be installed without modifying the ISO image. KVM needs a lot of love before you can use it in production.

I had to create some VMs at work on some RHEL 5.1 AP servers I had just built, and I am not that impressed with it at all. It is clunky at best, in my opinion; on the other hand, the VMware servers I built and configured went smooth as glass, but the licensing cost a lot of money.

Just my opinion, but if you want a solution for the enterprise (where I work), VMware appears more refined and mature. In due time I am sure it will be perfected, but my time with it was not much fun after having to open a ticket with Red Hat about it. The Red Hat guys are sharp, and they are willing to help with a good attitude. So I believe paying for support from a vendor pays for itself in getting a problem resolved quickly, rather than wasting time on Google, searching online endlessly for a solution that does not exist…

Don’t get me wrong, searching online can solve a lot of problems but there is a time to open a ticket and let someone help you out instead of wasting company money and time trying to be a one man show. As there is no ‘I’ in a Team…

The future will belong to KVM. That is why Ubuntu picked it, and why Red Hat and Novell will change to it. Xen is just a stopgap, and neither KVM nor Xen is production ready at this point. Xen requires you to run a special kernel, which always crashed when I tried it, and to modify the guest OS. Guess what? No one will support an OS that you modified, especially Microsoft. So for enterprise use, it’s pretty much just VMware. As for the virtualization stuff in the CPU, it doesn’t make much difference in VMware at all at this point.

I’m running the 32-bit version of 2003 R2 on KVM under Ubuntu Gutsy just fine. No crashes etc. I am using the kvm-amd module and also boot up with “-no-acpi -std-vga”, if that helps. Haven’t tried the 64-bit version of 2003 yet.
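For reference, those flags would sit on the kvm command line roughly like this (the disk image name is made up; the command is only echoed here, not run, since launching a guest needs a real image):

```shell
# Hypothetical invocation using the flags mentioned above:
# 512 MB of RAM, ACPI disabled, standard VGA, a guest disk image.
echo kvm -m 512 -no-acpi -std-vga -hda win2003.img
```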

For me KVM works well.

I like Xen and am “Xen Certified”, meaning I attended a class on it and can therefore pretend to be knowledgeable. For the open-source minded, there seem to be too many proprietary hooks (the paravirtualized drivers, the management interface, etc.) for it to be a good fit with Ubuntu. In addition, Citrix has not finalized its plans for Xen, which leaves some confusion, at least for me, as to the future direction.