
AlexGr sends us to an excellent article on the state of Xen by Jeff Gould (Peerstone Research). He concludes that the virtualization technology has some maturing to do and will face increasing competition for the privilege of taking on VMWare. Quoting: "What's going on with Xen, the open source hypervisor that was supposed to give VMware a run for its money? I can't remember how many IT trade press articles, blog posts and vendor white papers I've read about Xen in the last few years... The vast majority of those articles — including a few I've written myself — take it as an article of faith that Xen's paravirtualizing technical approach and open source business model are inherently superior to the closed source alternatives from VMware or Microsoft."

It seems that VirtualBox.org's product, fully virtualizing a copy of XP on my non-VT machine under a Linux host OS, totally runs circles around Xen even on VT hardware as far as performance is concerned. Integration into the host environment is also quite beautiful. Why is there seldom a mention of VirtualBox in this arena?

VirtualBox is basically QEMU with a much better KQEMU component that they developed on their own. This isn't very interesting, because it's the same thing as VMWare or any other closed source Ring0-in-Ring1 emulation using polymorphic code.

I can't comment on why VirtualBox doesn't get more press, but I can confirm that I've had very good results using VirtualBox 1.3.x on fairly low powered machines. My guess is that it gets lumped under QEMU when comparisons are made.

I just installed VirtualBox on Ubuntu Feisty to see what your fuss was about. I tried to install Windows 2000 in a VM and VirtualBox wouldn't let me type F8 to accept the license. No idea whose fault that is, but speaking for myself I can say only that VMware Server 'just works' and thus I have no reason to use VirtualBox, which does the same things but not as well.

Not just for Xen -- it's worth reading if you are interested in virtualization in general. Lots of links to many other products, open and closed. So if you aren't into Xen, but still want to know what is going on in this space (to some extent -- they don't really touch the stuff IBM is doing), then it's worth the time.

You can't run Windows using Xen, as Microsoft won't let you recompile the Windows kernel... and the ability to run Windows on Linux this way is one of the things Microsoft allows only with the blessed versions of Linux for corporate customers. Normal mortals can't do it and never will be able to.

Personally, I don't give a flying fig about being able to run Windows or Windows programs on Linux... there isn't anything I want to do on Windows that I can't do on Linux... (note the emphasis, I

If you have VT-capable hardware then you can run Windows under Xen. You do need the hardware to support it though, and that is a problem for some home users. Recent AMD and Intel chips have slightly differing VT support but both work.

I run Xen at home along with xen-tools [xen-tools.org] (which I wrote) to easily create new Debian guests on demand. These are used for software testing, hacking, and general service isolation.
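For the curious, creating a guest that way is roughly a one-liner on dom0. A sketch (the hostname and IP are made up, and the flags are from the xen-tools of that era; check `xen-create-image --help` against your version):

```shell
# dom0 sketch: stamp out a new Debian etch guest with xen-tools
xen-create-image --hostname=guest1 \
                 --dist=etch \
                 --size=2Gb --memory=128Mb \
                 --ip=192.168.1.60

# then boot it and attach to its console
xm create /etc/xen/guest1.cfg -c
```

xen-tools writes the guest's config into /etc/xen/ for you, which is what makes on-demand test guests so cheap.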

I think Xen is just now reaching "mainstream" in the sense that you don't have to be an early adopter or major tinkerer to get it working. Now that distributions are including Xen kernels in their newer releases it really is available for all.

I've been messing with Xen for a week or two; I thought it would help me out with debugging and maintaining multiple systems without needing the physical machines. Picked up an AMD SVM CPU and tried it, and ran into a wall. The Xen mailing list wasn't much help, nor were the log files. In comparison, Virtual PC 07 / VMWare on Windows worked flawlessly on the first try... I'd still prefer using Xen and might take a stab at KVM, but wasting a week just to get to the boot screen without success was a little painful. FYI I tried C

For most of us, there is no point in running Linux under Xen. We already gave Linux the native hardware. I guess somebody might want to run a Linux guest on Windows, but that'd be Wrong and is anyway unsupported.

When I want to run a Linux app, I just run it. No problem. When I want to run a Windows app, I need virtualization.

Of course!! Why would somebody want to buy big hardware like a Sunfire X4600 M2 with 16 cores, 256GB RAM, 4GbE and multiple I/O slots to run multiple instances of Linux and/or Windows and/or BSD and/or Solaris x86, when they could just run 1 instance and let the computing power of their server be used damn inefficiently?

Nobody wants to make their data center streamlined and efficient for use of power and cooling, which in many ways costs more than the initial hardware purchase over a 4yr refresh cycle....

Any experience with Xen, NetBSD as Dom0 and Windows? I've got a NetBSD server that I'd like to be able to run Windows on, but VMWare on NetBSD seems out of date. I suppose I could use Linux for Dom0, run my server stuff on NetBSD as a DomU, but getting familiar with YAOS seems a pain...

I've tried them and the performance still isn't great - Xen seems to have bottlenecks on its network and disk I/O that are a result of using qemu to do it in software... the maximum net throughput even on PV is a fraction of a 100Mb link, let alone a gigabit one, and my old 486 firewall does faster disk access.

If I have to maintain two separate OS's, I'd rather have the outermost OS (host OS) be the one that has the best drivers, the most hardware support. Also, since very few virtualization solutions work with 3D gaming (and even the one that does, it still has large overheads I think), you want your host OS to be the one that has all the games. So, for my purposes anyway, I need Windows as the host OS, and Linux as the guest OS. Xen doesn't run under Windows, only Linux. So that leaves me with either commercial virtualization software, or a few open source projects that haven't matured yet (eg. coLinux).

(granted, having Windows on the outside makes your machine much less secure than the other way around, but personally, I'm more interested in having all my peripherals work the day they're released, and having all my games available)

I think you have that slightly wrong - when you say "Usually people run Windows in a VM", I think you meant to say "Usually people run Windows".

Some of them will take the plunge into the uncharted waters by running Linux in a VM so it won't trash their desktop settings, apps, etc., but make no mistake: the majority of people using computers are using Windows, not Linux.

Sorry to rain on your parade, but what I meant was that the reason people (who aren't on Windows) run Windows in a VM is because they need it for some app. Obviously people who already are on Windows have no need to run it in a VM.

Why not use a live-cd if one wants to test it without committing? The inherent slowness of VMs makes it difficult to ascertain how well Linux would run, and the lack of 3d-acceleration means that none of the shiny things that draw in new people will function.

That can hardly be the problem, as most Linux apps are OSS and thus portable to Windows.

True. However, if you work with a large number of open source apps, or even just a lot of Perl modules... Usually these were designed from the start to work under Linux. Yes, the more popular ones compile under both, but sometimes it's a pain, and the less popular ones simply won't compile without extra work.

Also, I just prefer Unix streams/forking/filesystem semantics over Windows. And sure, I use Cygwin/MinGW/e

My situation is different, but with the same requirement for a Windows host OS. VirtualBox fills this need quite nicely. The latest release enables the use of VMware disk images, but I haven't tried that. You may wish to try out VirtualBox.

I'd prefer to be able to dual-boot directly into Windows or Linux (for when I want the fastest performance in Linux, and give it 100% of the RAM), and also be able to run that Linux installation inside of Windows. However, this requires the VM to support booting off a separate partition, and apparently VirtualBox doesn't support that [virtualbox.org]. (yes, booting the same Linux setup under two very different sets of "hardware" has its challenges [vmware.com], but it is possible [wikia.com])

Full virtualization, as used in Xen, VMware and VirtualBox, has performance issues that are not yet well understood, but thought to revolve around dramatically increased L2 cache misses. I am not aware that any changes are in the works to fully resolve this.

Operating system virtualization, as used for instance in OpenVZ has far better performance characteristics. This is the way to go at the moment for efficient and low cost data center support of Linux. The problem is that all virtual environments m

Full virtualization, as used in Xen, VMware and VirtualBox, has performance issues that are not yet well understood, but thought to revolve around dramatically increased L2 cache misses. I am not aware that any changes are in the works to fully resolve this.

Hmmmm. No, not really.

Performance problems with VMWare are almost universally associated with the added latency of the multiplexing/demultiplexing code that needs to be run to talk with shared I/O devices. This added latency in turn impacts bandwidth. "L

For CPU we used SPEC CPU 2006, and scored about 5-6% lower on VMWare than the same test done on those blades on bare metal. Xen is indistinguishable from bare metal to the subjective eye. I would have to break out a large batch testing methodology and run the results through inferential statistics to conclude that there was any difference at all.

I/O is a different story.

The Xen performance claims and the VZ performance claims aren't really useful. They're theoretical. As in, "theoretically, we can stack 100 operating systems on this blade efficiently." Think about that. That's just plain nuts. I can't think of a real use case for that.

BTW, if you like OpenVZ, and have the right use case, the commercial Virtuozzo product ranks as the "best virtualization technology that no one has ever heard of" in my book. They really have their IT management story down pat.

You can choose to believe the hype or not, as you wish, but I'm using Xen in my production environment, and it's simply fantastic. I've got friends with companies who are doing it as well, and it really changes how you think about administration.

Of course, there are some learning curves. For example, how you manage 3-7 servers is completely different from how you manage 20-30, even if they are all virtual. There's a lot more emphasis on system images, isolating functionality, reproducing configurations. On the other hand, dev environments are so much easier to build-up and tear down.

I just wish the OpenBSD port was in a usable state. The Mercurial servers hosting it are often down, and even when they're up, I haven't been able to get a working kernel compiled from the sources (even after doing some of my own bugfixes). And last I saw on the Xen lists, Christoph Egger (the guy doing the OpenBSD port) submitted a security patch related to stack smashing, and the Xen guys were kind of like, "meh, security's not really a priority..."... Oh well, here's to keeping my fingers crossed

I have a VPS ("slice") at Slicehost [slicehost.com] and it's the best thing since... well, you know.

Seriously, it may not be right for all applications, and things like Solaris' zones/containers are quite awesome (much more control over IO, fair-share scheduling, etc.) -- I have one of those too, at Joyent -- but (like many things Linux) it seems to work, be fast, and get the job done at a great value.

I have seen people complain when they have an app that's IO bound and there's another slice with heavy IO needs--looks like IO

Not only that, but I've been running it in a production environment for about a year and I'm about to deploy a HUGE set of servers as VMs using it. Xen beats VMware in one arena: price. If you use the open source version (which I'm doing) it's free. Only VMWare's ESX can compare to Xen. And unlike some people here have been saying, you DON'T need a special processor for Xen unless you plan to virtualize Windows. In my environment, I'm only virtualizing Linux, so I can use regular x86 CPUs dating back to 1998 for Xen. The only exception is the deployment of Zimbra I'm going to do. It requires Redhat Enterprise Linux 4 and NPTL, so I can't run it paravirtualized, it must run HVM which requires the special processors. However, who today isn't getting new hardware with HVM support?

Currently my two Xen servers here at work serve out about four VMs (all paravirtualized on older hardware) for critical and I/O intensive tasks like proxy servers for nearly 1000 machines, or the firewall syslog server for a dual T3 link with about 5000 users behind it sucking the bandwidth dry. So you can't claim it doesn't perform either. Now, if you want point and click administration and an easy set up, then yeah, Xen is behind the times. But performance wise it's leaps and bounds above VMWare. Trust me, I was a VMWare fan before you were in virtualization diapers. And I still am for some applications. But for places where I need something to be cost effective AND give me the features of VMWare ESX, Xen is the ONLY answer.

"Just Works" is overstating it quite a bit. Based on my experience, and looking at the other comments here, it's more like Xen "mostly works, after a great deal of learning, googling, and experimenting". Maybe once you've ramped up on it, it works well. But saying it "Just Works" is clearly not the case. The Xen experience has improved a lot. In Fedora 7, I just had to select the Xen kernel+apps for a package install, and the Xen infrastructure was pretty easily installed. But getting client VMs ru

I agree. It doesn't "just work" by any stretch of the imagination. When it does work, it's great, but there's a whole mess of shell scripts working in the background which don't handle error conditions very well and you often get presented with very cryptic and often quite misleading error messages ("Backend scripts not working" is one of my favorites).

It depends on what I'm doing. If you weren't trying to be cute, I'd say you were trolling. In reality, it's very common practice to use LVM to clone a filesystem, make some changes to the various files that set IP and hostname as well as other unique host settings and bring up alternate "Test" VMs on a Xen box. So some days I might be running three VMs other days eight or ten. It all depends on what I need to do.
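For anyone who hasn't seen that pattern, a rough dom0 sketch of the LVM-clone workflow described above (the volume group "vg0" and the template volume name are invented for illustration, and the commands obviously need a real Xen/LVM host):

```shell
# dom0 sketch: clone a template filesystem into a throwaway test VM
NAME=test3
IP=192.168.1.73

# writable snapshot of the prepared template volume
lvcreate --snapshot --size 2G --name ${NAME}-disk /dev/vg0/template-disk

# fix up the per-host settings before first boot
mount /dev/vg0/${NAME}-disk /mnt
echo "${NAME}" > /mnt/etc/hostname
sed -i "s/^address .*/address ${IP}/" /mnt/etc/network/interfaces
umount /mnt

# bring the test VM up from its config
xm create /etc/xen/${NAME}.cfg
```

Tearing it down is just `xm destroy` plus `lvremove`, which is why running three VMs one day and ten the next is no big deal.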

As an aside, I forgot to mention that there are NO other products besides VMWare ESX that offer "live migration" of a running VM from one hardware host to another. That's right... you can take a VM that is running with many users actively using it and move it from one physical box to another with only a few milliseconds of downtime. The users NEVER notice. The free VMWare Server can't do that. Microsoft's Virtual Server can't do that until they have a hypervisor. And there really isn't anything else that can.

You can also use global network block devices with a Linux box as your storage server. In my case here at work we've got a SAN, but we also have budget cuts and we're a non-profit... so I can't afford ESX. At home, well... I just like having enterprise functionality without the cost. :)

ESX pricing is in the multiple-thousands of dollars per machine. Which, if you're putting together a collection of $20k-$30k machines probably isn't that bad.

But it's horribly overpriced for the smaller market. Picture a small company with 4-12 servers in the $3k-$5k range and a $10k SAN unit. They'd like to be able to pool their servers so that if one box goes down due to hardware failure, services continue to be available.

You're very right about VMWare ESX. We use it in production for a couple thousand users, and I'm still in awe that I can push running VMs from one physical box to another with less than a second of downtime.

Another question hanging over Xen performance concerns the availability of paravirtualized drivers for Windows.

This isn't completely true. The problem is that you cannot get these drivers by downloading the open source Xen. You MUST buy the XenSource version. If you run Windows on the *complete* open source version, your network throughput is going to suck like you would not believe. You have to use the XenSource version to get the paravirtualized drivers that bring the network performance closer to what it should be. Virtual Iron has a set of drivers also (which I believe are better than Xen's, but don't hold me to that).

I found a lot of great insight about virtualizing from Xen to VMWare to Virtual Iron and others on this site. http://ian.blenke.com/xen [blenke.com]

I've tried both and VMWare is just better. I respect Linux/GPL and the OSS movement, but the main reason I use Linux isn't just those reasons; it's because IT WORKS. So when it comes down to Xen or VMware, I use VMware because it works better.

Xen is FOSS, so there is potential for it to catch up, and with the nature of FOSS new ideas can be tossed in more easily. So when that day comes I'll gladly switch over; it's just not there yet.

If you want to get a colorful thread of comments started on slashdot, there are 3 ways to do it with guaranteed results:

1) Say something bad about linux (or about Apple).

2) Say something good about Microsoft (or about Apple).

3) Throw a grenade in the room about Open Source software like this:

The vast majority of those articles -- including a few I've written myself -- take it as an article of faith that Xen's paravirtualizing technical approach and open source business model are inherently superior to the closed source alternatives from VMware or Microsoft.

I'm getting a Macbook soon, and I want to play around with virtual machines on it.

Is it possible to install e.g. Debian as my host OS, apt-get install xen, and then install Mac OS X inside a Xen virtual machine? This computer has a C2D processor, which supports the Intel VT instructions. I'll also do the same with Windows XP and Vista, and Ubuntu.

If it will work, how well? Will it be a transparent install so that X can directly access the 3D acceleration hardware?

It's also a virtual machine. I remember, when I first got WinXP on this machine (free, courtesy of my school), I installed it in Qemu. I then tried to install it directly on the hardware, and it insisted that it was a different computer, and I would have to re-activate it. I called MS, and I had to explain to a woman with a thick Indian accent that it was the same machine. (She didn't have a clue about virtualization.)

The problem is, legally, is a VM the same computer, or a different one? It's one of those t

My company is currently using Xen on something like 40 "virtual machines" on 6 "real machines." Works almost flawlessly. Runs heavily-used multi-gigabyte MySQL databases and Java web apps without complaining. You can move virtual machines between real machines while they're under load, with a 6ms delay. If a developer wants to try something weird, go ahead. If you hose the system, I'll just re-image it and have you going again in 5 minutes. There's nothing wrong with Xen at all, if it's done right. It

You can move virtual machines between real machines while they're under load, with a 6ms delay.

That's really interesting. See, the authors of Xen say live migration isn't ready and that it's unstable. I have deployments of Red Hat, SUSE, and open source Xen that prove that with swarm-cloud migration testing (just put all the VMs into flight constantly, ping-ponging around to various hypervisors). Meanwhile, Xen Enterprise does not today feature live migration. Why? Did I say that it's not ready, and that th

Well, I've done numerous tests of live migration, and it works for me. Do I know something they don't? I don't think you can say either way. Perhaps on somebody else's network, it might not work. But I've done migration under heavy load and not had any failures, zombies, or crashing of Dom0. I had to clarify the 6ms delay in another post. Here's the way I put it, simplified because I screwed up the formatting:

ping... 64 bytes from xxxx... 5 ms
ping... 64 bytes from xxxx... 5 ms
ping... 64 bytes from xxxx..

I understand the latency of the switchover. It will be dependent on the size of the volatile set of memory that needs to be transferred between the save/restore cycles. I.e., this will be virtual machine-dependent, and tend to increase linearly with the virtual machine's memory footprint and memory utilization. "Data center readiness," to me, does not mean a few servers running Xen. It means many, many servers, taken from at least a superset of the mainstream enterprise server vendors, i

YMMV, depending on usage during the time of the switch, but Xen starts migrations by copying over memory *while the original VM is running*. Then the original VM is suspended, checked one last time for data consistency (the delay), then the VM is brought back up by the new host.
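That whole pre-copy dance is driven by a single command on the source dom0. A sketch (the domain and host names are made up, and xend on the destination has to have relocation enabled in its xend-config.sxp):

```shell
# dom0 sketch: live-migrate the running domU "web1" to the dom0 "host2"
xm migrate --live web1 host2

# without --live the domain is paused, copied once, and resumed --
# simpler, but the guest is down for the whole transfer
```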

He concludes that the virtualization technology has some maturing to do...

I RTFA and it says very little about the maturity of the actual Xen technology. The article is more a point about several unrelated factors:

1.) There is a lack of pretty management interfaces.

True, but these are in the works from Red Hat, Novell, XenSource, and various other ends. Already some of them look pretty promising, but if you are a real admin you don't need them in the first place. There is nothing wrong with using the command line tools to manage your Xen virtual guest environment.
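For anyone who hasn't seen it, the day-to-day command line management amounts to a handful of xm subcommands (run on dom0; the guest name and config path are examples):

```shell
xm list                       # domains, their memory, vcpus, and state
xm create /etc/xen/web1.cfg   # start a guest from its config file
xm console web1               # attach to its console (Ctrl-] detaches)
xm shutdown web1              # ask the guest for a clean shutdown
xm top                        # live resource view, like top(1) for domains
```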

2.) There is a lack of references for companies using Xen.

How does this relate to the viability of the Xen virtualization? Yeah it makes management feel nice and fuzzy that others are using something, but this does not relate to how well the Xen technology performs. I also suspect that like many open source projects, there are many people using it that do not report it. Novell has personally contacted me and my company to ask us to assist in their new paravirtualized Windows drivers initiative and then be a reference for the technology. It seems that at least some companies are moving to address this, at any rate.

3.) There aren't many benchmarks about Xen versus VMWare.

VMWare does not allow benchmarks they do not approve of. It's in that draconian EULA you agreed to by using it.

4.) It's awkward to paravirtualize Windows.

Yes, it is. Novell signed the soul-sapping agreement with MS and as such is pushing some paravirtualized drivers for Windows. The article continually talks about woes with Xen on Red Hat. Red Hat didn't sign the agreement and will require some much more intelligent coding to make this happen. It might never happen, so for Windows it's full virtualization with VT (or AMD's equivalent) or bust. Sorry: use SUSE for it, or use full virtualization. It's an MS issue, not a Xen issue.

5.) MS's new Viridian virtualization platform is using paravirtualization as well.

Yep, that should be a testament to the approach versus VMWare. Though it is interesting that VMWare now has a Linux kernel virtualization implementation similar to KVM. It seems VMWare is headed to paravirtualization as well. Obviously Xen did something right.

6.) There is a lot of competition.

True. How does this relate to Xen as a virtualization technology?

Again, I'm not saying Xen is perfect. It definitely has issues and room to grow. I'm just saying that the article makes little, if any, relevant points to Xen's virtualization technology.

True, but these are in the works from Red Hat, Novell, XenSource, and various other ends. Already some of them look pretty promising, but if you are a real admin you don't need them in the first place. There is nothing wrong with using the command line tools to manage your Xen virtual guest environment.

It's not just about pretty pictures, it's about usability. For example, Xen is a bitch to setup with any sort of non-trivial networking environment (eg: multiple vlans, bonded interfaces, etc). You freque

You frequently have to write your own scripts to make it work in such situations, but this requires a very good - arguably completely unnecessary - understanding of what's going on behind the scenes.
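For a sense of what those hand-written scripts boil down to, here's a minimal sketch of the bridged setup that Xen's stock network-bridge script automates (interface and bridge names are the conventional ones, but assumptions all the same):

```shell
# dom0 sketch: put the physical NIC on a bridge that guests can join
brctl addbr xenbr0        # create the bridge
brctl addif xenbr0 eth0   # enslave the physical interface
ifconfig xenbr0 up

# then each guest's config just names the bridge:
#   vif = [ 'bridge=xenbr0' ]
```

It's when you add vlans or bonded interfaces on top of this that the stock scripts give up and you're on your own.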

You seem to be complaining that the Xen hypervisor doesn't do anything with networking. That's not what a hypervisor is for. Setting up virtual networks is outside the scope of a hypervisor; that's why you need other tools. It would be like complaining 10 years ago that Linux worked fine for process management, but that it was a pain to set up networking, that you needed to edit all kinds of scripts and nonsense. That's got nothing to do with the kernel, but with the lack of supporting tools to make c

For example, Xen is a bitch to setup with any sort of non-trivial networking environment (eg: multiple vlans, bonded interfaces, etc).

I'll agree with this, although it isn't the hypervisor's fault - it's the userland stuff that's at fault. For example, Xen doesn't appear to support IPv6 *at all* in routed mode, I had to hack up my own scripts to do it (and I'm seriously considering moving over to bridged mode in an effort to simplify and standardise my system). But I'm curious - do other virtualisation sy

This seems to be one of his main disappointments, in fact -- specifically, that RHEL 5 doesn't have pretty management interfaces. (The only mention of XenSource and Virtual Iron's management interfaces seems to be to re-emphasize that RHEL's GUI is really bad.)

The Xen hypervisor is an engine, not a car. Xen is in some ways similar to where Linux was in the late '90s -- the kernel worked great, but the GUI was way behind. And the fact is that corporate customers need a complete solution, not just a gre

I'm primarily a VMware VI3 user, but I've been starting to do more with Xen lately. I have to say, Xen is very impressive in what it accomplishes. It's very stable, and has the capability to do some really advanced stuff. That being said, it can be a real pain to get some of those advanced features working. For example, running Xen in CentOS 5, I had a server with two NICs, and I wanted to setup a second bridged interface for the second NIC. It took way more effort than it should have to get that worki

I have been trying to use Xen at home to test it out and compare it to VMWare, which I've used at work. Once you manage to get Xen clients working, it's fine. It does a good job of running VMs, and can be used to partition resources on a powerful machine.

But, the main problem is the steep learning curve for getting Xen running in the first place. The (python based) management GUIs included with Fedora or Ubuntu are weak at best (although, slowly improving.. the UI in Fedora 7 does manage to make setup easier than the command-line alternative). The ongoing management / monitoring of VMs is okay, but weak in comparison to VMWare.

There are also a lot of little quirks in Xen. Installing Win2k in a client VM required a lot of searching for how to attach an ISO image to a running VM (it's not a simple GUI operation like in VMWare/Parallels/VirtualPC; it requires a terminal command with unintuitive options, which never worked for me... I finally dug out my CD and got the physical CD drive to attach to the VM). Windows VMs have an odd issue where the mouse pointer is offset from the actual pointer (it's a known issue, and is helped by turning off mouse acceleration in Windows preferences, but it is still a problem). Installing client VMs can be challenging... Ubuntu Feisty wouldn't install until I set the VM as a Solaris client, and after a few other tweaks it finally installed and worked fairly well.
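For the ISO problem specifically, the incantation is xm block-attach (a dom0 sketch; the domain name and path are made up, and as the parent found, mileage varies):

```shell
# attach an ISO to the running domU "win2k" as its CD-ROM drive
# arguments: domain, backend device, frontend device, mode (r = read-only)
xm block-attach win2k file:/srv/isos/win2k.iso hdc:cdrom r

# and remove it again when done
xm block-detach win2k hdc
```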

Most of the Xen problems are solvable, after playing with command-line tools, figuring out poorly documented parameters, and lots of googling. At the end of the day, it's one of those "Xen is free, if your time has no value" type things. VMWare Server is probably a better option if you just want it to work for home/free uses. For commercial use, VMWare ESX Server is the way to go. It has simple VM setup for many client OSes, and excellent management of large groups of hypervisors and virtual machines.

The commercial alternative from XenSource (free to use, but limited to 4 VMs; or less restricted versions for increasing $$) offers a better management UI, but is too restricted for my taste. The management app is much better, but still not as good as VMWare's. If I'm going to pay for one, I'll go for the best option.

"Oh my. Editable XML configuration files, obscure command line interfaces, grayed out options in the GUI? Thanks, but no thanks. This thing doesn't sound like it's ready for prime time in Data Center USA."

I say if you can't use the command line YOU'RE not ready for "prime time in Data Center USA."

Oh my. Editable XML configuration files, obscure command line interfaces, grayed out options in the GUI? Thanks, but no thanks. This thing doesn't sound like it's ready for prime time in Data Center USA.

Are sysadmins at "Data Center USA" morons? "Oh nooo, command line time, I hate that. Oh nooo, my option I want is all grayed out! Help me, help me! Oh I am so sad now."

Deploying vm stuff is not the same as using a word processor. "Data Center USA" is in real trouble if their sysadmins aren't any smarter than regular desktop users.

"Such tools that people want to rely on, oh I don't know, all the time, require good management tools in order to set things up in a straightforward way that can be documented and can be reproduced"

Like, uh... a script? I always had problems trying to understand the rationale of putting "documented and reproducible" and "GUI" in the same sentence. Can you really talk about "can be documented and can be reproduced" in bold face when all you have is a .doc document and some screen captures? Can you really talk about
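To make the "a script" point concrete, here is a hypothetical shell helper that stamps out identical domU config files from three parameters -- exactly the kind of thing you can check into version control, diff, and reproduce (all the paths and the "vg0" volume group are invented for the example):

```shell
# hypothetical helper: generate a reproducible Xen domU config from parameters
make_domu_cfg() {
    name=$1; mem=$2; ip=$3
    cat > "${name}.cfg" <<EOF
kernel  = "/boot/vmlinuz-2.6-xen"
ramdisk = "/boot/initrd-2.6-xen.img"
memory  = ${mem}
name    = "${name}"
vif     = [ 'ip=${ip},bridge=xenbr0' ]
disk    = [ 'phy:vg0/${name}-disk,sda1,w' ]
root    = "/dev/sda1 ro"
EOF
}

make_domu_cfg web1 256 192.168.1.50   # writes web1.cfg
```

Run that for twenty guests and every one of them is documented and reproducible by construction, no screen captures required.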

Xen saved my former employer a bunch of money and gained them great flexibility and reliability. They use an AoE (ATA over Ethernet) SAN, so the compute nodes are totally diskless and all of the data and root filesystems are on the SAN. Now they have email, database, web serving, nearly all of their critical functions in a highly available xen-aoe cluster. I am working with them to release all of the code and configs in production, and we are setting up a website at xenaoe.org (not up yet, but soon) to host the project.

Here is something I wrote up about this architecture for the company when the project went live:

What is Xen?

Xen is a free virtualization system similar to VMware but different. It allows us to run multiple servers/operating systems all on one physical piece of hardware while providing isolation between them.

What is AoE?

AoE is a SAN technology. Similar to Fibrechannel (but far less expensive) or iSCSI (but far simpler and more efficient).

What are the advantages of Xen and AoE for our company?

Xen allows us to more efficiently utilize our hardware resources. The majority of CPU power on your average computer goes unused, even on servers. They just sit there waiting for something to happen. Even if we get a web request every second, the time between one request and the next is an eternity for a CPU running at 2 gigahertz. But powerful CPUs are needed for those short bursts of activity. By using Xen to run multiple servers in their own domains (areas of memory), completely isolated from each other on the same physical hardware, we can squeeze more utilization out of our existing CPUs/servers. This means we can get by with fewer CPUs, less rackspace, use less power, and require less air conditioning. Encapsulating the servers in this sort of infrastructure also allows enhanced management capabilities, by allowing the administrator (such as myself) to get console access on the server or restart the server while remote, instead of having to drive to the datacenter (which in our case is a 30 minute drive down to Kearny Mesa).

AoE allows us to put a bunch of disks in relatively inexpensive, low-powered servers on the network and lets the rest of the servers access them exactly as if the disks were locally installed. This is advantageous because we can now aggregate all of our disks into one system and treat them like a pool of storage, doling out an appropriate amount to each server (often only 10 or 20 GB is needed), instead of having to install a dedicated 250 GB disk, which is about the minimum you can easily buy these days, and waste a lot of disk and power running it.
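The mechanics are about as simple as SAN technology gets. A rough sketch of exporting and discovering a disk with vblade and the aoetools initiator; shelf/slot numbers, interface, and device names here are examples, not our real layout:

```shell
# On the storage node: export a local disk over raw ethernet
# as AoE shelf 0, slot 1 (vbladed is the daemonized vblade)
vbladed 0 1 eth0 /dev/sdb

# On a diskless compute node: load the AoE initiator and look around
modprobe aoe
aoe-discover    # broadcast for AoE targets on all interfaces
aoe-stat        # the exported disk shows up as /dev/etherd/e0.1
```

From there `/dev/etherd/e0.1` behaves like any local block device: partition it, put LVM on it, or hand it straight to a Xen guest.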

The combination of Xen and AoE gives us all of the above plus some interesting fault tolerance abilities. There are now two levels of redundancy in our disk systems, and an extra level of redundancy on the CPU side as well: if one CPU fails (or the associated motherboard, RAM, or network card), we can easily switch the servers that were hosted on that machine over to another CPU on the network with zero or very minimal downtime. Previously that kind of failure would have required me to drive down to the datacenter and shuffle hardware around, or buy new hardware to replace the failed system, all of which takes time and can result in prolonged downtime.
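That "switch the servers over" step is close to a one-liner in Xen, precisely because the guests' disks live on the SAN and are reachable from every host. Hostnames and the config path below are examples:

```shell
# Planned maintenance: relocate the running guest "web1" to node2
# without shutting it down (works because its disk is on shared
# AoE storage, not a local drive)
xm migrate --live web1 node2

# Unplanned failure: if the old host is dead, just recreate the
# domain from its config on a surviving host
xm create /etc/xen/web1.cfg
```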

It is true that Xen requires special hardware to legally run MS Windows. It is also better for performance, generally, to have such hardware. However, there is nothing stopping you from running Xen on pretty much any computer you are likely to own as long as the VMs are Linux based.

However, there is nothing stopping you from running Xen on pretty much any computer you are likely to own as long as the VMs are Linux based

You mean 'Free Software,' rather than 'Linux based.' You can run NetBSD, FreeBSD, Linux, OpenSolaris, Minix or Plan 9 (maybe some others I've forgotten) as paravirtualised domU guests with no special hardware. You can also run OpenSolaris or NetBSD instead of Linux in domain 0, if you don't feel like running Linux. Which makes me wonder slightly what this is doing in the 'Linux' category...

If the VMs are all Linux-based, a single-kernel approach like Virtuozzo/OpenVZ makes more sense than Xen.

Xen is nice if you need to run many different kernels or OSes tailored for the Xen way, and it certainly keeps things more isolated in case of a security breach, but the added walls are somewhat inconvenient for managing the virtual servers. I encountered many inconveniences managing an 8 VM setup under Xen, but managing a similar setup under OpenVZ is a breeze.

While it might be nice if all these things were easy and worked well for the hobby crowd, the real money in virtualization is in the enterprise space. Most servers in enterprise environments run at 15% utilization max and are refreshed every 3-5 years. The special processor matters less in that case, and the competition is between the mature VMware ESX Server (not free), hardware-based virtualization from IBM, and Xen. Microsoft is a surprisingly minor player. VMware ESX Server is very good for x86 consolidation and saves customers money, but it is very expensive. It is still the best option for Intel-based consolidation. Xen has deep penetration in enterprise lab environments; it just needs the enterprise management tools to move into real production. IBM is very good at virtualization and stability, but on proprietary Power and mainframe hardware. Xen will be fine, because the market is very immature, but expect more seamless and unobtrusive virtualization on the desktop.

Good points. Hosting companies are leaders in the virtualization market, and most of what I was talking about before was large enterprises consolidating. I have a limited field of vision on this issue, but I just don't see the buzz around the MS offerings with big customers. I think we have some emerging contenders in the virt market, like KVM and VZ and maybe even Sun's containers. What would be great is an overarching enterprise level management tool that takes advantage of the various hypervisors (co

Expensive? Not really, if you compare the costs of actually getting that number of servers. Given the feature set you get, it's pretty modest. If you work in an educational setting, it's even cheaper. You can get VI3 Enterprise and a tier one server (2U rackmount, dual quad-core system at 2.4 GHz with 16GB of RAM, dual power supply, 6 hour CTR service, an additional NIC, a 4Gb Fibre Channel card, and about 512GB of local storage) for about $16,000. Depending on the size of the VMs you need to run, you can

Virtual Iron uses Xen but does not do paravirtualization. It supports 8 CPUs per guest and is a "bare metal" hypervisor (it does not require a host OS).

It also comes with the ability to move virtual machines from host to host (based on pools of resources), triggered by some threshold being met (say, 75% CPU utilization on a Dell PowerEdge server moves it to an HP ProLiant which is only 45% utilized), without the virtual guest OS ever skipping a beat. VMware does this with VMotion, but that is an add-on pa

The benefits are enterprise-level features that exist right now, which Xen does not have in the F/OSS version. The "Commercial" version will have those features very soon, if not already. Many large companies WILL NOT allow modified kernels or modified guests in general to be released into production. The lack of support for older CPUs that do not have built-in virtualization is the reason I cannot use Virtual Iron just yet for my company, but I am pushing them toward different machines for the future. I

The major things that makes ESX attractive as far as I know all have to do with Enterprise usage - i.e. bare metal hypervisor (self contained "Host" OS) and ability to live transfer a VM from one server to another via shared storage without shutting down.

Right - there is an intersection here of two completely different user groups - I belong to both myself. I use VMware Server on my personal/work machine, so that I can have XP available for a couple of Windows-only apps that I must use. In the datacenter we use a mix of AIX virtualization on p5s and ESX. Virtualization in the datacenter is bringing our organization a number of benefits that VMware Server can't provide. My understanding is that one feature is moving virtual machines from one host to another on th

ESX is a bitch to get working. We got a demo, set up a machine with hardware entirely on VMWare's 'approved' list - and it wouldn't install because it was missing drivers. VMWare couldn't help because we hadn't parted with any money yet... we went with RHEL w/ the free VMWare server which 'just works' (xen doesn't have the I/O performance to be deployed seriously yet).

Transferring a live server could be useful at times, I guess, although like you said, probably not necessary for the home user. Also, I was aware of the "bare metal hypervisor", which I guess has quite a few advantages, since the host OS alone can eat up a lot of the machine's resources. How bare a Linux machine can you run VMware Server on, anyway? You can run Linux itself with very little resources, so it should be possible to create a distro that's cut down to just the stuff necessary to run VMware.

If you want to run multiple linux instances on the cheap then xen is the way to go at the moment.

Except that OpenVZ [openvz.org] is a better way to go in that case. If you are only going to run multiple instances of Linux, then with OpenVZ you don't need to preallocate a fixed amount of memory for each VM, and the root filesystem can be a subdirectory of the host OS instance's filesystem, among many other things. It can do just about everything that Xen can do, including live migration to other physical nodes.
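To give a sense of what "a breeze" means in practice, the whole lifecycle of an OpenVZ container is a handful of vzctl commands. The container ID, template name, and addresses below are examples:

```shell
# Create container 101 from an OS template, give it an IP, and boot it
vzctl create 101 --ostemplate centos-5 --config vps.basic
vzctl set 101 --ipadd 10.0.0.101 --hostname web1 --save
vzctl start 101

# No fixed memory preallocation: resource limits are tunable on the fly
vzctl set 101 --privvmpages 262144:262144 --save

# Run commands inside, or stop it when done
vzctl exec 101 df -h
vzctl stop 101
```

The container's filesystem lives under the host (typically /vz/private/101), so backing up or inspecting a "VM" is just ordinary file management on the host.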

In fairness, it should be mentioned that aside from OpenVZ there is also Linux-VServer [linux-vserver.org], which does a few things better than OpenVZ (though OpenVZ does some things it does not). Our preference has always been VServer; it's a well-run project with an emphasis on quality and well-thought-through design rather than quantity.

True. This is a major problem IMO. Once upon a time I had hoped that I could run VMs *and* have full 3D support, both in a Windows VM and a Linux VM. It turns out that 3D acceleration is not an option right now, but Xen was at that time working on something that could (given the right hardware, which at the time was only high-end IBM mobos) isolate PCI cards completely.

That way, you could have two graphics boards in your system, and when Xen starts up it could assign one graphics board to, for example, a Windo

IIRC, the latest (i.e., so-called "Direct3D 10 compatible") graphics cards have MMUs, which would (theoretically) allow multiple OSes to share the card in the same way that they currently share the CPU.

There was some interesting work presented at the XenSummit which should be making it into the main tree eventually on 3D support in Windows. The idea was that the memory layout was adjusted slightly so that Xen and dom0 lived nearer the top, and Windows in an HVM domain lived at the bottom. This allowed Windows to use existing 3D drivers, without having to do any address translation when performing DMAs (not an issue if you have an IOMMU, but hardly anyone does yet).

The problem with giving access to hardware to guests at the moment is that without an IOMMU, any DMA request the driver issues will read or write memory from a physical address indicated by the driver. In a virtual machine, what the driver thinks is a physical address is actually a virtual address. This means a DMA request will read from or write to an arbitrary memory location. By putting the HVM guest at the start of memory, this translation is the identity function, so the driver will work. The only downside is that you lose protection from other domains; a malicious driver can still damage your other VMs or even the hypervisor.

I just think at this stage in the game, as the article implies, the primary focus is the datacenter. And 3d just doesn't matter there in the vast majority of the cases. Certainly not enough to make 3d a top priority.

The problem with hardware acceleration in VMs is fairly straightforward. The driver sends the card information such as "use the bitmap located at position x in memory". But the way memory mapping works, the VM might be given a chunk of memory (say, positions 100 to 200) and see it as 000 to 100, so for the VM, x = 010. When the card tries to access that address directly, it hits memory that might be assigned to a different VM, and thus reads garbage. Unfortunately, fixing this generally requires the cooperation of the drivers.
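The mismatch described above (and the identity-mapping trick from the XenSummit work mentioned earlier) can be sketched in a few lines of purely illustrative Python; the addresses are toy numbers, not real page frames:

```python
# Toy model of guest-"physical" vs. machine addresses (illustrative only).
# A guest whose RAM actually starts at machine offset 100 sees it as 0.
GUEST_BASE = 100  # where the hypervisor really placed this guest's memory

def guest_to_machine(guest_addr):
    """What a DMA address *should* be translated to."""
    return GUEST_BASE + guest_addr

def dma_without_iommu(addr_from_driver):
    """Without an IOMMU, the device uses the driver's address verbatim."""
    return addr_from_driver

x = 10  # driver tells the card: "the bitmap is at physical address 10"
print(dma_without_iommu(x))   # device touches machine address 10...
print(guest_to_machine(x))    # ...but the guest's page is really at 110

# The workaround: place the HVM guest at machine offset 0, so the
# guest-physical -> machine mapping is the identity function and
# untranslated DMA addresses happen to be correct.
GUEST_BASE = 0
print(dma_without_iommu(x) == guest_to_machine(x))  # True
```

The sketch also makes the stated downside obvious: nothing in `dma_without_iommu` checks whose memory the address belongs to, which is exactly why a malicious driver can still scribble over other domains.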