
GPLHost-Thomas writes "The very last components that were needed to run Xen as a dom0 have finally reached kernel.org. The Xen block backend was one major feature missing from 2.6.39 dom0 support, and it's now included. Posts on the Xen blog, at Oracle and at Citrix celebrate this achievement."

Actually, you have been able to run newer kernels on EC2 for a long time! Xen domU (guest VM) support has been in the upstream Linux kernel since version 2.6.24.
The upcoming Linux kernel 3.0 adds Xen dom0 support, which is the *host* support, i.e. Linux kernel 3.0 can run on the Xen hypervisor (xen.gz) as the "management console", providing the various backends (virtual networks, virtual disks) that allow you to launch Xen VMs.

... is 16 cores and 32 GB of RAM, and I can recompile the kernel on Linux, encode an H.264 video on OS X, serve files via Apache HTTPD from OpenBSD, and watch streaming porn videos on Windows, all simultaneously on the same machine!

Yeah, the new servers I just received to upgrade our VMware cluster have 128G with only half the slots filled. Still 16 cores (2 x 8) per host; our limiting factor is having enough free RAM available for failover. The Linux guests share RAM nicely, but the Windows guests are pigs.

If you're gonna stream porn on the Windows guest, instead of something useful like original Star Craft/Brood War, keep your clean guest image for reloads. You're better off streaming the porn on a Linux guest, since the embedded malware is much less likely to run.

Yes, both Intel and AMD sell CPUs that let you put 16+ cores in one machine, BUT afaict in both cases the individual cores are substantially slower than what you can get in a 12-core (2x6) Xeon 56xx machine. The prices are also pretty crazy, afaict.

The 12-core X56xx solutions aren't touching the 48-core solutions from AMD yet in parallel workloads. The Opteron 6168 route is cheaper with more performance, and the Opteron 6174 route is more expensive but significantly faster overall than a pair of X5690s priced at $3300+.

I am simply amazed that Intel has not taken its older designs for larger process sizes and simply packed on more cores during a process reduction in order to bre

The 12-core X56xx solutions aren't touching the 48-core solutions from AMD yet in parallel workloads

Yeah, if you push the core count insanely high you can get to the point where (for some workloads) the number of cores makes up for the low performance of the individual cores, but afaict there is no 16-core system on the market that is faster overall than a 12-core 56xx series system.

48 cores (4 sockets with AMD 12-core) and 64GB here, and it was only around $10K. There are a lot of much bigger machines around - that's effectively just an overgrown gaming machine these days, which is why economies of scale brought the price down to something sane instead of Sun or IBM prices. I've seen people spend as much on two laptops. It's not for virtual machines. The stuff it runs works properly in parallel, but runs faster on one machine with shared memory than it can on a cluster.

GP mentioned Windows. The Windows Server license that runs on 16 cores is really, really "out there" for home users. So we can assume that he is talking about a home OS, and for a home PC 16 cores really is "out there".

Well... I was curious. The major cost in a multi-CPU setup is generally the motherboard. Enthusiast boards are typically in the $150-$250 range; dual-CPU boards are generally in the $400-$550 range (Tyan Thunder n3600T). The 2.8GHz 6-core Opterons are around $310 each, with slightly

Xen dom0 support has been in released versions of NetBSD and Solaris for something like 4 years, while the VMware lobby on the LKML was requiring the entire paravirtualisation subsystem to be rewritten before they'd accept patches, and Red Hat decided to push KVM as a Xen replacement, in spite of the two having very different capabilities.

hmm, since the new kernel dev model (2.5.x basically) I've been running vanilla kernels, or at least distros that do not require huge custom patch sets... ahh, the good old days of Red Hat 7.2 and SuSE 6....

Xen support got into NetBSD and Solaris more easily, I think, because influential individuals pushed it in there whereas the Linux community had lots of quibbles over the patches and how they should be done correctly. The debate with VMware was a bit confusing and didn't help things get done quickly. RH and IBM and SuSE and others were behind Xen originally but that has gone a bit quieter subsequently.

Part of all of this, though, is due to the Xen team having different priorities to most of those other or

Just had to reply to this. Sun forked Xen 3.1 something like 4 years ago, yes. That same fork, Xen 3.1, is what is still being used today in Solaris, and Sun had previously (pre-buyout) said they would not merge any newer version of Xen.

So while Solaris can claim Xen dom0 support, it is nowhere near the capabilities of the current Xen 4.0, and with no plans to update you're stuck on 3.1, with support now coming only from Oracle. Yeah, awesome.

'VMWare lobby'? WTF? The real problems were things like this [lkml.org] and this [lkml.org]:

The fact is (and this is a _fact_): Xen is a total mess from a development standpoint. I talked about this in private with Jeremy. Xen pollutes the architecture code in ways that NO OTHER subsystem does. And I have never EVER seen the Xen developers really acknowledge that and try to fix it.

Thomas pointed to patches that add _explicitly_ Xen-related special cases that aren't even trying to make sense. See the local apic thing.

So quite frankly, I wish some of the Xen people looked themselves in the mirror, and then asked themselves "would _I_ merge something ugly like that, if it was filling my subsystem with totally unrelated hacks for some other crap"?

Seriously.

If it was just the local APIC, fine. But it may be just the local APIC code this time around, next time it will be something else. It's been TLB, it's been entry_*.S, it's been all over. Some of them are performance issues.

I dunno. I just do know that I pointed out the statistics for how mindlessly incestuous the Xen patches have historically been to Jeremy. He admitted it. I've not seen _anybody_ say that things will improve.

Xen has been painful. If you give maintainers pain, don't expect them to love you or respect you.

So I would really suggest that Xen people should look at _why_ they are giving maintainers so much pain.

Linus

BTW, I have absolutely no doubt that NetBSD and Solaris merged Xen faster than anyone else.

This kind of post led to a full rewrite of the local APIC code in Xen, and of many other subsystems. What you see happening today is the result of the work prompted by the criticism above (and not only this criticism; there were others).

Unfortunately, when this e-mail was sent, Jeremy had been just about the only developer working on upstreaming the dom0 work for quite a while; and Jeremy was, unfortunately, still learning how to interact effectively with the kernel community. This can be largely blamed on a tactical error made by the people in charge of XenSource before Citrix acquired them. They were hoping to force Red Hat to work on upstreaming dom0, so they kept the Xen fork of Linux (linux-xen) at 2.6.18, and only hired one developer to w

I know, I just thought it was nice that there's now a milestone pegged to the 3.0 release as opposed to "just the normal fixes and new drivers" kinda thing. I understand that it's a complete coincidence.

"Xen inside Xen" is in fact called "nested virtualization", and it's been a long time that Xen is capable of doing that. Even better, now it's possible to run HVM inside HVM, since few patches have reached the xen-devel list. The drawback? Well, there isn't any, because in fact, the nested part is only an illusion (or, let's say, an administrative view), as Xen "sees" the VMs as all being equal.

But in fact, no, it's not about nested virtualization. It's about Linux from kernel.org not having to be patched

My understanding of Xen was that it was a hypervisor, had a dom0 guest VM for administering the hypervisor

dom0 does run under Xen and does the administrative tasks. But dom0 has another purpose: it has drivers for all of the hardware on the system. It doesn't make sense for Xen to try to have drivers for every bit of hardware that's out there -- Linux already does that very well, so there's no point in duplicating effort, especially since device drivers have *nothing* to do with virtualization. So the Xe

What is Xen? Xen is a virtualization project whose technology is run by four of the top five major cloud providers (including Amazon, Rackspace, &c); a commercial version from Citrix is run by thousands of sites worldwide, including large companies like Tesco, SAP, &c. It's also the approved way of running Oracle databases in a virtual machine.

What does that have to do with Linux? The Xen project is focused on virtualization. But Xen still needs to run on systems with all manner of devices. There are several ways they could have handled this. One is to try to put drivers for all of the devices in Xen. This would require a huge amount of work, mostly copying new device drivers and device fixes from Linux and porting them over to Xen. It would be a colossal waste of time: they would be duplicating effort of what Linux already does well, instead of doing what they want to do -- work on virtualization.

So what they do instead is run Xen as the hypervisor, but leverage the device drivers in Linux. They do this by creating a special VM, called "domain 0" or "dom0", which is booted first after Xen boots, that has drivers to control all of the devices. This domain is a version of Linux that is designed to be able to work with Xen to control and drive devices, while allowing Xen to control memory, CPU, and interrupts (the key hardware required to do virtualization).
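To make the split concrete, here is a sketch of how that looks from dom0's side: each guest is described by a small config file naming the virtual disk and virtual NIC backends that dom0 provides. Every name, path, and number below is an illustrative placeholder I'm adding, not anything from the article:

```
# /etc/xen/guest1.cfg -- minimal paravirtualized guest (illustrative values)
name    = "guest1"
memory  = 1024
vcpus   = 2
kernel  = "/boot/vmlinuz-3.0-xen"          # PV guest kernel
ramdisk = "/boot/initrd-3.0-xen.img"
disk    = ['phy:/dev/vg0/guest1,xvda,w']   # virtual disk served by dom0's block backend
vif     = ['bridge=xenbr0']                # virtual NIC on a bridge in dom0
root    = "/dev/xvda ro"
```

dom0 would then start the guest with the toolstack of that era, e.g. `xm create /etc/xen/guest1.cfg`.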

Xen has been out for years. Why is this just being announced? The Xen project started out of a University research project. As is typical, they were trying to answer the question "what is possible?", and as a result, felt free to completely rip out and rewrite large sections of Linux code. This code was not upstream-able -- changes were made that were (rightly) not acceptable to the Kernel community.

Since that time, the Xen community has maintained branches of Linux with these intrusive, non-upstreamable patches, and used these branches as domain 0. At the same time, they have worked to try to get support for Linux-as-domain-0 into the mainline tree. This has been a long process, and something that has been a sore point for users of Xen for some time.

But as of Linux 3.0, all of the functionality required to use the mainline kernel tree as a basic dom0 with Xen is in. This means that if you install Xen, you'll be able to use the same kernel you booted with natively as the dom0 for Xen. It means that distributions won't have to maintain two separate kernels, one for booting bare metal, and one for booting on Xen. And it means not having to maintain the xen-linux fork, which has been a lot of painful work for the Xen community.
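"The same kernel you booted with natively" shows up concretely in the boot loader: the hypervisor (xen.gz) is loaded first, and the distribution kernel is handed to it as a module. A sketch of a GRUB legacy entry, where the version numbers, paths, and memory size are assumptions rather than anything from the post:

```
title Xen / Linux 3.0 (dom0)
    root (hd0,0)
    kernel /boot/xen.gz dom0_mem=1024M
    module /boot/vmlinuz-3.0 root=/dev/sda1 ro console=hvc0
    module /boot/initrd.img-3.0
```

Booting the same vmlinuz without the `kernel /boot/xen.gz` line runs it on bare metal; that is exactly the dual use the post describes.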

But doesn't that just make Xen the OS, with Linux becoming an application? I mean, it is the OS's job to manage memory and devices, and to allocate CPU time.

No, Xen is a hypervisor. A process expects a *lot* more from an operating system than an OS expects from a hypervisor. VMs expect raw hardware and know they have to manage most things (like setting up memory, doing filesystems, and so on) themselves. Processes expect an operating system to set up memory mapping for them, and to give them filesystems (not just raw disks), IP addresses and sockets and TCP (not just raw packets), and so on.

In the KVM case, Linux is an operating system to normal processes, but a hypervisor to VMs. Linux gives memory and time to the guest OS, and the guest OS gives memory and time (along with filesystems, TCP, &c) to guest processes. So in that way Xen and KVM (i.e., Linux-as-hypervisor) are the same.

The main difference is that Xen is only a hypervisor, whereas with KVM, Linux tries to be both a hypervisor and an operating system. That has a number of practical implications. Xen has been widely deployed and tested as an enterprise-class hypervisor. I'm not aware of any large-scale enterprise deployments of KVM, so it remains to be seen whether Linux can successfully be both an enterprise-class hypervisor and an operating system at the same time.

AFAIK, on a desktop with two discrete graphics cards, you should be able to run Windows and Linux as guests at the same time, each using one card. I'm not sure about disk access, you might want to add a discrete PCI-E SATA controller for one of the systems to avoid any screwups caused by Windows doing something nasty, but other than that, this seems to be perfectly viable. A recent Sandy Bridge-based Core i7, with 8GB of memory on a good P67-based motherboard should run such a software stack with native per

My question is: how soon could someone use Xen with dom0 autobooting into something like a Windows XP installation (running on the console), while still being able to manage VMs in the hypervisor? I would like to leverage a hypervisor for managing VMs on a system with a lot of memory, and still use it as a workstation with native, or close-to-native, graphics acceleration - not logging in through remote desktop/VNC.

The documentation for doing this is confusing and there appears to be a limitati

The xen.org project has mainly been focusing on server-style virtualization, without desktop graphics (although graphics pass-through is obviously a priority for the Intel engineers).

What you describe really needs not just a single piece of software, but the full configuration and integration with a distribution. If you're not opposed to using software that is partially closed-source but free-as-in-beer, you could try XenClient [citrix.com]. It's designed to run on laptops, and specifically tweaked to pass the GPU th

It does now, but Slashdot seems really, really mod-point starved as of late. Some discussions look like there's almost no one to mod, and when people do get mod points it's 5 now compared to 15 before.

I had 15 points just this Wednesday. But it does seem that there is less moderation lately; 100 comments with all at 1 or less. Maybe all of the mods but me are downmodding? (Of the 15 I had, all but two were upmods.)

I noticed recently that some of my posts have been modded down immediately as "overrated". I'm not talking about posts which are potentially controversial, either. I don't know if it's widespread, but I get the feeling there are a number of kids with time on their hands downmodding anything they feel like. Time to spend more effort metamoderating, I suppose. But I'm busy, unlike 20-somethings living in mom's basement, so... they win the interwebs?

The whole meta-mod system has been basically non-functional for so many years that I stopped bothering. It used to be worth doing, and you could counteract bad moderation. Then they changed it up, changed how it was presented, etc.

I usually mod controversial comments "interesting" unless they're written in an inflammatory tone or just plain ignorant. Posts that really don't say anything are what I downmod. If you're starting at 1, it's not likely to hurt your karma; I've been modbombed before, with someone using all their mod points on me, and the bombs never had any effect, so one downmod surely won't. Hell, sometimes I'll ask to be downmodded if I stray off topic, since the "no bonus" checkboxes don't seem to work.

Well, the issue is that I couldn't see the submit buttons at the bottom when doing my submission. They were displayed too far down the screen, and I could see only the top few pixels of them. I wanted to click on "Continue editing", but unluckily for me, it was posted without giving me a chance to rectify it. So I don't think it's really my fault here... Maybe someone at /. wants to test the submission display so that it's better on Firef ^W Iceweasel 4.0.1 (my own backport running on Squeeze)...

Xen has features that KVM doesn't have (by design), for example Xen "stubdomains" and "driver domains", full memory address space separation between domains, etc. And of course it's good to have multiple open source virtualization platforms; competition is a good thing!

thanks, but it still sounds to me like the difference between, say, Linux and BSD and SysV... yeah, different... but oh so similar (basically they're all kernel + userland). So it sounds like Xen is a little more sophisticated. But besides features, at their core, what really is all that different between KVM and bare iron hypervisors?

Actually, the design is pretty different. Take a look at these slides: http://www.slideshare.net/xen_com_mgr/why-xen-slides [slideshare.net]. That should explain the differences. Xen is also multi-OS, i.e. you can also use BSD/Solaris in addition to Linux as a Xen host, while KVM is Linux-only as a host.

Thanks again. From your link: "KVM has a very different model - Linux kernel as hypervisor".
Aha! KVM is a hypervisor too? Xen has no kernel? Again... besides the features... the function appears the same to me. Take KVM, remove the drivers, make it tiny, minimalistic... and besides features, the model appears the same to me. Xen is more advanced, has more features... but basically, they're both bare iron hypervisors, right?

Xen is a secure baremetal hypervisor (xen.gz), around 2 MB in size, and it's the first thing that boots on your computer from GRUB. After Xen hypervisor has started it boots the "management console" VM, called "Xen dom0", which is most often Linux, but it could also be BSD or Solaris. Upstream Linux kernel v3.0 can run as Xen dom0 without additional patches. Xen dom0 has some special privileges, like direct access to hardware, so you can run device drivers in dom0 (=use native Linux kernel device drivers for disk/net etc), and dom0 then provides virtual networks and virtual disks for other VMs through Xen hypervisor. Xen also has the concept of "driver domains", where you can dedicate a piece of hardware to some VM (with Xen PCI passthru), and run the driver for the hardware in the VM, instead of dom0, adding further separation and security to the system. Xen "Driver domain" VMs can provide virtual network and virtual disk backends for other VMs.
KVM on the other hand is a loadable module for Linux kernel, which turns Linux kernel into a hypervisor. The difference is that in KVM all the processes (sshd, apache, etc) running on the host Linux and the VMs share the same memory address space. So KVM has less separation between the host and the VMs, by design. VMs in KVM are processes on the host Linux, not "true" separated VMs.

Not sure which Xen book you read, but the grandparent makes a lot of errors and I'd be surprised if a book was that inaccurate. Mine [amazon.co.uk] is slightly out of date, but at least was accurate at the time of printing (technical review was done by the original Xen developer).

Let's start at the end. KVM VMs and userspace Linux applications do not share the same address space. This isn't even true if you remove KVM - userspace processes have isolated address spaces. KVM requires the CPU have virtualisation extensions, which means (among other things) nested page tables. This means that there is hardware-enforced separation between the pages. The guest OS sees page tables that map from virtual to pseudophysical address space, but thinks that they map from virtual to physical. The host (Linux) sets the mapping from these pseudophysical pages to real memory pages and the CPU enforces this mapping. Xen uses exactly the same mechanism in HVM mode (it uses some other tricks in paravirtual mode).

The driver domains are correct, but it's worth noting that Xen will use VT-d or equivalent to protect against malicious use. Linux can't give a userspace program direct access to the disk controller, because if it did then a rogue DMA command could compromise the kernel. Xen will use the IOMMU to ensure that each peripheral may only issue DMAs to memory owned by the driver domain. The Solaris VM that you have accessing your block device and exporting virtual disks from ZVOLs, for example, can trample its own address space with rogue DMAs, but it can't touch any memory in other VMs.

This means that Xen (in theory) has a smaller attack profile than KVM. Xen is basically a microkernel, and it enforces low privilege on the services (OS instances) that provide drivers and the management console. With KVM, the entire kernel runs in privileged mode. It's fairly common these days for the management console domain to have either no network access, or highly-restricted access, and be separated from the driver domains. If there is a flaw in the network stack in Linux and an attacker compromises it, then with KVM they now have access to all of your VMs. With Xen, they control that driver domain, and they can inject packets into the other VMs, but they are no more able to compromise them than they would be if they controlled the router one hop away.

KVM recently gained support for live migration (this has been stable in Xen for a long time - they were doing demos of live-migrating a Quake 2 server with clients connected back in the early 2000s), but it doesn't have any of the high-availability stuff that Xen 4 includes. This allows you to do things like run two instances of the same VM on different machines and transparently fail over when one dies.

I'm not following. Are you suggesting throwing out all NAT firewalls and connecting everything to the net to reduce the "attack surface" area? I don't know how that will work out for you, but I'm certain you will quickly "understand the threat" on your network. Sure, hypothetical bugs might exist that allow this, but hypothetical bugs in quantum computing might allow it to become sentient and take over the stock exchange, plunging us into the dark ages as our entire financial system crumbles. I'll take m

In either case, NAT offers *some* protection but may not be viable in some IPv6 and other situations. My recommendation would be to use an appliance to make stateful examinations of conversations in the firewall sense, use /etc/hosts instead of DNS, verify the MD5 checksums of key vulnerable drivers, and use other methods to vet the basic VMs that are cloned for production activities. Among other steps.

In other words, from a security profile, KVM and Xen and other methods like LXC each have their own implicatio

This Why Xen [xen.org] PDF might explain it well. Under KVM, guests run inside the host operating system; in Xen, the hypervisor starts a special Linux kernel (the dom0) that only takes care of drivers for the guests. The design is really different, and has different features. For example, in Xen, you can have your dom0 run on 2 cores, leaving the rest for the guests (I'm not sure that is possible in KVM), and if you want to avoid any possible CPU starvation, you can even have the guests not use the cores that the dom0 is using. The CPU scheduler is also very different (and there's not only one available...).

Just what the hell is the difference between a bare iron hypervisor and KVM?

As far as Linux is concerned, a KVM virtual machine is just another process. So your infrastructure-critical server VMs are treated exactly the same as the random daemons that get started up as a matter of course but never used. Worse yet, the same scheduling algorithms are used -- even though the VMs have to handle interrupts, while processes don't.

In Xen, there's a scheduler dedicated to scheduling VMs, and the algorithm is

There doesn't have to be a battle -- there's room in the OSS world for two technologies. Xen and KVM are different technologies. For most desktop users, KVM is probably the best option; but on big servers, Linux running KVM has to mix scheduling between VMs and processes. Since Xen runs VMs exclusively, it can focus only on algorithms that work well for VMs.

A lot of the Xen developers use KVM. You can run Xen and PV kernels inside KVM, which (apparently) is great for debugging. They're very different tools though. The problem is companies like Red Hat that spread a lot of FUD about Xen and tell everyone to use KVM instead, which makes about as much sense as telling them to use bash instead of vim.

Products such as this aren't going to be used by mainstream mom & pop users; Xen will likely never be available as a boxed set at your local computer store or game shop. The people using this will likely always come from an IT-related background.

And as for Windows:
- If you run Xen with Windows, the same terminology applies (except it would run as dom1+, since Windows doesn't support dom0 to my knowledge)
- If you open up a MCSE

I thought I had an IT background. I do run a virtualisation product on my desktop for development purposes. I did this even long before it was useful (just for the cool factor of running 2 OSes at the same time).

But after 2 minutes of reading it is still not clear what dom0 is, or what the consequences are. In fact, the "domain" is not explained.

You might say that I am not expert enough, but the whole problem is that Xen might not be simple enough, failing the KISS principle.

It's partly historical and partly because Xen is structured differently to lots of other virtualisation systems.

"Domain" is to "virtual machine" as "process" is to "program". i.e. it's a running instance of a virtual machine. If you kill a VM and restart it, it's the same VM but a different domain. In practice VM and domain are blurred a bit when people talk, though.

Domain 0 is a bit like the host OS, but for technical reasons it's not exactly that.

The question I have is: can I run Xen with my Linux dom0 and have Windows in dom1 with full GPU support, and easily swap between the two, so I can run my basic Linux desktop on one hand and have Windows load up and run a game on the other? So far no VM solution has real capability to use full video acceleration on "guest" operating systems.

You could have full GPU support in Windows using the PCI passthrough system (if your hardware is VT-d capable). But to my knowledge, swapping between a Linux desktop using the GPU and Windows using the GPU as well isn't possible. However, you can run both Windows and Linux full-screen if you use the SDL driver.

For all this, it might be easier to use VirtualBox, though. VirtualBox is more adapted to the desktop environment, and when you have a DirectX / OpenGL call in Windows, it is translated int

virtualisation is complicated; maybe the article should have just said "Linux now has built-in stuff to make it so you can run more than one OS!". actually that's probably too complicated for most; how about "Another type of computer you don't use has built-in support for running more computers inside it! it's like OSX and windows, only it's another one!".

IME (and I freely accept I may be utterly wrong...), all that means is the building blocks are in place to do it.

The F/OSS software for managing virtualisation is still pretty dire - if I'm being honest, it feels like someone read a VMware feature list and decided to copy it without first ensuring they understood what all the features actually were. So they bang on about having "feature equivalence", yet close investigation suggests that it's not as simple as that.

yeah, everything but vmware is hard to set up, if you're going to sell VMs getting xen running is probably worthwhile tho. good news is vmware has an open source offering you can install straight out of your package manager (probably). i've actually spent a couple of weekends messing with virtualisation, that last post was me just being sarcastic at the troll.

I'm not sure if you are trolling on purpose, or if you don't understand what this news is all about. But I'll bite.

You see, Linux runs on almost any kind of hardware: from embedded systems in toasters, to phones, desktop computers, laptops, and big servers. Even most supercomputers to date run Linux. There are a _lot_ of different users who use Linux in many different ways.

Xen is a technology that virtualizes machines, mainly intended for the data center and cloud computing environments.

This is NOT intended for users in any way. Your mom does NOT have to know that Xen even exists, just like windows users don't need to know what IIS or Apache is in order to browse the web.

Would you also say that Windows and OSX are "way too complicated for people" because you read Slashdot news about some geeky kernel details of Windows/OSX? Surely "no user should need to know, or care about, this sort of thing."

They don't. And neither do you need to care about Xen. I'm not sure why someone like you is reading and posting on /., because this is usually "news for nerds", as the site indicates. :)

As many slashdotters would say about your reasoning behind your post: "You are doing it wrong.";)

Remember, the Xen hypervisor is open source (GPL), just like the Linux kernel, so all the Oracle and Citrix code in the hypervisor and in the kernel is open source.
Citrix uses XenServer as a platform to run their other products, and obviously Xen is the best platform to run those Citrix "Windows products".
Novell ships Xen in SUSE Linux Enterprise (SLES) 10 and 11. Debian ships Xen in their current version. I heard Ubuntu is going to add Xen back now that the kernel components are included in upstream Linux. Fedora

So what exactly makes this so special? It's a step for one of the many virtualization solutions in the market these days.

I for one wouldn't trust Oracle with any part of my infrastructure if I can help it. To me, Citrix is still a company that makes an expensive X client for Microsoft products plus a niche product they bought, Xen, with no apparent synergy with their Windows products - and who else really cares?

The part that you are missing is that at the heart of Citrix desktop virtualization (they call it VDI), there's Xen running. That's the reason why they bought Xen, and why they are pushing its development. So yes, there is a synergy, and it's also for their Windows stuff...

There's been a developing market for desktop virtualization (VDI) -- meaning not "running a VM inside my desktop", but corporations running "desktops" as VMs inside of servers and exporting them to thin clients on people's desks.

Citrix has a ton of capabilities in this area. They have decades of experience with handling remote display technologies, dealing with users, dealing with disk images, and so on. So they were in a perfect position to capitalize on this new