Microsoft showed its new 'SoftGrid' technology today, which allows apps to run as if they were inside a second copy of Windows. But unlike traditional virtualisation apps, there's no second Windows desktop getting in the way. For example, Microsoft showed Office 2003 and Office 2007 running side by side, even though they can't normally be installed on the same copy of Windows. Initially, the technology is for corporate users only, but it has huge, obvious benefits for home users as well. First steps towards this?

Cool: http://www.kdedevelopers.org/node/2920
"klik2 brings the era of "application virtualization" to the Linux platform. (In case you do not yet know: this is a topic that is going to be hyped very soon on the proprietary MS Windows platform -- and it indeed does solve quite a few problems which are prominent and widespread there)."

I deploy and work with SoftGrid a lot. It's a great product, but it has one very big disadvantage.

Let's say you virtualize Office 2003: if you want to use Adobe PDF within Office 2003, you have to virtualize it together with Office, in the same package.

SoftGrid is only used for exotic applications that have no connection to other applications; if an application does interact with others, virtualizing it becomes pretty useless. That's why it's never recommended to virtualize applications like Office, Acrobat Reader, etc.

In other words, this is Microsoft's somewhat belated OS virtualization solution, like Zones, Jails, and OpenVZ. The author of the article seems to be drawn mostly to the fact that containerized clients share the host window manager. But this is how all existing OS virtualization solutions work.

There is no "Windows within a Windows" going on here. A single Windows instance powers all virtualized clients. Therefore, this can't be used to bridge XP/Vista compatibility issues, for example. The advantage of this setup is isolation and manageability.

The marketing angle is similar to the direction of the rPath project, which was recently featured here:

It drew only 4 comments, one of them being my rant on the improper use of hardware virtualization in place of OS virtualization. So this would be the rPath vision of software delivery implemented in a more technically sensible manner.

Well, sort of... While free software implementations of virtual image creation leverage the existing package management systems, Microsoft's approach appears to use a kind of keylogger for system configuration. It tracks and memorizes the steps taken to set up the software on a fresh Windows install so that it can replay them later to create virtual images.
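The track-and-replay idea can be sketched in a few lines. This is a toy model under my own assumptions, not SoftGrid's actual sequencer (which also tracks registry and COM state); it captures only the filesystem footprint of an "install" by diffing snapshots, then replays that change set onto a fresh root.

```python
# Toy "sequencing": snapshot a directory tree before and after an install,
# diff the snapshots to capture the change set, replay it later elsewhere.
# All names here are illustrative; real sequencers track far more state.
import os
import shutil
import tempfile

def snapshot(root):
    """Map each file's path (relative to root) to its contents."""
    state = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                state[os.path.relpath(path, root)] = f.read()
    return state

def diff(before, after):
    """Change set: files added or modified between two snapshots."""
    return {p: data for p, data in after.items() if before.get(p) != data}

def replay(changes, target_root):
    """Apply a captured change set onto a fresh root."""
    for rel, data in changes.items():
        dest = os.path.join(target_root, rel)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        with open(dest, "wb") as f:
            f.write(data)

# Simulate an install on a clean "machine", capture it, replay it.
machine = tempfile.mkdtemp()
before = snapshot(machine)
appdir = os.path.join(machine, "Program Files", "DemoApp")
os.makedirs(appdir)
with open(os.path.join(appdir, "app.exe"), "wb") as f:
    f.write(b"fake binary")
changes = diff(before, snapshot(machine))

fresh = tempfile.mkdtemp()
replay(changes, fresh)
print(sorted(changes))  # the captured install footprint
shutil.rmtree(machine)
shutil.rmtree(fresh)
```

The point of the sketch is that the capture is mechanical: no package metadata is consulted, which is exactly what makes the approach feel like a keylogger compared to package-manager-driven image building.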

Since streaming is cooler than downloading, that's how Microsoft intends to deliver the virtual images to clients on a corporate network. It's not clear how well streaming works for a system image as opposed to sequentially-accessed media. It would seem to make more sense to network-mount (i.e. SMB2) the images and use caching.
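The mount-plus-caching argument is easy to make concrete. The sketch below is my own illustration of the design space, not anything SoftGrid actually does: a read-through block cache in front of a slow "remote" image pays one round-trip per block, after which the random, repeated access pattern typical of an installed image is served locally.

```python
# A read-through block cache over a remote image. Random re-reads (how an
# image is really accessed) hit the cache; only first touches cost a fetch.
class BlockCache:
    def __init__(self, fetch_block):
        self.fetch_block = fetch_block  # callable: block index -> bytes
        self.cache = {}
        self.fetches = 0                # count of remote round-trips

    def read_block(self, index):
        if index not in self.cache:
            self.cache[index] = self.fetch_block(index)
            self.fetches += 1
        return self.cache[index]

image = [b"boot", b"code", b"data"]  # a tiny three-block "virtual image"
cache = BlockCache(lambda i: image[i])

# Six reads in a random-access pattern cost only three round-trips.
for i in [0, 1, 0, 2, 1, 0]:
    cache.read_block(i)
print(cache.fetches)  # prints 3
```

A pure stream would have to deliver blocks in order regardless of what the client actually touches, which is the mismatch the comment above is pointing at.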

All in all, though, I'm glad to see that OS virtualization is being properly employed to solve real RAS problems. Neither the technology nor the use-case is new, and the implementation details include some frightening and strange aspects, but the vertical integration will be best-of-breed. A brief hackathon could bring similar functionality to Solaris or Linux.

Outlook is the one application that cannot be installed from multiple versions of Office at the same time. The rest of the Office applications don't have this issue. When you install O2k7 alongside O2k3, you have to either keep the 2k3 version of Outlook or upgrade to Outlook 2k7.

In your "this" link you suggest starting the userland anew and using virtualization to provide backward compatibility.

That is what makes sense these days: systems are fast, and virtualization overhead isn't as much of a concern. A future version of Windows may pursue this -- just include a virtualized Vista in the background to maintain compatibility.

IMHO the future is with standalone ("portable") applications that require nothing but the large, standard OS files. In the Windows world there is a growing community of people "Thinstalling" applications for use. klik2 also looks to follow this path.

Meanwhile part of the Linux community is still trying to solve the problem of managing how an arbitrary number of interdependent application files interact with each other (package management).

To me the solution to the packaging and security problems seems obvious: package large standard application libraries (java, mono, ...) with the OS. Build individual applications with their own given versions of the small libraries. This restricts big library compatibility testing to the few big libraries, and small library testing is done by the developer who packages his app with the libraries that he knows work.
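In Python terms the "small libraries ship with the app" half of this looks like the sketch below. The layout and names are hypothetical; the mechanism is just that the app's own vendored directory wins over anything system-wide, so the developer's tested versions are the ones loaded, while big runtimes stay shared.

```python
# Hypothetical app-local ("vendored") library: the app prepends its own
# library directory to the import path, pinning the small-library versions
# its developer tested against. Big runtimes remain system-wide.
import os
import sys
import tempfile

appdir = tempfile.mkdtemp()
vendor = os.path.join(appdir, "vendor")
os.makedirs(vendor)

# The app bundles its own copy of a small library at a known version.
with open(os.path.join(vendor, "smalllib.py"), "w") as f:
    f.write("VERSION = '1.2.3'\n")

sys.path.insert(0, vendor)  # app-local libraries take precedence
import smalllib
print(smalllib.VERSION)     # prints 1.2.3 -- the bundled version
```

The same split shows up elsewhere as static linking or rpath-local shared objects; the trade-off is disk space and duplicated security fixes against the compatibility testing the comment describes.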

Multi-user environments are the only hitch. I'd have each app use its own individual config file unless a switch is flipped for it to enter multiuser mode, which would store the config data elsewhere.
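The config rule proposed above fits in one function. This is a sketch of the commenter's idea, with made-up paths: by default the config file travels with the app (the portable case), and flipping the multiuser switch relocates it to a per-user directory.

```python
# Per-app config resolution with a multiuser switch, as described above.
# Paths and names are illustrative.
import os

def config_path(app_dir, user_home, multiuser=False):
    if multiuser:
        # Multiuser mode: config lives in a per-user location.
        return os.path.join(user_home, ".config", "demoapp", "settings.ini")
    # Portable/single-user mode: config travels with the app itself.
    return os.path.join(app_dir, "settings.ini")

print(config_path("/opt/demoapp", "/home/alice"))
print(config_path("/opt/demoapp", "/home/alice", multiuser=True))
```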

Then you use one of these fancy virtualization techniques to show a copy of the OS to each application. Like with klik2 and the AppArmor profiles, each app is then monitored and eventually only allowed what you know it should be able to do. This can be done before the app gets to the user -- a governing body can make sure that each popular application has only the minimum necessary access.
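The "monitor, then restrict" workflow mirrors AppArmor's complain/enforce modes. Here is a toy model of that loop (my own simplification, not AppArmor's actual format or semantics): a learning run records every path the app touches, and once enforcement is switched on, anything not in the learned allowlist is denied.

```python
# Toy learn-then-enforce access monitor, modelled loosely on AppArmor's
# complain vs. enforce modes. Paths and class names are illustrative.
class Monitor:
    def __init__(self):
        self.allowed = set()
        self.enforcing = False

    def access(self, path):
        if not self.enforcing:
            self.allowed.add(path)   # learning: record and always permit
            return True
        return path in self.allowed  # enforcing: deny anything unobserved

mon = Monitor()
mon.access("/home/user/.demoapp/config")  # observed during the learning run
mon.access("/usr/share/fonts/font.ttf")
mon.enforcing = True

print(mon.access("/home/user/.demoapp/config"))  # prints True: learned
print(mon.access("/etc/shadow"))                 # prints False: never observed
```

The governing-body idea in the comment amounts to shipping the learned allowlist with the app, so users get enforcement from the first run.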

Ubuntu and Linspire have the program installation and removal GUI down. MacOS has the "single file application" thing down. Every OS has some version of application virtualization. Put it all together, folks.

"Build individual applications with their own given versions of the small libraries. This restricts big library compatibility testing to the few big libraries, and small library testing is done by the developer who packages his app with the libraries that he knows work."

That sounds vaguely like the approach taken by PC-BSD. It also seems to obviate the need for the libraries at all. It seems foolish to go to the effort of separating code into DLL files (or SO files) if only one instance is ever going to be loaded at a time. (Maybe that's what you meant by the large, standard OS files.)

"it has me confused but sounds like just an app running in its own enviroment seperated from the main os."

Judging from the article's content, it looks to me like a MICROS~1-style implementation of something similar to a UNIX jail. So it seems to be a bit more than chroot... but I'm not quite sure; I'll have to examine the source code. :-)
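For the chroot half of that comparison, the core trick is path confinement: every path the jailed app opens is resolved as if the jail root were "/". A real chroot needs root privileges (and real jails also restrict processes, users, and networking), so the sketch below emulates just the path confinement in userspace under those stated simplifications.

```python
# Userspace emulation of chroot-style path confinement: resolve a requested
# path inside a jail root and refuse anything that escapes via "..".
import os

def jailed_path(jail_root, requested):
    """Resolve `requested` as if jail_root were '/', refusing escapes."""
    # Treat the request as relative to the jail, then normalize away "..".
    combined = os.path.normpath(
        os.path.join(jail_root, requested.lstrip("/")))
    if not combined.startswith(os.path.normpath(jail_root) + os.sep):
        raise PermissionError(f"path escapes the jail: {requested}")
    return combined

print(jailed_path("/var/jail", "/etc/passwd"))     # /var/jail/etc/passwd
try:
    jailed_path("/var/jail", "../../etc/passwd")   # ".." escape -> refused
except PermissionError as e:
    print(e)
```

What SoftGrid-style application virtualization layers on top of this is redirection rather than denial: reads and writes outside the package are transparently rerouted into the virtual environment, which is why it feels like "a bit more than chroot".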

It's funny that MS needs special software just so you can run multiple versions of a program on the same PC at once. Have they ever heard of making versions non-reliant on each other in the first place? Perhaps letting you choose where to install a program and actually running it 100% from there, with no reliance on other rubbish?