Desktop virtualization harks back to the good old mainframe days of centralized computing while upholding the fine desktop tradition of user empowerment. Each user retains his or her own instance of the desktop operating system and applications, but that stack runs in a virtual machine on a server -- which users can access through a low-cost thin client similar to an old-fashioned terminal.

The argument in favor of desktop virtualization is powerful: What burns through more hands-on resources or incurs more risk than desktop computers? Even with remote desktop management, admins must invade cubicles and shoo away employees when it's time to upgrade or troubleshoot. And each desktop or laptop provides a fat target for hackers and an opportunity to steal data.

But if you run desktops as virtual machines on a server, you can manage and secure all those desktop user environments in one central location. Patches and other security measures, along with hardware or software upgrades, demand much less overhead. And the risk that users will make mischief or mistakes that breach security drops dramatically.

The argument against desktop virtualization is almost as strong. The overhead savings from central management are canceled out by the need for powerful servers, virtualization software licenses, and additional network bandwidth. Plus, the cost of client hardware and Microsoft software licenses stays roughly the same, while the user experience -- at least today -- seldom lives up to user expectations. And then the kicker: How are users supposed to compute when they're disconnected from the network?

Decisions about whether or in what form to adopt desktop virtualization become a whole lot easier when you understand the basic variants and technologies. Here's what you need to know:

1. Desktop virtualization really is virtualization

Just like server virtualization, desktop virtualization relies on a thin layer of software known as a hypervisor, which runs on bare-metal server hardware and provides a platform on which administrators deploy and manage virtual machines. With desktop virtualization, each user gets a virtual machine that contains a separate instance of the desktop operating system (almost always Windows) and whatever applications have been installed. To the desktop OS, the applications, and the user, the VM does a pretty good job of impersonating a real desktop machine.

2. Traditional thin client solutions are not desktop virtualization

By far the most popular form of server-based, thin client computing relies on Microsoft Terminal Services (recently renamed Remote Desktop Services), which lets multiple users share the same instance of Windows. Terminal Services is often paired with Citrix XenApp (formerly known as Presentation Server and, before that, MetaFrame), which adds management features and improves performance -- no hypervisors or VMs here. The main drawbacks: Some applications run poorly or not at all in this shared environment, and individuals can't customize their user experience the way they can with virtual machines or real desktops. Nonetheless, people often refer to traditional thin client solutions as desktop virtualization because the basic goal is the same: to consolidate desktop computing at the server.

3. Desktop virtualization and VDI mean pretty much the same thing

VMware was first to promote the VDI (virtual desktop infrastructure) terminology, but Microsoft and Citrix have followed suit, offering VDI solutions of their own based on the Hyper-V and XenServer hypervisors, respectively. Think of it this way: VDI refers to the basic architecture for desktop virtualization, where a VM for each user runs on the server.

4. Desktop virtualization has a second, client-side meaning

The desktop virtualization we're talking about refers to server-based computing. But "desktop virtualization" also refers to running virtual machines on desktop systems, using such desktop virtualization solutions as Microsoft Virtual PC, VMware Fusion, or Parallels Desktop. Probably the most common use of this sort of desktop virtualization is running Windows in a Parallels or Fusion VM on the Mac. In other words, this has nothing to do with server-based computing.

5. No server-based computing solution supports the same range of hardware as a desktop

The Windows folks in Redmond spend half their lives ensuring compatibility with every printer, graphics card, sound card, scanner, and quirky USB device. With thin clients, your support for hardware is going to be pretty generic, and some items won't work at all. Other limitations are introduced by the fact that users interact with their VMs over the network. Multimedia, videos, and Flash apps can be problematic.

6. VDI demands more resources than traditional thin client computing

Think about it: With VDI, each virtual machine needs its own slice of memory, storage, and processing power to run a user's desktop environment, while in the old-fashioned Terminal Services model, users share almost everything except data files. VDI also means a separate Windows license for each user, while Terminal Services-style setups give you a break with Microsoft Client Access Licenses. Plus, VDI incurs greater network traffic, which may add a network upgrade to the purchase order for beefy server hardware.
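
To see why the resource gap matters, here's a back-of-envelope capacity sketch. The numbers (2GB per VM, 4GB base OS, 0.5GB per shared session) are illustrative assumptions, not vendor figures -- real sizing depends on workload:

```python
def vdi_memory_gb(users, gb_per_vm=2.0):
    """VDI: each user's VM needs a full slice of server RAM."""
    return users * gb_per_vm

def terminal_services_memory_gb(users, base_os_gb=4.0, gb_per_session=0.5):
    """Terminal Services: sessions share one Windows instance,
    adding only per-session overhead on top of it."""
    return base_os_gb + users * gb_per_session

users = 100
print(vdi_memory_gb(users))                # 200.0 GB of server RAM
print(terminal_services_memory_gb(users))  # 54.0 GB of server RAM
```

Even with these rough assumptions, the dedicated-VM model consumes several times the memory of the shared model at the same headcount -- which is exactly why VDI tends to show up on the purchase order as beefier servers.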

In return for that extra cost, VDI delivers a better user experience, along with greater manageability and availability. As with server virtualization, you can migrate virtual machines among servers without bringing down those VMs, perform VM snapshots for quick recovery, run automated load balancing, and more. And if a virtual machine crashes, that doesn't affect other VMs; with Terminal Services, that single instance of Windows is going to bring down every connected user when it barfs.

7. Dynamic VDI solutions improve efficiency

In a standard VDI installation, each user's virtual machine persists from session to session; as the number of users grows, so do storage and administration requirements. In a dynamic VDI architecture, when users log in, virtual desktops assemble themselves on the fly by combining a clone of a master image with user profiles. Users still get a personalized desktop, while administrators have fewer operating system and application instances to store, update, and patch.
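
The assemble-on-login idea can be sketched in a few lines. This is an illustrative model, not any vendor's API -- the names and structures are hypothetical stand-ins for a linked clone plus a profile layer:

```python
# One master image is stored and patched once...
MASTER_IMAGE = {"os": "Windows 7", "apps": ["Office", "Browser"]}

# ...while per-user personalization lives in separate profiles.
PROFILES = {
    "alice": {"wallpaper": "beach.jpg", "mapped_drives": ["H:"]},
    "bob": {"wallpaper": "mountains.jpg", "mapped_drives": ["H:", "S:"]},
}

def assemble_desktop(user):
    """At login, combine a clone of the master with the user's profile."""
    desktop = dict(MASTER_IMAGE)          # stand-in for a linked clone
    desktop["profile"] = PROFILES[user]   # personalization layered on top
    return desktop

alice = assemble_desktop("alice")
```

Patch the one master image, and every desktop assembled afterward inherits the fix -- instead of admins updating hundreds of persistent VMs one by one.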

8. Application virtualization eases VDI requirements even more

When an application is virtualized, it's "packaged" with all the little operating system files and registry entries necessary for execution, so it can run without having to be installed (that is, no changes need be made to the host operating system).

In a dynamic VDI scenario, admins can set up virtualized applications to be delivered to virtual machines at runtime, rather than adding those apps to the master image cloned by VMs. This reduces the footprint of desktop virtual machines and simplifies application management. If you add application streaming technology, virtualized applications appear to start up faster, as if they were installed in the VM all along.
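
The "no changes to the host" trick boils down to an overlay: reads consult the app's packaged files and registry entries first, then fall back to the host's. Here's a minimal sketch of that lookup order using Python's `ChainMap`; the registry keys are made-up examples:

```python
from collections import ChainMap

# The host OS registry -- never modified by the virtualized app.
host_registry = {r"HKLM\Software\Vendor": "1.0"}

# Entries the virtualized app packages along with itself.
app_package = {
    r"HKLM\Software\VirtApp": "installed",
    r"HKLM\Software\Vendor": "2.0",   # the app sees its own version
}

# Reads check the package overlay first, then fall back to the host.
runtime_view = ChainMap(app_package, host_registry)

print(runtime_view[r"HKLM\Software\Vendor"])   # 2.0 inside the bubble
print(host_registry[r"HKLM\Software\Vendor"])  # still 1.0 on the host
```

The app runs against `runtime_view` and behaves as if installed, while the host registry stays untouched -- which is what lets the same package drop into any VM at runtime.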

9. Client hypervisors will let you run virtual machines offline

A client hypervisor installs on an ordinary desktop or laptop so that you can run a "business VM" containing your OS, apps, and personal configuration settings. Talk about full circle: Why would you want all that in a virtual machine instead of installed on the desktop itself? Two reasons: One, it's isolated from whatever else may be running on that desktop (such as a Trojan some clueless user accidentally downloaded), and two, you get all the virtualization management advantages, including VM snapshots, portability, easy recovery, and so on. Client hypervisors also make VDI more practical. You can take your business virtual machine on a laptop and compute without a connection; then, when you connect to the network again, the client VM syncs with the server VM.

Client hypervisors point to a future where we bring our own computers to work and download or sync our business virtual machines to start the day. Actually, you could use any computer with a compatible client hypervisor, anywhere. The operative word is "future" -- although Citrix has released a "test kit" version of its client hypervisor, and VMware is expected to release its own early version soon, shipping versions will not arrive before 2011.

The long march to the server side

Meanwhile, a completely different form of server-based computing continues to gain traction: the variant of cloud computing known as SaaS (software as a service), where service providers maintain applications and user data and deliver everything through the browser. A prime example is Google's campaign for Google Docs, encouraging users to forget about upgrading to Office 2010 and adopt Google's suite of productivity apps instead. Plus, Google's Chrome OS promises to create entire desktop environments in the cloud that retain user personalization.

Very likely, no big winner will emerge in server-based computing. Old-style Terminal Services setups will continue to crank along for offices harboring users with narrow, simple needs. True desktop virtualization on the VDI model will make sense where security and manageability are paramount, such as widely distributed organizations that use lots of contractors. And where far-flung collaboration is key, SaaS will flourish, because anyone with a Web browser can join the party. Conventional desktops may never disappear, but one way or another, the old centralized model of computing is making a comeback.