When client hypervisors first arrived on the scene, we had lofty aspirations that they would solve one of the biggest problems data center-hosted desktops have: offline access.

Since the dawn of Terminal Services, IT has dealt with the fact that no matter what we do, users can't access centralized desktops without a network and/or Internet connection. We've been more or less excluding mobile users from the equation because there hasn't been a reliable-enough solution to give them access to their apps and data offline.

IT pros thought client hypervisor technology would save the day. As virtual desktop infrastructure (VDI) began to take shape, we had visions of running the same hypervisor on our laptops as we did in the data center. We imagined a sleek solution that let us boot our desktop image in either location, send down our files and personalization, and access our desktop no matter where we were.

What's the problem with offline VDI?

Admins have long dreamt of synchronizing information back and forth, meaning you could go to the office one day and sign in to your virtual desktop, then take a laptop out of the office with the same virtual machine (VM) on it for offline use (never mind the fact that the files were still somewhere else and a lot of apps require a connection to the data).

This scenario ended up being quite difficult. Some solutions did manage to synchronize VMs, but only back to the client hypervisor management system, not between VDI and client hypervisors. Others could synchronize files and personality, but not the underlying image -- useful, but not for self-reliant mobile users who need most of their information with them all the time. Syncing that much data is a complex feat to pull off.

Then there was the notion of checking VMs in and out: taking your VDI VM offline, then returning it to the pool when you arrived back in the office. It sounds plausible, but shipping an entire VM (and probably a persistent VM, not a shared one) across the network isn't exactly a quick process, so users would probably just check one out, run it locally and never check it back in. Essentially, we learned that offline VDI was not in the least bit practical.
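To see why checking out a VM is so painful, a quick back-of-envelope calculation helps. The image size and link speeds below are illustrative assumptions, not measurements from any particular product:

```python
# Back-of-envelope: ideal time to "check out" a persistent VM image.
# 40 GB image and the link speeds are illustrative assumptions; real
# transfers are slower still (protocol overhead, contention, retries).
def transfer_hours(size_gb: float, link_mbps: float) -> float:
    """Ideal transfer time in hours: GB -> megabits, divided by link rate."""
    megabits = size_gb * 8 * 1000  # decimal units: 1 GB = 8,000 Mb
    return megabits / link_mbps / 3600

for label, mbps in [("100 Mbps office LAN", 100), ("10 Mbps remote link", 10)]:
    print(f"40 GB image over {label}: {transfer_hours(40, mbps):.1f} hours")
```

Even on a fast office LAN the checkout takes the better part of an hour, and over a typical remote link it stretches to most of a working day -- which is why users who got a VM onto a laptop tended never to check it back in.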

Don't forget that the first client hypervisors were a little rough around the edges when it came to hardware support and user experience.

Why the first client hypervisors fell short

Part of the reason there wasn't much use for client hypervisors at first was that only a limited number of machines could give users a reliable, decent experience. Type 1 hypervisors in the server world have a relatively limited hardware base to support. Desktops, though, come in any number of crazy, cheap, garbage configurations, not to mention with components that simply don't exist in servers.


Type 1 client hypervisors have to support various USB devices, graphics cards, wireless and Ethernet interfaces, sound cards, mobile chipsets, battery status and lid closures -- to name a few things. Do you know what happens if you install VMware ESX on a laptop and close the lid? Nothing! And there are probably 50 different ways a laptop can tell the system that the lid is closed, each of which needs the hypervisor to support it. This is easy in Windows: Everyone has drivers for their components. For hypervisors, though, this is uncharted territory.

On top of all of that, vendors of client hypervisor technology had to make this all user-friendly. No user wants to know that they're running a VM, and they certainly don't want to have to hit a funky key sequence to drop back to the management VM and connect to a new Wi-Fi network so the virtual network adapter in Windows can pick it up.

Type 2 client hypervisors, on the other hand, have been getting better, to the point where they're blurring the line between the two types -- at least from a user's perspective. The emulation technology is light-years ahead of what it once was, and Type 2 hypervisors are easier to deploy because they run on top of the OS that's already installed.

That's what vendors have been doing with client hypervisors in the years between offline VDI and today. Type 1 client hypervisors have gotten to the point where they can provide an almost seamless experience to the user, taking advantage of local hardware on an ever-broadening hardware base. Offline VDI is still not practical, though, so that forces us and the vendors to re-evaluate the technology.

5 comments


The lack of live data access to the offline desktop means that even if you live with the heavy load of checking out and syncing, the limited number of applications that work offline makes this less than a full solution in any case.

Data connectivity and access to virtualized, centrally hosted applications that need to connect safely and securely to the LAN from anywhere in the world -- and still perform optimally -- are still a long way off. It feels like it's time for a thought-changer, a new way to connect secure data and applications to your offline machine. Maybe some combination of VPN, SFTP, local caching, VDI and client hypervisors? Made simple, of course! User experience is more important than people tend to think.