
Virtual desktop infrastructure (VDI) provides IT managers with a promising and powerful way to centralize and manage PCs. No longer a box sitting under a desk, the PC is a server-side resource accessible from almost any client device, including another PC. Cost savings, enhanced security, and more flexible management and deployment are some of the big wins promised by VDI.

What's more, VDI is growing cheaper to deploy and maintain, and the technologies that underpin it have become easier to work with. Still, VDI isn't something you can simply drop in and run with. Good VDI performance requires adhering to best practices for everything from properly handling applications and network load to enhanced security and a storage architecture that supports your needs.

Two major points of concern are virtual desktop reliability and performance. They are strongly related, since a poorly designed VDI back end will be unreliable, sluggish or both. Worse, it'll leave those who adopted it with the sense that things might well have been better with a conventional desktop architecture.

From a basic technological perspective, though, VDI can be reliable and provide a good return on investment. The vast numbers of VDI implementations already in the wild are evidence of that. Plus, VDI's reliability and scalability have benefited directly from the progress made in virtualization and cloud computing over the past few years, as seen in the Hyper-V Replica feature in Windows Server 2012 and the failover prioritization features in VMware vMotion or ESXi.

The hard part is avoiding shortsighted design decisions, such as inadequately gauging workload or use cases. Such missteps will leave users disappointed with VDI. In part one of this series, we examine how to gain virtual desktop reliability through VM density and storage.

VM-to-machine ratio

The standard way to deliver VDI is to use a virtual machine (VM) back end -- a private cloud or other similar infrastructure, where you can consolidate many virtual desktops onto a handful of physical servers.

The first big step is figuring out the ratio of server hardware to virtual desktops. There are practical upper bounds on how many VMs can be consolidated onto a single host, both because of the hardware -- the number of threads per core and the number of cores -- and the host VM solution.

Another complication is concurrent user load. Are you dealing with a large number of systems with relatively low load factors -- or the opposite? If you are able to harvest CPU usage statistics for the physical desktops you want to migrate, you'll have an idea of what percentage of utilization to plan for. Also, use whatever telemetry you can collect to determine how many of those users are active and at what times of day.

Andre Leibovici, an architect at VMware, shared his views on what it took to build a large VDI system on Cisco hardware running VMware View and vSphere 5. (Note that some of his observations may not apply directly to your scenario, but there are lessons to be gleaned from his observations on VM-to-host ratios and other decisions made in that 10,000-seat project.) Leibovici also has a resource calculator for VMware View to help determine the number of hosts needed to support a given number of seats.
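In the spirit of such resource calculators, the sizing logic can be sketched as a simple back-of-envelope estimate. Every number below (cores per host, overcommit ratio, headroom) is an illustrative assumption, not a recommendation -- substitute figures from your own telemetry and hypervisor documentation.

```python
# Rough host-count estimate for a VDI deployment.
# All parameter defaults are illustrative assumptions.
import math

def hosts_needed(seats, vcpus_per_vm=2, cores_per_host=16,
                 threads_per_core=2, overcommit=4.0, headroom=0.2):
    """Estimate the number of physical hosts for a given seat count.

    overcommit: vCPU-to-logical-core ratio the hypervisor can sustain
                for your workload mix
    headroom:   fraction of capacity held back for failover and spikes
    """
    logical_cores = cores_per_host * threads_per_core
    usable_vcpus = logical_cores * overcommit * (1 - headroom)
    vms_per_host = usable_vcpus / vcpus_per_vm
    return math.ceil(seats / vms_per_host)

# A 10,000-seat deployment under these assumptions:
print(hosts_needed(10000))
```

The overcommit ratio is the variable most sensitive to your measured concurrent user load, which is why harvesting CPU statistics from existing desktops matters before committing to a ratio.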

Storage for VDI

A major issue with VDI is storage performance. Consolidating the behaviors of thousands of desktops into a few virtual hosts is difficult, especially since desktops have markedly different I/O access patterns than servers. And simply upgrading storage hardware, such as moving a storage area network (SAN) to faster Fibre Channel connectivity, is a costly solution.

Solid-state drive storage is a more viable option for virtual desktops now than it was even a year or two ago, in part because SSDs are becoming cheaper and because they are being deployed in more creative ways. But simply substituting SSDs for spinning drives may not be cost-effective on a per-IOPS basis, even with the falling prices of server SSDs.
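The cost trade-off becomes concrete when you compare dollars per gigabyte against dollars per IOPS. The prices and performance figures below are placeholder assumptions for illustration, not vendor quotes -- plug in real numbers from your own purchasing data before deciding.

```python
# Compare $/GB and $/IOPS for SSD vs. spinning disk.
# All figures are placeholder assumptions, not real quotes.

def per_unit_costs(price_usd, capacity_gb, iops):
    """Return cost per gigabyte and cost per IOPS for a drive."""
    return {"per_gb": price_usd / capacity_gb,
            "per_iops": price_usd / iops}

ssd = per_unit_costs(price_usd=400, capacity_gb=480, iops=50000)
hdd = per_unit_costs(price_usd=150, capacity_gb=2000, iops=150)

# SSDs win decisively on $/IOPS, spinning disks on $/GB -- which is
# why targeted placement or tiering beats wholesale replacement.
print(ssd, hdd)
```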

The most efficient current solution, according to analyst George Crump at Storage Switzerland, is to dedicate SSDs to the most heavily trafficked volumes or to use SSDs as a tiered storage caching system (via a product like Virsto).

Another storage question is whether to use individual or differential images for each desktop VM. In VMware's lingo, this is about using "linked clones" versus "full clones." With the former, you can base a whole slew of VMs off a single image, saving a great deal of storage space in the process -- but at the cost of IOPS.

With full clones, each VM has its own separate disk image that takes up far more space but can have its I/O parallelized more easily. If you only have a few virtual desktops to host and little space to put them in, the former approach might be better. But if you have the disk space and the parallelism to throw at the problem, go with the latter.
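The space savings from linked clones are easy to quantify with a back-of-envelope calculation. The image and delta sizes below are illustrative assumptions; real delta disks grow over time as VMs diverge from the base image.

```python
# Back-of-envelope storage comparison: full clones vs. linked clones.
# Image and delta sizes are illustrative assumptions.

def full_clone_gb(vms, image_gb=40):
    """Each VM carries a complete copy of the disk image."""
    return vms * image_gb

def linked_clone_gb(vms, image_gb=40, delta_gb=4):
    """One shared replica image plus a small per-VM delta disk."""
    return image_gb + vms * delta_gb

# For 500 desktops under these assumptions:
print(full_clone_gb(500))    # -> 20000 GB
print(linked_clone_gb(500))  # -> 2040 GB
```

The roughly tenfold storage saving is what you trade for the concentrated IOPS load on the shared base image.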
