Top 10 virtualization killers

The data centers that support virtualization can be complex enough to kill many projects, but some smart planning can lead to greater success.

There's a crisis brewing in federal virtualization, but it's not what you think.

Government's virtualization software isn't failing; it's just failing to see the light of day. Consider these numbers:

A shocking nine in 10 government virtual desktop infrastructure (VDI) initiatives never reach production. Worse, in seven out of 10 cases, paper studies topple the project before a pilot can even begin.

And we're suffering for it. Skim through any list of topical federal initiatives or executive orders, and you'll find many—if not most—relying heavily on virtualization. The greening of IT, telecommuting, cloud adoption, securing data at rest, disaster recovery, big data analytics and data center consolidation are all virtualization-centric.

We know that virtualization is the solution for these critical modernization efforts, just as baseball managers know that good batting wins the game. But over the last ten years, we've been batting .100, which leaves nervous federal agencies avoiding these efforts altogether.

Fortunately, the factors preventing virtualization projects from reaching production are both predictable and solvable—and ironically, they have little to do with virtualization software. By avoiding these ten infrastructure challenges, government agencies can increase their virtualization batting averages and win the game:

Cost: Unnecessary infrastructure costs are the number one killer of virtualization initiatives. As agencies perform initial ROI assessments, the hardware, set-up, and maintenance costs they consider (typically seven to ten times greater than the cost of software licenses) can easily capsize the cost-benefit equation.
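To see why that multiplier capsizes the equation, here is a minimal sketch with hypothetical figures (the license amount and function are illustrative; only the seven-to-ten-times ratio comes from the article):

```python
# Illustrative only: hypothetical dollar figures showing how infrastructure
# costs (7-10x the software license, per the article) dominate total spend.

def total_pilot_cost(license_cost, infra_multiplier):
    """Total cost = software licenses + infrastructure (hardware,
    set-up, maintenance) modeled as a multiple of the license cost."""
    infrastructure = license_cost * infra_multiplier
    return license_cost + infrastructure

licenses = 50_000  # hypothetical VDI license spend
for multiplier in (7, 10):
    total = total_pilot_cost(licenses, multiplier)
    infra_share = (total - licenses) / total
    print(f"{multiplier}x infrastructure: total ${total:,}, "
          f"infrastructure is {infra_share:.0%} of spend")
```

Even at the low end of the range, roughly nine dollars in ten go to infrastructure rather than the virtualization software itself, which is why an ROI study built around license costs alone collapses once the full bill of materials arrives.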

Complexity: Picture yourself in a data center, with multiple components of your virtualization pilot arriving from different vendors, on different days, with missing parts, and a ten-page bill of materials. If it's going to take eight weeks to even test drive the solution, you can pretty much forget about a green light for the project.

Power: Excessive power requirements can delay virtualization efforts by months—while agencies wait for additional power circuit installations for servers, storage area networks (SANs), controllers, disk shelves and switches. Meanwhile, adding power flies in the face of many current greening and consolidation initiatives.

Cooling: Adding cooling is as time-consuming, expensive, and ecologically unsound as adding power.

Scaling: Agencies are typically given two undesirable alternatives: either buy all the infrastructure up front (ignoring the definition of "pilot"), or run a pilot on less expensive, non-production infrastructure and then rip and replace (moving to untested production hardware) when the pilot runs out of horsepower. More often than not, they choose neither.

Space: In eight of ten virtualization initiatives, rack space presents a problem—both in real cost and opportunity cost (i.e., less space for cubicles and offices). And nothing fills up rack space faster than servers, switches and SANs (all required for high-end virtualization).

Weight: Weight isn't always an issue, but when it matters—it matters a lot. First Responders and DOD tactical organizations must ship, hand-carry and configure systems in challenging conditions. For these IT organizations, heavy infrastructure can doom an initiative in its earliest days.

Politics: Even in the closest-knit data centers, separate tiers of infrastructure lead to separate areas of ownership, opinion, and potential conflict. Too often in virtualization initiatives, infighting between SAN teams, network teams, and server teams leads to lost productivity, longer timelines, or the death of the entire project.

Speculation: Estimating the ROI of a traditional virtualization project takes guesswork: How much RAM will be needed per VM? How much storage? How many IOPS? How many VMs will be running in 12-18 months? Plug all these guesses into a vendor's spreadsheet, and it will tell you what to buy. But chances are, your speculation isn't perfect. Which means you'll over-buy, under-buy, or find the whole project preemptively canceled when risk outweighs your (incorrectly estimated) return.
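The trouble with that spreadsheet is that the guesses multiply together. A brief sketch (every number and unit price here is hypothetical, not from the article or any vendor) shows how modestly wrong per-VM estimates compound into a badly wrong budget:

```python
# A sketch of how traditional capacity planning compounds guesswork:
# each per-VM estimate multiplies into the final purchase, so small
# errors swing the budget widely. All figures are hypothetical.

def infra_estimate(vm_count, ram_gb_per_vm, storage_gb_per_vm,
                   cost_per_gb_ram=10.0, cost_per_gb_storage=0.5):
    """Rough hardware spend from the usual planning inputs."""
    ram_cost = vm_count * ram_gb_per_vm * cost_per_gb_ram
    storage_cost = vm_count * storage_gb_per_vm * cost_per_gb_storage
    return ram_cost + storage_cost

planned = infra_estimate(vm_count=500, ram_gb_per_vm=4, storage_gb_per_vm=60)

# If each guess runs just 25% low (more VMs, fatter VMs than predicted),
# the errors multiply rather than add:
actual = infra_estimate(vm_count=625, ram_gb_per_vm=5, storage_gb_per_vm=75)

print(f"planned ${planned:,.0f} vs. actual need ${actual:,.0f} "
      f"({actual / planned - 1:.0%} over)")
```

Two estimates that are each off by 25% combine into a budget miss of more than 50%, which is the gap between an approved project and a canceled one.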

Performance: The question seems simple: Will a virtualization solution run fast enough to satisfy end user expectations? However, with a traditional virtualization infrastructure, optimizing the separate server, storage, and network components requires a complex balancing act. Performance-affecting decisions fall squarely on the shoulders of the agency, which also bears the consequences of any miscalculation.

Batter Up

So if you recognize that these pitfalls are threatening your virtualization initiative from conception, how do you avoid them? More and more federal agencies are turning to a new type of virtualization architecture—"hyper-converged infrastructure."

Hyper-convergence puts the server and storage tiers in a single, small component, eliminating the need for separate servers, SANs and storage-network fabric. This means significantly less cost, complexity, power, cooling, space, weight and politics. It also means faster performance, because servers and storage share the same system board. These hyper-converged components form large clusters on demand and appear to the virtualization software as multiple VM hosts and a shared SAN, supporting the highest-end virtualization features while enabling customers to start small and then scale up based on facts (and increments of success), not guesses.

Perhaps most importantly, a process that used to take weeks—moving a virtualization pilot from cardboard boxes to up-and-running virtual desktops—can now be done in minutes.

Which means that federal agencies can take a much more effective swing at the critical virtualization-centric initiatives our modern government demands.


Reader comments

Wed, Jul 3, 2013
Kevin K

All of these reasons point to moving applications to the public cloud: getting the benefits of virtualized infrastructure without the capital investments and at much lower risk.

Wed, Jul 3, 2013

I see the primary roadblock for virtualization as making a persuasive argument to those who are technically savvy. You can buffalo some of the decision makers, but ultimately, if the solution doesn't make sense, it isn't going to get and keep traction.
