They say that people get fatter in their first year of college. As Wikipedia explains, “The freshman 15 is an expression commonly used in the United States that refers to an amount (somewhat arbitrarily set at 15 pounds) of weight often gained during a student's first year at college. In Australia and New Zealand it is sometimes referred to as First Year Fatties, Fresher Spread, or Fresher Five, the latter referring to a five-kilogram gain.”

When businesses complete their initial server virtualization projects, a surprisingly similar thing may happen. The old processes and usage policies may not apply like they once did. It took a lot of time and work to get systems to the point that they could be virtualized, and now that they have been, things are easier. Server changes don’t require all the intense scrutiny and labor they used to. Instead of laborious and often physical labor being involved in properly allocating resources, now everyone can virtually – and in a way, literally – hang out at the pool (of resources).

Sure, plenty of work still remains, but the virtualization project is done, the goal has been achieved. Savings on power have been dramatic. Productivity is up. Business is flowing. Life is good.

For about a year, anyway. Then some little problems crop up. Maybe there's no one terribly obvious thing, but it's not like it was. Has your environment gotten flabby, as if fifteen extra pounds (or more!) were weighing down your activity?

You can’t take the metaphor TOO far, I suppose, but in the interest of back-to-school let’s explore some flabby (or simply ad-hoc) allocation policies and lingering hardware issues that can hang around in the background of a virtualization project for months or even years. Some of the pitfalls are obvious – here are 11 dating back to 2007 that, for the most part, are no less valuable today. But those are full-stop, hands-on configuration and security issues – entirely practical, of course, but not likely to be the source of slow decline, wasted resources or steadily rising power needs. A bag of donuts is an obvious thing to avoid eating if you’re looking to dodge the freshman 15, but the extra weight creeps up incrementally, insidiously. In the same way, several big-picture considerations for efficient post-virtualization operations exist, particularly for IT directors and project managers, and part of the problem is that it’s not always clear whose job it is to track down and avoid at least a few of these.

But in the spirit of the freshman 15 and the back-to-school season in the US, here are 15 pounds (or, if you prefer, a little shy of 7 kilograms) of virtualization flab that can combine to create major IT process inefficiencies and will detract from the success of your virtualization projects. You may experience some, all, or none of them; be sure you’ve faced them, and remain aware of them, and you’ll be in great shape going forward.

Disposal of unneeded physical resources. If you’re not repurposing or, ideally, physically eliminating all the extra hardware once you’ve virtualized and condensed multiple virtual servers onto fewer physical boxes, then your virtualization is only for show. While this seems like a no-brainer, businesses in highly regulated industries must also remain compliant when it comes to retention and disposal of assets, and conservative IT management may push to keep all that hardware around and on the grid/network ‘just in case’, reasoning that the savings in power, network simplification and streamlined asset management aren’t worth unplugging something that might be wanted later. Combat this with a thorough audit of the assets to identify those that are truly spares, and insist that, at worst, they remain exactly that: off the grid and off the network until they’re needed.

Have a set plan for re-allocation of physical spares that the whole team understands so that, should the systems truly be needed again, they can come online with a minimal hassle – with the understanding that over a 3-5 year timeline, even the live systems are likely to migrate to newer and even more efficient hardware. Note: While you shouldn’t bog down today’s VM worrying about 2018’s environment, the end-of-life (EOL – generally meaning no longer being sold, but still supported) and end-of-service-life (EOSL – generally meaning no longer being sold or supported) dates for server systems are usually available from the original manufacturer in the purchase materials and on their respective web sites. Knowing those timeframes is a time-saver when doing any kind of asset planning that extends past the current fiscal year.
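To make that EOL/EOSL tracking concrete, here is a minimal sketch of the kind of check an asset-planning script might run. All names, dates, and thresholds are hypothetical; in practice the dates come from the manufacturer's purchase materials or support site.

```python
from datetime import date

# Hypothetical asset records; real EOL/EOSL dates come from the
# original manufacturer's purchase materials and web site.
assets = [
    {"name": "db-host-01", "eol": date(2016, 6, 30), "eosl": date(2019, 6, 30)},
    {"name": "web-host-02", "eol": date(2014, 1, 15), "eosl": date(2015, 1, 15)},
]

def planning_flags(assets, horizon):
    """Flag assets whose EOL or EOSL falls before the planning horizon."""
    flagged = []
    for a in assets:
        if a["eosl"] <= horizon:
            flagged.append((a["name"], "past EOSL: unsupported, plan replacement"))
        elif a["eol"] <= horizon:
            flagged.append((a["name"], "past EOL: supported but no longer sold"))
    return flagged

# Planning horizon just past the current fiscal year, for example
for name, note in planning_flags(assets, horizon=date(2015, 12, 31)):
    print(name, "-", note)
```

Even a simple report like this answers the "is it worth keeping?" question faster than digging through purchase paperwork every budget cycle.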

Along these lines, it’s wise to clearly identify the value proposition of existing physical server assets, especially if they predate you or you weren’t part of the purchase decision. Why did the company choose this particular system? Was it more efficient, or architected in a way that better matches business needs? Over what timeframe are these benefits going to be realized? This is important to understand. What may have been obvious at time of purchase won’t always be obvious, and if systems were purchased on the basis of power savings that begin to accrue in years four and five, stakeholders contemplating new systems in year three need to be aware of it. (Side note – many businesses do a great job of managing corporate data in terms of business applications, but very few have a handle on this kind of data – the reasoning behind past decisions that will impact the company even if the employee who made that decision leaves the company).

What is the ultimate fate/exit strategy for legacy servers that are being retained? Why are they being kept – and be honest here, are they truly critical? If so, identify why. If not, is it a budget issue, a familiarity issue, or something else? It’s important to record as much detail for future decision-making as possible, because it can save – or cost – millions of dollars and thousands of man-hours in a migration, merger, or disaster recovery event, never mind its value in ensuring maximum virtualization value.

De-allocate unused virtual resources. As detailed in this great post by Mary Shacklett, virtual bloat is a significant issue that will continue to degrade performance over time, little by little, until the aggregate is both significant and difficult to reduce.
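One simple way to start attacking virtual bloat is to flag VMs whose long-run utilization suggests they were spun up and forgotten. This is only a sketch; the VM names, sample figures, and the 2% threshold are all illustrative, and a real pass would also weigh memory, disk, and network activity before de-allocating anything.

```python
# Hypothetical 30-day average CPU utilization (%) per VM.
vm_cpu_avg = {
    "vm-payroll": 42.0,
    "vm-test-old": 0.8,
    "vm-demo-2011": 1.5,
    "vm-web": 23.0,
}

IDLE_THRESHOLD = 2.0  # percent; tune to your own baselines

def reclamation_candidates(samples, threshold=IDLE_THRESHOLD):
    """Return VMs whose average utilization suggests they may be abandoned."""
    return sorted(name for name, cpu in samples.items() if cpu < threshold)

print(reclamation_candidates(vm_cpu_avg))  # ['vm-demo-2011', 'vm-test-old']
```

The point isn't automation for its own sake; it's making reclamation a routine, measurable activity rather than a haphazard one.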

From the same article, be clear on what you actually want to measure with your Service Level Agreements (SLAs), because they can only reinforce the behavior they actually measure. If your SLAs are only targeting the speed of deployment, you may get faster but not necessarily any better. Reclamation of resources may be haphazard at best, and your processes may be driving a false positive rather than real improvement. Instead, balance your SLAs to focus on multiple indicators of efficiency, rather than speed.

Don’t get hung up on PUE when benchmarking virtualization and decommissioning success. Like Service Level Agreements, Power Usage Effectiveness is a single metric that doesn’t stand alone well. While you want to keep an eye on it (and in the public sector you may be required to keep it below a certain threshold), this metric is one of many and should not be driving your hardware, power, and cooling decisions to the detriment of productivity and overall business outcomes. Very few companies are actually in the business of delivering the lowest possible power bill, so be cognizant that a low PUE isn’t the same as success and a high PUE isn’t the same as failure. Putting a thermometer in ice won’t cure a fever; high PUEs indicate areas for further attention that likely go beyond the scope of virtualization and decommissioning alone.
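For readers who haven't worked with it, the PUE calculation itself is trivial, which is part of why it's so easy to over-index on. A quick sketch with made-up meter readings:

```python
# Hypothetical monthly energy readings (kWh); figures are illustrative only.
total_facility_kwh = 120_000   # everything: IT load, cooling, lighting, UPS losses
it_equipment_kwh = 80_000      # servers, storage, and network gear only

# PUE = total facility energy / IT equipment energy; 1.0 is the theoretical ideal.
pue = total_facility_kwh / it_equipment_kwh
print(f"PUE: {pue:.2f}")  # PUE: 1.50
```

Note that consolidating servers lowers the denominator; if cooling and overhead don't drop proportionally, PUE can actually rise even as the total power bill falls, which is exactly why it shouldn't be your lone success metric.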

What’s happening to the software licenses? License management is a significant project unto itself, and depending on how licenses are allocated and the terms of OS and software end-user license agreements, a virtualization may decrease or increase the license requirements (and costs) of impacted software, and by extension can affect hardware maintenance costs as well. Staying on top of them can reap significant savings, while playing fast and loose with licenses can increase hardware maintenance costs even when fewer physical servers are in use, and can also create audit flag risks with unanticipated true-up costs.
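To see how the licensing model changes the math, here's a hypothetical before/after comparison. The counts and license terms are purely illustrative; real EULAs vary widely, and some count cores, sockets, VMs, or users in combination.

```python
# Illustrative consolidation scenario: 20 workloads move from 20 physical
# servers onto 4 virtualization hosts, all 2-socket machines.
hosts_before = 20
hosts_after = 4
vms_after = 20
sockets_per_host = 2

per_socket_before = hosts_before * sockets_per_host   # 40 licenses pre-consolidation
per_socket_after = hosts_after * sockets_per_host     # 8 licenses post-consolidation
per_vm_after = vms_after                              # 20 licenses under a per-VM model

print(f"per-socket licensing: {per_socket_before} -> {per_socket_after}")
print(f"per-VM licensing would need: {per_vm_after}")
```

Under per-socket terms this consolidation cuts license counts dramatically; under per-VM terms it cuts nothing at all. Knowing which terms apply before you virtualize is what keeps the true-up letter from being a surprise.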

So many patches… Along with adding and, where possible, removing licenses, IT teams must navigate the complexity of patch management compounded by multiple virtual servers. Be sure that your patch management system is up to the task, and ideally scales up and down across both physical boxes and VMs. Depending on how robust patch management was before, this may not be an issue, but here again, relying on a just-in-time patch management approach will inevitably demand additional time from, and create additional stress for, IT staff, especially when a patch rollback, security hotfix or other time-sensitive patching activity is required.

What about the storage? Server virtualization has many inherent benefits, and a low upfront cost on its own, but increasing the storage load with more disk spinning will actually slow processing and reduce productivity over time, especially if many virtual servers end up contending for the same storage. Storage virtualization is a completely different animal, because ultimately a byte is still a byte, but there are a host of compression and deduplication technologies to consider. At the very least, ensure that current storage systems won’t pose an obstruction. If they do, identify the requirements for storage in the new virtualized system, and go into a storage RFP armed with actual data rather than a wish list whenever possible.
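To illustrate what deduplication buys you, here's a toy sketch of block-level dedup: identical blocks are hashed and stored once. The data, block size, and use of SHA-256 here are artificial; real systems operate on much larger blocks with far more sophisticated indexing.

```python
import hashlib

# Toy data: 4-byte "blocks", several of which repeat.
data = b"AAAABBBBAAAACCCCBBBB"
BLOCK = 4

blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
# Hash each block; identical blocks collapse to a single stored copy.
unique = {hashlib.sha256(b).hexdigest() for b in blocks}

ratio = len(blocks) / len(unique)
print(f"logical blocks: {len(blocks)}, stored: {len(unique)}, dedup ratio: {ratio:.2f}")
```

VM images tend to dedup extremely well, since dozens of guests often share nearly identical OS files, which is why these technologies matter more after virtualization than before.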

What about the network? Network virtualization, and software-defined networking, are a separate consideration, but unless that’s also part of the overall virtualization project, what matters more immediately is the effect that virtualization has on network traffic over the existing system. With more calls to the storage area network from fewer physical servers and potentially more workstations and devices, server virtualization can complicate any problems that may already exist in an aging or inefficient network design.

Who is supporting what? While this is always a relevant question when it comes to the complexities of any systems beyond a server closet, since virtualization can affect licensing, storage response and network efficiency, IT teams need immediate and upfront visibility regarding who to call or what websites to visit in the event of a failure. The fewer points of contact in this area, the better for efficiency’s sake and from the perspective of the IT professional, but for the company it may make good business sense to have multiple providers for service across hardware, OS, and business line software. And of course, having twenty different options with clearly identified indicators and contact numbers is always better than having a single point of contact you can’t reach.

Consider other existing or upcoming project areas and determine what effect, if any, they may have on virtualization, or vice versa. For instance, if you plan to use a third party alert monitoring tool, a cloud backup service or asset management service package, there may be pricing issues depending on whether a service is billed per physical or per virtual server. Be sure you’re clear on which one is intended, if the invoice or statement of work isn’t explicit on these points.

What does the future look like? While virtualization is no longer a bleeding edge computing model, it’s not necessarily compatible with every migration path or current technology either. An unsung advantage of virtualization is the relative ease of adoption it permits for co-location and even a full migration to the cloud, which ideally is as easy as a drag and drop. But if your parent company is running everything differently, either without virtualization or using a different set of systems and/or vendors, ensuring future systems use like-for-like may require additional adjustments, and discussions with and buy-in from management.

Are traditionally-managed servers really the main source of waste in the datacenter? Virtualization may be a strong step in the right direction, but the real power hog may be the cooling system, for instance. Look at the problem holistically, focusing on goals rather than systems, to ensure that your virtualization project doesn’t end up underwhelming its critics because some other unaddressed issue was permitted to continue.

While you can look back and find value in trimming these 15 pounds off your existing VM installations, it’s better still if you never gain that extra weight in the first place. For the most part I think these big-picture considerations are relevant to storage and network virtualization projects too. While any one of them could probably be a blog post unto itself, the hope is that these can get you thinking, from a higher perspective than the day-to-day grind, about the big-picture effects of virtualization in all its various flavors. If you have experienced other sorts of flabby behaviors, or have a variation you’d like to share, be sure to let me know in the comments.