Physical vs virtual: What's your poison?

Sysadmin Blog Virtualization is not new - mainframes have been doing it for ages, and other non-x86 operating systems have been slicing up servers for quite some time as well. Yet if I had to pin a single IT label on the first decade of this century, I'd tag it as the decade of x86 virtualization.

Virtualization went mainstream in the noughties. It graduated from a technology used almost exclusively in large enterprise servers to something so common that even small and medium-sized enterprises (SMEs) are using it for Virtual Desktop Infrastructure (VDI) deployments.

To start a discussion on VDI, or any other aspect of virtualization, a primer is in order. If you know a fair amount about computers, then explaining the basics is reasonably simple. Virtualization is a method by which you can run multiple isolated operating systems (guests) on a single physical computer (the host). You install each guest operating system to a Virtual Hard Drive (VHD): a single large file that contains the guest's entire file system, much as an .iso image contains the contents of a disc.

You devote a slice of your host’s resources to a guest, allowing that guest to occupy a fixed amount of RAM, share a set number of CPU cores and access other resources such as optical drives or network cards. You can turn guests on or off at will, as easily as mounting an .iso in Daemon Tools.

While this will explain the basics of virtualization to the kind of computer adept who already has Daemon Tools installed, explaining this to your pointy-haired boss is another challenge entirely. I have gone through many different models of explanation and the one that has worked best so far is a boat analogy.

Picture a large ocean-going vessel whose engines drive a single large propeller. That one large propeller has an awful lot of power available to it, but the only way to steer is with a rudder placed behind it. It’s really good at going in a straight line, but remarkably clumsy and awkward for anything else.

Now think of more modern ships, where the engines instead drive generators to produce electricity, which in turn powers dozens or even hundreds of smaller propellers. Instead of relying on rudders, these smaller and more numerous propellers can rotate through 360 degrees, individually directing their thrust. You lose a little efficiency in converting to electrical power and carrying the current around the ship to the props, but the ship becomes far easier to steer.

To extend the boat analogy, virtualization is the ability to split the resources of a single physical computer (the host) to support multiple smaller virtual computers (the guests). No single guest will run as fast as if it were installed directly on the host system, but you can run many more guests (and thus do many more things simultaneously) using virtualization than you could with one physical box. The server doesn’t go as fast in a straight line, but it is a heck of a lot more manoeuvrable.

From there it gets significantly more complicated; I could write an entire set of articles dedicated to the more advanced concepts (and in fact, I will!). Things like RAM deduplication, variable versus fixed VHDs, hardware-assisted virtualization, IOMMU and more - they are all necessary for any virtualization admin to know, but for now only the basics are required.

With VDI, the actual work your users do on their desktop is not performed on the computer in front of them. They use a remote access application (for example RDP or X11 forwarding) to connect to a virtual operating system living on a server somewhere. The computer they are accessing from doesn’t actually matter all that much. It could be a many-kilodollar gaming rig, a cheap thin client or even a mobile phone.

The differences between physical desktops and virtual ones can be stark in some cases, and in others so insignificant as to be nearly indistinguishable. My most recent IT project has centred on wringing power savings out of my network, largely to reduce the cooling load our equipment requires. It is here, while attempting to configure my network for power conservation and Lights Out Management (LOM), that I encountered some of the more frustrating differences between VDI deployments and physical desktops.

It should be noted that there are other remote working solutions. Terminal Services, Citrix and similar offerings would suffer the same issues as VDI. I use VDI as the discussion point in this article entirely because it’s what I have deployed in my environment. (Why I chose VDI over other competitive offerings is reserved for a future set of articles.)

Perhaps the biggest frustration with virtual machines, be they servers or desktops, is that when the operating system has crashed, the users can’t simply go and restart a computer to have it all back up and running. I am sure the very concept of users rebooting servers will have some people flooding the comments with outrage, but I seriously dislike having to wake up at four in the morning just to poke some system in the eye. (Some applications to which we have no alternative are fairly flaky. BSODing a computer, be it physical or virtual, happens about once a quarter.)

With virtualization, the only way to start, restart or otherwise control a virtual machine is through the management software for whatever flavour of virtualization the host system is running. Giving users access to this software is generally a Bad Plan. Even if you were to do so, training them in how to use it properly is no easy task.
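If you do decide to hand users a limited restart capability, the safest shape is a thin wrapper that checks an allow-list before it goes anywhere near the hypervisor. The sketch below is purely illustrative - the user-to-guest mapping and the restart placeholder are my own assumptions, not any vendor's management API:

```python
# Minimal sketch of a self-service VM restart gate.
# ALLOWED maps each user to the guests they may restart; any other
# combination is refused outright. The actual restart call (virsh,
# PowerCLI, or whatever your platform uses) would replace the
# placeholder in restart_guest().

ALLOWED = {
    "alice": {"accounting-vm"},
    "bob": {"shopfloor-vm", "label-printer-vm"},
}

def may_restart(user, guest):
    """Return True only if this user is allowed to restart this guest."""
    return guest in ALLOWED.get(user, set())

def restart_guest(user, guest):
    if not may_restart(user, guest):
        raise PermissionError(f"{user} may not restart {guest}")
    # Placeholder: here you would shell out to your hypervisor's
    # management tooling to reboot the named guest.
    return f"restart issued for {guest}"
```

The point of the wrapper is that the user never sees the management console at all; the worst they can do is bounce their own machine at four in the morning instead of waking me up.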

This entered my radar during the power management and LOM project, courtesy of Windows 7. By default, Windows 7 suspends the computer after about half an hour of inactivity. When a Windows 7 guest sends the sleep signal to the host, the host suspends that VM, making it inaccessible to the user. This is an easily corrected problem, but a perfect example of how power management considerations differ when using VDI.

If that Windows 7 system had been a physical computer under the user's desk, they could have pushed the power button to wake it in the morning. More importantly, I would have been able to sleep in.

Another consideration is that implementing VDI has made our entire workforce very familiar with the concept of remotely accessing a computer. Everyone does it every day and it seems perfectly normal to them. With increasing frequency, various staff are requesting (and requiring) the ability to access work systems from home. Leaving aside the security concerns, this has some interesting power management, LOM and even maintenance repercussions.

If all the desktops were physical, then those users who were not set up for home access could have their systems programmed to power down automatically at the end of the work day and come back online just before the doors opened. The folks who needed remote access would have their systems available for a longer window. While we can now do this with the thin clients on everyone’s desk, the real power consumption has moved from under the desk into the server room.

If none of my users remoted into the network during off hours, the servers themselves could be powered down at the end of the business day: suspend all non-essential VMs, then shut down their host systems. With a mixture of users who access their systems from home after hours and users who nine-to-five it, I need to pay much closer attention to how virtual machines are distributed if I want to power down any of the host systems during off hours.
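The off-hours bookkeeping boils down to one rule: a host may be powered down only if none of its guests belongs to an after-hours user. A toy sketch of that rule, with host and VM names invented for illustration:

```python
# Toy sketch: decide which hosts can be powered down after hours.
# A host qualifies only if every guest on it can be suspended,
# i.e. none of its guests belongs to an after-hours remote user.

placement = {            # host -> guests currently running on it
    "host1": ["vm-reception", "vm-payroll"],
    "host2": ["vm-engineering", "vm-sales"],
}
after_hours = {"vm-engineering"}   # guests someone remotes into at night

def hosts_to_power_down(placement, after_hours):
    return sorted(
        host for host, guests in placement.items()
        if not any(g in after_hours for g in guests)
    )

print(hosts_to_power_down(placement, after_hours))  # ['host1']
```

The corollary is the real work: migrating the after-hours guests onto as few hosts as possible before the evening shutdown, so the rule lets you switch off everything else.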

Worse yet is load balancing your virtual machines across your hosts. The major virtualization players offer some neat software that can do this automatically for you, but I might as well ask the magic budget fairy for a Toughbook. It isn’t going to happen, and thus, I load-balance my VMs by hand. This creates interesting conflicts when trying to weigh load balancing against power management and even critical VM distribution.

As much as I want to power down all non-essential systems when not in use, I also don’t want a single hardware failure taking out all of the production VMs responsible for the manufacturing equipment in one go. I must also ensure that critical VMs have full LOM capabilities in case there is a problem with the host and it needs to be repaired remotely. As not all of my servers have full LOM capabilities, this means being choosy about which hosts those VMs live on.
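My by-hand balancing really comes down to two rules that are simple enough to sketch in code: critical guests land only on LOM-capable hosts and are spread out so no single host carries them all, while everything else goes wherever the load is lightest. The host and guest names below are invented for illustration; a real placement would weigh RAM and CPU, not just guest counts:

```python
# Hand-rolled placement sketch: critical guests go only to hosts with
# full LOM, spread round-robin so one hardware failure can't take them
# all out; everything else is packed onto the least-loaded host.

def place(guests, critical, lom_hosts, other_hosts):
    hosts = {h: [] for h in lom_hosts + other_hosts}
    # Spread critical guests across LOM-capable hosts first.
    for i, g in enumerate(sorted(critical)):
        hosts[lom_hosts[i % len(lom_hosts)]].append(g)
    # Greedily pack the rest onto whichever host holds the fewest guests.
    for g in sorted(set(guests) - set(critical)):
        least = min(hosts, key=lambda h: len(hosts[h]))
        hosts[least].append(g)
    return hosts

layout = place(
    guests=["crit-a", "crit-b", "vm1", "vm2", "vm3"],
    critical=["crit-a", "crit-b"],
    lom_hosts=["lom1", "lom2"],
    other_hosts=["old1"],
)
```

This is exactly the sort of logic the big vendors' automatic balancers implement properly; the sketch just shows why the constraints fight each other - every anti-affinity rule spreads guests across more hosts, which is precisely what power management wants fewer of.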

Virtualization has its power management bonuses too. Overall, even leaving the servers running 24/7, I am consuming less electricity than if all VMs were physical desktops or blade servers. With everything required for after-hours work confined to the datacenter, I can actually shut off entire segments of the network at night: switches, phones, desktops, monitors, printers and all other forms of electronic gadgetry.

Still, it is interesting how much virtualization can complicate the life of a sysadmin. The “eggs in one basket” syndrome common with VDI has power management implications of its very own. Intel would love to come along and tell me that with their ridiculous new shiny servers, I could collapse thirty-two virtual hosts into six. They’d even be right; I’ve run the numbers, and right now I can run my entire network on six ridiculous servers. Eighteen months from now I could run it on three.

If I did that, however, I’d be sitting there praying every night that those three servers don’t blow a stick of RAM or lose a CPU fan, and that rodents of unusual size don’t choose to have a gnaw on the Cat6. For this reason, I feel I am actually better off with my older servers; there is a “sweet spot” past which a host simply has too many guests for comfort.

These problems laid bare, my next article will focus on what I’ve done to overcome these issues. Some approaches are technological, while others are matters of policy and procedure. I don’t have access to the really awesome tools used to make virtualization really shine, so it will be an investigation into VDI power management with nothing but the bare basics to help you. ®