There are a few questions that I've found on ServerFault that hint around this topic, and while it may be somewhat opinion-based, I think it can fall into that "good subjective" category based on the below:

Constructive subjective questions:

* tend to have long, not short, answers
* have a constructive, fair, and impartial tone
* invite sharing experiences over opinions
* insist that opinion be backed up with facts and references
* are more than just mindless social fun

So, with that out of the way.

I'm helping out a fellow sysadmin who is replacing an older physical server running Windows 2003, and he's looking not only to replace the hardware but to "upgrade" to 2012 R2 in the process.

In our discussions about his replacement hardware, we discussed the possibility of him installing ESXi and then making the 2012 "server" a VM and migrating the old apps/files/roles from the 2003 server to the VM instead of to a non-VM install on the new hardware.

He doesn't perceive any time in the next few years the need to move anything else to a VM or create additional VMs, so in the end this will either be new hardware running a normal install or new hardware running a single VM on ESXi.

My own experience would still lean towards a VM, though there isn't a truly compelling reason to do so other than the possibility of creating additional VMs later. There is also the additional overhead and management aspect of the hypervisor, although in my experience a VM offers better management and reporting capabilities.

So with the premise of hoping this can stay in the "good subjective" category to help others in the future, what experiences/facts/references/constructive answers do you have to help support either outcome (virtualizing or not a single "server")?

We're looking for long answers that provide some explanation and context. Don't just give a one-line answer; explain why your answer is right, ideally with citations. Answers that don't include explanations may be removed.

11 Answers

In the general case, the advantage of putting a standalone server on a hypervisor is future-proofing. It makes future expansion or upgrades much easier, much faster, and as a result, cheaper. The primary drawback is additional complexity and cost (not necessarily financially, but from a man-hours and time perspective).

So, to come to a decision, I ask myself three questions (and usually prefer to put the server on a hypervisor, for what it's worth).

How big is the added cost of the hypervisor?

Financially, it's usually minimal or non-existent.

Both VMware and Microsoft have licensing options that allow you to run a host and a single guest for free, and this is sufficient for most standalone servers, exceptions generally being servers that are especially resource-intensive.

From a management and resource standpoint, determining cost can be a bit trickier.

You basically double the cost of maintaining the system, because now you have two systems to monitor, manage and keep up-to-date with patches and updates (the guest OS and the host OS).

For most uses, this is not a big deal, as it's not terribly taxing to maintain one server, though for some especially small or especially technically challenged organizations, this can be a real concern.

You also add to the technical skills required. Now, instead of just needing someone who can download updates from Windows Update, you need someone who knows enough to manage and maintain the virtualization environment.

Again, not usually a problem, but sometimes, it's more than an organization can handle.

How big is the benefit from ease of upgrade or expansion?

This boils down to how likely future expansion is, because obviously, if they don't expand or upgrade their server assets, this benefit is zero.

If this is the type of organization that's just going to stuff the server in a corner and forget about it for 10 years until it needs to be replaced anyway, there's no point.

If they're likely to grow organizationally, or even just technically (by say adding new servers with different roles, instead of just having an all-in-one server), then this provides a fairly substantial benefit.

What's the benefit now?

Virtualization brings benefits beyond future-proofing, and in some use cases, they can be substantial.

The most obvious one is the ability to create snapshots and trivial-to-restore backups before doing something on the system, so if it goes bad, you can revert in one click.

The ability to experiment with other VMs (and play the "what if" game) is another one I've seen management get excited about.

For my money, though, the biggest benefit is the added portability you get from running a production server on a hypervisor. If something goes really wrong and you get yourself into a disaster-recovery or restore-from-backups situation, it is almost infinitely easier to restore a disk image to a machine running the same hypervisor than trying to do a bare-metal restore.
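The snapshot-before-maintenance workflow mentioned above can be sketched as a small helper that assembles the ESXi-shell command line. This assumes VMware's `vim-cmd vmsvc/snapshot.create` interface; the VM ID and snapshot names below are hypothetical:

```python
import shlex

def snapshot_create_cmd(vm_id: int, name: str, description: str = "",
                        include_memory: bool = False, quiesce: bool = False) -> str:
    """Build the ESXi-shell command to snapshot a VM before risky maintenance.

    Mirrors: vim-cmd vmsvc/snapshot.create <vmid> [name] [description]
                                           [includeMemory] [quiesced]
    """
    args = ["vim-cmd", "vmsvc/snapshot.create", str(vm_id),
            name, description,
            str(int(include_memory)), str(int(quiesce))]
    # Quote each argument so spaces in names/descriptions survive the shell.
    return " ".join(shlex.quote(a) for a in args)

# Hypothetical VM ID 12; you would run the printed command over SSH on the
# host before the upgrade, and snapshot.revert afterwards if things go bad.
print(snapshot_create_cmd(12, "pre-upgrade", "before ERP patch"))
# → vim-cmd vmsvc/snapshot.create 12 pre-upgrade 'before ERP patch' 0 0
```

Reverting is the symmetric `vim-cmd vmsvc/snapshot.revert` call, which is what makes the "revert in one click" claim above practical even without the graphical client.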

There are a few benefits to virtualizing a single server. The first few things that come to mind are:

Create snapshots

Import/Export VMs (for example, export the VM as an .OVF so developers can load it into Workstation or Player to have an exact copy of a server)

Easily clone or make into a template (for when you decide VMs are kinda nice)

Readily available to add additional VMs in the future

I think the most important of those would be the snapshot capabilities. We use VMware all over in our company, so for us it would make sense to have the server "ready" for when there's a need for more VMs.

+1 for the snapshots mention. While it isn't used for "backups" it is a great thing to use before upgrading that server side app in an environment without a test server around.
– TheCleaner Mar 13 '14 at 14:38

+1 Not to mention that a virtualised environment is ideal for creating said 'test server'. It's certainly not as good as dedicated test hardware, but it's a lot better than nothing.
– Calrion Mar 16 '14 at 9:52

The most compelling reason to use a hypervisor for a single server, especially with something like Windows Server, is that you have total hardware abstraction for the production OS and can move it to completely new server hardware without any problem, should the need arise. I consider this a really valuable feature that by far outweighs the drawbacks of having a practically unnecessary hypervisor running in the background.

I think the operating system being virtualized is a big factor, along with performance requirements and potential for expansion/growth. Today's servers are often excessively powerful for the applications and operating systems we use. In my experience, most standard Windows systems can't make efficient use of the resources available in a modern dual-socket server. With Linux, I've leveraged some of the granular resource management tools (cgroups) and containers (LXC) to make better use of physical systems. But the market is definitely geared toward virtualization-optimized hardware.

That said, I've virtualized single-systems rather than bare-metal installs in a few situations. Common reasons are:

Licensing - The dwindling number of applications that license based on rigid core, socket or memory limits (without regard to the trends in modern computing). See: Disable CPU cores in bios?

Portability - Virtualizing a server abstracts the VM from the hardware. This makes platform changes less disruptive and allows the VM to reference standard virtualized devices/components. I've been able to keep decrepit (but critical) Windows 2000 systems on life-support using this approach.

Future expansion - I have a client now who has a Windows 2003 domain controller running on 2001-era hardware. I'm building a new single-host ESXi system for them which will house a new 2012 R2 domain controller for the interim. But more VMs will follow. In this configuration, I can offer reliable resource expansion without additional hardware costs.

The downside of doing this with a single host/single VM is management. I'm coming from the VMware perspective, but in the past, ESXi was a bit friendlier to this arrangement. Today, the requirement of the vSphere Web Client and restricted access to basic features make running a single-host (and single-VM) solution less attractive.

Other considerations are crippled hardware monitoring and more complexity involved with common external peripherals (USB devices/tape drive/backups/UPS solutions). Today's hypervisors really want to be part of a larger management suite.

One reason I can think of in favor of virtualizing a single server into a VM on a single host is the ability it gives you to then mess with a test environment for that "server".

If the hardware is more than capable, you could clone the server VM and remove its NIC/network abilities and isolate that clone as a "test platform" to mess with before trying the same out on the "production" server. An example would be if the server is running an ERP software and you want to test what would happen if you ran a particular script against the ERP software/database. You could do it on the cloned VM as a test first. This could then be done in conjunction with a snapshot of the live VM before deployment on it, with the added benefit of knowing it should work fine.

Creating the same cloned "test" environment could be done with a P2V of an existing physical server, but you'd then require an additional physical host to place your new test VM on. In the scenario above, everything can reside on the same physical hardware (which nowadays is almost always overkill for a single VM).

Built-in "KVM over IP" (sort of) - you can access your server's console remotely without needing a KVM over IP. Sometimes you just don't want to do something over RDP and need console access. With a VM, you fire up the management tool of choice (XenCenter, vSphere Client, etc.) and you're on the console of your VM.

With VMs (and for non-VM servers, with my KVM over IP) I no longer have to stay in my cold server room for hours.

Migration to new hardware - OS upgrade aside, to put in your new hardware you have to migrate the system, move things around, etc. With a VM, you don't (usually) have to do anything. You upgrade your hardware, put the VM files on the new hardware, and fire it up.

While one may not foresee future VMs, "if you build it, they will come". You'll want to spin up a new VM to test something, try new stuff, etc. There are just so many more possibilities.

VMs give you the ability to revert with snapshots, take a copy, or clone the VM (at run time) and then spin it up - whether to test something before putting it live, or just to have a second copy of the first. There are many things you can do with VM snapshots and the like.

Redundancy - if you throw in a second VM server, you can have redundant hardware, and while I don't know about the current VMware licensing schemes, XenServer now apparently includes XenMotion in the free package, so the cost overhead may not apply.

The reasons I would not use a VM:

Overhead - hardly any, but there is obviously some overhead.

More complex to manage - a little more complex but it's easy to learn. If you're not going for a massively large virtualized environment, training is trivial.

+1 for the "kvm over IP". Nice to have that functionality when a "server" reboots and hangs at boot and you suddenly can't RDP or ping it from home over VPN. Not having to drive in to see what happened is a real benefit.
– TheCleaner Mar 13 '14 at 15:26

@ewwhite - all my new servers have that, but not my old ones. But I don't have the license for the "KVM": I can reboot and check data through lights-out, but not get the console. I don't really need it in my case, as I already have a KVM over IP, which gives me all I need when combined with lights-out.
– ETL Mar 13 '14 at 15:35

I'm not going to provide as detailed an answer here as others have, so I'll just say that I'm finding it harder and harder these days to justify installing the server OS on bare metal as opposed to installing a hypervisor (of your choice) and virtualizing the workloads. The advantages to doing this, in my mind, are:

Cost benefit. In the long run, if I need to deploy additional workloads I don't have to shell out for more hardware for those additional workloads. In some cases, when using Hyper-V, I may even save on my licensing costs.

Ease of deployment and redeployment.

Ease of implementing high availability and failover.

Portability. I can likely move the VM just about anywhere if I need to decommission or outsource the current host.

Future proofing. Your fellow sysadmin may not currently see any future need for a hypervisor based infrastructure but my guess is that within 12 to 24 months he will and he'll be glad he chose to go down the virtualization route, if he does in fact choose that route.

Disaster recovery. I can backup an entire VM and restore it or replicate it to another host in a matter of minutes.

I'm coming in late, and feel like people have already made some of the points I would have wanted to make, but I'll briefly recap:

Future-proofing: Easier to add more RAM/CPU/disk/etc. as the need arises.

Portability: Easier to move to new hardware, especially in case of disaster.

Virtualizing is better than keeping horrible old hardware around to run something you can't get rid of.

The management software is often as nice as KVM or DRAC. (Also, if you happen to inherit something where the previous admin departed without leaving their password, you can use them as "physical access" to break in. As handy as the bolt cutters I have in my car for the same reason--previous admin at one job used padlocks on hardware. I inherited the servers but not the key.)

Snapshotting and making copies so I can test risky procedures before deploying them.

However, the thing that no one has mentioned yet and probably should be mentioned: If you're in the kind of shop where people may need a test server, and are likely to solve that need by grabbing a spare desktop and slapping a server OS on it, being able to offer them a VM will likely suit your and their needs much better. Virtualizing the new server can be the "reason" to allow future virtual expansion. (And, frankly, if you're not in that kind of shop, you probably already have virtualization.)

Of course, not everything virtualizes. I scored physical hardware for the management software that included PXE by describing to them what they'd need to do to turn off TCP Segment Offloading (PXE ran like a one-legged dog with TSO on, but they would have had to turn it off for the entire virtual VLAN, and they were disinclined to do that). So, if the new server is something specialized enough to be unsuitable, well, never mind.

But barring that type of specialization, it'd be worth it to me to get rid of a bunch of (potentially unmanaged) PC-class machines running server OSes lying around, now or in the future.

If your use case doesn't require 100% of the power from dedicated hardware, then I would go virtual every time. It provides flexibility, snapshot facilities, and built-in console access (even though you should use out-of-band management as well).

Absolutely, I virtualize whenever I can. This allows me to prepare to do the following in the future:

Full system backups that are much easier to do, and quite often cheaper too.

The OS becomes portable: I can move the VM to another host if I need to, with or without downtime or clustering; it doesn't matter at this point.

Windows licensing can become cheaper under certain conditions

When low on hardware, I can use production systems to test updates (not the best practice, but budget, budget...) after taking a snapshot. You can't do that on a regular host unless it's booting from an expensive SAN.

Even getting the lowest-end server hardware available, I still get more resources than I might need for a specific server role. Might as well get better hardware and use all of it with VMs.

Virtualization features can often replace unnecessary software. For example, I used to set up Double-Take and Neverfail a lot to create DR replicas of Windows servers. With virtualization, I can do that at the hypervisor level, utilizing much cheaper technologies and running more reliable and flexible solutions.

In short, unless the server is going to run specific software with limitations prohibiting it from being virtualized (usually strict network or disk I/O latency requirements, and with the right hardware even those are achievable with virtualization), I try to keep things as virtual as possible.

"Windows licensing can become cheaper under certain conditions" Can you tell me some of these conditions?
– Uwe Keim Mar 14 '14 at 6:49


Simple example - I need to run an AD/DNS/DHCP server and a terminal server. I could go and purchase two Windows licenses, but instead I get one 2012 R2 Standard license and run two VMs under that license. $700 savings right there.
– dyasny Mar 14 '14 at 14:21
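The arithmetic in the comment above can be sketched out explicitly. The $700-per-license price is illustrative (taken from the comment, not a quoted Microsoft price), and the rule relied on is that one 2012 R2 Standard license grants two virtual instances on the same licensed host:

```python
# Assumed street price per Windows Server 2012 R2 Standard license, USD.
PRICE_PER_STD_LICENSE = 700
# 2012 R2 Standard grants rights to two virtual OS instances per license.
VMS_PER_STD_LICENSE = 2

def licenses_needed(guest_count: int) -> int:
    # Round up: e.g. 3 guests still require 2 Standard licenses.
    return -(-guest_count // VMS_PER_STD_LICENSE)

# Two physical boxes: one license each.
physical_cost = 2 * PRICE_PER_STD_LICENSE
# One host, two VMs: a single license covers both guests.
virtual_cost = licenses_needed(2) * PRICE_PER_STD_LICENSE

print(physical_cost - virtual_cost)  # → 700
```

The same rounding logic shows where the savings flatten out: a third VM on the host would need a second license, just as a third physical box would.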

I recently had a VM whose disk wasn't big enough. It was running something that generated large amounts of data in a relational database, which for performance reasons had to be on the same machine.

After expanding the disk image twice, I got to the stage where there wasn't enough space left on the host to safely copy and expand the image again. It would have saved me a couple of days' work to have done this from the start on a dedicated machine, even if it was a cheap PC rather than a high-performance server.

With a dedicated machine you can just shut it down, and add more disks. If it's a server running other VM's, unless you have spare hot-swap bays, you might have a problem shutting it down for this.
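As a rough sketch of the capacity-planning point raised here: growing a disk image typically means copying it first, so the datastore needs room for the image at its new size (plus a safety margin) while the old one still exists. The `can_expand_safely` helper and all sizes below are hypothetical, not part of any hypervisor's tooling:

```python
def can_expand_safely(free_gib: float, new_image_gib: float,
                      headroom: float = 0.1) -> bool:
    """True if the datastore can hold a copy of the image at its new size,
    keeping `headroom` (a fraction of the new size) spare for safety."""
    required = new_image_gib * (1 + headroom)  # copy target + safety margin
    return free_gib >= required

# 500 GiB free on the datastore, want to grow the image to 450 GiB:
print(can_expand_safely(500, 450))  # → True
# With only 400 GiB free, the copy-and-expand would be risky:
print(can_expand_safely(400, 450))  # → False
```

Running this check before each expansion would have flagged the third expansion described above as unsafe before any copying started.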

That's a capacity planning issue, no? On server class hardware, it's still possible to make disk subsystem changes on the fly, even with a hypervisor.
– ewwhite Mar 14 '14 at 13:47


With a dedicated machine you can just shut it down, and add more disks. If it's a server running other VM's, unless you have spare hot-swap bays, you might have a problem shutting it down for this. - huh? If you don't have spare bays on a dedicated machine you aren't going to be adding more disks either. That's simply planning accordingly on your disk sizes up front.
– TheCleaner Mar 14 '14 at 13:48