6 Answers
6

Simply put, virtualisation isn't the answer to everything, but it is great!

You say it adds another layer and makes things run slowly, but, on modern systems, the overhead is not actually that much. Many techniques and hardware features now exist to keep this "layer" minimal (such as Intel VT-x and AMD-V), and if you are using hypervisor-based virtualisation, the layer is smaller still.

However, based on the way that disks and memory are utilised, it is possible to see speed increases in some situations.

Now, a quick summary of virtualisation products, as there is some confusion. There are four categories, most of which are quite different:

Desktop (software based) virtualisation - generally designed for programmers, testers and IT pros. Speed is still very fast/near native on modern machines, but it is at the mercy of the host operating system it runs under, so, whilst I run 3 VMs 24x7 on my machine for various tasks, it isn't really "designed" for this (e.g. Microsoft Virtual PC, VMware Workstation, Oracle (formerly Sun) VirtualBox). These emulate an entire virtual computer.

Server (software based) virtualisation - this was quite a large market for a while, but it was less capable than hypervisor-based virtualisation and is generally a dead market now. Basically, it is desktop virtualisation optimised for a server environment (e.g. Microsoft Virtual Server, VMware Server).

Software virtualisation - as per the comments, I left this one out. This is a specialised market, usually for virtualising single programs (e.g. Microsoft App-V, VMware ThinApp). It creates a thin "layer" between your computer and the software: it intercepts all calls made by the program in order to redirect file/registry writes and basically sandbox the application. This has a few benefits, such as the ability to run multiple versions of some complicated applications, and it makes deployment quite easy (although it can be a difficult area to understand/get into).

As for why they are so popular now - It all comes down to cost and administration time.

For example, in my company, I need to run servers for the various systems I use... SQL Server, an intranet system, a billing system, an email system, VoIP, a legacy system for some old software I maintain for a few clients, and a few more. Granted, some of these could run on the same box, but, for a long list of boring reasons, I want to split them up.

In 2004/5, this choice meant that I had 6 servers here running 24x7. It worked great, but each machine had 2GB of memory (back when that was expensive!) and a P4-era processor I could cook an egg on. The processors on all the systems hardly went above 5% usage... maybe a peak of 10-15%, but the majority idled at 0% for almost the life of the machines.

In electricity, I really can't remember, but, I believe this cost me around £100 per machine per year.

Now, if I had instead virtualised this environment (which I ended up doing), I was able to move all of these systems to a single machine and benefit from many features such as oversubscribing memory.

What this means is: my 6 machines each had 2GB of memory, but, on average, they were each using around a quarter of that. On the new machine, I was able to just put in 8GB and, with a seamless P2V (physical-to-virtual - the process of migrating a machine) move, I had all my machines running as fast as they ever were... in fact, faster (as the new box has a faster CPU).
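The memory arithmetic above can be sketched as a tiny check. The figures (6 VMs, 2GB allocated each, roughly a quarter in actual use, an 8GB host) come straight from the answer; the function itself is just an illustration, not how any hypervisor actually schedules memory:

```python
def fits_on_host(vm_count, allocated_gb, avg_usage_fraction, host_gb):
    """Return True if the guests' average working-set memory fits on the host."""
    working_set = vm_count * allocated_gb * avg_usage_fraction
    return working_set <= host_gb

# 6 machines x 2GB each, averaging about a quarter of that in real use:
print(fits_on_host(6, 2, 0.25, 8))   # average demand is only 3GB
# Worst case, if every guest peaked at its full allocation at once:
print(fits_on_host(6, 2, 1.0, 8))    # 12GB of demand would not fit in 8GB
```

This is exactly the bet oversubscription makes: the average fits comfortably even though the sum of allocations does not.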

In addition to this, there are many other benefits, such as live migration (vMotion) and high availability, that just make virtualisation a brilliant choice.

It should also be said that there are some extras that many home/non-business users do not think about. I was able to get rid of some of my networking equipment, lose a load of cables, and drop my expensive KVM, 5 UPSs and my PDU... And best of all, it fits in a single cupboard/out of the way instead of the annoying huge humming of a ventilated rack! ... and there is no longer such a need to run the air conditioning so high in the summer (which I didn't factor into the electricity costs above).

I may have been small, but think of larger companies doing this on a larger scale. I once helped a largeish company migrate over to a virtualised environment. They didn't have the budget, so I negotiated contract terms under which I would get any cost savings for 1 year, and 20% for the next 3... It paid me very well, and they enjoyed the administration benefits in the first year - and huge cost savings going forward.

I hope this answers your question! If you have follow up questions, I will be happy to answer them.

Perhaps you should write a blog post about it @Wil ;-)
– Ivo Flipse♦Oct 29 '11 at 11:52

4

I fail to see how VMs contribute to high availability. Just the other day we had a hardware fault on our VM server hardware. Instantly, eight servers were gone off the network. vMotion doesn't work unless the host you want to move from is running, from what I understand.
– AndyOct 29 '11 at 12:52

2

@Andy - vmware.com/products/high-availability/overview.html - I have used this for clients. The VM runs in two locations, and if one goes down, the other picks up instantly without even losing a single ping. It really is truly amazing technology, without having to configure clustering or anything in the software - this is purely a feature of the hypervisor.
– William HilsumOct 29 '11 at 16:16

1

@IvoFlipse Deal! .... When I have the time! Should be starting a new job next week (albeit for a month contract) and things are just a bit hectic right now.
– William HilsumOct 29 '11 at 16:17

Testing Software against Operating Systems
I have seen a build script that, when a new version of the software was built, automatically started virtual machines for various operating systems, installed the new software, ran some unit tests to ensure that everything worked, and then shut down the virtual machines. In this particular case it was only one VM for each operating system, but it would be possible to extend this to more scenarios, e.g. a VM for Windows 7 32-bit, another for Windows 7 64-bit, another for W7 32-bit with Service Pack 1, W7 64-bit with SP1, VMs with IE 9, and so on. Since only one of these VMs runs at any one time, all the rest are using is disk space, so it is possible to have dozens of VMs on a normal server.

Saving disk space:
If I have ten virtual computers all running the same operating system, it is possible to have them share the same base virtual hard disk, with each then writing its changes to its own virtual hard disk.
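One concrete form of this shared-base-disk idea is QEMU's qcow2 backing files (VirtualBox has an equivalent in "differencing" disks). The sketch below only builds the `qemu-img` command lines rather than running them, and the image file names are invented for illustration:

```python
def overlay_command(base_image, guest_name):
    """qemu-img invocation creating a copy-on-write overlay over a shared base."""
    return [
        "qemu-img", "create",
        "-f", "qcow2",            # format of the new overlay disk
        "-b", base_image,         # shared base disk, treated as read-only
        "-F", "qcow2",            # format of the base disk
        f"{guest_name}.qcow2",    # this guest's private overlay file
    ]

# Ten guests, one base image: each overlay stores only that guest's changes.
for i in range(1, 11):
    print(" ".join(overlay_command("debian-base.qcow2", f"guest{i:02d}")))
```

The base image stays untouched; each guest's writes land in its own small overlay file, which is where the disk-space saving comes from.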

Allocating/re-allocating space.
With different physical servers, it is quite common to see one machine that is running out of disk space while other servers have loads free. Unfortunately, you can't take half a disk (or half an array) out of one server and stick it into another server. But with virtual servers, it is possible to reduce the allocation for one server and increase it for another (or just use dynamically expanding disks).
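Growing one guest's virtual disk can be a single command. This sketch assumes `qemu-img` and a made-up image name; note that resizing the image only enlarges the container, and the partition and filesystem inside the guest still need extending separately:

```python
def resize_command(image, delta_gb):
    """qemu-img invocation adding delta_gb gigabytes to a disk image."""
    return ["qemu-img", "resize", image, f"+{delta_gb}G"]

print(" ".join(resize_command("billing-server.qcow2", 20)))
```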

Snapshots.
This allows you to take a snapshot of your server at a point in time, rather like an almost instant full backup. This means you can do things like take a snapshot, shut your server down, mount the snapshot from last week, check some things, then shut down and mount your most recent snapshot and continue on - all without spending hours backing up and restoring your server. With a little more work, you can mount the older snapshot as another virtual machine and have the old and new copies running side by side.
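The take/restore/discard cycle looks roughly like this with VirtualBox's VBoxManage CLI (the VM and snapshot names are hypothetical, and the commands are constructed rather than executed; VMware and Hyper-V have equivalent operations):

```python
VM = "intranet-server"  # assumed VM name

def snapshot_commands():
    """The three basic snapshot operations for the VM above."""
    return {
        "take":    ["VBoxManage", "snapshot", VM, "take", "pre-upgrade"],
        "restore": ["VBoxManage", "snapshot", VM, "restore", "pre-upgrade"],
        "delete":  ["VBoxManage", "snapshot", VM, "delete", "pre-upgrade"],
    }

for step, cmd in snapshot_commands().items():
    print(step, "->", " ".join(cmd))
```

A typical use: `take` before a risky upgrade, then either `restore` if it goes wrong or `delete` to merge the snapshot away once you are happy.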

Moving virtual servers.
If you have, say, two host servers and you find that host1 is overworked but host2 is not, it is possible to move one of the guests from host1 to host2, which is almost as simple as shutting down the guest and moving a (rather large) file. (There are options, usually paid extras, that allow you to do wonderful things like move guests between hosts without shutting down the guest, so users don't notice.)
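The simplest, offline form of that move really is just "shut down, copy the file". This sketch builds the two commands using VBoxManage and `scp`; the VM name, paths and host name are invented, and live migration (vMotion and the like) exists precisely to avoid the shutdown step:

```python
def offline_move_commands(vm, src_path, dest_host, dest_path):
    """Shut a guest down gracefully, then copy its disk file to another host."""
    return [
        ["VBoxManage", "controlvm", vm, "acpipowerbutton"],     # graceful shutdown
        ["scp", f"{src_path}/{vm}.vdi", f"{dest_host}:{dest_path}/"],
    ]

for cmd in offline_move_commands("legacy-nt4", "/vms", "host2", "/vms"):
    print(" ".join(cmd))
```

On the destination host, the copied disk is then attached to a new VM definition and booted.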

And it isn't only servers/businesses where virtualisation can be beneficial.

I do my personal accounts using an old version of Quicken, a really, really old version of Excel and a few other little programs. This setup doesn't work right in Vista/W7 and doesn't work at all in 64-bit Windows. I used to run it on an old computer which started to become unreliable. It now lives on a virtual XP hard disk, so when I get a new machine, I just install Virtual PC, copy my virtual machine across, start it, and everything is set up and works much faster. No need to install Quicken and Excel, and no need to find the floppy disks that Excel came on (did I say it was a really, really old version?).

The downside of doing this with windows at home is the extra licensing cost. As AaronM has pointed out, there can be significant cost savings for business, but that is not the case at home.

Snapshots are not as great as they sound - they incur a BIG performance hit. Every snapshot you take slows down the virtual machine, and if you keep taking snapshots with multiple branches, your VM will grind to a halt. If what you need is a backup, you'd be better off copying the VM contents elsewhere rather than taking snapshots. Snapshots are ideal when you don't take too many of them and need a "quick" backup and restore mechanism, such as when testing software installs or virus behaviour.
– HippoOct 29 '11 at 15:55

+1 for snapshots. If what is being tested doesn't work, revert, if it works, get rid of the snapshot.
– BratchOct 30 '11 at 19:55

@JacobHayden Compatibility is not perfect, which is what XP Mode is for. But I did not actually get as far as testing my copy of Excel with 64-bit Windows (though I am running the 32-bit version of Office 2007 under 64-bit W7 at work). My problems related more to the other programs: the Quicken setup just crashed, and secondly I had issues with ODBC. It was easier to avoid the problems and continue using XP in a virtual machine.
– sgmooreOct 31 '11 at 22:41

I do everything in Linux on my notebook (not enterprise at all), but I still need the occasional thing in XP or 7. I used to have to go through the serious aggravation of rebooting my dual-boot machine twice - once to get to XP and once to get back. Now I can have XP running in a VM, so it feels just like another application under Linux. It's a huge improvement. And on top of that, all I have to do is copy one (huge) file to completely back up Windows - in ready-to-run form with all my settings intact. It's brilliant!
– JoeNov 1 '11 at 10:21

In large enterprises, it also allows for significant cost savings in licensing requirements. For example, a Microsoft Server 2008 Datacenter two-CPU licence will allow you to run as many copies of Server 2008 R2 on a virtual host as it can handle, without the additional overhead of per-OS licensing. Likewise, Microsoft SQL Server is licensed per CPU.

A single physical server with two CPUs could run several guest OSes, and each of them could run an instance of SQL Server - all covered under the single physical server licence, which can give considerable cost savings.

Another key reason I think it is so popular is that it is considered a "Green" way of operating your data center, because it has the potential to use less electricity. And Greenwashing is a big thing for corporate PR departments as of late.

In a typical non-virtualized environment, you have to build each server with excess capacity to handle the peak load, which means you have a lot of extra horsepower suckling on a power outlet just in case everyone decides to kick off an expensive request at the same time.

In a virtualized environment, multiple logical servers can share that excess capacity under the assumption that the logical servers co-located on a physical machine aren't all going to get maxed out simultaneously.

A second reason it is gaining steam is that it is riding the coat-tails of cloud computing. Virtualized servers are a core technology that makes it possible to offer many of the features of cloud computing - features that, not coincidentally, mirror those of virtualization. Cloud computing is a hot trend right now, and chances are that if you are putting servers in the cloud, they are virtualized servers.

Hi JohnFX, could you explain a little about how cloud computing and virtualization are alike or related? Thanks :)
– Dark TemplarOct 29 '11 at 20:12

1

I was mostly referring to hardware-as-a-service type cloud computing, wherein you outsource the server platforms and access them over the Internet. Before virtualization, companies like Rackspace would literally have physical computers dedicated to each customer. Now they just allocate resources using virtualization, and it saves them considerable cost.
– JohnFxOct 30 '11 at 5:31

All of the things mentioned in previous answers are correct, but the real reason it gained so much popularity early on in large enterprises is that it got around all of our vendor software license and encryption export restrictions when moving call center jobs to developing countries.

Mrm's comment is right on the money. In addition to allowing software to be used many, many more times than the number of purchased licences would allow (and providing a nifty legal grey area, since the software was technically installed on only one system and it is very difficult to prove forensically that multiple systems used it, much less explain how that's illegal once you have proved it), virtualization allows lazy IT departments to deploy old versions of software. This saves money and man-hours on upgrading, retraining users and dealing with issues caused by the upgrade.

I wouldn't say this is very accurate at all - please read my answer. I wouldn't call myself lazy at all, and I run legacy applications. The reason is that I can run Windows NT 4 for a client's system just fine virtualised, whereas where on Earth am I going to find support for hardware that old? If something breaks, I am going to be in serious trouble... It works flawlessly inside a VM. On top of this, every VM has a BIOS ID, NIC MAC and more, so they all look like completely separate machines and it is very easy to tell the difference.
– William HilsumOct 29 '11 at 16:28