The Virtualization Experiment

I spent the last couple of months running a separate VM for each function of the application development I was working on, and I am pretty convinced that the technology has not yet reached the point where this makes sense.

I’m not saying you can’t do it. I’m just saying it isn’t worth the cost.

My initial assumptions

I first thought that this would be a good idea for several reasons:

I might actually get a speed boost by having a dedicated virtual machine for each function of application development, since each machine would not be cluttered with programs it didn’t need.

I would be able to only use what I needed at the time, thus reducing the overhead I would incur.

I would be able to keep each virtual machine dedicated to its purpose, so it would never get cluttered up and need to be repaved.

I would be able to transport my machine to any computer just by transferring the VM.

If I hosed a machine I could restore to a known good state from a backup.
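With VMware Workstation, that known-good restore point can be scripted with the `vmrun` tool that ships with the product. A minimal sketch (the VM paths and snapshot names here are just placeholders, not my actual setup):

```shell
# Take a "known good" snapshot of a VM (path and name are illustrative)
vmrun -T ws snapshot "C:\VMs\WebDev\WebDev.vmx" "known-good"

# See what snapshots a VM has
vmrun -T ws listSnapshots "C:\VMs\WebDev\WebDev.vmx"

# Roll back after hosing the machine
vmrun -T ws revertToSnapshot "C:\VMs\WebDev\WebDev.vmx" "known-good"
```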

Most of these were decent assumptions. To give you an idea of what I was really aiming for, you have to understand my VM setup.

Web Development Machine

Mobile Development Machine

SQL Server Machine

Other / Side Project Development Machine

Test Dangerous Stuff Machine

The basic idea was to isolate: keep things clean and orderly, and increase performance on single tasks by using only what I needed.

It wasn’t a total disaster

But it wasn’t exactly what I had planned.

Things worked out pretty well, and I could have kept operating in that mode, but there were enough irritations that I decided to bite the bullet this weekend and axe the VMs for normal development work.

The biggest factor that made me end this experiment was performance. I had my VMs running on my SSD drive, but I still felt that the performance was suffering quite a bit.

Compile times and the general Visual Studio experience just were not what I considered tolerable for everyday use. Considering how often I compile in a day, even a small performance hit is magnified.

It was definitely nice having development activities separated out so that I could fire up certain VMs for certain kinds of work, but it was also a pain to get to my machine and find the wrong VM running, or to discover I needed to apply Windows updates to 5 machines.

Having SQL Server on a separate box seemed like one of the best ideas, but when I really think about it, a local SQL Server instance really only consumes memory when it is not actively being used, and I have 16 gigs of RAM. Losing the auto-complete you get from connecting to a local instance of SQL Server was also a fairly large price to pay.
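If the idle memory footprint is the main objection to a local instance, SQL Server’s ceiling can be capped with `sp_configure`. A sketch via `sqlcmd` (the instance name and the 2048 MB figure are just examples, not a recommendation):

```shell
# Cap SQL Server's memory so an idle local instance can't grow unbounded
# ('max server memory' is an advanced option, so enable those first;
#  the 2048 MB value is illustrative)
sqlcmd -S localhost -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
sqlcmd -S localhost -Q "EXEC sp_configure 'max server memory (MB)', 2048; RECONFIGURE;"
```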

Another big issue turned out to be tooling. It turns out many of the tools I use aren’t specific to a particular development task; things like text editors and R# needed to be installed on each VM. Often the tool I wanted wasn’t installed on the VM I was using, and it became a headache keeping everything up to date and installed in the right place.
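One way to take some of the sting out of keeping the same tools installed everywhere is a package manager like Chocolatey, so each VM can be brought in line with one script. A sketch (the package IDs below are examples only, and some tools, like R#, may not be packaged at all):

```shell
# Run on each VM to install a common toolset and update everything
# (package IDs are illustrative; check the Chocolatey gallery)
choco install notepadplusplus 7zip git -y
choco upgrade all -y
```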

One issue I didn’t expect was the use of monitors. Using VMware I could expand a VM onto multiple monitors, but it was irritating when the VM covered up a window on the main computer and I needed to switch back and forth between machines. I have 6 monitors hooked up to my PC, so for most people this probably wouldn’t be as big of an issue.

I had some big dreams of using Unity mode to run applications from different VMs and make it all feel like one PC, but that technology just isn’t yet mature enough to be worth the cost in efficiency. Right now Unity mode is rather slow, error-prone, and hard to use.

For me it comes down to this…

I want my workstation to be as fast as possible when I am doing disk- or processor-intensive tasks. PCs are not yet at the point where CPUs and disks are so fast that everything remains virtually instantaneous inside a VM. When we get to that point, I won’t feel like I am making a sacrifice by working in one.

I want to be able to seamlessly switch tasks and monitors. The reason I have 6 monitors on my computer is because I like being able to drag any window anywhere and keep right on going. Running VMs puts a bit of a stutter step into my window flinging.

I don’t want to maintain 5 different operating systems. You don’t really notice the effort when you are maintaining just one, but when you are trying to keep 5 operating systems up to date with patches, software updates, and everything else, it becomes a pain.

Sure, I can’t have that nice, clean, my-computer-just-stepped-out-of-the-shower feeling as I open up a VM targeted exactly for the task at hand. Yes, I have to put all kinds of junk into my registry and feel “icky” about it. SQL Server is always running in the background, chomping up a gig or so of my RAM.

But! I am running about as fast as I can and I have the agility I need to be more efficient.

As always, you can subscribe to this RSS feed to follow my posts on Making the Complex Simple. Feel free to check out ElegantCode.com where I post about the topic of writing elegant code about once a week. Also, you can follow me on twitter here.
Comments

I’ve been doing a similar experiment since I started at TrackAbout, and my experience has mirrored yours. If you’re picky about how things are set up (and what developer isn’t?), it becomes a real chore to keep simple things in sync across all the VMs. And the performance penalty makes useful tools like ReSharper less fun.

Thanks much for the write-up. For some reason, I’d gotten it in my head that I was the last one to go the VM route. Glad to hear it’s not all peaches and ice cream, and I can procrastinate it a little while longer 😉

A couple of questions I have regarding this experiment, which may be answered in previous posts:
Was virtualization hardware used (a complete system configuration supporting AMD-V or Intel VT)?
Was direct-mapping of a drive an option in the virtualization software, as opposed to using VHD or similar formats, and if so, was it used?

I ask because both of these can have a significant impact on virtualization performance. One of my close friends works on a VPS farm, and one of their advantages over the competition is not using disk-image-style disk virtualization; they use a hardware solution for storage virtualization instead. The disadvantage is that it makes those virtual disks difficult to mount to other VMs.

I have been developing on VMs for the past couple of years now (in good part because I needed to run multiple versions of Office), and can easily relate to the update problem, which is really a pain. On the other hand, I like the ability to spin up a clean, pre-configured machine in a minute for a project or an experiment, tweak it, and trash it right after. It enforces a form of cleanliness, too: because I expect the VM to be temporary, I am much more rigorous about archiving and backing up what I need to retrieve.