One thing that I don't mention in that article is virt-manager (apt-get install virt-manager), which you'll almost certainly want as well. Virt-manager is the tool you see me managing and remote controlling the virtual machines with in the YouTube videos; it can also be used to set the VMs up to begin with, instead of using virt-install from the command line - and it can be run either on the host, or on another machine which can get to the host by SSH. Very slick tool.
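If you do go the command-line route instead, a minimal virt-install invocation looks roughly like the sketch below. The guest name, disk path, and ISO path are made-up illustrative values, and exact option spellings vary a bit between virt-install versions, so check the man page for your distro:

```shell
# Create a new KVM guest from an install ISO (illustrative values only).
# Assumes libvirt and virt-install are installed, libvirtd is running,
# and a bridge named br0 already exists on the host.
virt-install \
  --name testvm \
  --ram 1024 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/testvm.img,size=20 \
  --cdrom /srv/iso/debian-netinst.iso \
  --network bridge=br0 \
  --graphics vnc
```

Once installed, `virsh list --all` and `virsh start testvm` manage the guest from the shell, and virt-manager will show the same guest in its GUI.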

How is the stability of KVM? I currently run a single PE r710 with ESXi 4.1, and am upgrading our infrastructure to include 3 additional servers, all with local storage (no budget for a SAN). How is KVM running for extended periods of time with tens of VMs running 24/7? I'm mostly considering uptime and stability rather than performance, because it seems like the performance is there.

I don't have extended multi-VM experience, but I have had one of the "Turnkey Linux" (Ubuntu-based) guests running for three months without interruption.

I haven't heard of any real stability problems. Since you specifically mention ESXi 4.1, you should probably also look at the version that your distro will use.

While most of the functionality of Lab Manager has been built into vCloud, it is still missing some important things. One of those things is linked clones, which greatly reduce the space needed on the SAN for storage of virtual machines. Another problem is that we currently get Lab Manager as part of their academic alliance program. I doubt that we will get their new flagship product for free when it comes out. We have two years before Lab Manager is EOL'd. Better to be proactive and/or build our own solution than wait and hope they address our needs out of the kindness of their hearts. BTW, our infrastructure group for the university reports that they are paying 40% of their server costs just for the VMware licenses. That's pretty damn ridiculous.

I've had a public-facing KVM box running for about 18 months, with six full-time guests (a mix of Squeeze and Lenny) and a couple of occasionals. One guest is under constant moderate network load (about 15Mbps), one has bursty demands of 10 megabits for a short time every hour or so, and the rest have fairly minimal demand profiles. None are burning much CPU.

It's been absolutely flawless so far: no downtime or even mildly strange behavior. And that box is several kernel revs back; they've been making very rapid improvements to the KVM system. It runs like a Swiss watch, 24x7x365.

Sounds like KVM is up to the task for most of our VMs then. The only thing I'd be concerned about is running our database machine on it, and that's just a matter of doing the right setup (direct disk access or carving space out with LVM, cache=none) and doing some hard power-off tests (easy with a VM). We have the hardware either way; it's just nice to be able to throw images around rather than do a reinstall.
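For reference, cache=none is set per-disk in the libvirt domain XML. A sketch of a disk stanza backed by a raw LVM volume might look like this (the volume group and LV names are hypothetical):

```xml
<disk type='block' device='disk'>
  <!-- cache='none' bypasses the host page cache, so the guest's own
       write barriers reach the disk - safer for databases -->
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/vg0/dbvm-disk0'/>
  <target dev='vda' bus='virtio'/>
</disk>
```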

The only thing I'd be concerned about is running our database machine on it

FWIW, several of the VMs on my host that's running webstuff are running mysql and/or postgresql. No problems, no hiccups. Depending on your definition of "heavy load", they're not super heavily loaded; but one of them averages about 50 QPS or so, sustained, 24/7. That one is a "server-in-a-bottle" running Apache and MySQL together, if it helps.

My project is probably one of the heaviest users of KVM imaginable. We have multiple 32 core machines starting up and terminating thousands of VMs per day per node. We're looking to purchase 64 core nodes now.

There was a bug in KSMD in pre-3.x that seems to cause invalid page accesses when a process tries to allocate memory, but this seems to have been remediated.

I have a 2 node cluster and a 3 node cluster running KVM VMs in production. We don't have shared storage, but have had great success using the open source VM manager called Ganeti (http://ganeti.googlecode.com). It spools up VMs backed by DRBD-replicated LVM images. It supports live migration, balancing, and a few other goodies. Performance has been great, and we're slowly moving our small images which used to be hosted at a VPS provider to it as well.
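For anyone curious, creating a DRBD-backed instance in Ganeti is essentially a one-liner. The instance and node names below are made up, and exact flag spellings vary a bit between Ganeti 2.x releases, so treat this as a sketch and check gnt-instance(8):

```shell
# Create a DRBD-replicated guest on a primary:secondary node pair
# (hypothetical hostnames; -t picks the disk template, -o the OS
# definition, -s the disk size).
gnt-instance add -t drbd -o debootstrap+default \
  -s 10G -n node1.example.com:node2.example.com web1.example.com

# Later, live-migrate it to the secondary node.
gnt-instance migrate web1.example.com
```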

This thread has piqued my interest in Xen/KVM (especially KVM) as a replacement for ESX. I've been waiting on Windows Server 8 to be released in order to determine whether to upgrade to ESX 5.

So, what exists as a replacement for vcenter/ESX as far as KVM is concerned? We don't have a need for anything beyond vcenter standard edition (mostly just live migration and template deployments). What's available in the KVM world that's a suitable deployment/alternative to vcenter [standard]?

I've been playing with Proxmox for the very purpose of giving ESXi the boot. It's essentially Debian with their own pretty web front end on top. It is supposed to be able to do live migrations, but I have not been fortunate enough to test that out yet.

There was a bug in KSMD in pre-3.x that seems to cause invalid page accesses when a process tries to allocate memory, but this seems to have been remediated.

Hmm, do you happen to have any more details on that bug?

Essentially, under extremely heavy load, pages aren't properly marked as allocated or deallocated; in rare spurious cases, this causes a kernel page inconsistency which is flagged by the kernel, and the process in question is killed. 'rare spurious cases' can mean a few times a day when you're doing it thousands of times.

We use KSMD heavily enough to peg an entire core, so this is probably an extreme use case.
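For anyone wanting to see how hard KSM is working on their own host, the kernel exposes its counters under /sys/kernel/mm/ksm (standard sysfs paths, present whenever KSM is compiled in). A quick look:

```shell
#!/bin/sh
# Show whether KSM is running and how much it is deduplicating.
# The ratio pages_sharing/pages_shared is a rough measure of how
# many duplicate pages each shared page is standing in for.
KSM=/sys/kernel/mm/ksm
if [ -d "$KSM" ]; then
    for f in run pages_shared pages_sharing full_scans; do
        printf '%-14s %s\n' "$f:" "$(cat "$KSM/$f")"
    done
else
    echo "KSM not available in this kernel"
fi
```

Writing 1 to /sys/kernel/mm/ksm/run turns the daemon on; 0 turns it off.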

I'm using ESXi 5 right now for hosting VPSs, but I don't like the idea of some proprietary software managing security for my guests. Also, I'm using the free version, so there's no support (especially for the hardware I'm on). For example, if a root kit enters the wild for ESXi, I'm completely on my own. Definitely interested in KVM: don't need Windows support.

oVirt has had a stable release, and there are RPMs out there for deploying on CentOS 6 so you don't have to spend 4 days trying to get maven to stop being a dick. It's worth looking into if you're looking for a web based management system for your VM farm.

So, due to miscalculating my withholding, my tax return was a bit nicer than expected. I am thinking that, of all the projects I want to do, it is time to jump into this one first, as many others might be built on top of it. With that said, does anyone have good minimum/recommended system specs if I want to run the following virtual machines using KVM?

1. Asterisk PBX - Light home use, no more than about 4 phones, mostly used as an intercom to call between them.
2. MythTV (backend only) - I have an HD Homerun Dual tuner. This would handle all the recording and scheduling, but none of the playback.
3. XBMC backend - Used to house the video library for other XBMC instances (probably running on Raspberry Pi computers).
4. DNS server for the home network, possibly expanded to replace my router. May use a prebuilt distribution for this.
5. Possibly implement a small security camera system using ZoneMinder and a couple of IP cameras.
6. Spinning up various OS instances for things I just want to play with/test. Probably not heavy use and most likely not left on full time.

Any recommendations for CPU/Motherboard/Memory for this workload? I don't have a set budget for this yet. I can accept it being a bit slower as this is home use only and even the MythTV backend doesn't have a lot to do since the HD Homerun is taking care of streaming the HD MPEG2 and the backend just has to record it.

Yeah, everything in Linux/KVM is aimed at very aggressive memory caching and utilization, so the more RAM you have the better. Then performance is just a question of the number of cores and their speed, and the number and speed of your disks. If anything, you could have two disk storage systems: one for bulk data that is just a RAID 1 for fault tolerance and "decent" speed, and one that is faster but with probably smaller drives (and more of them) in a RAID 10, which gives you both fault tolerance and speed.

RAM is cheap, so that is good. I was looking at motherboards with the capability of at least 32 GB of RAM, even if I only use 16 GB to start.

Keep in mind that the DIMMs required to get to 32GB on consumer boards (8GB unbuffered unregistered) effectively don't exist yet - so while those motherboards might "support 32GB", in the real world you aren't breaking the 16GB barrier on them barring some unobtainium hitting the market.

If you want more than 16GB, you're going to have to go Opteron or Xeon. The bad news is, that means expensive motherboards. The good news is, the CPUs themselves aren't horribly pricey compared to higher-end consumer CPUs, and the RAM is considerably *cheaper* - I move a lot of Opteron 4xxx machines that use 8GB registered DIMMs at about $90/8GB DIMM. The motherboards are $300-$350, but they're dual-processor capable, and can go up to 64GB using the 8GB DIMMs, or 128GB using 16GB DIMMs (which ARE on the market, but are more expensive per GB than the 8GB parts).

For disks, that will up the price a bit, but I certainly see your point there. Can one assume that SATA Raid is fine for home use?

Yes, although I'd advise spending a little more money and getting server-grade disks - I use WD Black, and try to avoid the WD Green, for example. This isn't idle paranoia, I've been bitten by consumer drives considerably more often than I have been bitten by the higher-grade drives.

Oh, and if by "SATA Raid" you mean "motherboard RAID", NO NO NO NO NO. Kernel RAID is fine, but avoid motherboard RAID like the plague.

My lower-end storage configuration for virt servers is a 4-drive RAID10 of WD Black 1TB or 2TB drives, and I can recommend it without reservation for that use.
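A kernel-RAID version of that 4-drive RAID10 is a single mdadm command. The device names below are examples only; double-check which disks you're handing it with lsblk/fdisk first, since this destroys their contents:

```shell
# Build a 4-drive Linux software RAID10 array (device names are
# examples - verify before running, this wipes the member disks).
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Optionally make it an LVM physical volume, so guest disks can be
# carved out as logical volumes later.
pvcreate /dev/md0
vgcreate vg_guests /dev/md0

# Watch the initial resync progress.
cat /proc/mdstat
```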

$500 would be easy to budget. $1000 is a bit more than I would want to spend for home use, but somewhere in between is probably my sweet spot.

The Shadow - Good to know on the kernel raid. Makes motherboard selection a bit easier as well if I don't have to look for one with RAID support. I've used motherboard raid a bit in the Windows world, but it always seems a bit wonky when it comes to rebuilding the array after a failure occurs, so I have no trouble believing your warning against it.

Keep in mind that the DIMMs required to get to 32GB on consumer boards (8GB unbuffered unregistered) effectively don't exist yet - so while those motherboards might "support 32GB", in the real world you aren't breaking the 16GB barrier on them barring some unobtainium hitting the market.

Wut? 4x8GB DDR3 non-ECC unregistered DIMMs are readily available these days. And 64GB can be achieved with a socket 2011 system if you're that way inclined (and will generally be cheaper than trying to do it with a Xeon based setup).

Re: 4x8 gig ram sets, I just bought the Mushkin Silverline set from Newegg for ~$209, running it on a Gigabyte Z68 board & i5-2500k with zero issues.

Any tips for getting ovirt to connect to VDSM on Centos 6? I found http://www.dreyou.org/ovirt/ but I always get "install failed". Need to dig further obviously.

I have a stock Fedora 16 oVirt manager and I am trying to connect it to CentOS 6 if possible. If it turns out to be infeasible, I can reinstall the hypervisors with oVirt Node, but that will take a while.