Work needs a virtualization server. Needs to happen in the next month or so. I have several questions, and probably more to come.

First, here are the requirements:
* Budget: Let's say $5000, with some wiggle room if it's absolutely worth it.
* Run up to 10 low-intensity Windows Server 2012 VMs, but realistically, the typical load will be more like 5 VMs. VMs will provide low-intensity services: domain authentication and software license services for 15-20 people.
* I figure most VMs will run on 4 GB of memory allocation. One or two may need 8 GB. I figure 64 GB will be enough to supply all guests.
* Disk performance is probably not important. Disk allocation of 30 GB per VM sounds reasonable, for a start. I think I'll be able to run all VMs on the same array. My initial guess is that RAID 5 of four 3 TB disks will do the trick for the guest storage array. A separate RAID 1 array of two 3 TB disks for the host seems like a good conversation starter.
* Graphics power is not important.
* This is a very small-business-oriented virtualization solution that will provide very basic network services.
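A quick sanity check of those numbers (a rough sketch; the per-VM figures are just the allocations assumed above):

```shell
# Worst case of "up to 10 VMs": 8 guests at 4 GB plus 2 at 8 GB.
ram_gb=$(( 8 * 4 + 2 * 8 ))
echo "guest RAM: ${ram_gb} GB"    # 48 GB, so 64 GB leaves host headroom

# 10 guests at 30 GB each, versus ~9 TB usable from 4x3TB RAID 5.
disk_gb=$(( 10 * 30 ))
echo "guest disk: ${disk_gb} GB"  # 300 GB -- far below the proposed array
```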

Questions:

1) Host OS. Honestly, my first instinct is to run VMware Workstation on Debian. This will let non-technical people interact with the virtualization platform, which is going to be necessary. It would be great to be able to use remote desktop to access the host. Can one get remote desktop like access to a Debian OS from Windows and Mac machines?

2) AMD or Intel. On the desktop, the answer is clear to me, but I'm in murky territory on this one.

3) Build, barebones, or buy? Supermicro makes some nice barebones solutions that aren't too pricey. The machine needs to fit in a rack.

4) Which motherboard and CPU? If a full build, which rack-compatible case that is also compatible with the motherboard? I know things can get stupidly nonstandard in this area.

5) Which RAID card or just use software RAID? Any decent on-motherboard hardware RAID out there? If using an add-in card, any motherboard incompatibilities to worry about? How painful is it to set up software RAID on Debian and replace a faulty disk?

Alternatively, I could run VMware Workstation on regular desktop hardware, I guess. I'm in relatively unfamiliar territory.

We just built a server of a similar ilk, and Supermicro is really great here, although they like proprietary motherboards, you tend to be stuck getting the chassis too, and they can be a bit pricey.

As far as non-technical people accessing the VMs, that should be easily handled by any virtualization service you choose. If you have the Windows 2012 licenses for the guests, you might as well use one for the host as well; Hyper-V is a fairly competent hypervisor.

On hardware, AMD has some really great features to offer at the lower end of the pricing; 2x $300 Opterons can provide plenty of cores and performance with room to grow. I tend to prefer Tyan boards for AMD chips. Supermicro is still great, but they tend to treat Intel as king and AMD as a second-class citizen; just look at their website layout.

Whatever you think the memory requirements are, double it. RAM is fairly inexpensive and you will probably end up wanting to provision many more VMs in short order; I know we figured on 5-7 VMs and suddenly we were looking at closer to 10-15. So get at least 128 GB; on a 2P AMD board that shouldn't even fill all the available slots, and should give you room to grow.

Make sure you get something with plenty of GigE; two ports should be a minimum, with four preferred. (Again, I really like the Tyan solutions with AMD processors and just about any 2U rack chassis; Chenbros are inexpensive and I have had good luck with them.)

If you do run a Linux host, you could look into using KVM or Xen, but you should have Linux sysadmin skills before attempting that. Once either one is set up, they are remarkably stable and easy to manage.


Another thing to consider: if this is a high-availability or mission-critical server, you might just want to buy a system; warranties and quick service can be invaluable. Although I always build my own, because who doesn't enjoy that?


1) What price range are you looking at for the server? Is power efficiency a requirement? Based on your spec of 64 GB, you are definitely looking at LGA2011 Xeons or AMD Opterons, depending on up-front versus operating cost: the Intels are more expensive up front but cheaper to run in the long run.

2) Are any of the VMs going to be for databases, or require intensive I/O that might merit SSDs for boot drives and mechanical hard drives for storage?

3) Are you opposed to ZFS as a software-RAID-style file system? (Take a look here: http://napp-it.org/ and look for the all-in-one solution.)

Here are my suggestions:

1) All-in-one concept with ESXi 5. You basically install ESXi to host your server, and install OmniOS (or, if you want a graphical interface, OpenIndiana). Pass through your storage disks, create some ZFS pools, and serve them back to ESXi via NFS to hold the guest platforms. The biggest issue here is that 64 GB puts you on a paid license for ESXi, which probably runs around $1,000 or so.

2) You could look at XenServer as well. This is now free for all versions. The base of the system is a cut-down version of CentOS, but you can run everything off of that via VMs.


For the use case you've outlined, I don't think you need to allocate 4GB per VM. I think you'll be fine with a 32GB host, but leaving yourself an upgrade path to 64GB would probably be wise.

The size of the disk array seems like massive overkill. Smaller disks will reduce array rebuild times in the event of a drive failure, and even allow you to use laptop form-factor drives if you want to save space and power. I would personally go with RAID 6, not RAID 5.
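To put numbers on the rebuild-time point: a rebuild has to touch every sector, so time scales with disk size. A rough estimate, assuming an average rebuild rate of 100 MB/s (that rate is an assumption; real arrays under load rebuild slower):

```shell
rate_mb_s=100                      # assumed average rebuild rate
for size_tb in 1 3; do
  # 1 TB = 1,000,000 MB; integer hours, so this truncates down
  hours=$(( size_tb * 1000000 / rate_mb_s / 3600 ))
  echo "${size_tb} TB disk: roughly ${hours}+ hours to rebuild"
done
```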

For remote GUI access to the host you can use VNC. For the individual guests, just use Remote Desktop.
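For the Debian case specifically, a minimal sketch of the VNC route (the package name and flags assume the x11vnc package; adjust to taste):

```shell
# On the Debian host: share the existing X session over VNC.
sudo apt-get install x11vnc
x11vnc -display :0 -usepw -forever   # prompts to set a view password

# From a Windows or Mac client, point any VNC viewer at host:5900.
# If the network isn't trusted, tunnel it over SSH first:
#   ssh -L 5900:localhost:5900 user@host
```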


Thanks, JBI. What are your thoughts on the host OS? XenServer was mentioned above, which I'd need to read up on. Also suggested was using the Windows hypervisor, since it's essentially free with the purchase of Windows Server. Then there's my idea of VMware Workstation on Debian. And there's the ESXi option, which I'm wary of and which will cost some money.

I'm wary of the Windows hypervisor too. What kind of silly restrictions is Microsoft imposing? A limit on the number of guest VMs or anything like that? I hate all the licensing nonsense.

I think 4 GB per VM is overkill for what you are going to be running. 2GB will probably still be overkill for most of them. We have many 2008 VMs running great with 1 GB RAM. It all depends on what the server is doing.

1) Host OS - I have the most experience with VMware, so I'd throw ESXi on it. Depending on how much RAM you go with, the license may be free.

2) Intel is still the way to go. AMD does have a stronger case here since more cores can be put to good use with many VMs, but with the new Ivy Bridge based Xeons out, I'd go with Intel.

3) Since this is for a business, I'd definitely buy from an OEM. The support that comes with buying from an OEM is very valuable in this scenario. I work for a huge enterprise, so our pricing is cheaper than I can build myself anyway.

4) I use a lot of HP DL380s. Great midrange server. The Gen8s support 2 CPUs and at least 256GB of RAM. Should be plenty for you for awhile.

5) I don't see the benefit of using a software RAID scheme or something like ZFS for a single VM Host. IMO, you'd be best off packing your server with a few 15k RPM SAS drives connected to a decent hardware RAID controller. 2.5" SAS drives are more common than 3.5" drives these days. They are plenty fast for a VM host running the guests you describe and they are much more reliable than a consumer drive.

Something else to consider - are you going to run Windows VMs? If so, you'd probably be better off buying a Datacenter license. This license is per-CPU and it's expensive, but it allows you to run an unlimited number of VMs. So one license covers all your VMs on that host.


Yes. If you are at 32 GB RAM with 1 processor, the license for ESXi is free unless you want support.

Does your budget include the Windows licensing? If it is $5,000, I would definitely use an Intel processor, especially if you run ESXi. ESXi seems to me to be the pickiest about hardware for virtualization.


I like the HP DL 380 suggestion. Newegg stocks fifty variations of that so there's some room for choice there.

I also like the ebay suggestion. Grabbing a server with 98 GB of RAM for under $1000 is crazy.

Big 10-4 on free ESXi for 32 GB or less. Any CPU limitations there? I could look it up, but since I'm here typing...

Another big 10-4 on Windows Server Datacenter as a possible host OS and just limit it to 1 CPU. Still, that's a good $3000 more expensive than Debian + Workstation if I'm doing the math approximately right.

The idea of VMware Workstation on Debian with zero CPU or RAM limitations and coming in at a very low cost is still sounding like a very credible solution. Actually, it's sounding like the solution to beat at the moment, unless I'm missing something.

Big 10-4 on free ESXi for 32 GB or less. Any CPU limitations there? I could look it up, but since I'm here typing...

No CPU limitation that I could find. Their website is a nightmare though. I have it running on a couple of dual quad core servers right now (DL380G5) and it allows use of all 8 cores.

flip-mode wrote:

Another big 10-4 on Windows Server Datacenter as a possible host OS and just limit it to 1 CPU. Still, that's a good $3000 more expensive than Debian + Workstation if I'm doing the math approximately right.

A datacenter license is for 2 CPUs. Search for "Windows Server 2012 Licensing and Pricing FAQ" for a decent run down of license stuffs. It is expensive, but if you are running Windows VMs, you'll need the license regardless of the Hypervisor.


If you know that the number of VMs will stay fairly low, it may be cheaper to buy multiple Windows Server Standard licenses. Each standard license entitles you to run 2 VMs. A standard license is around 1/5 the price of datacenter, so you could license 10 VMs via 5 Standard licenses for around the same price as 1 datacenter license.

Isn't software licensing FUN!?!
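For anyone weighing the Standard-vs-Datacenter question above, the break-even is easy to compute. The prices below are made-up placeholders that roughly match the 1:5 ratio mentioned; check Microsoft's current price list before deciding:

```shell
standard=880       # hypothetical price; covers 2 VMs per license
datacenter=4800    # hypothetical price; unlimited VMs on the host
for vms in 4 10 12; do
  licenses=$(( (vms + 1) / 2 ))          # ceil(vms / 2)
  echo "${vms} VMs: Standard \$$(( licenses * standard )) vs Datacenter \$${datacenter}"
done
```

At these placeholder prices, Standard stays cheaper until you get close to a dozen VMs on the host.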


You can talk with them on an online chat thingie on the ebay listings and they'll make a custom-spec system under a custom listing for you to purchase. A majority of your expense is going to come from the hard drives.

This model of HP server is a pain to work inside if you have expansion cards because it's got some plastic shield. Once you get stuff situated, though, it's not like you'll be going into the server all that often.

eSISO also does G7 and Gen8 servers though I have limited experience with those. For my recent one, I spec'ed out 32 GB of RAM, 2 x Xeon L5520 2.26 GHz, a P410i w/1GB FBWC, a P812 w/1GB FBWC, a second 8-bay 2.5" set (16 slots total), a SAS Expander card so the P410i could run all 16 (the P812 is dedicated to a 12x3TB MSA60), and a bunch of 146 GB 15K SAS drives. The only caveat is to over-communicate and make sure you know what you're getting. For instance, I originally was sold Dell/Seagate SAS drives when I wanted HP OEM for full warranty coverage from HP (can buy service agreements after the fact). So I had to return and buy some more at a higher price. I wasn't on a time crunch so it didn't really matter too much. Also, going my route I needed to purchase some additional internal SAS cables as they didn't offer extra beyond what they got as kits. I needed 2 more if memory serves.

Another caveat with the G6/G7 models, when updating the SAS backplane firmware you need to be careful.

We run free ESXi 4.1 on several servers and are limited to 4 VMs per server. With the licensing of 5.x we just never bothered looking into that seriously. A while back I built two DL380 G7s that were very similar and loaded them with 300 GB and 146 GB 15K drives. I am a big fan of HP's firmware update process and the hardware and service in general, even with my issues with the SAS backplane firmware updating.

The idea of VMware Workstation on Debian with zero CPU or RAM limitations and coming in at a very low cost is still sounding like a very credible solution. Actually, it's sounding like the solution to beat at the moment, unless I'm missing something.

Don't do this. There are plenty of more widely supported options designed to run business-critical assets with the reliability they need, for cheap or free, and VMware Workstation and KVM* aren't it.

There is a free version of ESXi 5.1, and ESXi 5.5 removes the 32 GB RAM limit that existed in 5.1. vSphere Essentials also starts at $500 (IIRC) for a three-host license, which is hardly bank-breaking. Hyper-V is also 'free' in various configurations and comes included with Server 2012 Standard and Datacenter.

What no one asked (as far as I can tell) is how business critical are these servers? From the budget, I'll assume this would run the business infrastructure of a small company, so if these servers went offline due to a hardware or software failure how screwed would the business be? If a business needs these servers to run, get boring, supported, and reliable over some crazy custom ZFS NAS. If it's not business critical, then go nuts.

*Unless you're awesome with KVM or Xen, in which case you should be working for an IaaS provider making $250K a year and not on baby VM hosts.

1) In terms of which hypervisor tech to go with: if you want ease of use, that's VMware. If you want more freedom, then KVM, followed by Xen.

2) If power isn't too much of a concern, I would strongly suggest looking at AMD for 2P builds. The prices are VERY, VERY low. If you look at Intel's prices, you'll see a performance hole right where a 4-core clocked at 3.2 GHz should be: it's either low core count at a high clock for around $400, or more cores at a very low clock for around $600, and the next step up is $1000+ CPUs. With the money saved you can dump it into memory, which is almost the most important thing to have. Nothing sucks more than going to stand up a new VM and not having enough memory on board.

3) Barebones if you are comfortable with it and can support it. Get an OEM box if you are not and want support in case something goes wrong. I have a strong thing for Supermicro at the moment, and I'm still able to do an in-house build every once in a while.

4) At 2P, most things are E-ATX. Those boards will even fit in some mid-tower cases; however, go with rack mount. Supermicro has a compatibility chart depending on what board you pick, but if it's E-ATX, then that's what it is. You can mix and match to an extent; just make sure you look at the specs before buying.

5) If you are comfortable with it, you can go software RAID via ZFS. If you are not, then go the hardware RAID route; it still has its place.

Don't do this. There are plenty of more widely supported options designed to run business-critical assets with the reliability they need, for cheap or free, and VMware Workstation and KVM* aren't it.

Having run KVM for four years now in production, it has been rock solid. I'm talking Tonka truck tough. Had we not moved to a new file system layout, they would still be running. I'm not saying it is THE way to go, but it should be considered, especially if Hyper-V is anywhere in the discussion. If uptime is a factor, then Windows as the hypervisor for your VMs isn't exactly the epitome of uptime on Tuesdays.

If power isn't too much of a concern I would strongly suggest looking at AMD for 2P builds.

Yeah, we did that analysis here (where I work) not too long ago. Our last custom server build was a 2P (12 core) Opteron server with 64GB registered ECC RAM and 4.5TB hot swap RAID-6 array for around $3.5K. RAM prices have gone up so you may not be able to get 64GB RAM within that budget (ours, not yours... 5K will still get you there!) these days, but AMD is definitely still worth a look for servers.


Work needs a virtualization server. Needs to happen in the next month or so. I have several questions, and probably more to come.

First, here are the requirements:
* Budget: Let's say $5000, with some wiggle room if it's absolutely worth it.
* Run up to 10 low-intensity Windows Server 2012 VMs, but realistically, the typical load will be more like 5 VMs. VMs will provide low-intensity services: domain authentication and software license services for 15-20 people.
* I figure most VMs will run on 4 GB of memory allocation. One or two may need 8 GB. I figure 64 GB will be enough to supply all guests.

4GB per VM is reasonable. That's usually what I allocate for servers.

Go with 64GB. Windows can always use more RAM, so you might as well go with more.

* Disk performance is probably not important. Disk allocation of 30 GB per VM sounds reasonable, for a start. I think I'll be able to run all VMs on the same array. My initial guess is that RAID 5 of four 3 TB disks will do the trick for the guest storage array. A separate RAID 1 array of two 3 TB disks for the host seems like a good conversation starter.

Yes, you'll be able to run everything off of one array provided it has good throughput. Disk I/O is very important with VM servers; otherwise they will choke. You'll probably want 8 disks in RAID 10. RAID 5 write performance is peaky, so it's not a good RAID level for things that need sustained I/O, like VM servers.

3 TB is a huge amount for the host OS; Linux, Xen, and vSphere don't require a lot of space. I've been adding two small SSDs to servers and using them for the host OS.

30GB is too small for Windows. I usually end up starting out with 50-60GB.
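The RAID 10 versus RAID 5 advice above comes down to the write penalty: each random guest write costs 2 back-end operations on RAID 10 but 4 on RAID 5 (and 6 on RAID 6). A rough sketch, where the 150-IOPS-per-spindle figure is an assumed value for a 10k/15k SAS disk:

```shell
disks=8
iops_per_disk=150                       # assumed per-spindle figure
for level in "10:2" "5:4" "6:6"; do
  name=${level%%:*}
  penalty=${level##*:}
  echo "RAID ${name}: ~$(( disks * iops_per_disk / penalty )) random write IOPS"
done
```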

* This is a very small business oriented virtualization solution that will provide very basic network services.

Questions:

1) Host OS. Honestly, my first instinct is to run VMware Workstation on Debian. This will let non-technical people interact with the virtualization platform, which is going to be necessary. It would be great to be able to use remote desktop to access the host. Can one get remote desktop like access to a Debian OS from Windows and Mac machines?

Xen or KVM are the best options for keeping costs down.

Xen is a full blown hypervisor that's been thoroughly tested by Amazon and others. The open source version is the full version, and you can buy support from Citrix. You also get some management tools to make things easier if you sign up for support.

KVM is kind of a hybrid. It uses the Linux kernel as the hypervisor, but it's still a full Linux installation. It's a better solution than VMware Workstation in that it's integrated more tightly. You'll still have the fat Linux host, but the VM portion will be thinner, and there will be more tools for dealing with the disk images.
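To make the KVM option concrete, here's roughly what standing up one of the Windows guests looks like on Debian (a sketch: the package names are era-appropriate Debian ones, and the guest name, paths, and sizes are all illustrative):

```shell
# Install KVM plus the libvirt tooling.
sudo apt-get install qemu-kvm libvirt-bin virtinst

# Create a Windows guest matching the 4 GB / 30 GB spec from the OP.
virt-install \
  --name winsrv01 --ram 4096 --vcpus 2 \
  --disk path=/var/lib/libvirt/images/winsrv01.qcow2,size=30 \
  --cdrom /iso/windows_server_2012.iso \
  --network bridge=br0 --graphics vnc

# Day-to-day management.
virsh list --all
virsh start winsrv01
```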

vSphere is an option as well. You'll have to pay for all of the cool stuff, it won't be cheap, and it will probably irritate you because of the little things it can't do. (I'm at the third part.)

2) AMD or Intel. On the desktop, the answer is clear to me, but I'm in murky territory on this one.

AMD, because price and cores. You'll want lots of cores.

3) Build, barebones, or buy? Supermicro makes some nice barebones solutions that aren't too pricey. The machine needs to fit in a rack.

4) Which motherboard and CPU? If a full build, which rack-compatible case that is also compatible with the motherboard? I know things can get stupidly nonstandard in this area.

I usually spec out systems from ServersDirect.com, although I'm shopping around for other vendors.

The specs for the last VM server and email server I built are at the end of this post. The VM server was about twice your budget due to the disks, and the email server about $1,300 more.

5) Which RAID card or just use software RAID? Any decent on-motherboard hardware RAID out there? If using an add-in card, any motherboard incompatibilities to worry about? How painful is it to set up software RAID on Debian and replace a faulty disk?

I like Arecas. They have a network port built in, so you can check on the RAID array without having to install software, which might not even be possible. vSphere is particularly bad in this regard.

Software RAID should be fine for the host OS, but I would get a dedicated card for the guest array with a battery backup unit.

Check the compatibility lists for vSphere, it's picky about hardware. Xen and KVM should support anything supported by the Linux kernel. Check on Xen, but I know KVM isn't picky.
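On the earlier question of how painful software RAID on Debian is: creating an md array is only a few commands. A sketch with illustrative device names:

```shell
sudo apt-get install mdadm

# RAID 1 for the host OS from two disks.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Save the layout so the array assembles at boot, then watch its health.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
cat /proc/mdstat
```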

If you're more comfortable with Windows and don't want to futz around much with licensing, Hyper-V Server 2012 is free from MS. It just installs the Hyper-V role with PowerShell and the Windows command line. It's not too bad to use, and can be managed remotely very easily through the Remote Server Administration Tools. Anecdote alert, but I've found it to be solid and use Hyper-V across all my machines (VirtualBox reminders to upgrade got really annoying).

The only reason I've been using it at work, instead of ESXi or VMware, is because work is mostly Windows, and the VMs were being used for MS SQL and SCCM 2012. RSAT is basically remote desktop for server administration, though. Installed on Windows 7/8, you get the full Server Administration application, the Computer Management application, remote desktoping (though it's pretty limited when you're trying to remote into a Hyper-V 2012 server), and any consoles for the roles installed on the target server; in this case, Hyper-V. It's a neat tool that makes administration a bit easier, and I've found everything pretty simple to install.

All that being said, if you're more comfortable with Linux, and you'll be the primary user/maintainer/technician/person who presses shiny buttons, it may be in your best interest to use that instead.


We went the Hyper-V route about 4 years ago. It's extremely easy to set up and install VMs, and the performance is pretty good. I'm not sure if Server 2012 has the 32 GiB limit that Server 2008 R2 has, but we needed to license the Enterprise version of Server 2008 R2 to get access to all the memory. Technology-wise, I thought ESXi wasn't a good platform; some patches are 2 GB in size, which gives me the impression that something is a little odd with the architecture. Of course Windows updates are smaller, but it updates every damn day.

It was touched on a little bit, but I'm finding that the limitation on VMs is actually disk space; CPU and memory are almost irrelevant. Certainly have several gigabit NICs. You need lots of disk space, and rock-solid disk space at that. We're running 20+ VMs on two machines with SAN storage, and 2 TB per machine isn't quite enough. I'm constantly deleting shadow storage to keep the disks from growing too much. SQL servers seem to be the biggest hogs: three disks per machine, totaling roughly 250 GiB per machine. When the host disks get full, Hyper-V will stop all the machines on the host. Not pretty.

One thing I would not cheap out on is the physical disks you use. If you build it yourself, I would go with the WD RE4's. All drives fail, that is the law of things, but I think the WD RE's are a little more resilient. The Areca RAID cards are the real deal, though.

You guys are an impressive bunch. It's good news that no one has jumped out and said "don't use this product it sucks". For some added background, about 3 years ago I set up an ESXi 4.1 box and it's been running 24/7 ever since. It's running 5 VMs on a RAID 1 array of two 2TB disks, while the OS is on another 2TB RAID 1 array that I also use to keep backups of the VMs. While it's been very reliable, it's presented some minor annoyances. Keyboard input does not work at the terminal so I've done all management through the remote management tool. My primary complaints, though, with ESXi 4.1 are that file transfers are very slow and I don't want to hack anything to try to get something quicker like FTP working, and console performance through the administration tool is pretty slow. The administration tool is just laggy in general. But overall it's been quite excellent, since stability is the primary concern.

I'm sure some of those things have been addressed in newer versions of ESXi - or I guess now it's called vSphere Hypervisor - but I assume otherwise the newer version is pretty similar to 4.1. For what it's worth, I think VMware's product names are annoyingly cumbersome, but that says nothing about the products themselves.

I've downloaded XenServer and I'm going to give it a try on an old spare Dell OptiPlex. I'd also like to give Hyper-V 2012 a try.

As for the hardware, I'm pretty interested in some of the suggestions that have been made here. I think I won't be building anything, but will be buying a system. Not sure which system yet. The idea of a used system from eBay is intriguing, but new systems with warranties bring a reassuring feeling.

Here's one of the systems I need to renew if you want an example: 2UX933032T

After I finish my current build-out, I have to get management involved in caring about getting support packs (we've used them; they are VERY NICE TO HAVE), and I have to wade through the hell of HP and vendors (CDW, SHI, others) trying to repackage the costs in ways that look cheap monthly but nail me long-term.

For ESXi, I think there are some unsupported (by VMWare on 4.x) HP plugins for stuff like RAID health that you can install. That's on my to-do list for installing on a spare machine to make sure it doesn't hose anything. Also, I assume you've found how to do patch management via the automated tool in vSphere 4.0?

If you're more comfortable with Windows and don't want to futz around much with licensing, Hyper-V Server 2012 is free from MS. It just installs the Hyper-V role with PowerShell and the Windows command line. It's not too bad to use, and can be managed remotely very easily through the Remote Server Administration Tools. Anecdote alert, but I've found it to be solid and use Hyper-V across all my machines (VirtualBox reminders to upgrade got really annoying).

I've tried it and failed in a test use. Maybe I'm not MS-smart enough. At least I'm not the only one:

Yeowza. I'm easily impressed, but I got XenServer 6.2 installed on a test machine and XenCenter installed on my workstation. Coming from ESXi 4.1, the install process and the XenCenter are painless. The XenCenter is friggin quick and doesn't lag one bit.

Anyone with XenServer experience, I'd love to hear more about the high points and low points. Is it true that this totally free thing doesn't have any features clipped? I guess the more direct question is: what does a standard install of XenServer 6.2 NOT do? Are there "plugins" or anything that are recommended? It looks like it can do live migration and server pools and such. Does it do anything like scheduled system backups and scheduled VM backups? (I don't even know if any hypervisors do that stuff, but backup and disaster recovery are extremely important to me, and the fact that I can set up live migration really tickles me; last I checked it cost some money to get that feature on the VMware platform, but maybe that's changed.)

Yeowza. I'm easily impressed, but I got XenServer 6.2 installed on a test machine and XenCenter installed on my workstation. Coming from ESXi 4.1, the install process and the XenCenter are painless. The XenCenter is friggin quick and doesn't lag one bit.

Anyone with XenServer experience, I'd love to hear more about the high points and low points. Is it true that this totally free thing doesn't have any features clipped? I guess the more direct question is: what does a standard install of XenServer 6.2 NOT do? Are there "plugins" or anything that are recommended? It looks like it can do live migration and server pools and such. Does it do anything like scheduled system backups and scheduled VM backups? (I don't even know if any hypervisors do that stuff, but backup and disaster recovery are extremely important to me, and the fact that I can set up live migration really tickles me; last I checked it cost some money to get that feature on the VMware platform, but maybe that's changed.)

No, it still costs tons of money on VMware. I think Veeam or something is big in that area. We're cost-constrained at our place, so we've got some other ghetto stuff going on to back up the VMs.

Work needs a virtualization server. Needs to happen in the next month or so. I have several questions, and probably more to come.

First, here are the requirements:
* Budget: Let's say $5000, with some wiggle room if it's absolutely worth it.
* Run up to 10 low-intensity Windows Server 2012 VMs, but realistically, the typical load will be more like 5 VMs. VMs will provide low-intensity services: domain authentication and software license services for 15-20 people.
* I figure most VMs will run on 4 GB of memory allocation. One or two may need 8 GB. I figure 64 GB will be enough to supply all guests.
* Disk performance is probably not important. Disk allocation of 30 GB per VM sounds reasonable, for a start. I think I'll be able to run all VMs on the same array. My initial guess is that RAID 5 of four 3 TB disks will do the trick for the guest storage array. A separate RAID 1 array of two 3 TB disks for the host seems like a good conversation starter.
* Graphics power is not important.
* This is a very small-business-oriented virtualization solution that will provide very basic network services.

Questions:

1) Host OS. Honestly, my first instinct is to run VMware Workstation on Debian. This will let non-technical people interact with the virtualization platform, which is going to be necessary. It would be great to be able to use remote desktop to access the host. Can one get remote desktop like access to a Debian OS from Windows and Mac machines?

2) AMD or Intel. On the desktop, the answer is clear to me, but I'm in murky territory on this one.

3) Build, barebones, or buy? Supermicro makes some nice barebones solutions that aren't too pricey. The machine needs to fit in a rack.

4) Which motherboard and CPU? If a full build, which rack-compatible case that is also compatible with the motherboard? I know things can get stupidly nonstandard in this area.

5) Which RAID card or just use software RAID? Any decent on-motherboard hardware RAID out there? If using an add-in card, any motherboard incompatibilities to worry about? How painful is it to set up software RAID on Debian and replace a faulty disk?

Alternatively, I could run VMware Workstation on regular desktop hardware, I guess. I'm in relatively unfamiliar territory.

While there are many good virtualization products nowadays, we use CentOS + KVM for all virtualization workloads, not only for our company but for our customers as well. However, please consider that the KVM GUIs/web UIs are somewhat less user friendly than what the other players (especially Hyper-V and VMware) offer.

That said, Linux + KVM offers outstanding performance even on modest hardware. For example, I use a Lynnfield-based Core i7-870 "server" (note the quotes) with 8 GB RAM and 4x 1 TB HDDs to run over 10 Windows/Linux VMs, and I am very happy with the result.

Some general recommendations for a good virtualization experience:

1) CPU speed is rarely the main problem

2) you need a lot of RAM to consolidate many virtual machines. However, hypervisors with memory-merging capability (at least KVM and VMware) will enable significantly higher consolidation ratios: http://en.wikipedia.org/wiki/Kernel_Sam ... rging_(KSM). Moreover, VMware also has a dynamically controlled balloon driver to reclaim unused guest memory. Considering how cheap RAM is, I suggest 32 GB of RAM minimum; 64 GB is better. Either way, I would start with a 2 GB-per-VM template and only selectively expand the VMs that really need more RAM.
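As a rough illustration of that 2 GB-per-VM starting point, here is a back-of-the-envelope sizing sketch (the VM counts come from the original post; the 4 GB host reservation is my own assumption):

```shell
# Hypothetical sizing: eight VMs on the 2 GB starting template,
# the one or two heavier VMs bumped to 8 GB, plus ~4 GB reserved
# for the host OS itself (an assumed figure).
small_vms=8; small_gb=2
big_vms=2;   big_gb=8
host_gb=4
total_gb=$(( small_vms * small_gb + big_vms * big_gb + host_gb ))
echo "${total_gb} GB"   # prints "36 GB"
```

So even before KSM merges a single page, 64 GB leaves ample headroom for growing individual guests later.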

3) don't underestimate the I/O speed requirement: many Windows VMs will issue a lot of IOPS even when idling at the desktop (by default, Windows issues a cache flush + barrier sync every second). So don't use RAID 5 unless you have a very good hardware RAID controller.
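If you want to see what a candidate array can actually sustain before committing VMs to it, a quick fio run gives a rough random-write IOPS figure (fio is in the Debian repos; the job parameters below are only illustrative, and the directory path is a placeholder):

```shell
# Rough 4k random-write benchmark against the guest array.
# WARNING: point --directory at a scratch mount, never at live data.
fio --name=vm-sim --directory=/mnt/guest-array \
    --rw=randwrite --bs=4k --size=1g --numjobs=4 \
    --ioengine=libaio --direct=1 --runtime=60 --group_reporting
```

Compare the reported IOPS against roughly (number of idle Windows VMs × their per-second flush traffic) plus whatever your license servers add; on a 4-disk RAID 5 of 7200 rpm disks the random-write number is usually the sobering one.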

4) absolutely do not use the fake HW RAID integrated into Intel or AMD motherboards! It is incredibly fragile, and you genuinely risk ending up with an unusable RAID array if you ever have to switch motherboards for some reason. If you don't want to invest in a good RAID card, use Linux MD software RAID. In that case, please use RAID 10 rather than RAID 5 (I also wrote some Linux RAID 5/10 tests/benchmarks that you can find here: http://www.ilsistemista.net/index.php/l ... lysis.html and http://www.ilsistemista.net/index.php/l ... evels.html). The bottom line is that not only is RAID 10 way faster, but a RAID 5 rebuild with TB-class disks will take forever. Other info: http://www.baarf.com/
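To the original question 5: on Debian this all goes through mdadm, and both array creation and faulty-disk replacement are short operations. A sketch, assuming hypothetical device names (these commands are destructive and need root):

```shell
# Create a 4-disk RAID 10 array for guest storage (device names assumed):
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Replacing a faulty disk later: mark it failed, remove it from the array...
mdadm --manage /dev/md0 --fail /dev/sdc
mdadm --manage /dev/md0 --remove /dev/sdc

# ...physically swap the disk, then add the replacement:
mdadm --manage /dev/md0 --add /dev/sdc

# Watch the rebuild progress:
cat /proc/mdstat
```

The array rebuilds in the background while the VMs keep running, which is the whole point of preferring MD over motherboard fakeraid.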

5) if you plan to use KVM, please don't use the default disk settings, as they result in very low disk performance. The perfect combo is LVM + writeback KVM cache. EXT4 and XFS backends + RAW disks are fine from a performance standpoint, but they don't support online snapshotting (to snapshot a RAW file, you need to stop the VM, create a new QCOW2 file backed by it, associate it with the VM, and restart the guest). EXT4/XFS + preallocated QCOW2 is more or less fine and supports online snapshots, but you lose something in terms of I/O speed. Either way, don't use BTRFS-backed VM disk files. BTRFS is an excellent filesystem, but it is (currently) not well suited for virtualization workloads.
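For illustration, the QCOW2 side of that workflow looks roughly like this (the image paths and domain name are made up; qemu-img and virsh syntax as of this generation of tools):

```shell
# Preallocated qcow2 on EXT4/XFS; metadata preallocation
# recovers most of the raw-file performance:
qemu-img create -f qcow2 -o preallocation=metadata \
    /var/lib/libvirt/images/dc1.qcow2 30G

# With qcow2 you get online snapshots of a running guest:
virsh snapshot-create-as dc1 pre-update "before Windows updates"

# With a RAW disk you would instead stop the VM and layer a
# qcow2 overlay on top of it, then point the VM at the overlay:
qemu-img create -f qcow2 -b /var/lib/libvirt/images/dc1.raw \
    /var/lib/libvirt/images/dc1-overlay.qcow2
```

The cache mode itself is set per-disk in the libvirt domain definition (cache='writeback' on the driver element), not on the image file.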

6) if redundancy is required, one very nice feature of Linux + KVM is the ability to continuously synchronize the storage of two different machines via DRBD (1 Gb/s Ethernet is pushed to its limit, though; a 10 Gb/s connection is more than welcome). Both ESXi 5.1 and Hyper-V 2012 acquired this capability recently, but I think VMware charges a significant amount to enable it (correct me if I am wrong). However, remember that while it is relatively simple to create an active/passive virtualizer configuration, a hot-standby or active/active one is trickier (and the commercial hypervisors are probably better suited for that task).
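A minimal DRBD resource definition, just to give an idea of what the setup looks like (hostnames, disks, and addresses below are all placeholders):

```
# /etc/drbd.d/vmstore.res -- hypothetical two-node resource
resource vmstore {
  protocol C;                      # synchronous replication
  on virt1 {
    device    /dev/drbd0;
    disk      /dev/md0;            # the local MD RAID array
    address   192.168.10.1:7789;
    meta-disk internal;
  }
  on virt2 {
    device    /dev/drbd0;
    disk      /dev/md0;
    address   192.168.10.2:7789;
    meta-disk internal;
  }
}
```

The VMs then sit on /dev/drbd0 instead of /dev/md0 directly, so every write lands on both boxes before it is acknowledged (that is what protocol C means, and why the replication link bandwidth matters).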

Some questions: do you plan to use a 1P or 2P system? With cheap unregistered DRAM or registered ECC? Do you plan to self-assemble the system or to buy one from a well-known server vendor such as HP or Dell? In the latter case, a good (albeit not last-gen) machine is the Dell R515, with Opteron processors and a 12x 3.5" bay chassis.

If you want to self-assemble your system, Supermicro or Tyan motherboards are a great choice. 2x 8-core Opterons or 2x 6-core Xeons will be more than enough. On the I/O side, I would use one RAID 10 array of 4x 480 GB Intel S3500s and another RAID 10 array of 8x 2 TB WD Se (or Re) disks.

Regards.

Last edited by shodanshok on Fri Sep 20, 2013 11:46 am, edited 1 time in total.

If you are going to have 12 Windows Server 2012 VMs, then you should purchase Windows Server 2012 Datacenter edition. It gives you an unlimited number of VMs, and you can use it on the host itself.

I too would recommend an HP server (Intel). The DL380s are highly recommended.

Shoot for RAID 10 if you are using 4 or more disks for data. It has a much faster write speed than RAID 5, and RAID 5 has fallen out of favor (RAID 6 is better than 5 from a recoverability standpoint, but 10 is still recommended).

I don't see how you can hit a $5000 budget, though, once you include the cost of the Windows licenses for the VM instances.