I'm building a new server whose purpose is to be a VM host. Previously we had several desktop-class machines masquerading as servers, and I aim to change all that. What I don't have (yet) is anything like a SAN. This server and the VMs it hosts will all live on drives installed directly in the server — 8 of them at 250 GB each, to be precise.

The server itself is an IBM x3620 M3 and will initially have one X5650 processor and 12 GB of RAM, with an M5015 RAID controller. I'm going with Server 2008 R2 + Hyper-V on the bare metal, as we have one other system right now that already uses this. It's a two-socket server, so if CPU or memory load starts to become a problem I can add more RAM or another processor later.

I'm expecting to host 2 VMs initially, with as many as 3 more moving to this server over the next year. Both initial VMs are essentially web servers (both internal-facing, so limited load). The other three candidates include my domain controller (a real server, but it's six years old), my SUS/reporting server, and an "IT" server that doesn't have any real load; we use it for testing and other things and it sits mostly idle.

So, I have 8 drives. What strategy should I use to allocate them? One big RAID 10 array? RAID 1 for the host OS, with something else for the guests? The 8 drives max out the available drive bays in the server. Any thoughts appreciated.

2 Answers

This is a tough question to answer, because so much of it depends on disk access patterns and load. For example, if all of your VMs exhibit light to moderate disk activity, one large RAID 10 array will most likely give you the best overall performance across all the VMs.

However, if you have one, or a couple, of VMs with atypically high disk usage patterns, it's entirely possible that those one or two VMs could negatively impact the performance of all the rest by thrashing the disks. Based on performance testing at my last job, we discovered that we actually got much better performance by splitting the disks on our VM servers (we also ran 8-disk servers) into 4 separate RAID 1 pairs. We'd then drop the VMs onto nearly (or sometimes completely) dedicated disk pairs. This potentially limits the maximum disk performance available to each VM (vs. RAID 10), but it also removes the ability of any one VM to negatively impact the others because of its disk use.

With our setup, and the very high disk load it placed on the servers, we found that 4 RAID 1 pairs running 4 VMs was a good solution (and more cost-effective than 4 separate servers). This may not make sense for you, however, as every situation is different.

For the specific setup you mention, I would probably lean towards one big RAID 10. The SUS/reporting server is the main one I might be concerned about, and if I knew for sure (via testing and benchmarking) that the SUS VM was going to impact the rest of them, I might shift it off to its own RAID 1 pair and RAID 10 the rest of the disks.

One last consideration is disk space requirements. By going with one large RAID 10, you're making the entire usable capacity available for slicing up and partitioning out however you want. If you go with a different setup, such as the aforementioned RAID 1 pairs, you are putting constraints on how easily you can partition out that disk space. If each of your VMs will be smaller than a single disk, then it's not an issue. If they will be larger, then space is another point to take into consideration.
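To make the tradeoff concrete, here's a rough back-of-the-envelope comparison for 8 x 250 GB drives. The 120 IOPS-per-spindle figure is an assumed rule-of-thumb value for 7.2k SATA disks, not a measured number for this hardware; the point is the shape of the tradeoff, not the exact values.

```python
# Back-of-the-envelope comparison: one 8-disk RAID 10 vs. four RAID 1 pairs.
# IOPS_PER_DISK is an assumed rule-of-thumb value, not a benchmark result.
DISKS = 8
SIZE_GB = 250
IOPS_PER_DISK = 120  # assumption: typical 7.2k SATA spindle

# RAID 10: half the raw capacity is usable, all in one pool.
# Reads can be serviced by every spindle; each write hits a mirror pair,
# so write throughput scales with half the spindles.
raid10_capacity_gb = (DISKS // 2) * SIZE_GB       # 1000 GB, one flexible pool
raid10_read_iops = DISKS * IOPS_PER_DISK          # 960
raid10_write_iops = (DISKS // 2) * IOPS_PER_DISK  # 480

# Four RAID 1 pairs: same total usable capacity, but carved into four
# isolated 250 GB islands, each with only two spindles of performance.
pair_capacity_gb = SIZE_GB                        # 250 GB per pair
pair_read_iops = 2 * IOPS_PER_DISK                # 240 per pair
pair_write_iops = IOPS_PER_DISK                   # 120 per pair

print(f"RAID 10:  {raid10_capacity_gb} GB pool, "
      f"~{raid10_read_iops} read / ~{raid10_write_iops} write IOPS shared")
print(f"RAID 1x4: {pair_capacity_gb} GB per pair, "
      f"~{pair_read_iops} read / ~{pair_write_iops} write IOPS per pair")
```

Total usable capacity is identical either way; what changes is whether each VM shares one large, fast pool (and can be hurt by a noisy neighbor) or gets a smaller, isolated slice it can't outgrow.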