Best Storage option for Hyper-V Cluster?

I'm working with a small private school that recently had a number of servers donated, some of which are new enough to run Server 2012 and Hyper-V. My problem is figuring out storage. I've never done any SAN-related work.

We want to take advantage of clustering because the other gentleman who generally handles their tech needs and I aren't available all the time, so we're trying to build something reliable enough (not five 9s) that part of it can fall over but the rest keeps running until we can get out there to fix it. The goal is to virtualize a couple of servers that have had a long history of random hardware issues, as well as many staff computers, replacing them with thin clients connected to Terminal Services (or just stripping down their desktops and doing RDP, so if a desktop dies we roll out a new one and don't have to worry about software and user settings).

I was thinking that the three servers we'll be using as hosts would be clustered, and a fourth machine would be built up with a nice RAID card or two and connected to a shelf of disks, which we would carve up into iSCSI LUNs. Or am I overthinking this, and should we just make it one giant SMB3 share to present to the cluster? Should I be looking at 2Gb/4Gb FC gear on eBay? I have no problem learning new things to build and admin, I'm just not quite sure which direction to run with this. 10GbE is out; that's just way too expensive right now, as far as I have found.

We've got $3k to spend on additional hardware and software as required. Current storage is approximately 3 TB on the existing file server (I don't have the % usage at hand, but a fair chunk is raw video for completed projects). That machine is probably going to be rebuilt as a VM and then kept as a target for backups.

I wouldn't bother with 2 Gbps FC stuff when you can just team a few gigabit Ethernet links instead... 4 Gbps stuff might make a little more sense. Considering you can get a cheap 8-port 10 Gbps Ethernet switch for $800... hmm, OK, 10GbE with adapter cards is still too much...

3 TB of storage is pretty modest; it depends how much performance you need in the end. An off-the-shelf NAS could work, but it'd have to be a high-end one to provide enough IOPS...

If you just use one server as the SAN or file share for your VMs, you will have created a single point of failure.

The best solution for your budget and needs would, IMO, be to go with a Storage Spaces configuration instead. Just get one or (even better) two JBOD chassis with enough SAS ports to connect each cluster node to the one JBOD or to each of the two JBODs, and create one Storage Spaces pool in which you can place the CSVs for the cluster. If you get two JBODs, you can make sure the Storage Space is mirrored across the two chassis, meaning either one could fail without compromising your availability.
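Very roughly, the PowerShell on one of the nodes would look something like the sketch below. The pool and disk names are made up, and I think -IsEnclosureAware needs 2012 R2 plus enclosures that report themselves properly, so treat it as a starting point, not gospel:

    # Grab the JBOD disks that aren't claimed by anything yet
    $disks = Get-PhysicalDisk -CanPool $true

    # One pool spanning the disks from both enclosures
    New-StoragePool -FriendlyName "ClusterPool" `
        -StorageSubSystemFriendlyName "Storage Spaces*" `
        -PhysicalDisks $disks

    # Two-way mirrored space to become a CSV; enclosure awareness asks
    # Storage Spaces to keep the two copies on different JBODs
    New-VirtualDisk -StoragePoolFriendlyName "ClusterPool" `
        -FriendlyName "CSV1" `
        -ResiliencySettingName Mirror `
        -Size 2TB `
        -IsEnclosureAware $true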

It would be much cheaper than any SAN, more reliable than one server, and certainly much faster than an old FC setup or iSCSI over 1 Gbit Ethernet. And in WS 2012 R2 you can even go wild with auto-tiering onto SSDs, if you feel the need in the future...
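If you ever do add SSDs, tiering is only a couple more cmdlets in 2012 R2; something like this (the tier sizes here are placeholders):

    # Define an SSD tier and an HDD tier inside the same pool
    $ssd = New-StorageTier -StoragePoolFriendlyName "ClusterPool" -FriendlyName "SSDTier" -MediaType SSD
    $hdd = New-StorageTier -StoragePoolFriendlyName "ClusterPool" -FriendlyName "HDDTier" -MediaType HDD

    # A mirrored, tiered space carved out of both tiers
    New-VirtualDisk -StoragePoolFriendlyName "ClusterPool" -FriendlyName "TieredCSV" `
        -ResiliencySettingName Mirror `
        -StorageTiers $ssd,$hdd -StorageTierSizes 100GB,1TB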

I think planning for 6 TB of total storage (user data, 2-3 server VMs, and 12 workstation VMs) is a good top end, even though I seriously doubt they'll double their data and need to keep all of it.

For the JBOD enclosures, should I just get SAS controllers, or actually get a decent RAID card? I was thinking RAID 6 using 5x 2TB WD Reds. If I do Storage Spaces, should I skip hardware RAID and do a pool of 3 drives with a hot spare? The machines are all 2U chassis with 6 or 8 bays, so I could actually install these disks internally without having to get an external enclosure, while keeping a RAID 1 boot volume.

For Storage Spaces, definitely just simple SAS HBAs; you don't need or want hardware RAID. However, Storage Spaces currently performs better in mirroring mode, so consider going in that direction. That's also why it works so well over JBOD enclosures for additional reliability... the cost hit from having to mirror can usually be more than compensated for by going with NL-SAS drives and skipping the hardware RAID cards (or the controllers in a SAN).
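A quick sanity check that the HBAs are presenting raw disks Storage Spaces can actually pool (nothing exotic, just Get-PhysicalDisk):

    # BusType should come back as SAS and CanPool as True; disks hidden behind a
    # RAID card's virtual disk usually show up as RAID and can't be pooled
    Get-PhysicalDisk | Select-Object FriendlyName, BusType, MediaType, Size, CanPool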

Internal drives won't work. You need shared storage for a cluster, i.e. all the cluster nodes should have a connection to each drive/LUN.
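Once every node can see the same disks, wiring them into the cluster is roughly the following (node names and the cluster IP are obviously placeholders; run the validation report and read it before trusting anything):

    # Validation will flag storage that isn't truly shared between the nodes
    Test-Cluster -Node HV01,HV02,HV03

    New-Cluster -Name SCHOOL-HVC -Node HV01,HV02,HV03 -StaticAddress 10.0.0.50

    # Hand the Storage Spaces virtual disk to the cluster and turn it into a CSV
    Get-ClusterAvailableDisk | Add-ClusterDisk
    Add-ClusterSharedVolume -Name "Cluster Disk 1"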

OK, that shared storage thing is weirding me out. It's pretty cool. I'm used to x drives per controller, not x drives shared by x controllers, where you have two controllers in separate machines connected to the same backplane and drives.

Thanks for the pointers and information, at least now I know enough to begin figuring out the other questions I should be asking.

You mention that SS works better in mirroring than in parity; is that because of the CPU overhead of doing parity? I could roll mirrored 4 TB volumes, then just add new mirrored sets and share new mount points as they grow, if that's a better way than doing 3+1.
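(If I'm reading the docs right, growing later would mostly be tossing more disks into the pool and stretching the space, along these lines, using the names and sizes from the earlier sketch:

    # Add whatever unclaimed disks are available to the existing pool
    Add-PhysicalDisk -StoragePoolFriendlyName "ClusterPool" `
        -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

    # Then grow the mirrored space and extend the partition on top of it
    Resize-VirtualDisk -FriendlyName "CSV1" -Size 4TB

Correct me if I've got the wrong idea.)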

So the recommended solutions are either to buy something that's more or less turnkey, or to put dual-ported HBAs in each host connected to a pair of shared backplanes, so a single enclosure doesn't become the single point of failure?