
So, not being very experienced with Hyper-V, I have approval to purchase a new (to me) server with as much cost savings as possible. After dealing with Dell directly as well as 4 other vendors, I have the following quote for just under $9k. This will be a Hyper-V server with 5-7 VMs. I don't have the budget to implement a SAN, so I decided to do on-board storage since I was told VMs are heavily reliant on spindles.

I'd like to get some input on whether folks think this is a great deal (I think it is) or not.


That looks like it would be pretty efficient. We are running 4 VMs and 1 test VM (all Server 2008; the test box is 2012) on nearly the same setup (Dell PE R710), except we only installed 48GB of memory. Not sure why we did that, as it will be the limiting factor if we ever need to add more machines. If I remember right, we were quoted closer to $10k. From what I can see, that looks like pretty good value for what you are getting.

I don't have the budget to implement a SAN, so I decided to do on-board storage since I was told VMs are heavily reliant on spindles.

You've come to the right solution but by the wrong means. A SAN has no place here whatsoever. No matter how much money you had to burn, you'd not put in a SAN for technical reasons. A SAN is a negative in this scenario, not a positive.

It's a good starting point. An obvious question: why use 2.5" bays when you're populating so few drives?

And obviously, we don't know your workload, so no one can say whether this is sized well for you. It might be massive overkill, or massively undersized in some area. We have no way to know.

Sweet, so you agree that utilizing the local bus is the best way to maximize disk I/O?

Absolutely. It's not always an option, of course, but when it is nothing can have lower latency or higher reliability - there is just less distance, fewer moving parts... less to go wrong or get in the way. Plus it is cheap as a bonus. Win, win and... win.

The one thing that worries me is that the CPU and memory resources dramatically outstrip the drive resources. Typically, in a deployment with that much CPU/memory, we would see 24 2.5" bays maxed out with 10K or even 15K drives in RAID 10, plus CacheCade added for an extra IOPS boost.
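To make the "spindles" point concrete, here is a minimal sketch of the standard RAID spindle-count estimate. The workload numbers and the ~150 IOPS-per-drive figure for 10K SAS are illustrative assumptions, not from the quote:

```python
import math

def spindles_needed(read_iops, write_iops, iops_per_drive, write_penalty=2):
    """Estimate drives required to absorb a workload.

    Reads hit the array once; each logical write costs `write_penalty`
    back-end operations (2 for RAID 10, since data is mirrored).
    """
    backend_iops = read_iops + write_penalty * write_iops
    return math.ceil(backend_iops / iops_per_drive)

# Hypothetical workload: 1200 reads/s + 600 writes/s on ~150 IOPS 10K drives
print(spindles_needed(1200, 600, 150))  # 2400 back-end IOPS -> 16 spindles
```

This is why CPU/RAM-heavy boxes usually come with far more bays filled: even a modest mixed workload can demand a dozen or more spindles once the RAID write penalty is counted.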

This doesn't mean your specific workload can't be CPU- and memory-intensive with minimal storage needs, but your storage capacity and performance are disproportionately small compared to a typical deployment, while your CPU and memory are enormous.

Since you stated that you've "been told" that VMs are IO intensive, I assume this means you skipped the first step of virtualization, which is profiling the workload?
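Profiling just means logging real counters (e.g. perfmon's PhysicalDisk "Disk Transfers/sec" on the boxes you plan to virtualize) over a busy period and sizing from those numbers instead of hearsay. A hedged stdlib-only sketch of the sizing step, with made-up sample data standing in for a real counter log:

```python
import statistics

def iops_target(iops_samples, headroom=1.2):
    """Turn a list of sampled transfers/sec readings into a sizing target.

    Uses the 95th percentile rather than the absolute peak (one spike
    shouldn't size the array), then adds growth headroom.
    """
    p95 = statistics.quantiles(iops_samples, n=20)[-1]  # 95th percentile
    return p95 * headroom

# Hypothetical week of samples from the servers being consolidated
samples = [220, 340, 180, 900, 410, 260, 300, 700, 150, 380,
           500, 450, 310, 290, 620, 270, 330, 240, 410, 360]
print(round(iops_target(samples)))
```

Feed the result into a spindle-count estimate and you have an actual basis for the drive configuration, rather than a guess.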

Everything looks good. The only thing I can think of is maybe an additional NIC? That way, when you create your virtual switches, every VM can have its own dedicated port. It may also help you avoid the need for NIC teaming.

It's not something you absolutely have to have, but I believe it would be nice to have, at the very least for expansion later on. One thing that's very common among people who virtualize: whether they plan to or not, they generally end up with more servers than they originally intended, simply because of how easy it is to sysprep a template and fire up a new server.

That's an awful lot of horsepower for 5-7 VMs, to say the least. You have about 1/5th the storage I would expect for your CPU/RAM combination, and half as many GbE interfaces. I run over a dozen production Windows VMs on a generally similar spec, and I've got plenty of room for testing (2 VMs currently) and unanticipated future growth over the next 2 years.

If you are just buying one, spend the extra $2 for 16GB to have tons of overhead (what if a new version needs more space?), but typically 8GB is plenty.

Not just what size; what type would be best? I am under the impression that a SanDisk card with no class listed has better read/write speeds than a Class 10. A Class 10 may have amazing write speeds but very slow reads.