SAS 2.5" - Better off with 10k rpm drives and half the spindles or twice the spindles and 7200rpm drives?

For slightly less money than using 10k RPM hard drives, I can get twice the drives/spindles/capacity by going with 7200 RPM SAS drives.

I believe twice the spindles would increase I/O performance and fault tolerance, and double my storage capacity. The only downside seems to be fewer free bays for adding drives in the future.


Adding drives does not by itself increase fault tolerance; more drives means more points of failure. The other downside is that with more capacity you will add more guests, which means a smaller slice of I/O for each guest.

It is more complicated than both answers above. First, ESXi is effectively a random I/O workload, and I/Os are normally 64KB in size. If you have either a hardware RAID controller or are using software-based RAID 0, 1, 5, 10, or 6, then everything changes depending on the RAID level and other RAID settings.
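To see why rotational speed matters less than you might expect under a random workload, here is a back-of-the-envelope single-drive IOPS estimate. The seek-time figures are assumptions typical for each drive class, not vendor specs:

```python
def random_iops(rpm, avg_seek_ms):
    """Rough single-drive random IOPS: service time per I/O is the
    average seek plus half a rotation (average rotational latency)."""
    rotational_latency_ms = (60000.0 / rpm) / 2  # half a revolution, in ms
    service_time_ms = avg_seek_ms + rotational_latency_ms
    return 1000.0 / service_time_ms

# Assumed seek times: ~8 ms for a 7200 RPM midline drive, ~4.5 ms for a 10k
print(round(random_iops(7200, 8.0)))    # -> 82 IOPS per spindle
print(round(random_iops(10000, 4.5)))   # -> 133 IOPS per spindle
```

So per spindle the 10k drive wins, but with twice as many 7200 RPM spindles the aggregate can come out ahead, which is why the RAID layout ends up mattering so much.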

Since you are using a G6, you obviously have a premium controller. If you have the BBU (battery-backed write cache) option, then you will be MUCH better off buying the 7200 RPM drives and configuring multiple RAID 1 or RAID 10 groups.

The reason is that on READ operations, the HP controller will load-balance read requests across both drives in the mirror. (On writes there is no performance gain, because both copies have to be written.)

So in a perfect world you could theoretically get the equivalent of 14,400 RPM reads and 7,200 RPM writes (emphasis on perfect) with RAID 1 or RAID 10, vs. 10,000 RPM reads and 10,000 RPM writes with a non-RAID config.
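The "perfect world" arithmetic above can be sketched in a couple of lines. This is purely the idealized model the answer describes, not a measured result:

```python
def mirrored_read_equiv(rpm, mirror_members=2):
    # Ideal case: the controller alternates reads across all mirror
    # members, so read throughput scales with the number of members.
    return rpm * mirror_members

def mirrored_write_equiv(rpm):
    # Writes must go to every member, so no gain over a single drive.
    return rpm

print(mirrored_read_equiv(7200))   # -> 14400 ("equivalent RPM" reads)
print(mirrored_write_equiv(7200))  # -> 7200  (writes)
```

In practice queue depth, cache behavior, and stripe layout keep you well short of the ideal, but the direction of the trade-off holds.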


Plus, in the scenario above, you have increased reliability because each disk mirrors the other: you have zero data loss in the event of a drive failure. With single 10K disks, you have 100% data loss in the event of a failure.

Look at the warranty HP gives you with the enterprise SAS drives vs. the midline SAS drives: 3 years vs. 1 year. I'd keep nearline disks for what they were made for: backup, archive, and other low-IOPS duties.

You're right that they're about half the speed and half the price of enterprise disks...

There are so many variables that it is really hard to predict with certainty. The safest answer is to match your business needs. Consider cost per GB/TB of space and weigh that against performance and your true need for performance. Then factor in the importance of the data and your redundancy needs.

Do you have a baseline for current usage? Is disk I/O an actual bottleneck? Is disk space a bottleneck? Do you need to be able to survive a multiple-disk failure? Questions like that are more important to this type of decision, in my opinion.

Also, you have to consider how it will be used. If this is for a database server, for example, you would be well served to have a RAID 1 (or even a single disk) for the OS, a separate RAID 1 for the transaction logs, a separate array for the actual database, and so on. The "best" option would be different if this was just a DC, where keeping the OS redundant becomes more important.
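One way to put the cost-per-GB side of that weighing on paper is a quick comparison table. The drive counts, capacities, and prices below are purely hypothetical placeholders; substitute real quotes for the configs you are actually comparing:

```python
# Hypothetical configs for illustration only -- plug in real quotes.
configs = {
    "10k enterprise": {"drives": 4, "gb_each": 300, "price_each": 250},
    "7200 midline":   {"drives": 8, "gb_each": 500, "price_each": 120},
}

for name, c in configs.items():
    total_gb = c["drives"] * c["gb_each"]
    total_cost = c["drives"] * c["price_each"]
    print(f"{name}: {total_gb} GB total, "
          f"${total_cost} outlay, ${total_cost / total_gb:.2f}/GB")
```

Cost per GB is only one axis, of course; you still have to weigh it against the IOPS, warranty, and redundancy questions above.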

As an example of the myriad of variables involved, I once ran two tests. These were not full-scale tests with a full range of installed apps or users, merely different configs benchmarked.
Test 1
3 10k disks in RAID 5 vs. 3 15k disks in RAID 5.
The 10k disks were split into two smaller arrays (one 40 GB RAID 1 on the first half of each disk and a separate RAID 1 on the last half of each disk) and outperformed the 15k drives configured as a single array using the entire space. This was most likely due to disk geometry, since the smaller arrays confine head movement to a narrower band of the platter.
Test 2
4 10k disks in RAID 5 vs. 3 15k disks in RAID 5, both using all the space available.
The 15k disks outperformed, likely because there were not enough extra spindles to overcome the drive-speed difference. The overhead of splitting the data across different disks has a cost; that cost usually starts to even out at about 5 disks (spindles) and gets better from there.
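A rough spindle-count model shows why the second test was close. The seek times below are assumptions typical for each class, and the model deliberately ignores striping overhead, which is exactly the cost that tipped the real benchmark toward the 15k set:

```python
def random_iops(rpm, avg_seek_ms):
    # Service time per random I/O = average seek + half-rotation latency (ms).
    return 1000.0 / (avg_seek_ms + 30000.0 / rpm)

# Assumed seek times; real drives vary.
agg_10k = 4 * random_iops(10000, 4.5)   # 4 spindles at 10k RPM
agg_15k = 3 * random_iops(15000, 3.5)   # 3 spindles at 15k RPM

# Raw spindle math puts the two sets within a few percent of each
# other, so any per-spindle striping overhead can flip the result.
print(round(agg_10k), round(agg_15k))
```

With only one extra spindle the aggregates land nearly even, which matches the observation that around 5 spindles is where adding disks reliably starts to win.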