Hi.. I'm configuring a server with RAID 10 and was wondering if anyone can give advice on drive type: whether 2.5" or 3.5" would give the best "lasting" performance, and whether there are issues with SSDs in this type of configuration. I'm hoping to purchase Veeam as our DR solution. Thanks - Bill

Hi Bill. There is no significant difference in the life of a 7200 RPM drive whether it is 2.5 in or 3.5 in. Some may argue that 2.5s last a little less because of surface-area heat dissipation and the denser packaging that 2.5 inch backplanes create, making it harder to get rid of excess heat. The heat argument only holds water when you are looking at 15k drives, which generate a lot more heat.

I think the focus for you should be what class of server you are looking to build. Are you looking at 8 x 3.5 in SATA/SAS backplanes or 16-24 x 2.5 in backplanes? The budgets at those levels span close to 3x. Also consider that in a build the comparison is generally spindle RAID10 versus SSD RAID5/6. SSDs have so much more IOPS capacity that even a RAID6 SSD array can rebuild faster than a drive replacement on a spinning RAID10.
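If you want to sanity-check that rebuild claim with rough numbers (the throughput figures below are illustrative assumptions, not benchmarks):

```python
# Back-of-envelope rebuild estimate: rebuilding a failed member means
# rewriting the whole replacement drive at its sustained write speed.
# Throughput numbers here are rough assumptions for illustration only.

def rebuild_hours(capacity_tb, sustained_mb_s):
    """Hours to rewrite a full drive at a given sustained rate."""
    capacity_mb = capacity_tb * 1_000_000  # decimal TB -> MB
    return capacity_mb / sustained_mb_s / 3600

# 4TB 7.2k spindle at ~150 MB/s vs a 4TB SATA SSD at ~450 MB/s
print(round(rebuild_hours(4, 150), 1))  # best case, no competing load
print(round(rebuild_hours(4, 450), 1))  # same assumption for the SSD
```

Real rebuilds run longer under production I/O, but the ratio between the two is the point.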

Some of us are starting to drift away from the OBR10 model to a more hybrid approach. I have multiple 8 x 3.5 in backplane small servers, and I recently did a Hyper-V build split between a 4-drive SAS RAID10 on big platters and a 2-drive SATA RAID1 of SSDs for a high-performance VM. Overall I am very happy with the machine, and my single VM on the SSD array needed more IOPS than a 7.2k RAID10 could deliver even if all 8 drives were used for it. Two VMs are running on the slower RAID10 and doing fine, and I can swap the 4TB drives out for larger 6-12TB drives if storage becomes an issue. It is just a big chunk of space for a Crashplan host and file-sharing space for Veeam Endpoint Free for a hundred workstations.

2.5" drives often have slightly better longevity than 3.5", but the difference is trivial and I've never seen any business find it to be a meaningful factor. Lower power consumption, smaller footprint, easier to stock replacements... those are bigger factors.

In my case, the drive type is constrained by the server. If you have servers that take 3.5" drives, you can usually get SATA in sizes up to 10TB each or more. I can get 2.5" 2TB drives for $90 and 4TB for $180 - but they're consumer laptop SATA drives, not NAS or enterprise drives.

So, first determine how much space you require, then buy the appropriate drives to meet that requirement.

SSDs are appropriate for high-performance arrays. Is that something you require?

I do not require high performance.. we have about 16 users on a SQL database (used constantly through the day). The server these drives go in is a Windows Hyper-V host with 2 VMs (one a Server 2012 R2 DC, the other SQL Server 2012).

markwilliams3 wrote:

I think for you the focus should be what class of server are you looking to build?

I'm actually looking at an 8 x 2.5" backplane (tower server)

markwilliams3 wrote:

Some of us are starting to drift away from the OBR10 model to a more hybrid approach. I have multiple 8 x 3.5 in backplane small servers, and I recently did a Hyper-V build split between a 4-drive SAS RAID10 on big platters and a 2-drive SATA RAID1 of SSDs for a high-performance VM. Overall I am very happy with the machine, and my single VM on the SSD array needed more IOPS than a 7.2k RAID10 could deliver even if all 8 drives were used for it. Two VMs are running on the slower RAID10 and doing fine, and I can swap the 4TB drives out for larger 6-12TB drives if storage becomes an issue. It is just a big chunk of space for a Crashplan host and file-sharing space for Veeam Endpoint Free for a hundred workstations.

BTW.. what is OBR10? (I couldn't find anything concrete Googling it.) My server sounds very similar to the Hyper-V box that you recently configured.. this host machine will have 2 VMs (one a Windows Server 2012 R2 DC, the other SQL Server 2012).. Performance is always a consideration.. but I have to have a solid backup plan and quick recovery.. hence the RAID10 consideration.

Your disk type should not be the driving factor in what server you buy. Get the spec you want, the CPU and RAM you need, then look at disk capacities and sizes based on the system you want.

3.5" and 2.5" are as reliable as each other, and consumer vs enterprise are as reliable for their given task. In either case you could have a disk fail in weeks or never fail at all. I have Seagate 3TB SATA3 drives in my NAS; it's rare that I have a failure, and they've run 24/7 for about 6 years.

Your failures are based on MTBF and usage

To add to the confusion, I'm going to say just the opposite. Pick your drive first, then your server.

If I want 8TB SATA drives, buying an HP DL380 with 8x2.5" slots isn't going to do me a lot of good. And the biggest 2.5" SATA drive I can get guarantees me that the DL380 isn't going to meet my needs. Especially if I'm buying used and not brand-new custom-built.

In your case, how much space do you really need? 4x1.2TB RAID5 2.5" SAS enterprise drives in a DL360/380 will give you 3.6TB of usable space, enough performance, and a reliable configuration. 6x8TB 3.5" SATA WD Red drives in a Dell 2950 or R510 will give you 24TB of good storage space for online backups, tape drive appliance, NAS target, and so on.
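For reference, the usable-capacity math behind those two examples, as a quick sketch. (The RAID levels are my read of the numbers: 24TB out of 6x8TB matches a RAID10 layout.)

```python
# Usable capacity by RAID level for n identical drives.
# Simplified model: RAID5 loses one drive to parity, RAID6 loses two,
# RAID10 and RAID1 lose half to mirroring.

def usable_tb(n, size_tb, level):
    if level == "raid5":
        return (n - 1) * size_tb
    if level == "raid6":
        return (n - 2) * size_tb
    if level in ("raid10", "raid1"):
        return n // 2 * size_tb  # assumes an even drive count
    raise ValueError(f"unknown level: {level}")

print(round(usable_tb(4, 1.2, "raid5"), 1))  # the DL360/380 example
print(usable_tb(6, 8, "raid10"))             # the 2950/R510 example
```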

OK, small like mine except with 2.5" form-factor drives. Your SQL Server doesn't sound that big. If storage capacity is not an issue, and once you consider RAID5 for an SSD solution, then going back to your original question: the SSDs will be more reliable than platters and substantially faster for SQL. SSDs come in three flavors - read intensive, mixed use and write intensive. You need to look at how much your database is changing versus how much you are just reading to decide which type to purchase, as that could have a significant impact on the life expectancy of the SSDs.

As Gary, Rod, Robert and Scott will all tell you, RAID is not backup. Your comment about solid backup and quick recovery in the same phrase as RAID10 leads me to believe you are thinking of disk redundancy as a form of backup. It is not. Any software hiccup that corrupts your SQL database leaves it just as corrupted on a RAID array. Backup is being able to wipe the database off the disk and restore it to where you started this morning, because it died in the middle of a big import when the power went out and the UPS batteries were shot, or the PSU fried and as it shorted out it nuked every hard drive in the box.

The current server (that I'm replacing) has this drive config: (2) 500GB drives (RAID1) for the boot OS, and (2) 500GB drives (RAID1) for the data.. both are about 60-70% full. Since I hope this will last them for a while, I was thinking 2TB of disk space would be good! Their DB is used constantly throughout the day (about 16 users), so speed and reliability are key. The DB size is about 10GB.

Gary D Williams wrote:

MTBF numbers across disks and SSDs are pretty much identical now.

Thanks for clearing this up!

markwilliams3 wrote:

OK, small like mine except with 2.5" form-factor drives. Your SQL Server doesn't sound that big. If storage capacity is not an issue, and once you consider RAID5 for an SSD solution, then going back to your original question: the SSDs will be more reliable than platters and substantially faster for SQL. SSDs come in three flavors - read intensive, mixed use and write intensive. You need to look at how much your database is changing versus how much you are just reading to decide which type to purchase, as that could have a significant impact on the life expectancy of the SSDs.

As Gary, Rod, Robert and Scott will all tell you, RAID is not backup. Your comment about solid backup and quick recovery in the same phrase as RAID10 leads me to believe you are thinking of disk redundancy as a form of backup. It is not. Any software hiccup that corrupts your SQL database leaves it just as corrupted on a RAID array. Backup is being able to wipe the database off the disk and restore it to where you started this morning, because it died in the middle of a big import when the power went out and the UPS batteries were shot, or the PSU fried and as it shorted out it nuked every hard drive in the box.

Hi.. thanks for the SSD info.. I will look into this further for sure. The DB is changing throughout the day, but I do not have any exact numbers. Also, as far as DB backup, I am considering Veeam for this purpose and am not relying on the RAID for it. I have read many articles (and posts) stating the same and have that message clear :)

Thanks for ALL of your posts.. this information is invaluable to me!! - I have to get this server configured promptly and installed in a week or two.

Bill-AATFtech wrote: I am considering Veeam for this purpose and am not relying on the RAID for it. I have read many articles (and posts) stating the same and have that message clear :)

RAID is not a backup. A fire or other disaster can take out your disks; that's why RAID is good for redundancy but is not a backup. It's also why the 3-2-1 rule exists.

With SQL, you can (and should) take database dumps at regular intervals. Store these somewhere else. Use Veeam to back up the VMs, including the contents of SQL. This way you've got multiple layers of protection. Offsite backup is a very, very good idea as well.

Bill-AATFtech wrote: The current server (that I'm replacing) has this drive config: (2) 500GB drives (RAID1) for the boot OS, and (2) 500GB drives (RAID1) for the data.. both are about 60-70% full. Since I hope this will last them for a while, I was thinking 2TB of disk space would be good! Their DB is used constantly throughout the day (about 16 users), so speed and reliability are key. The DB size is about 10GB.

500GB for an OS disk is really a waste. 100GB should be plenty. Why is C: so full?

Given those disks and your system, I'd array the drives as RAID10 into one virtual drive of 1TB. Then create a partition for C: and one for the rest as a data drive. Install Windows to C: and present the second partition to Windows as a data volume. You'll get more space and a boost in performance, as writes are now spread across two mirrored pairs instead of one.

Bumping those 500GB to 1TB will give you plenty of room to grow.
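To picture why the single RAID10 array helps performance: writes alternate between the two mirrored pairs. A toy model of where each chunk lands (disk numbering is purely illustrative, not how your controller labels them):

```python
# Toy model of the proposed 4-disk RAID10: data chunks alternate between
# two mirrored pairs (striping), and each chunk is written to both disks
# of its pair. Disk numbering is illustrative, not controller labeling.

def raid10_placement(chunk_index, num_pairs=2):
    pair = chunk_index % num_pairs      # striping spreads writes across pairs
    disks = (pair * 2, pair * 2 + 1)    # both mirror copies of that pair
    return pair, disks

for chunk in range(4):
    pair, disks = raid10_placement(chunk)
    print(f"chunk {chunk} -> pair {pair}, disks {disks}")
```

With two separate RAID1 arrays, every write to a given volume hits the same single pair; striping is what spreads the load.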

I believe that a prior IT person stored some backup data on drive C: as well as a software installation repository. Also, I appreciate your details here, but I don't quite follow.. possibly because it's late in the day. This server (I'm configuring) will be a Hyper-V host with 2 VMs, one a DC and the other a SQL server. I'm not following the "I'd array the drives as RAID10 as one virtual drive of 1TB" and so on. Would it be possible for you to expand on this a little? Thank you!

Sure. You install 4 x 1TB drives, for example. In your RAID configuration, you create one RAID10 array of 2TB that comprises all 4 drives.

When you boot the machine, the RAID controller presents a single virtual drive of 2TB to the OS. You create a partition on this drive of 100GB and call it C:. You make it bootable by installing Windows into that.

When you reboot and start Disk Management, you'll see your 100GB C: drive and 1.9TB of unallocated space. I always reserve D: for the DVD (even when virtual). Allocate the 1.9TB as a single simple volume and call it E:.

Now, when you create VMs, put them on E:. I like mine separate and easily identifiable. So if I'm creating two VMs, a DC and a file share, each one gets its own clearly named top-level directory.

During creation, point the VM location to the top-level directory you define for each machine. No cryptic names. No directories on C:. Point the smart paging file and snapshot settings to each machine's directory as well. Keep all of a machine's files together.

Now, inside each VM, do a similar thing. Create a 40GB dynamic VHD for C:. Create a dynamic E: drive VHD for your VM's application or database. Size them small - they're easy to expand, but harder to shrink.

Last Question (Robert5205).. (I will be using your guidelines above) - when formatting the partitions (or allocating them) do I need to be concerned with the Sector size when defining the partitions? (I believe that 4k is the default.. not 100% sure). Thanks.

I never have been. Unless you're attempting to fine-tune your system for something very particular, the default size is fine.

The only caveat is that your system must support UEFI to boot off a RAID-presented disk > 2TB. If you don't have UEFI, you'll have to create two virtual drives on your RAID array.
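For anyone curious where that 2TB boundary comes from: MBR addresses sectors with 32-bit fields, so at 512 bytes per sector the limit works out to 2 TiB. Anything bigger needs GPT, which for a boot disk means UEFI. Quick arithmetic check:

```python
# MBR partition tables store sector addresses in 32-bit fields, so with
# 512-byte sectors the largest addressable disk without GPT is:
SECTOR_BYTES = 512
max_bytes = 2**32 * SECTOR_BYTES   # total addressable bytes
max_tib = max_bytes / 2**40        # convert to TiB

print(f"MBR limit: {max_tib:.0f} TiB")
```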