I have been tasked with sourcing a new NAS to host our VMware datastores, and the question has turned to "shall we populate it with SSD storage as it's so fast?"

We are looking at the Netgear ReadyNAS 4220; we've had a 4200 for a few years and it has performed well. We are planning to fully populate it with something like the Crucial MX200 1TB, giving us 9TB of usable storage on X-RAID2, with a 10GbE fibre link to the switch. This will give us a superfast datastore for under £5.5k, but is it worth it?

Are there any drawbacks I should be aware of when using non-enterprise drives, or will I even see any real benefit?

OBR10 (one big RAID 10) is often the way to go, but RAID 6 is also an option over RAID 5 - I use RAID 6 in my lab.

Your 10GbE is purely for access to the datastore I assume?

I would be tempted to keep the old one as well and put the lower-I/O devices on there, only moving the heavy-load ones to the SSD storage. You get the best of both worlds: enough storage for future growth, and you don't spend a ton on SSDs for systems that are not going to benefit (plus not all of your eggs are in one basket, so to speak). You could even use the old storage as a replica in case you have a switch/fibre failure - maybe even an offsite replica. (You could use Veeam for this.)

You could also buy fewer SSDs to start with and move only the systems that would use the speed.

Just a note: when using SSDs, RAID 5 becomes an acceptable option again, because the problems that plague it with HDDs - UREs and heavy mechanical stress on the drives during a rebuild - are not there with SSDs. I am not advocating using RAID 5, just pointing out that it is a viable option again. I don't know if I would use it with 10+ drives though.
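To put some rough numbers behind that URE point: a quick sketch of the chance of hitting at least one unrecoverable read error while reading a whole array back during a rebuild, assuming independent bit errors at the drive's rated URE rate (consumer HDDs are commonly specced at 1 per 10^14 bits, while typical SSD specs are around 1 per 10^17 bits - check your drive's datasheet):

```python
# Rough sketch: probability of at least one unrecoverable read
# error (URE) while reading an entire array during a rebuild.
# Assumes independent bit errors at the drive's rated URE rate.

def rebuild_ure_probability(data_read_tb: float, ure_rate_bits: float) -> float:
    """P(at least one URE) when reading data_read_tb terabytes."""
    bits_read = data_read_tb * 1e12 * 8          # decimal TB -> bits
    p_no_error = (1 - 1 / ure_rate_bits) ** bits_read
    return 1 - p_no_error

# Rebuilding a ~9 TB array:
hdd = rebuild_ure_probability(9, 1e14)   # consumer HDD: 1 URE per 1e14 bits
ssd = rebuild_ure_probability(9, 1e17)   # typical SSD spec: 1 per 1e17 bits

print(f"HDD: {hdd:.1%}")   # about 51% chance of a URE during the rebuild
print(f"SSD: {ssd:.2%}")   # about 0.07%
```

That order-of-magnitude gap in rated error rates is essentially why a RAID 5 rebuild on SSDs is far less nerve-wracking than on large HDDs.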

Currently the ReadyNAS we have is for backup storage. There are a couple of iSCSI datastores on there, which is not ideal and is temporary, but we have found that the performance is very good - in fact it is considerably better than DAS in terms of disk latency. But that's mainly down to the H200 RAID cards in the hosts.

I guess running on DAS would give you better resiliency in terms of single points of failure. Tough choice.


Drac

First, thanks for looking at our MX200.

It seems like you're probably more concerned about the data redundancy of your array rather than flat-out speed, so I'm thinking the MX200 is probably a good drive for you. The first question you should ask yourself is: will my system write more than it reads, or will it write occasionally and read a lot? If the latter, then a consumer-grade drive could fit the bill.

There are a couple of "drawbacks" to using consumer-grade SSDs in your application. The most important one is endurance. That 1TB MX200 has a rated lifetime of 320TB total bytes written (TBW), which is equivalent to writing 175GB every day for 5 years. That's pretty good, but our 800GB M500DC model (DC = data center) is rated at 1.9PB - more than a full drive fill per day over the same period!
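The arithmetic behind those endurance figures is easy to sanity-check (using decimal TB/PB and a 5-year rating period, as in the quoted specs):

```python
# Sanity-check the endurance arithmetic quoted above.
TB = 1e12  # decimal terabyte in bytes
days_5y = 5 * 365

# 1TB MX200: 320 TBW rated lifetime -> daily writes over 5 years
mx200_tbw = 320 * TB
per_day_gb = mx200_tbw / days_5y / 1e9
print(f"MX200: {per_day_gb:.0f} GB/day")          # 175 GB/day

# 800GB M500DC: 1.9 PB rated -> drive fills per day over 5 years
m500dc_tbw = 1.9e15
drive_fills_per_day = m500dc_tbw / days_5y / 800e9
print(f"M500DC: {drive_fills_per_day:.2f} fills/day")  # ~1.3 drive writes/day
```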

Enterprise SSDs will typically have better "steady state" performance when the drive is full and the workload is fairly constant.

One last thing you should consider is that, generally speaking, enterprise drives will have superior data protection during an unexpected power loss. The MX200 will protect data which has already been written in the event of an unexpected power hit, whereas the M500DC will protect written data, plus any writes in progress and any data in the DRAM buffer at the time of the power loss.

We are planning on a strict 3 year replacement cycle on the drives to reduce the possibility of write cycle issues.

It would be good if you'd do an analysis of how much data you write to your drives. Remember, the age of an SSD is measured not in years but in terabytes written. You may be limiting yourself by setting an "expiration date" on your drives, particularly as technology evolves. 3D NAND, for example, should have improved endurance over today's "planar" technologies, so that 3 years could legitimately extend to 4 or 5 years, or more.

Also, you'd mentioned that you expect this to be a read-centric application, and if so, your SSDs may last significantly longer than 3 years. To take the extreme case, looking at data for my notebook SSD (an MX200 equivalent), I'd not expect it to wear out for 100 years or more. Even though I'm an engineer and a pretty heavy-duty computer user, I'm still not inducing significant wear to my drive. Now, your application is undoubtedly more aggressive than a notebook, but you get the picture. You're probably going to fall somewhere between 3 and 100 years! ;-)

Most of the major SSD vendors include an age tracking attribute in SMART. I'd recommend monitoring that, and replacing devices per the manufacturer's recommendation.
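As a sketch of that kind of monitoring, the projection can be done from the Total_LBAs_Written SMART attribute (reported by many Crucial/Micron drives; attribute numbers and units vary by vendor, so verify against your drive's documentation). The sample figures below are hypothetical:

```python
# Sketch: project SSD service life from SMART wear data. Assumes the
# drive reports Total_LBAs_Written in 512-byte sectors -- units vary
# by vendor, so check the datasheet for your model.

SECTOR_BYTES = 512

def projected_life_years(lbas_written: int, power_on_hours: int,
                         rated_tbw_tb: float) -> float:
    """Estimate total service life in years at the current write rate."""
    bytes_written = lbas_written * SECTOR_BYTES
    bytes_per_hour = bytes_written / power_on_hours
    rated_bytes = rated_tbw_tb * 1e12
    return rated_bytes / bytes_per_hour / (24 * 365)

# Hypothetical example: 20 TB written over two years of power-on time
# on a 1TB MX200 rated at 320 TBW.
years = projected_life_years(lbas_written=39_062_500_000,
                             power_on_hours=17_520,
                             rated_tbw_tb=320)
print(f"Projected life at this rate: {years:.0f} years")  # 32 years
```

Feeding in real values from a tool such as smartctl, rather than a fixed 3-year cycle, lets you replace drives based on actual wear.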

Hi, based on the responses here and other research, we have decided to go down the traditional storage route and stick with old-fashioned HDDs, having read a fairly lengthy post about TRIM zeroing data blocks and causing corruption.

Thanks for all the advice though, it has helped a great deal in our decision making process.

Looking at the Netgear ReadyNAS again, the 4220 variant, as it represents good value and fits our needs. As there will only be 3 hosts running up to 60 VMs, our needs aren't at enterprise level, so no need to blow £10k+ :)

