Data security is important for photographers, who tend to have large amounts of irreplaceable data, so RAID arrays are a useful complement to a good backup strategy.

I've been having a few issues (this and subsequent posts) over the last few days which have seriously made me question the wisdom of using the RAID functionality offered by many motherboards. To set the scene here's a brief résumé from this Xbit labs review:

It is interesting to compare Adaptec’s controller series, by the way. The first series has no integrated processor and cache and is equipped with a PCI Express x4 interface. The third series comes with 500MHz or 800MHz Intel 80333 processors and with 128 to 256 megabytes of cache; the 4-port models have PCI Express x4 while the 8-port model has PCI Express x8 (by the way, we tested the third series ASR-3405 model earlier). The fifth series has transitioned to PCI Express x8 entirely and every model, save for the 4-port one, is equipped with 512 megabytes of cache (the cache of the junior model ASR-5405 is cut down to 256MB). Every controller of the fifth series uses the most powerful processor currently installed on RAID controllers: it is a dual-core chip clocked at 1.2GHz.

I think it's safe to assume that the RAID solutions offered by motherboards also fall into the category of "no integrated processor" and so rely totally on the CPU on the motherboard. All very well, except that most, if not all, mainstream operating systems allow driver-level access by third-party code, which means that such software can cause the host computer to crash. Chances are that for disk drives which aren't combined into RAID arrays the consequences of such lock-ups aren't too serious, but if the computer is in the middle of a complicated RAID manipulation then things can go badly wrong.

I've seen this scenario twice now, on two totally different sets of hardware: my current machine with an integrated controller, and one with a simple Adaptec SCSI controller. So I've come to the conclusion that where data security is paramount, any RAID array controlled by hardware which doesn't contain an onboard processor independent of the computer's own CPU is not worth bothering with.

A lot of people refer to the motherboard implementations as FakeRAID, although it isn't fake as such; it's just not the same as a high-end RAID card.

The main difference is that a high-end solution has an onboard CPU that offloads work from the host system, and proper RAID solutions generally carry a fairly generous amount of cache memory.

RAID 1 should be perfectly safe to run off the motherboard controller, as it just writes the same data to both drives. However, something a bit more exotic like RAID 5 might not give you the performance and safety that you'd expect.
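To make that concrete, here's a toy sketch of my own (not any vendor's implementation) of why RAID 1 needs essentially no processing power: every write is simply issued to both drives, and a resync after a crash is just copying one side over the other.

```python
# Toy RAID 1 mirror: two "drives" modelled as dicts of block number -> data.
# This is an illustrative sketch only, not how a real driver is structured.
class Mirror:
    def __init__(self):
        self.disks = [dict(), dict()]

    def write(self, block, data):
        for disk in self.disks:        # same data to both drives, no maths
            disk[block] = data

    def read(self, block):
        return self.disks[0][block]    # either copy will do

    def resync(self):
        # after a bad crash the copies can differ; copy disk 0 over disk 1
        self.disks[1] = dict(self.disks[0])

m = Mirror()
m.write(7, b"photo data")
assert m.read(7) == b"photo data"
assert m.disks[0] == m.disks[1]        # both drives hold identical data
```

Each disk in the mirror is a complete, independently readable copy, which is exactly why the level is so forgiving of a dumb controller.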

There's a reason why "real" RAID controllers cost as much as they do. They're generally not intended for consumer PCs and thus cost a fair bit of extra money.

Just be glad that you started using RAID now and not a few years ago, as the so-called RAID solutions back then weren't nearly as good as they are now.

Reading your post about your setup, it seems like your switch from RAID to AHCI might have caused a bit of an issue, as you can't just go changing modes in the BIOS and hope for the best. There's a reason why there are different operational modes. Then again, the implementation isn't exactly great, for the reason you explained: if you have drives connected that aren't part of the RAID array, you'll end up having strange problems. You might want to take a look here as well for some of the issues that may happen in a RAID setup: http://en.wikipedia.org/wiki/RAID#Problems_with_RAID

Also, RAID doesn't guarantee that you're protected from hard drive failures; these things sadly happen. I haven't had one go down for three years (knock on wood), but I once had a drive fail and then its replacement fail within two weeks, which wasn't much fun.

Still, for your needs I'd invest in a real hardware RAID controller, as it'll not only offer vastly improved performance but should also offer a more solid RAID solution. Apart from Adaptec, 3ware and Promise offer some high-end solutions, and even Intel has some dedicated RAID controllers.

If I were you, I'd be running RAID 5 rather than RAID 10; it's a much better solution for keeping your data safe. Good luck finding a suitable solution for your needs.

Last edited by thelostswede on Sat Feb 13, 2010 3:52 pm, edited 1 time in total.

If you're looking at data integrity, RAID 1 is the simple choice. You don't need any processing power, only a simple controller that passes the data to the disks in the mirror. A bad crash may result in the mirror containing different data, a scenario I had once; a simple resync and all was well again. Arguably that inconsistent write is some lost data, but no more than could have happened with a single-disk system.

Only when it comes to RAID 5 does an independent processor start to look like a nice thing to have. The calculations required to generate parity information are handy to offload from the main CPU, and the large cache assists system throughput. But by the same argument as for the mirror above, I'm not sure the crash risk is significantly worse. RAID 5 has always been a trade-off between storage cost and performance.
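For anyone curious what that parity calculation actually is, here's a simplified single-stripe sketch (my own toy example, not a real controller's code): parity is the XOR of the data blocks, and any one lost block can be rebuilt by XOR-ing the survivors with the parity. This per-byte work on every write is what a dedicated processor offloads.

```python
# Toy RAID 5 stripe: three data blocks plus one XOR parity block.
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # blocks on three data drives
parity = xor_blocks(data)            # block written to the parity drive

# drive 1 dies; rebuild its block from the surviving drives plus parity
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]            # the lost block is recovered exactly
```

Real RAID 5 rotates the parity block across all the drives rather than dedicating one drive to it, but the arithmetic is the same.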

The big benefit I see for a plug-in controller over an integrated one is system portability: you can more easily transfer the RAID array to another system should that be required.

Bob, in your recent case the onboard raid configuration was rather special, operating outside the usual. Had you been running it as intended I wonder if the situation would have happened.

In that sense, I'd go for the "keep it simple" approach. Mirror means two copies, each one can exist independently by itself. It is trivial to implement in hardware or software. I don't feel the controller is a very significant risk for raid 1, other than recalling the case of a bargain basement raid card claiming to do raid 1 but later found by the owners not to actually be writing to the 2nd disk...

I just find it simpler to manage RAID 1 arrays; even when RAID 10 or RAID 5 is possible, I'd take the reduced risk over the potential performance or economy gains.

Looking back, maybe I didn't focus enough on the theoretical risk of a "crashed" CPU causing corrupt data writes to an array directly. While an independent controller would prevent direct corruption at the RAID disk-member level, it may not stop the OS from issuing a garbage write resulting in data-level corruption.

Thanks for your interesting reply and the time spent composing it. I'm letting this Xbit Labs comparative review guide my choices at the moment. Still some thinking to do, though, as I don't need "database server" performance but even so the Adaptec RAID ASR-5805 seems to tick a lot of boxes for me as I can't imagine growing out of it for as many years as the PCIe x8 interface exists. But at about £400 here in the UK it's a pretty serious investment.

@popo: And thank you for your own thoughts, posted after I started this reply to thelostswede. Good value, as usual.

I agree with thelostswede and popo - a dedicated 'hardware' RAID controller card really comes into its own with RAID 5 and other more complex levels, where read and write performance can be pretty good. Try anything other than RAID 0 and 1 on most integrated controllers and they can become very slow.

As some of you may know, I've been using an older Promise Supertrak 8350 EX card for a few years now, and have been very happy with its performance, whether RAID 0, 1 or 5 (it does more still, but I've not gone beyond 5).

My next step-up though is for quicker throughput, and for that I'm waiting for Serial ATA 3 drives and controllers to become commonplace. I'm not sure what Promise will do, as their current replacement for my card has stuck with Serial ATA 2 and embraced SAS for faster speeds.

But yeah, the bottom line is a dedicated hardware RAID controller will perform well, but they ain't cheap. You're looking at a few hundred bucks, Euros or quid, at which point you might be thinking an SSD or two might be more tempting.

I'm personally running RAID 5 using the 3ware 9650SE-16ML. It is amazingly good. I think that if you're serious about using raid for something other than speeding up your boot drive, you'd use at least RAID 5. And doing that, you should really be using a controller. I did a lot of research and landed on the 3ware even though it was pretty expensive because of how good they are. Very reliable, very fast, lots of features.

You can run a JBOD or RAID 0/1 OK on an onboard RAID, but above that there are too many things that can go wrong with onboard to trust it. It's not cheap, but a controller really is the only way to go.

Here are some numbers comparing the performance of my old motherboard controllers (SAS controller running my SSD and SATA controller running four HDDs as RAID 10) with my new Adaptec 5805 controller which has charge of both my SSD configured as the boot drive and the same four HDDs as a RAID 10 array. Right away I'll add a caveat that the SSD was refreshed with new firmware and a deep erase before being attached to the Adaptec controller. All tests were under Windows 7 x64.

The figures were generated by HDTach 3 which only tests the read speed of the drives. Write testing benchmarks are generally destructive of HDD data and I've never been prepared to go there.

The difference when the Adaptec read ahead cache is enabled is pretty startling. It works by trying to spot patterns in the way data is accessed but, as I understand it, it won't have any knowledge of the filing system. If I'm right that means the read cache may be more successful at predicting the patterns of data read by something like a hard disk benchmark program but I would expect that the same success would hold true when reading contiguous data files in real world examples. That would seem to also imply that using a disk defragmenter would pay big dividends when working in conjunction with such a predictive cache but I've yet to test that.
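To illustrate the behaviour I'm guessing at, here's a toy sketch of a sequential read-ahead cache. This is my own assumption about how such a cache might work, not Adaptec's actual algorithm: when consecutive block requests are spotted, the next few blocks are prefetched, so contiguous (i.e. defragmented) data gets served mostly from cache.

```python
# Toy read-ahead cache: prefetch a window of blocks when reads look sequential.
class ReadAheadCache:
    def __init__(self, disk, window=4):
        self.disk, self.window = disk, window
        self.cache = {}
        self.last = -2                 # last block requested (sentinel)
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            data = self.cache[block]
        else:
            self.misses += 1
            if block == self.last + 1:
                # sequential pattern spotted: prefetch the next few blocks
                for b in range(block, block + self.window):
                    if b < len(self.disk):
                        self.cache[b] = self.disk[b]
            data = self.disk[block]
        self.last = block
        return data

disk = [f"block{i}" for i in range(64)]
c = ReadAheadCache(disk)
for i in range(16):                    # a contiguous, defragmented read
    c.read(i)
print(c.hits, c.misses)                # 11 hits, 5 misses: mostly cached
```

The same run over 16 randomly scattered blocks would be nearly all misses, which is consistent with the idea that defragmentation should pay dividends with a predictive cache.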

But enough guesses on my part. The huge improvement in benchmark read speeds is self-evident, but during boot-up I also see the interval from the end of the BIOS initialisation to the Windows 7 Log On screen appearing drop from about 18 to 19 seconds to just 16 seconds when the Adaptec read and write caches are enabled. Given that Windows 7 initialisation is only partly dependent on disk drive speed, I think that's a great result, and I also think the case for dumping the motherboard controllers, heavily dependent as they were on the OS drivers and CPU, is well and truly made, even though the cost of the card is similar to a moderately high-end Intel CPU. And the controller and its attached disks can be fitted to a new motherboard and all the data will still be readable.

The Adaptec's predictive read cache seems to be working its magic on the sequential read and 512KB random read tests, but I'm struggling to understand the 4KB figures. That said, I do note that on the website's sample screenshot (below) the 4KB figures are similarly low in comparison with the rest, so maybe my own results aren't too shabby.

But the unsurprising comparison between write speed to the SSD and to the RAID 10 array vindicates my choice this morning to put Photoshop's various disk caches onto the RAID 10 array, quite apart from the good sense in not thrashing the SSD. Similarly with the main Windows page file.

I get about 80MB/s read and write speed with my 2x300GB HDDs in RAID 0. (They are pretty fragmented because I'm a lazy ass (and use XP), so actual speed might be lower than the CrystalDiskMark test I did suggests.)

They are about one-year-old SATA I 7200rpm HDDs, so it's not a reliable system. But I don't care about that on my gaming-oriented PC. I've already had to format my whole PC because I somehow got infected with a rogue virus. Avast antivirus (free version) failed me.

Either way, I think RAID 0 does have its advantages over RAID 1:
-You don't lose any space.
-Your hard drives each do half the work, which may increase life expectancy.
-Faster write speed.
-If you store some illegal stuff on there and try destroying the HDD, you'll have more chance of getting away with it, because RAID 0 recovery companies are few and really expensive.
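The space and speed points both come from striping, which is worth a quick sketch (my own toy illustration, not a driver implementation): RAID 0 deals consecutive blocks round-robin across the drives, so each drive stores and serves only half the blocks, and there is no mirror copy or parity eating capacity.

```python
# Toy RAID 0 striping: deal blocks round-robin across the member drives.
def stripe(blocks, n_drives=2):
    drives = [[] for _ in range(n_drives)]
    for i, block in enumerate(blocks):
        drives[i % n_drives].append(block)   # no parity, no mirror copy
    return drives

drives = stripe([b"b0", b"b1", b"b2", b"b3"])
assert drives == [[b"b0", b"b2"], [b"b1", b"b3"]]  # half the blocks each
```

The flip side, of course, is that losing either drive loses half of every file, which is why recovery is so hard.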

In addition to the CrystalDiskMark results posted above I decided to run the ATTO Disk Benchmark. I ran two tests for each disk, the top pair being for the SSD and the RAID 10 array with the benchmark's default settings, in particular the 256MB length. The bottom pair were the same tests repeated with a 2GB length, to gain a better handle on how the Adaptec controller's cache was affecting the results.

Total size of the data file that is created on the test drive = 256MB

Total size of the data file that is created on the test drive = 2GB

Here are some of the descriptions of the benchmark controls, as explained by the Help file:

Direct I/O - If this option is checked, file I/O on the test drive is performed with no system buffering or caching. Combine this option with Overlapped I/O for maximum asynchronous performance.

Queue Depth - Specifies the number of queue entries for overlapped I/O, i.e. the maximum number of read/write commands that can be executed at one time.

Transfer Size - Specifies the range of transfer (block) sizes used for reading and writing data to the test file. Transfer speeds will be displayed for each size.

Total Length - Specifies the total size of the data file that is created on the test drive.

Now you know as much as I do about what the test did. But I'll still have a go at interpretation - if I get it wrong I'd be grateful for any posts that put me right.

I think the two pairs of tests successfully demonstrated what a huge effect the Adaptec 5805's read cache can have on performance when the data needed is already cached, with transfer speeds of over 1.6GB/s, a figure which I believe reflects the speed limit of the PCIe x8 bus used by the 5805. Note that, if it does what it says on the tin, with "Direct I/O" used for the tests this is not data that has been cached by the Windows buffers. The controller only has a total of 512MB of memory, and some of that will be needed by its own processor, so my assumption is that the 2GB transfer length gives a more accurate reflection of how speedy retrieval of data not already cached can be.

A read speed of 350MB/s from the RAID 10 array is certainly not shabby and, given that a single drive on its own is capable of a "Data Transfer Rate / Media to/from Buffer(Max.) of 175 MB/sec" and a "Data Transfer Rate / Buffer to/from Host(Max.) = 300 MB/sec" (source), I think it's fair to say that, with the help of the predictive read cache, the Adaptec 5805 is pulling required data as fast as the HDDs in the RAID 10 array are capable of providing it and that in the unlikely event I needed more performance the Adaptec controller wouldn't be a bottleneck.
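That 350MB/s figure also squares with a back-of-envelope check (my own arithmetic, assuming reads are striped evenly across the two mirrored pairs of the four-drive array):

```python
# Rough expected peak read throughput for a four-drive RAID 10 array,
# assuming sequential reads stripe across the two mirrored pairs.
single_drive_media_rate = 175   # MB/s, media-to-buffer, per the drive spec
striped_pairs = 2               # four drives in RAID 10 = 2 mirrored pairs
expected_peak = single_drive_media_rate * striped_pairs
print(expected_peak)            # 350 MB/s, matching the measured read speed
```

So the array really does appear to be delivering data as fast as the platters can physically feed it.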

I believe the benchmark may not be fairly reflecting the access time of the SSD, which is of the order of 40 times faster than the array according to HDTach, so I'm happy with my decision to configure the SSD as the boot drive and to use it to host Windows 7.

The write speeds are pretty consistent between the two pairs of tests but clearly show up the limitations of the SSD drive both in terms of speed and also variability.

So, finally dragging this post back into direct relevance to the thread, I think my case is made that motherboard based RAID solutions may be cheap but they can't compete in speed terms with dedicated controllers. I can't comment about reliability until I see a spurious error from the Adaptec 5805 of the sort I was regularly starting to get from the motherboard controller, an event I certainly hope doesn't happen and, to be fair, an event I actually don't expect to happen given my experience with previous hardware. I think it's finally time to stop hand-wringing over this, conclude it was money well spent (I'm lucky I could afford one) and start getting productive again...

I'm not good at ATTO, but here's a comparison with a single 128GB Kingston V+ SSD that I installed in a mate's machine.

You're getting some crazy-high performance numbers, although the 4KB test will kill pretty much anything, as it's reading and writing tons of 4KB blocks (on the 100MB test, 100MB worth of 4KB blocks is about 25,000), so don't worry too much about the low figures; those are still good numbers. You might be able to tweak the RAID controller somehow - I don't have much experience with the latest generation of cards - but the fact that the 512KB write figures are higher than the random write figures is not normal.

The Intel SSDs have terrible write performance compared to most of the latest generation of SSDs, which have generally been given a huge speed boost thanks to better controllers, faster memory and improved firmware. Intel won't catch up until next year, but then they should really catch up, from what I've read. If you're running Windows 7 you might want to consider getting a drive with Trim support, which is an automatic garbage-collection and performance-enhancing feature. You can read a bit more about it here: http://www.bit-tech.net/hardware/storag ... and-trim/1 I think there's a Trim-enabled firmware for the Intel drives (might've been what you installed). Still, that would be in part why those numbers are so low.

Yes, the CrystalDiskMark 4k figures are confusing and it's interesting that your own figures show that write performance exceeds read performance - the reverse of what I saw on that portion of the test on both the SSD and the RAID 10 array. I would guess that both benchmarks illustrate how difficult it can be to separate the benchmark software's performance from the performance of the drive system being tested. The Adaptec controller's performance is currently optimised for "Dynamic" as opposed to "OLTP/database". If memory serves then "OLTP/database" optimisation might do better at lots of small chunks of data but that's not what this computer is about.

Before reinstalling Windows 7 I flashed my Samsung SSD with the latest firmware which does support the TRIM command but I still hung it off the Adaptec 5805 which actually hides the nature of the drive from Windows 7, a fact I discovered when the firmware flashing utility couldn't discover the SSD when the 5805 had charge of it.

It would have been an option to use a motherboard controller for the SSD but in the end I decided that if the apparent lack of TRIM functionality becomes an issue then Adaptec may well decide to implement its own solution in firmware (I'm an optimist) so I'm happy to continue to use the 5805 for the SSD as well as the RAID. That way I get the benefit of the controller's cache and, for the future, I can be certain that data on the drive can be accessed from any motherboard capable of hosting the controller.

Well, Adaptec is unlikely to be able to, as Trim currently only works with Intel's controllers and Windows 7. Another problem is that if the OS can't tell you're using an SSD, then it won't work. Using a RAID card hides the details of the SSD from the OS, and as such Trim is unlikely to work. Not saying it's impossible, just unlikely, but we'll have to wait and see.