Just a quick question: if I am going for a hardware RAID card and RAID 5,
shall I put the O/S on the RAID array or on a separate drive? What are the pros and cons? Thank you.

Having two logical drives, like two RAID-5s, on the same set of drives will result in head thrashing, and performance could suffer. This includes but isn't limited to the boot drive.

But having your boot drive on a RAID-5 array isn't a bad thing if you have it on a separate set of drives. Personally, I would probably use RAID-1 for improved write performance (because of frequent accesses to the swap and registry files).

The 4 x 1.5TB Seagates have now been running against Winthrax for over 12 hours on the new volume without a single blip.

I ran a quick test with them against IOMeter (16 I/Os per target; 64KB, 100% write, 0% random; and 64KB, 100% read, 0% random) and they fared much better than the WD1001FALS did on the 1680ix. The average write speed was just over 300MB/sec, while the average read speed was 25-35 MB/sec higher. I'm very satisfied with those numbers in a 4-drive array.
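Those sequential numbers line up with the usual back-of-the-envelope estimate for RAID-5. Here's a minimal sketch; the ~100 MB/sec per-drive streaming rate is my assumption for drives of that class, not a measured figure:

```python
def raid5_sequential_estimate(drives: int, per_drive_mb_s: float) -> float:
    """Best-case large-sequential throughput for an optimal RAID-5 array:
    one drive's worth of bandwidth goes to parity, the rest carries data."""
    return (drives - 1) * per_drive_mb_s

# 4 drives at an assumed ~100 MB/sec streaming rate each:
print(raid5_sequential_estimate(4, 100.0))  # -> 300.0
```

That 300 MB/sec best case matching the observed average suggests the card isn't the bottleneck here.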

I hope I'm not being premature in saying that this combination looks like a winner.

Let me know if you guys have any other performance/tests you'd like me to run; I'm going to leave Winthrax up for now.

EDIT: I should probably mention that the very first thing I did when receiving the card was to update the "out of the box" firmware from 1.43 to the latest version (1.45) from the website.

... Let me know if you guys have any other performance/tests you'd like me to run ...

I'd be very interested in its recovery from failure. Pull a disk out while it's running ... and then put a spare in to see if all is okay (and how long the rebuild takes). I believe you only have the 4 drives for now ... but if you put the extra drive on another PC and formatted it, it should work fine as the "spare."

What would be the speed difference (generally speaking) between a RAID 5 that runs on a motherboard's built-in RAID and a RAID 6 array that runs off an Areca 1230 with 1GB of memory?

Is it worth paying for the card? I would like the added redundancy of RAID 6 but am not sure if the $600 card is worth it.

Thanks

George

Read performance on an optimal RAID-5 or RAID-6 "should" be identical. It should be equal to the performance of one drive times the drive count. Processing power has very little to do with the read performance.

The big difference will be in write performance. Each small random host write is converted to 4 RAID-5 IOs or 6 RAID-6 IOs (read the old data and parity, write the new data and parity; RAID-6 has a second parity block). So that alone indicates that cards of equal performance will see a 50% degradation in random writes. Again, CPU performance doesn't have much to do with this aspect of write performance - it's all about drive access times. However, the more interesting aspect that does affect performance is the parity calculation. RAID-5 requires an XOR operation, which is relatively easy to do in hardware or software. I'll assume that the motherboard RAID you're referring to has hardware XOR. If not, CPU utilization will increase, but performance won't necessarily change. RAID-6 requires a second, different operation that is in one way similar to XOR but is much, much more complex. It can't really be done in software without a TON of CPU power. If the RAID-6 is done correctly in hardware (on the Areca board) then the overhead of RAID-6 parity calculations will be roughly the same as RAID-5's. Unfortunately this isn't always the case.

So to try to answer your question, best case the Areca will be 50% slower on RAID-6 random writes. The hardware design on the Areca board is based on the Intel Sunrise Lake which has a very good RAID-6 design, but typically the software overhead is a little more complicated, so I'd guess that the Areca is a little worse than 50% slower - maybe 75% slower.

Now, on RAID-6 writes with a good write-back cache, the difference between RAID-5 and RAID-6 can be much closer - as little as 10-20%.
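The IO-amplification arithmetic above can be sketched as follows. The drive count and per-spindle IOPS here are hypothetical, and the penalties are the classic read-modify-write counts, not figures for any particular card:

```python
# Disk IOs generated per small random host write:
# RAID-5: read old data + read old parity, write new data + new parity = 4
# RAID-6: the same, plus a read and write of the second parity block  = 6
WRITE_PENALTY = {"raid5": 4, "raid6": 6}

def random_write_iops(drives: int, iops_per_drive: float, level: str) -> float:
    """Best-case small random-write IOPS for an optimal array:
    total spindle IOPS divided by the read-modify-write penalty."""
    return drives * iops_per_drive / WRITE_PENALTY[level]

# Hypothetical 8-drive array at 100 IOPS per spindle:
print(random_write_iops(8, 100, "raid5"))            # -> 200.0
print(round(random_write_iops(8, 100, "raid6"), 1))  # -> 133.3
```

The 6-vs-4 ratio is where the "50% more disk IOs per host write" figure comes from; real cards will land somewhere below these best-case numbers.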

To try to summarize a performance estimate for motherboard hardware that I'm not familiar with:

Compared to motherboard RAID-5:
RAID-6 reads will be about the same.
RAID-6 random writes will be about 75% slower.
RAID-6 sequential writes will be about 15% slower.

BTW, this assumes an identical number of drives and an IO load sufficient to show these differences. In real life applications where you have long delays between IOs or low queue depths you may not see a performance difference.


BTW, whether $600 is worth it isn't really a performance question. The question should be "what would you do if you got a bit error during a rebuild?" Will the RAID-5 indicate which data has been permanently lost, and if so, will you be able to recover it from your backup? Depending on the drive count and drive capacity, the chance of a bit error is probably MUCH greater than the chance of a second drive failure.
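For anyone who wants to put a rough number on that claim, here's a sketch of the usual estimate. The 1-in-1e14 unrecoverable read error (URE) rate is a typical consumer-drive spec-sheet figure, not a measurement of any particular drive:

```python
def p_ure_during_rebuild(surviving_drives: int, drive_tb: float,
                         ure_rate: float = 1e-14) -> float:
    """Probability of hitting at least one unrecoverable read error while
    reading every surviving drive end to end during a rebuild, assuming
    independent errors at the spec-sheet rate per bit read."""
    bits_read = surviving_drives * drive_tb * 1e12 * 8  # TB -> bits
    return 1.0 - (1.0 - ure_rate) ** bits_read

# 4 x 1.5TB RAID-5: a rebuild must read the 3 surviving drives in full.
print(f"{p_ure_during_rebuild(3, 1.5):.0%}")  # -> 30%
```

Spec-sheet URE rates are pessimistic in practice, but even so, that probability grows quickly with drive count and capacity, which is exactly the point made above.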

Boater, so what do you suggest? RAID 5 or 6? I will also be backing up to my external NAS, but like you said, if I don't know whether anything is rebuilt incorrectly, I can't very well fix it... and with my luck, it will of course be something important.


If you've got a good backup, and you don't need 24/7 access, then I wouldn't use RAID-5 or 6. It's just one more layer of software that may not work when you need it to. And one more layer you're going to need to extensively test before using. If you can't live with multiple drive letters, then use Windows concatenation or RAID-0 on your data disks, and just leave the boot disk alone as a single C: drive. I would probably lean towards Windows concatenation or RAID-0 rather than the motherboard version.

If you're a person that absolutely has to use cool technology and don't mind screwing with it constantly until it works (and there's nothing wrong with being one of those people), then I would pay the extra money for an Adaptec or Areca card. If you don't need to build a RAID-6 with more than 16 drives then I would go with Adaptec since they have MUCH more OEM experience. (I admit I worked for Adaptec, but I try not to be biased.)

It's been a while since I "played" with dynamic drives, so before I mess with this I thought I'd simply ask and see if someone knows the definitive answer ...

If I simply install a bunch of drives and use Windows dynamic drives to concatenate them all into a single drive letter, what happens if one drive fails? My recollection from a couple years ago is that I lost everything -- is that right?

I don't mind buying an Areca or Adaptec if it's really needed ... but for the price of an Areca (~ $1200 with the RAM & battery backup) I could buy 9TB of storage (6 1.5TB drives). I'm beginning to think I should just create a large "drive" (~ 8-10 1.5TB drives) using Windows concatenation (or perhaps a RAID-5 array) and replicate it on a 2nd machine that I only turn on for backups.

I think $175 - 225/hour is pretty standard for technical consultation/prof. services. We're closer to the top end of that where I work, though it can fluctuate wildly depending on the specifics (Technical trainers are like digital pimps. In a recent training class I attended for a vendor product the trainer's rate was $600/hour).

By this weekend I will have the recommended parts lists up for two different media storage servers, with all the major guesswork having been removed for people.

Did the original poster ever come up with that list?

I am going for a cheap, most likely software-based server that can hold 10-20 TB (of 1.5TB drives) and stream BD-quality high-def to one or maybe two HTPCs. Since I will only have movies on this server, I can deal with loss since I have the discs in the cases, but it would suck to have to do them over.

All of this talk of RAID-1, RAID-4, RAID-99, etc. is kind of going over my head. I just need someone who knows to tell those of us that have no clue what to buy to get started. Of course I hope this to be something that has already been tested together. Oh, just in case this post doesn't sound this way, you guys amaze me with your knowledge of this stuff...

I'd be very interested in its recovery from failure. Pull a disk out while it's running ... and then put a spare in to see if all is okay (and how long the rebuild takes). I believe you only have the 4 drives for now ... but if you put the extra drive on another PC and formatted it it should work fine as the "spare."

Done!

I left Winthrax running as I removed drive #4 from the array (to simulate a failure during access), with no errors detected. I had first created a composite set of test files (pictures, ISOs, ZIPs, etc.) and copied them to the array with calculated md5sums to compare against.

Just for fun, I paused Winthrax and ran a few tests against the degraded array (with only 3 drives inserted). The write speed was very good, at around 290 MB/sec over a 10 minute period, while the read speed was around 350 MB/sec for a 5 minute period. The array's speed was virtually unaffected by the lack of a drive.

What's even more interesting is that running the same tests while the array was rebuilding (after reinserting the formatted 4th drive) I observed similar results. Write speeds hovered at around 280 MB/Sec over a 20 min period, and read speeds were at around 330 MB/Sec.

I'm going to leave it alone now (turn off Winthrax/IOMeter) and let it chug, or else I'll be waiting until Christmas.

I'll let you know whether the test files pass the md5 test once the rebuild is finished.
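For anyone wanting to repeat that md5 comparison, a minimal sketch might look like this. The manifest format (one "name&lt;TAB&gt;md5" line per file) is my assumption for illustration, not what was actually used in the test:

```python
import hashlib
from pathlib import Path

def md5_of(path: Path, chunk: int = 1 << 20) -> str:
    """Hash a file in 1MB chunks so large ISOs don't need to fit in RAM."""
    h = hashlib.md5()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify(manifest: Path, root: Path) -> list[str]:
    """Return the names of files whose current md5 no longer matches
    the checksum recorded in the manifest before the copy."""
    bad = []
    for line in manifest.read_text().splitlines():
        name, expected = line.split("\t")
        if md5_of(root / name) != expected:
            bad.append(name)
    return bad
```

An empty list back from `verify` would mean every test file survived the pull-and-rebuild intact.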

As a side note: I formatted the 4th drive on another computer since you specifically requested it. However, removing a disk from an active array reduces it to a hunk of garbage almost instantaneously, as it's now out of sync (this might not be true if no data is being written to the array). I was writing to and reading data from the array as it "failed" in my case, so it definitely would have been out of sync, and the Areca card would have rebuilt it even if I hadn't formatted it first.

I am going for a cheap, most likely software-based server that can hold 10-20 TB (of 1.5TB drives) and stream BD-quality high-def to one or maybe two HTPCs. Since I will only have movies on this server, I can deal with loss since I have the discs in the cases, but it would suck to have to do them over.

All of this talk of RAID-1, RAID-4, RAID-99, etc. is kind of going over my head. I just need someone who knows to tell those of us that have no clue what to buy to get started. Of course I hope this to be something that has already been tested together. Oh, just in case this post doesn't sound this way, you guys amaze me with your knowledge of this stuff...

If you're fine going with a 16-drive system and don't really require a high performance server, unRAID might be a good fit. odditory seems to be busy at the moment but maybe you can look at one of renethx's recommended builds in the meantime?


I would second this.
I plan on rebuilding my system with a Supermicro server style board
NORCO RPC-4020 4U Rackmount
2 Supermicro AOC-SAT2-MV8
Later, if or when unRAID supports more drives, use the 6 onboard SATA ports or go with a 4- to 8-port Areca for drives that require higher performance.

Quote:

Originally Posted by fly4christ78

Since I will only have movies on this server I can deal with loss since I have the discs in the cases, but it would suck to have to do them over.

I would never recommend huge arrays of unprotected drives, striped or concatenated, even if the data is archived somewhere else. Time is money... and it could take a lot of time rebuilding an unprotected array should a drive fail.

On the other side of this, the cost of maintenance in time (for expansion or repair) can make the pricey hardware attractive if the width (size) of array is very large.

I see a lot of this based on:
Budget
Width (size) of array.
Performance expected and usage patterns. (archive storage, movie viewing, high performance hd streaming or video editing and amount of users).
Design of how you want to view or manage data. (I want one huge array/filesystem view or I can deal with managing segmented data) (I want to expand easily or I can deal with a little work)
Platform choice and/or willingness to use a different platform. (I must have windows or I could do windows or linux).

Sometimes some of these points aren't presented, and having them goes a long way toward making objective recommendations.

I'm a proponent of unRAID for those who want cheap, accessible storage and don't mind managing data on a disk level every now and then, or who want to reuse varied or mismatched spare disks in an array. It's a very cost-effective solution.

I'm a proponent of the software based array if budget is a concern and it is going to be a medium amount of storage (let's say 4-9 drives) and windows is a must. The VST software interests me greatly in this area.

If high performance on a very large array is a requirement, then hardware raid controller is paramount.

I'd like to say thanks for the help and be done with it, but I have to admit (and this is due to my reading comprehension; I'm much better over the phone or speaking to someone, so to say) that I'm lost now.

George

Well, I dumped a lot of info on you because I wasn't quite sure where you wanted to go with this. Maybe you could read it all again and start thinking about what your goals are. Then we can revisit all the different methods we discussed. Feel free to ask a question for the second or third time. Maybe some of the other folks on this thread can answer questions differently, in a way that may be easier to understand. But don't give up. You're right on the cusp of understanding all this!

I'm still undecided about what I want to do. I'm thinking I might want to use the server to convert Blu-ray and HD DVD into MKV containers as well. If that's the case then unRAID would be out. I was thinking of using FlexRAID, but I'm not sure if I trust that enough yet. I don't really need 800MB/s of throughput just for streaming DVDs and HD content, but I do need some type of protection, so I'm kind of conflicted: do I have my server do double duty and go hardware RAID, or do I just let it be a storage server only and use unRAID?

I like the thought of unRAID because you can use drives that are mismatched in size. As drives get bigger and cheaper, I can use the larger sizes without having to replace all of the drives in the array. I've seen a lot of people mention bit errors on rebuilds on RAID 5 and 6. Is bit error as big of a concern on, say, a 16-drive unRAID array? Are there any special issues that come with unRAID?

I had originally dismissed unRAID because of its 16-drive capacity, but really, 16 should be more than enough. 2TB drives should be cheap in a year, I would guess. If I did just 16 drives at 2TB each, that's 30TB of storage with one drive for parity. That's like 5,000 or 6,000 DVDs. Well, I still have some time to think. I can't wait to see those suggested systems.


"I do need some type of protection though."

Protection from what? Do you need 24/7 access and want to protect against a drive failure? Or do you need protection from data corruption, in which case you just need backup?

I've seen a lot of people mention bit errors on rebuilds on RAID 5 and 6. Is bit error as big of a concern on, say, a 16-drive unRAID array?

The bit error comments apply equally to unraid. The more bits you have (drive count times drive capacity) the more chance you have of bit errors on rebuild. (Bit errors result in data loss.) So, yes, bit errors should be a concern on 16-drive arrays. As long as the RAID stack (including unraid) tells you which data is lost, then you can "simply" restore it from your backup.

While the debate over whether or not RAID has its benefits will continually rage on, and yes, RAID is NOT a backup, BUT (and this is my opinion only), if you have anything approaching 8-10TB or more, I just think I'd be very, very scared running it as JBOD/disks only, with no protection, i.e. parity.

Now, unRAID certainly has its advantages, and it is very much an attractive storage platform for "media server" duties; the only downside is that you can't run any other apps on top of it. Well, that, and its abysmal write performance. Yes, you can argue that it can read/stream just fine, but write performance is just as important, especially if you have the ability to run third-party applications on top of the O/S. Since you can't do that with unRAID anyway, it kinda becomes a moot point.

"My" strategy is to have a single, powerful-enough server for a lot of varied tasks in the house. I don't want to build a server and, a few months down the line when I need to add some backend functionality, realize that I can't. For that very reason, unRAID is out for "me". Now, I could run a pure Linux platform as such, and it'd be just fine, but the third-party app support for Linux just isn't there yet. In time, I'm sure it will be, but not at present.

That leaves us with Windows and its different flavors. The only reason (for the most part) anybody would consider running a Windows "Server" O/S for their server would be the software RAID part. But that sucks, plain and simple. It's not expandable or growable, doesn't support hot spares, and in general is a very hobbled RAID implementation. Why? Because Microsoft WANTS it that way. They need their business partners to be able to sell you all those hardware RAID cards and/or enterprise volume managers for thousands of $$. They could very easily make the software RAID in Windows Server at least as good as, if not better than, Linux's, but they choose not to. It's not a technology/manpower decision, it's a business decision. They just choose not to.

So, that kinda leaves us nowhere. Unless you're willing to buy a hardware or even fakeRAID card, you're gonna HAVE to make compromises SOMEWHERE. You just can't have it all; there is no magic solution.

Me personally, I built my new server so that I could:

- Run all of the backend functionality "I" need, like AD/DC, DNS, DHCP, SQL Server, TFS etc, all on the same machine.
- Run any batch processing that I need, like, video encoding, audio transcoding, metadata processing etc all on the backend, on the same machine.
- Run my storage of course
- Act as my VPN server. (I hate things like webguide; I just VPN into my server and I have access to all content remotely, and so does the family. My wife's laptop (and mine, of course) is configured for VPN as well, and it's a single-click operation to have the mapped network drives show up. She listens to music from the server in her office all day long.)
- Lots of other home automation things that I won't get into.

Point being, I just can't do all these things on Linux, Solaris, Unraid, FreeNAS etc etc. But that's ME. I have different needs.

YOU need to sit down and make a list of YOUR requirements first. Consider it a cost/benefit exercise. Once you have listed all your requirements, create a matrix of the available options and their respective costs. Then do your cost/benefit assessment and make a decision on the platform.

As I said, there is no magic solution (or even solutions) that will auto-magically solve all your storage needs.