First, I’d like to thank all my fellow AVSers for your thoughtful and informative discussions and posts. Without this wonderful community I wouldn’t have ever built an HTPC, let alone a media server.

Many people have discussed builds based on the Norco 4020 case, but I have yet to come across an actual build description with pictures using this case. Having been inspired by gshipley’s 7TB unRaid server build (and others) to build my first and second media servers, I decided to post pictures and build thoughts on my third media server build (hopefully this one will last me for a while). Like many on these forums, I think I am a bit of a perfectionist and tinkerer with some mild OCD thrown in, lol.

One suggestion I want to give anybody contemplating building a media server is to figure out exactly what you want to build that meets your budget, needs, and other restrictions and stick with it. If at all possible, give yourself as much expandability and fault tolerance as possible.

While my old file server ran Server 2008 with RAID 5, I often worried about losing two hard drives at once and thus losing the entire array of data. The simplicity and limited-loss characteristics of unRaid made it my choice for my third server iteration.

Parts list:

I’ll give some general thoughts here, and more in depth discussion of the parts in the build discussion.

Case: NORCO RPC-4020 4U Rackmount Server Case
Thoughts – This case has been much talked about in many threads. While it is a bit loud, I really like it, and after experiencing the ease of hot-swappable bays first hand, I can tell you the extra money is worth it. One thing I haven’t seen mentioned much (even in the case description on Newegg) is that this case will easily hold 22, possibly 23, HDDs. While there are only 20 hot-swappable bays, there is an OS drive space on the top, plus you could put a drive in the floppy and CD-ROM slots.

Power supply: CORSAIR CMPSU-750TX 750W
Thoughts – I really like the quality of the Corsair brand of P/S’s. They are also single-rail, so you do not have to worry about balancing power between your rails (if it’s even possible to do so, given that an inordinate amount of our power is going to our peripherals, aka our HDDs). Furthermore, there are 8 molex and 8 SATA connectors (most P/S’s come with 6). Finally, what’s not to love about a P/S company that packs their P/S’s in their own cloth bag, à la Crown Royal? NOTE: The price and availability of the Corsair 850W is now very close to the 750W. You shouldn’t need it, but since they’re both very efficient P/S’s, why not? Though in all honesty, you should be fine with even the 650W.

Motherboard: SUPERMICRO MBD-X7SBE LGA 775 Intel 3210 ATX Server Motherboard
Thoughts – First, it’s been used and works with unRaid. Second, it has two PCI-X 133 MHz slots for my two 8-port PCI-X cards. Third, it takes DDR2 800 memory and regular Intel processors. A few other nice features are the internal USB plug, which makes it easy to run unRaid, and its Intel Ethernet, which is implemented very well. CAUTION: This mobo DOES NOT have any PATA connectors, despite what the Newegg description states. This wasn't a problem for me and I knew it going in, but just be aware.

CPU: Intel Core 2 Duo E8500 Wolfdale 3.16GHz 6MB L2 Cache LGA 775 65W Dual-Core Processor - Retail
Thoughts – Okay, I’ll be the first to admit that this is way overkill for an unRaid build. I’ll probably disable one of the cores (the Supermicro board allows you to do this easily, btw). However, I wanted it because it gives me the freedom to change my mind down the road and run another OS with plenty of power and features. According to my Kill-a-Watt, though, this build doesn’t use much power despite my extravagant CPU choice.

SATA card: SUPERMICRO AOC-SAT2-MV8 64-bit PCI-X 133MHz SATA Controller Card - Retail x2
Thoughts – A lot has already been said about this card. Until they make a reasonably priced many-port card that uses PCIe, this will be the go-to card IMHO. It’s also known to work in unRaid. Although I am using unRaid and only have to worry about the PCI bus during parity operations, I didn’t want that to be a bottleneck. While I could still theoretically saturate the PCI-X bus with this build, it gives me a much higher ceiling, and in real-world use PCI-X should be plenty of bandwidth. (Two 8-port cards = 16 drives. Both cards share the PCI-X bus, which is 1064 MB/s. So: 1064 MB/s / 16 drives = 66.5 MB/s per drive.) This should be plenty given unRaid’s tested, real-world throughput. Plus, unRaid right now only supports 16 drives. So: 16 drives – 6 SATA ports on the mobo = 10 drives on the PCI-X bus, and 1064 MB/s / 10 drives = 106.4 MB/s per drive.
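For anyone who wants to play with these numbers, the bus arithmetic above can be sketched in a few lines. This is a hypothetical worst case that assumes both cards fully share one 1064 MB/s PCI-X bus and every attached drive streams at once:

```python
# Back-of-the-envelope check of the shared PCI-X bus math above.
# Assumption (worst case): all drives on the two AOC-SAT2-MV8 cards
# stream simultaneously, e.g. during a parity check.
PCI_X_BUS_MBPS = 1064  # 64-bit x 133 MHz PCI-X = ~1064 MB/s, shared

def per_drive_bandwidth(drives_on_bus: int) -> float:
    """Worst-case MB/s left for each drive on the shared bus."""
    return PCI_X_BUS_MBPS / drives_on_bus

print(per_drive_bandwidth(16))  # all 16 card-attached drives -> 66.5
print(per_drive_bandwidth(10))  # 6 drives moved to mobo ports -> 106.4
```

Either figure comfortably exceeds what a single 2009-era drive can sustain, which is the point being made above.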

Hard drives: Seagate Barracuda 7200.11 ST31500341AS 1.5TB 7200 RPM 32MB Cache SATA 3.0Gb/s Hard Drive (bare drive) - OEM x 6
Thoughts – These were the largest-capacity drives available at the time, and they’re only $20 to $30 more than the 1TB drives. While they first had some problems, the current generation seems to be working well. So far, after a week of running, I have not had any problems with them. With my choice of unRaid I wasn’t too worried about a drive loss in case they still had some growing pains. I also have ten 750GB HDDs from my old server that I will put into the new server as soon as I migrate all of the data over. This will give me 15TB of storage (~14TB in real storage).
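As a sanity check on those totals, here is the capacity math sketched out. It assumes one 1.5TB drive ends up dedicated to unRaid parity, and that "real" storage means the binary (TiB) figure an OS reports for decimal-TB drives:

```python
# Rough capacity check for the drive mix above. Assumptions: one
# 1.5TB drive is given up to unRaid parity, and "real" storage means
# the binary figure the OS reports for decimal-TB drives.
drives_tb = [1.5] * 6 + [0.75] * 10    # six 1.5TB + ten 750GB drives
raw_tb = sum(drives_tb)                # 16.5 TB raw
usable_tb = raw_tb - 1.5               # 15.0 TB after parity
usable_tib = usable_tb * 1e12 / 2**40  # ~13.6 "real" TB
print(raw_tb, usable_tb, round(usable_tib, 1))
```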

Flash drive: Patriot Xporter XT 4GB Flash Drive (USB2.0 Portable) Model PEF4GUSB - Retail x2
Thoughts – Proven to work with unRaid. Again, 4GB is probably way overkill for unRaid, but I wanted to give myself lots of room in case I wanted to add features to my unRaid build. The cost difference between these and a 1GB or 2GB drive is negligible IMHO. They have great reviews, good reliability, and are very solidly constructed. Also, if your mobo doesn’t allow internal mounting (or you don’t buy an adapter to do so), they come with an extension cord so you can zip-tie the drive inside the case.

SATA to Molex Adapter: ABS SATA To Molex Power Adapter - Retail x2
Thoughts – The P/S has 8 molex connections, while the Norco backplane has 10 molex connectors (I have read you don’t have to attach all 10, but why not err on the safe side). These adapters are a bit fat, so if you use them on the backplane you can only fit one per horizontal row; two won’t fit side by side. It’s a very tight fit even then, so I’d recommend using extensions instead anyway.

OS: unRaid Pro
Thoughts – Media server OS’s have been much discussed and argued over (heck, I’ve changed my media server OS quite a few times, even after trying out many of them). For my needs, unRaid fits. Its main attraction for me is the RAID 5-like HDD fault tolerance, with the ability to lose more than one drive and lose only the data on the lost drives.

Not needed, so didn’t buy: SATA cables. The board comes with 6 for its 6 ports, and each SATA card comes with 8 for its 8 ports.

Wish I had bought: Molex and SATA power extenders. I moved the fan directly across from the power plugs on the backplane to make room. Even so, it’s a very tight fit using the big, fat cabling from the P/S. I wish I had gotten some thinner extenders so it wouldn’t be so tight.

3-pin fan to molex converter: Because I moved the fan, I needed to power it. Luckily I had one of these in my spare parts. I’d recommend getting two so you can move the fans at both sides of the case to give more space for the power and SATA connections (though the SATA cables fit okay).

80mm fan grills: Again, if you move the fans, get 2 extra fan grills. I used the grill from the middle fan because, with my cable management, there were no wires near that fan.

Build Discussion and Pictures:

The Parts

The Case

Ready to Rock n Roll

The mobo

As you can see, everything is laid out and the case has been opened.

First I positioned the mobo over the case and saw which risers needed to be moved or added. The Norco comes with a nice little tool to loosen and tighten them.

Then I installed the CPU and heatsink outside of the case (it makes things a lot easier IMHO; plus I hate putting the downward force needed to install an Intel retail heatsink onto a mounted mobo). Dunno about anyone else, but I hate the new plastic push-pin retail heatsinks…

If you notice in the picture, I left the mobo on the foam it comes with to help cushion it. Then I installed the RAM, making sure to populate the slots to take advantage of dual-channel mode.

Next I put the mobo into the case and screwed it down.

If you’ve never built a computer before, alternate adding the screws around the outside of the mobo and don’t tighten them down until you have them all screwed in. Also, I first use one of the risers to test all of the screws I’m going to use to make sure they are the right length and size.

The Norco case made this easy though by separating out all the screws in different bags.

I use bowls to hold screws that are loose. Otherwise I’ve found that they disappear rather quickly.

Now that the mobo is installed, I next installed the P/S. Then I decided to unscrew the fan divider as you can see.

If you’ve managed to hook up the power and the sata without doing this, then I bow to your small and nimble fingers.

NOTE:

If anyone has just removed the fan holder and perhaps modded the case to blow larger 120mm fans over the HDD bays, I’d be very interested to hear about it. Any info to make the case quieter while still having good airflow would be greatly appreciated.

As you can see from the above picture, the power cables are a very tight fit. I had to move the fan to the other side. I took a fan grille from a middle fan as there are no cables near it. As I discussed in the parts list, if you can get some thinner extenders I think it would work a lot better.

Next I attached the case cables (such as the power switch, reset switch, etc.) to the mobo. Then I put in the SATA cards, attached the sata cables to the backplane and then the cards (zip tying as I went). After that I re-screwed in the fan divider.

One nice bonus of the motherboard is the internal USB connection.

Now that everything is hooked up, it’s time to put in the 1.5TB drives.

As you can tell from the picture, you screw the HDD’s into the bottom of the tray and then just slide them in and they should connect. As another poster has said, sometimes you have to jiggle them a bit as you slide them in to get them to line up.

This is the main appeal of going with the Norco case; it has 20 hot swappable bays for a reasonable cost. If you can put up with the noise and the size, it’s definitely the way to go.

For adding an HDD in a regular case, you have to power down, open up the case, unhook the drives in the cage you’re adding to, unscrew and remove the cage, add the drive and screw it in, put the cage back in, screw the cage in, re-hook everything up, close up the case, and power back on.

For adding an HDD to the Norco, you push the tab of the bay over, remove the tray, place the HDD on it, put in two screws, and push it back in.

For starting out, if you’re on a budget, or if you need a tower solution, I won’t argue with a Coolermaster Centurion or Stacker case and some 4-in-3’s or 3-in-2’s. But the cost for the Norco really isn’t that much more. (Can you tell that I’m a hot-swap convert?)

It’s Built, Let’s fire it up

I hooked it up to a keyboard, mouse, and LCD. While you can run it headless (without those things), you will need them to configure your BIOS.

The only thing that stands out from BIOS configuration is that it saw the unRaid USB stick but put it in the “excluded from boot” category (or something like that), so I moved it to first boot device.

Next I ran into some problems, as it wouldn’t boot from my unRaid stick. It turns out that because I set up the unRaid stick in Vista, I hit a problem with the LimeTech directions. If you’re interested, you can read about how to fix this in my post in the unRaid support thread here.

After figuring that out, it booted unRaid without a hitch. I opened a browser, set up my server, and proceeded to build the array.

As you can see, each hot swappable bay has two lights. Green means it’s powered, and blue means the drive is being accessed.

Final Thoughts:

Other than a few minor problems and suggestions that I mentioned earlier, everything went very well. So far I’ve been very happy with the server. I currently rip Blu ray to .iso (I may change and convert to .mkv but right now I have plenty of room). I’m able to play them flawlessly over my GbE network (DLink Dir 655) from my unRaid server to my HTPC by mounting the .iso using Virtual Clonedrive and playing via PowerDVD.

UPDATE (3/6/09):

I love having all of the storage available. However, as we are living in an apartment right now, the server was just too noisy for it to be practical. Hence I decided to quiet it down.

In hindsight, I highly recommend doing as another poster did and using three 120mm fans instead of the five 80mm fans. The fan holder could easily be modded with a dremel and drill (do this outside of the case, of course, and I recommend doing it before you build). I plan on doing this when I put together a similar build for a friend in a month or so, and I’ll post about it when I do. I didn’t want to do this with my build as it was already done.

For whatever reason my stock Intel cooler was very loud; I don’t know if I got a bad fan or if it’s due to having a server motherboard (typically servers are in an environment where cooling > noise). Being in the same room as my server for any length of time gave me a headache. I was lazy and didn’t want to uninstall the mobo and various other things to replace the heatsink with a better one, so I originally went with a Thermaltake push-pin heatsink. This was a mistake, as I had seating problems with the push-pin design. So I got the Cooler Master GeminII S, which screws down. It works great and is extremely quiet.

I also realized that the Delta fans that come with the case are very loud, so I replaced them with these Coolermaster 80mm fans (I had them lying around, and they are comparable in CFM to the Deltas).

It’s much quieter now (I’m writing this with the server within 4 ft of me, and while it’s not as quiet as my main rig, it is quite tolerable).

At the end of a parity check, drive temps are between 35 and 39 degrees Celsius (the same as they were before I made the switch).

What kind of read and write speeds do you get? Have you benchmarked it? Also, what was your total cost on this one? I was thinking about getting PCI-X cards and whatnot, but there are a couple of PCIe 8-port cards that are about the same price. I figure for future-proofing I should probably stick with PCIe in case some day I want to add a nice Areca raid card or something, lol.

I haven't benchmarked it yet. I will, and I'll post the results back here. In all honesty though, I do transfers at night when I go to bed, so the write performance wasn't worth spending a lot more money on, or, more importantly, worth going with a higher-throughput solution (such as raid 5 or raid 6) and losing the limited data loss that comes with unRaid. (I'm actually coming from a Server 2008 raid 5 implementation that just screamed in network transfers, but I wanted more fault tolerance.)

Regarding read performance, it plays back Blu-ray .iso's fine over the network, so the read performance is fine for my use. If I have to do some major manipulation of a file and can't do it over the network, I'll just transfer it back to my work machine (though I don't foresee having to do this much).

Total cost was around $2k (including HDD's), though I splurged on some of the parts. I could have dropped that to $1700 without much trouble. Without the HDD's it was around $1200, so you could easily put together a similar build (sans HDD's) for $1k.

Yeah, I forgot that board had the PCIe slots as well, lol. I looked past it because it only had 2 of them, but then again the board has 6 onboard SATA ports, so 2 8-port cards would put you above 20, so maybe it is enough. The 8 port card was

Sounds like your choice of OS and raid solution is more important than what hardware you pick (both my solution and yours should provide plenty of bandwidth).

unRaid might not be for you, though I vaguely remember seeing some posts where people have streamed 3 different HD streams from an unRaid server, but I dunno for sure. You could look that up here or over on the LimeTech forums (heck, you can always post over there about it if you can't find anything).

If you need insane transfer speeds, you're going to need a raid 5 or raid 6 solution (whether you pick hardware or software depends on your needs). unRaid has a theoretical max transfer rate of one hard drive, so for the 1.5TB drives that's 120 MB/s.

Again, my memory is somewhat faded on this, but I think the real-world max is around 80 MB/s per drive.

Damn good description of your build. I have this case as well and thought the damn backplanes were screwed up as my hard drives were not detected. It turns out, I was an idiot and never screwed in the hard drives.

The only crappy thing about this board is the lack of instructions. Yes, it seems like a simple step, but I totally screwed it up.

Just realized I never posted back with read and write speeds. My write speeds vary between 10 - 25MB/s. Usually it is in the high teens to low twenties (around 16 - 21 MB/s).

Regarding read speeds, I'm getting around 60 - 70 MB/s using Vista SP1. I have not done any of the optimizing that they talk about on the unRaid forums. Honestly, this is more than enough for my needs.

Are you writing directly to the Parity-checked array, or via a cache drive for increased write speeds? (I'm guessing the former?)

You are correct, directly to the Parity-checked array. I usually send a large write to the array before bed, so I didn't see the need to use a cache drive. Just for curiosity though, what kind of speeds are you getting with a cache drive?

What's the backup solution? Some people say the original discs, but ripping all those discs is a bitch!

I'm not sure exactly what you mean by "what's the backup solution?" I use unRaid's built-in fault tolerance to safeguard my data (including Blu-ray, HD DVD, DVD, and TV show rips). If I lose only one drive, I don't lose any info. If I lose two drives, I lose just the info on those drives. That's why unRaid appeals to me. (Though I also have a WHS box for other uses.)

If you want to be a stickler for terminology, the fault tolerance of unRaid is of course not "backup". In that case, yes, I am using the original discs, kept in storage bins in my parents' basement, as "off-site backup". It would be a pain to re-rip all of those discs, but I'm hoping I never have to. One of the major draws of unRaid for me was that if I do lose more than one hard drive, I should only have to do an incremental restore (aka re-rip) of the discs I lost.

The other solutions, like WHS or another implementation using a raid 1 analog, were just too hardware-expensive IMHO for my needs. unRaid was the perfect solution for me: just enough redundancy and fault tolerance versus cost.

MikeSM, thanks for the heads-up about airflow. I honestly didn't consider that. I'll remember it for my friend's upcoming build (he is only going to populate the case with 8 drives to begin with).

However, it's mostly a moot point, as I now have 10 more drives in the case (the ten 750GB drives from my old Server 2008 build). I couldn't put them in initially, as I needed to use the unRaid server's storage space to transfer the Server 2008 files over before I could reuse those drives in the unRaid server.

Just out of curiosity, what other operating systems have you used before?

So far, I've only needed FreeNAS, as I've been using old PC parts to keep my costs down. Unfortunately, those PCs are pretty much full, so I'm going to have to build a new system pretty soon. I was thinking of going with FreeBSD, though. Do you have any thoughts/comments/knowledge on that OS?

BTW, nice post of your build. I also like the comments you made along the way.

Xolo, first, thanks for your kind words. I tried to write an informative article detailing not only what I did but why I did it. I hoped it could be used by others looking to build a server, giving them a bunch of condensed information that I had learned over the years (and from my 3 different media server builds).

I looked at FreeBSD, but I never used it. It's very hazy now why I decided not to, but I think it was due to its limited hardware support. I think there was another reason, but I can't remember now.

I may have to try it now in one of my "play" boxes, though… I see it has ZFS now, and I think ZFS is great; I will be happy when they get all the licensing worked out and finally bring it to Linux. While Nexenta and ZFS on FUSE seem promising, for me they were not mature enough to use on my media server.

All that being said, if you do use FreeBSD either post back to this thread or PM me with your thoughts. Likewise do the same if you have any questions about my experience with any of the other media server OS's I tried.

After using many different OS's and deciding on Server 2008 (which was very stable for the months I used it), I changed over to unRaid when I upgraded to the Norco case, due to its increased ability to keep my data safe after multiple hard drive failures. To me, for a media server, that was the most important feature. unRaid is the only mature OS that I'm aware of that implements something like RAID 5 but with a single parity disk rather than striping the parity info across all the disks. (FlexRaid is also promising, but not mature enough for me.) I had a few scares with my RAID 5 array where I would boot and a drive or two would be missing. It turned out to be two failing drives, which I RMA'd after I had used other drives to repair the array. After those scares, unRaid was a no-brainer for me. Again though, everyone has different needs for a media server, and that's why they should choose the OS that best suits them.
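For anyone curious how a single dedicated parity disk can rebuild a lost drive, here is a toy XOR-parity illustration. This is the standard scheme that unRaid's approach is based on, not its actual code, and the tiny byte arrays stand in for whole drives:

```python
from functools import reduce

# Three toy "data drives", each holding a few bytes.
data_drives = [bytes([1, 2, 3]), bytes([4, 5, 6]), bytes([7, 8, 9])]

def xor_blocks(drives):
    """XOR corresponding bytes across a set of drives."""
    return bytes(reduce(lambda a, b: a ^ b, blk) for blk in zip(*drives))

# The dedicated parity drive holds the XOR of all data drives.
parity = xor_blocks(data_drives)

# Lose any one data drive: XOR the survivors with parity to rebuild it.
lost = data_drives[1]
rebuilt = xor_blocks([data_drives[0], data_drives[2], parity])
assert rebuilt == lost  # drive 1 fully recovered

# Lose two drives and only their contents are gone; drive 0 still holds
# its data untouched, which is the "limited loss" property above.
```

Because each data drive keeps a plain filesystem (rather than striping data across the array), a multi-drive failure only costs the files on the failed drives.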

I think WHS is great, and have a box running it, but ultimately found it too hard drive expensive to use it as a media server with redundancy.

Linux and Server 200X are great, with their own strengths and weaknesses, but Murphy's Law made me move over to unRaid for my media server. (I have crap luck; I figured even if I went with raid 6 I'd get a bad batch of drives and lose 3 after a few months of use, after the array was almost fully populated, lol.) Granted, if I did raid 6 arrays of around 8-10 drives I'd be very comfortable putting my data on them, but for a home media server for my personal use I didn't find this economical. To me, my build and choice of OS were "good enough".

I looked at FreeBSD, but I never used it. It's very hazy now why I decided not to, but I think it was due to its limited hardware support. I think there was another reason, but I can't remember now.

Tim,

I'll have to look into hardware limitations for FreeBSD. That's what I was planning on using. I know there are HUGE hardware limitations with FreeNAS, so I knew that I would have to use something totally different.

One question about your flash drive: did you ever think about using an Emphase FDM 4000X bootable IDE flash disk drive? They are available with up to 8GB of storage capacity. And with it being directly on the primary IDE channel, there's less setup in the BIOS.

I've been using those for my NAS boxes, and I only needed the 1GB models (I could have gotten by with even a 512MB drive for the FreeNAS OS, but wanted some extra space just in case). They do require a molex power connection, though.

I would think after spending all that money on this server that the no-brainer solution would be to stop using a software RAID solution and opt for a good hardware raid controller(s).

Hardware RAIDs are quicker, use fewer system resources, and are more stable and reliable than software RAIDs from an OS.

This setup is like buying a lamborghini but opting out of the V12 engine for a 6 banger instead.

That is a nice build. I have to agree the 20 MB/sec writes would drive me batty, especially after spending all that dough. Of course, it would have been even more $$ with a HW RAID card.

For those that are interested in HW RAID, but suffer from sticker shock, here is one of the cheapest 16-port RAID cards on the market I am aware of. It is made by LSI, a reputable mfg, and includes an Intel IOP80333 processor. It does all levels of RAID, online capacity expansion, RAID level migration, etc. It sells in the mid-$600 range. Caveat: I have not personally used this card, I have an even cheaper Highpoint 8-port card with similar features and an Intel IOP80341.

I think the OP is aware of his speed limitations and is comfortable with them. Also, the hardware could easily be upgraded should he wish.

What would be the cheapest way to build around that LSI card, and what OS would you use? I've been leaning toward unRaid due to ease of use, reliability, and a large support base; however, I'd be willing to consider a hardware-based solution if it's not TOO much more expensive. My current plan almost mimics the OP's, though with some cheaper parts (cheaper CPU and less RAM).

That is a nice build. I have to agree the 20 MB/sec writes would drive me batty, especially after spending all that dough.

That's my point entirely - spend all the money and then limit yourself to a software RAID

I run an Adaptec 5405 HW RAID controller on my web server, and I can't speak highly enough about it. With its dual-core processor this thing is a beast.

For a setup like the OP's server, the Adaptec 51245 probably would work, but I feel it is far too expensive for a home server; an Areca ARC-1260 would work just the same and give him MUCH better performance, hands down.

That's not completely true anymore for unRaid. Since it supports the use of a cache drive, you can simply copy data to that drive at typical network/hard-drive speeds, and that data is automatically transferred to the data disks on a nightly basis.
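The cache-drive idea described here is essentially a scheduled move from one fast, unprotected disk into the parity-protected array. A toy sketch of that nightly "mover" logic (hypothetical paths and logic only; unRaid's real mover also handles user shares, permissions, and files still being written):

```python
import shutil
from pathlib import Path

def nightly_mover(cache_root: str, array_root: str) -> None:
    """Move everything on the cache disk into the parity-protected array,
    preserving relative paths. Illustrative sketch, not unRaid's mover."""
    cache, array = Path(cache_root), Path(array_root)
    # Materialize the listing first so moving files doesn't disturb it.
    for src in sorted(cache.rglob("*")):
        if src.is_file():
            dest = array / src.relative_to(cache)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(src), str(dest))
```

The upshot: daytime writes land on the cache disk at full single-drive speed, and the parity-update cost is paid on the nightly schedule instead.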

Additionally, while the ARC-1260 is a nice card, the 8-port version is ~$800! I built my entire unRaid machine for that much. Not everyone has the money to spend $2-5k on a home media server. And even if you do, I'm not convinced it's always the most practical choice given people's actual needs.

In addition to my reply above, since I'm solely building a home media server and will most likely not stream more than 1 HD movie at a time, would there be ANY need for a hardware solution with the extra expense? I don't plan on having the need, or the ability, to support multiple HD streams for at least 3 years.

Many software solutions can easily stream multiple HD streams; I know for a fact unRaid is currently capable of this. Reading from your server is more a limitation of your network capabilities, while writing is a characteristic of your server components (to put it very broadly).

Sorry for the thread hijack! One more question, though: I currently run normal 10/100 routers and cards with Cat 5e cables. Should I upgrade to gigabit when I finally build the server, and will I need to change the Cat 5e over to Cat 6 cables? I'm still leaning toward a nearly identical build to the OP's, solely for cost purposes at the moment, but at least I'm opening my eyes to HW raid for future consideration.

I would think after spending all that money on this server that the no-brainer solution would be to stop using a software RAID solution and opt for a good hardware raid controller(s).

Hardware RAIDs are quicker, use fewer system resources, and are more stable and reliable than software RAIDs from an OS.

This setup is like buying a lamborghini but opting out of the V12 engine for a 6 banger instead.

unRAID is not representative of all software RAID implementations. It is possible to get significantly faster write speeds than what has been mentioned here. With Linux md RAID 6, I see read and write speeds of ≥100 MB/s using desktop hardware; with only gigabit Ethernet, I see no reason to try tuning any further (or to migrate to a faster file system than ext3).

Hardware RAID is not inherently more stable or more reliable than software RAID.

"System resources" aren't so important on a dedicated file server.
On my file server, software RAID 6 calculations are not the limiting factor. Bandwidth (~240 MB/s per array) is a significant bottleneck. During writes, more CPU time is taken by the ext3 file system driver than by the software RAID implementation…

The OP (Tim) put a lot of effort and work into detailing and documenting his build, and provided excellent pictures and parts links. It was his CHOICE to use unRaid. You need to respect that and not try to derail and pollute his thread.

There seems to be an ongoing crusade by a select few to ram raid down everyone's throat whether they want it or asked for it or not. And then once that happens, another wave of posts comes in, starting a holy war debate between hardware and software raid.

While raid has its uses, the fact is raid setups such as raid 5 and 6 are considered LIABILITIES with respect to safeguarding data, but this isn't the thread to discuss that. At the very least, respect the OP, show some kindness, and start your own raid discussion thread.

p.s. Tim, nice work. Thanks for all the details and pictures.
I'm sure it will help a lot of people.