Welcome to the (un)official guide to building a media storage server. Why is a "storage server" relevant today? Because media collections have outgrown what a few hard disks installed in an HTPC can handle. A centralized approach to media storage is especially relevant for homes with multiple displays (televisions) and multiple HTPCs, but it's also very cost effective for entry-level HTPC setups. Never having to handle physical media again is a great perk.

Building a storage server doesn't have to be expensive - you can start with something like a 20-bay Norco RPC-4220 case ($300), a PC Power & Cooling 750W power supply ($115), and a low-priced motherboard/RAM/CPU, then just add a few drives to begin with. The beauty of adding a drive at a time, each time your storage is almost full, is that you can leverage falling hard drive prices rather than populating all the bays at once. Just an example.

Hey odditory, great thread idea. I don't want to appear selfish (especially while being selfish), but I'd like to suggest that one of the reference profiles we work on in this thread be a "Streaming Server". The goal would be to define a system that has both maximum HD multi-streaming capabilities (able to serve many HTPCs concurrently) and minimal power requirements (so that it can be left on 24/7... without contributing significantly to global warming). Fundamental components for this system would be ~20 drives in a RAID array or JBOD and a mobo with onboard admin video and 2 NICs. CPU and mobo should be as minimally powerful as needed to manage the streaming capabilities while minimizing power demands.

Part of the thinking for this "Bit Beast" is that it will be an island of stability in a sea of HTPC change. I suspect we'll continue to see great changes in all the AV technologies, (requiring frequent updates to our HTPC builds), so separating this server from all of those changes will make it easier to keep it running safely and efficiently. My 2 cents worth anyway... though it does seem like there are many others with this same interest...

Quote:

Hey odditory, great thread idea. I don't want to appear selfish (especially while being selfish), but I'd like to suggest that one of the reference profiles we work on in this thread be a "Streaming Server". The goal would be to define a system that has both maximum HD multi-streaming capabilities (able to serve many HTPCs concurrently) and minimal power requirements (so that it can be left on 24/7... without contributing significantly to global warming). Fundamental components for this system would be ~20 drives in a RAID array or JBOD and a mobo with onboard admin video and 2 NICs. CPU and mobo should be as minimally powerful as needed to manage the streaming capabilities while minimizing power demands.

Part of the thinking for this "Bit Beast" is that it will be an island of stability in a sea of HTPC change. I suspect we'll continue to see great changes in all the AV technologies (requiring frequent updates to our HTPC builds), so separating this server from all of those changes will make it easier to keep it running safely and efficiently. My 2 cents' worth anyway... though it does seem like there are many others with this same interest...

See, in that one paragraph, you've put in enough "requirements", that there can be potentially many many different solutions. How do you decide which one is the right one?

For example, let's talk RAID/JBOD. With a target of 20 drives, your controller selection becomes way more muddled than if you needed only, say, 8 drives. For the most part, there are NO controllers other than hardware RAID controllers that offer that many discrete ports (not getting into port multipliers and/or SAS expanders). However, 20 drives doesn't automatically mean hardware RAID. What if you wanted to run Linux software RAID, but wanted to have 20 or more disks? Do you still go for an Areca/Adaptec etc.? It'd be a waste of money, wouldn't it, if you're not using the cards to their full potential? Or you could get multiple 8-port cards...

Now, that becomes even more muddled, because your slot requirements make the motherboard selection so much more complicated, and in turn the CPU selection. Most motherboards that have multiple "good" slots (i.e. at least PCIe x4 if not x8... or hell, PCI-X) are server-class motherboards, but they take server CPUs, which, yes, do come in low-wattage versions as well, but they are REALLY expensive...

And now, if you're running software RAID, your CPU can't be a dinky little CPU, since it has to do some work, so back to the drawing board...

Quote:

Hey odditory, great thread idea. I don't want to appear selfish (especially while being selfish), but I'd like to suggest that one of the reference profiles we work on in this thread be a "Streaming Server". The goal would be to define a system that has both maximum HD multi-streaming capabilities (able to serve many HTPCs concurrently) and minimal power requirements (so that it can be left on 24/7... without contributing significantly to global warming). Fundamental components for this system would be ~20 drives in a RAID array or JBOD and a mobo with onboard admin video and 2 NICs. CPU and mobo should be as minimally powerful as needed to manage the streaming capabilities while minimizing power demands.

Part of the thinking for this "Bit Beast" is that it will be an island of stability in a sea of HTPC change. I suspect we'll continue to see great changes in all the AV technologies (requiring frequent updates to our HTPC builds), so separating this server from all of those changes will make it easier to keep it running safely and efficiently. My 2 cents' worth anyway... though it does seem like there are many others with this same interest...

What you've described as a 'streaming server' is no different from the type of server we'll be talking about building in this thread - your examples are what just about everyone wants to do! For example, one of my 20-drive Norco RPC-4020 systems with a 750W P/S draws only 146 watts when idling with the array spun down. That's like one and a half 100-watt light bulbs left on 24x7 - I can live with that.

Anyway stay tuned for the various profiles of media servers we'll be creating - any of them will fit your description.

See, as far as I'm concerned (and that's just me, doesn't mean I'm right), hardware RAID cards are always overkill for a "home media server". Sure, if you're doing other things on this server, like work stuff or some serious video encoding or the like, then a hardware RAID card makes perfect sense. But if the objective is basically to have "fault tolerant" storage for digital media, for home viewing and distribution, there's absolutely nothing wrong with "software" RAID, whether that's RAID-stack based or pure software RAID in Windows or Linux.

CPUs today have become so much more powerful and so much cheaper compared to even 3-4 years ago that it's almost shameful to let all that power go to waste and spend money on a hardware RAID card instead.

An Areca 1680ix retails for $1250 or so. What?? TWELVE HUNDRED DOLLARS?? I'm sure it's needed in a business-critical environment, but for a home media server, three Supermicro AOC-SAT2-MV8 cards and Linux (or even Windows Server) software RAID will give you more than enough raw performance for streaming digital media throughout your home, if not part of the neighbourhood.

The downside to a lot of these software RAID implementations is flexibility. For example, Windows software RAID won't allow you to expand a RAID-5 array - you have to delete the array and recreate it - and Windows won't do RAID-6 at all.

Now Linux... is waaay better at RAID than Windows, and it does almost everything that hardware RAID does, but it does have a downside: it's a bi*ch to set up. Unless you are very, very familiar with Linux, terminal windows, and command lines, working with Linux can be a daunting task.
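For what it's worth, the basic Linux setup isn't many commands - here's a minimal sketch (assuming eight data drives at /dev/sdb through /dev/sdi; the device names, filesystem choice, and mount point are illustrative, and this has to run as root):

```shell
# Build an 8-disk RAID-6 array (survives two simultaneous drive failures)
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[b-i]

# Watch the initial build, then create a filesystem and mount it
cat /proc/mdstat
mkfs -t ext3 /dev/md0
mount /dev/md0 /mnt/media
```

The daunting part is less these commands themselves and more knowing what to do when something goes wrong.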

So, it all boils down to cost benefit, time, your technical abilities, money...the usual suspects.

Maybe there is a market for a pre-installed, RAID-6-ready Norco 20-bay case?

Some popular NAS solutions are also very expensive - a 4-bay unit can cost up to $1k.

Sure there is, but understand that most of the audience for this type of product is... right here (among other places). And we, as a general rule, try to compare the "cost" of a product with rolling our own. And rolling your own in most cases will win on cost, because the costs of carrying inventory, assembly, testing, infrastructure, marketing, and support all add a premium to the "roll your own" cost, and I'm not sure how many are willing to pay that premium.

IMO high availability is not a requirement for a home server, and the main reasons for using RAID are its other qualities - drive pooling, easy storage upgrade path, and hardware redundancy. These can be achieved using alternatives such as WHS, FlexRAID, unRAID etc, and I feel this would also be a good place to discuss these.

Ok, well I'm thinking I'd like to add another RAID array using an empty Norco 4020 case and an Areca 1280ML 24-port adapter.

I'd like to put the adapter in the current server, but I'm wondering what kind of cables I'll have to order...

Going by the pictures on Newegg, the 1280ML doesn't come with fan-out cables. Hmm, so it looks like I'll have to order a few of those, right?

What should I do about running the cables from one case to the other? Could I get away with 1 meter internal fan-out cables going from one case to the backplane of the second? Is there enough length for that?

Thanks kapone. Clearly this thread is going to be both fun... and valuable.

Quote:

Originally Posted by kapone

See, in that one paragraph, you've put in enough "requirements", that there can be potentially many many different solutions. How do you decide which one is the right one?

Exactly. I actually was trying not to specify an implementation. Just one of the basic scenarios that we might try to focus on. Another one might be the "power server" that does it all with multiple CPU chips, hw RAID, 4 NICs, etc. with minimal worries about power use. (Pretty close to odditory's original build) A 3rd scenario might be an "all-in-one" system that is both storage server and HD HTPC/DVR. (This is less attractive to me because I don't think it's a great idea to mix all the dynamic AV stuff that changes almost monthly with the data that I want to be very safe for many many years. This scenario would also be hard to protect streaming performance without a lot of hardware overkill... but some people might find it economical from a space/money standpoint.)

Quote:

For example, let's talk RAID/JBOD. With a target of 20 drives, your controller selection becomes way more muddled than if you needed only, say, 8 drives. For the most part, there are NO controllers other than hardware RAID controllers that offer that many discrete ports (not getting into port multipliers and/or SAS expanders). However, 20 drives doesn't automatically mean hardware RAID. What if you wanted to run Linux software RAID, but wanted to have 20 or more disks? Do you still go for an Areca/Adaptec etc.? It'd be a waste of money, wouldn't it, if you're not using the cards to their full potential? Or you could get multiple 8-port cards...

To start things off, I specifically did not want to (yet) specify how the disks would be "tied together" to form a single bit bucket. (But I do think it is important that there be a single logically addressable media store.) I have been looking a lot at the areca cards to manage my 20 drives, but if software RAID and a lot of CPU bandwidth can handle all the concurrent (4-8?) streams in this scenario, I think that would be great to explore. For power savings, staggered spin-up and inactive/timed spin-down are also important... and I'm not sure you get that without hardware assistance.
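On the spin-down side, Linux can at least do timed spin-down per drive in software via hdparm - a sketch, assuming a drive at /dev/sdb (the -S encoding is hdparm's own: values 241-251 mean (value - 240) x 30 minutes):

```shell
# Spin the drive down after one hour idle: 242 - 240 = 2, times 30 min
hdparm -S 242 /dev/sdb

# Or force an immediate spin-down
hdparm -y /dev/sdb
```

Staggered spin-up, on the other hand, generally does need controller or BIOS support.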

Quote:

Now, that becomes even more muddled, because your slot requirements make the motherboard selection so much more complicated, and in turn the CPU selection. Most motherboards that have multiple "good" slots (i.e. at least PCIe x4 if not x8... or hell, PCI-X) are server-class motherboards, but they take server CPUs, which, yes, do come in low-wattage versions as well, but they are REALLY expensive...

And now, if you're running software RAID, your CPU can't be a dinky little CPU, since it has to do some work, so back to the drawing board...

Storage architecture is never simple.

Exactly why I'm here and listening to people who know a lot more about this than I do! Hell, if this weren't so muddled, it would be easy and there'd be no interest in this thread.

Again, at this point I am not so interested in the implementations... that's something that I think we'll discuss, experiment with, and evolve over time. All I was trying to do is state a scenario that I thought many other people would love to build: a big fat file server that can handle the real-time streaming requirements for 4-8 concurrent HD movies without burning any more power than it has to. I think we should also stress data protection (RAID 6, backups, whatever), because it would be a huge headache to have to re-rip (and convert) all the movies that will go into this system.

One final note: I think power frugality is a big deal... even if you don't care about global warming... which I suspect most of us do. My electric rate is $0.33/kWh, and it will likely go up considerably over the life of this system. At my current rate, if this system draws 400 watts continuously, in a year that's about $1,156 in electricity... just for this one box. Without going through all kinds of other options and future possibilities, that's already a lot of motivation to spend money up front to cut down to the minimum power level that will still do what we need. All I'm trying to say is that "expensive" CPUs, mobos, or RAID controllers may not be as expensive as they initially appear... and I'd hate to discard them out of hand for their initial cost when they would make a lot of sense over the life of the system. (Sorry, climbing down from my soap box now...)
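The arithmetic is easy to rerun for your own wattage and rate - a quick shell sketch (integer math, so the dollar figure rounds down):

```shell
watts=400                 # continuous draw of the server
rate_cents=33             # $0.33 per kWh
kwh_per_year=$(( watts * 24 * 365 / 1000 ))
dollars_per_year=$(( kwh_per_year * rate_cents / 100 ))
echo "$kwh_per_year kWh/yr costs about \$$dollars_per_year"
# 3504 kWh/yr costs about $1156
```

Halving the idle draw roughly halves that figure, which is why an "expensive" low-power build can pay for itself.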

A standard gigabit connection is 1 Gbps, or roughly 125 MBps. Taking into account transmission losses, I doubt you'll break 100 MBps over the wire in a sustained manner, going continuously, whether that's a single transfer or X number of streams that add up to 100 MBps.

Even BD tops out at no more than 50 Mbps, assuming you were to stream a full-rate BD with full-rate audio.

That means a single gigabit connection can support almost 16 streams going at the same time, even taking into account transmission losses (50 Mbps * 16 = 800 Mbps = 100 MBps).
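That arithmetic, as a one-liner sketch (the 80% usable figure is an assumption for protocol overhead and losses):

```shell
link_mbps=1000      # nominal gigabit
usable_pct=80       # assume ~80% usable after overhead => 800 Mbps
stream_mbps=50      # full-rate BD with full-rate audio
echo $(( link_mbps * usable_pct / 100 / stream_mbps ))
# prints 16
```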

If your raid/storage solution can saturate a single gigabit connection, it's more than enough bandwidth for streaming, for even a very large house.

You can saturate a gigabit connection with almost anything, stack based raid, software raid, hardware raid, a single super fast FC drive...

So, then it all boils down to cost benefit, time, your technical abilities, money...the usual suspects.

Quote:

Originally Posted by kapone

Now Linux... is waaay better at RAID than Windows, and it does almost everything that hardware RAID does, but it does have a downside: it's a bi*ch to set up. Unless you are very, very familiar with Linux, terminal windows, and command lines, working with Linux can be a daunting task.

I actually set up my Fedora software RAID-5 arrays using Webmin both times, Windows guy that I am.
I did have a play around with the command line though, removing disks and rebuilding, to ensure I knew what to do in the event a disk should fail BEFORE copying any data onto them.
For me, a PIII 700 and a basic 4-port SATA card is adequate and fast enough for what I need.
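The failure drill described above is only a handful of mdadm commands - a sketch (the array and disk names are illustrative, run as root):

```shell
# Mark a member as failed, pull it from the array, then re-add it
mdadm --manage /dev/md0 --fail /dev/sdc
mdadm --manage /dev/md0 --remove /dev/sdc
mdadm --manage /dev/md0 --add /dev/sdc

# Watch the rebuild progress
cat /proc/mdstat
```

Rehearsing this before any real data is on the array, as described above, is good advice.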

For what it's worth, I would definitely like to see the cost/benefit analysis of using a good TOE GigE add-in NIC vs. onboard. After seeing several benchmarks showing the Intel Pro 1000GT dual port hitting 91MBps on each port, I would imagine that adding in even a basic "server" grade NIC will remove the gigabit ethernet bottleneck so you can focus on other challenges.

nVidia needs to remember what a hard launch is and apply it to what's left of their motherboard chipset department. That is all.

Quote:

For what it's worth, I would definitely like to see the cost/benefit analysis of using a good TOE GigE add-in NIC vs. onboard. After seeing several benchmarks showing the Intel Pro 1000GT dual port hitting 91MBps on each port, I would imagine that adding in even a basic "server" grade NIC will remove the gigabit ethernet bottleneck so you can focus on other challenges.

Yes, add-in NICs do help...and a lot. But so does Intel I/OAT (available in the 5400 chipset, and some versions of the 5100 and 5000 chipsets as well).

This is a Vista workstation to Windows Server 2008, with I/OAT available and configured on the server. The server has dual onboard gigabit ports (Supermicro X7DWE).

There's going to be a lot of input, obviously, about the form and structure to this thread before we even get into substantive discussion. I think this thread is a fantastic idea.

For me, I just set up a Windows Home Server. I used an internal five-port multiplier and an internal 2-port SATA card, which gives me the option of using 10 drives. It's in a Lian-Li case with no DVD drive, an OS hard drive, and a 650-watt PSU. I stream BD fine over my wired, non-gigabit network. My pictures, docs, music, movies, and recorded TV are on the server. It's totally fine for my needs.

I didn't want to take the time to figure out Linux. My Eee PC is enough for me, and I love it, but I didn't want to learn terminals, commands, etc. in any greater detail, as kapone mentioned above.

Thus, it seems to me that prior to any hardware talk, we should consider the OS options for folks. Some people just want something they can turn on and forget about, build fairly cheaply, and operate smoothly. The OS talk could incorporate this as well as the redundancy/backup/RAID functions of the given OS (e.g., WHS's folder duplication vs. unRAID vs. Server 2003 vs. Drobo vs. Fedora 9, etc.). I think this would be helpful.

Honestly, the hardware part seems to be the easier part of the equation for me.

Odditory, I am on board with this thread - fantastic idea, and you rightly strive to mirror the tenor and focus of renethx's thread. The reason it's so successful is that it's succinct, he or she is responsive, and it's easy to read/discern/use. A chart would be useful, perhaps!

I look forward to contributing in any way I can!

Alex

EDIT: For example, in kapone's post above, he shows the copying speed on his large file. No way will I get to those speeds with WHS on a 100Mb connection (if I should be, please let me know why I'm not!). But for some folks it may not be that big a deal. As long as movies, TV shows, and BD can be streamed, maybe it's not a deal-breaker. I mean, don't get me wrong, if it's easy enough to set up (and purchase) to get 85MB/s speeds, then I have some reconfiguring to do.

Quote:

A 3rd scenario might be an "all-in-one" system that is both storage server and HD HTPC/DVR. (This is less attractive to me because I don't think it's a great idea to mix all the dynamic AV stuff that changes almost monthly with the data that I want to be very safe for many many years. This scenario would also be hard to protect streaming performance without a lot of hardware overkill... but some people might find it economical from a space/money standpoint.)

This is the approach I have taken. The advantages are space, cost effectiveness, energy efficiency (one server instead of server + HTPC), little need for 24/7 operation, just turn it off with the remote when you're done watching.

The potential downsides are heat and noise. I paid attention to fans and use a well-ventilated case, so this has worked out fine.

While I agree that HTPC silliness does prompt one to often reconfigure and install/uninstall software (especially at this early stage in the Blu-ray follies), there are ways to mitigate the resulting issues:

1) Norton Ghost. I can't say enough about this program. I keep backup images of my OS drive on the RAID-5 array. Before I install a potentially destructive or buggy piece of software, I create a fresh image. If the new SW fails miserably, I don't even bother with uninstall (which we all know does not always work completely); I just re-image the hard drive.

2) I set up Windows Vista to store 'My Documents' and other key folders like email on the RAID array. A drive restoration does not touch my data, just the software.

3) I use hardware RAID. Strange happenings with the OS or software just don't have much of an effect on the array.

4) I keep a backup of my most critical data (family photos and movies) on a separate drive in addition to the RAID array. This is actually just a second partition on the 1 TB OS drive. Norton Ghost automatically keeps it up to date.

I believe this approach strikes a good balance for my needs. A minimum of cost, complexity and energy use while storing 4 TB with ports to expand to 8 TB without port multipliers.

Quote:

All I was trying to do is state a scenario that I thought many other people would love to build: a big fat file server that can handle the real-time streaming requirements for 4-8 concurrent HD movies without burning any more power than it has to. I think we should also stress data protection (RAID 6, backups, whatever), because it would be a huge headache to have to re-rip (and convert) all the movies that will go into this system.

+1.

I think data safety is most important here.

For a 10-20TB server, I couldn't imagine how long it would take to re-rip all the Blu-rays.

Quote:

IMO high availability is not a requirement for a home server, and the main reasons for using RAID are its other qualities - drive pooling, easy storage upgrade path, and hardware redundancy. These can be achieved using alternatives such as WHS, FlexRAID, unRAID etc, and I feel this would also be a good place to discuss these.

I agree. Grab some old hardware you have lying around and install WHS. Done. It's what I use and it works flawlessly. And the drive pooling along with the ability to use drives of any size make it very flexible.

I don't know about others but I have a ton of old drives all of different sizes. So this works out great for me. I have a coolermaster stacker case. An old P4 socket 478 board and some PCI promise add on cards and I can connect a dozen drives.

Trying to build the server using Windows Home Server. I heard there is no software RAID working with WHS yet? So is it better to go with Win2K8, or better to stick with WHS?

Can anyone let me know the other components I need? I am pretty confused about what I need to make sure I can connect 20 HDDs. Because I am not going with hardware RAID, I am trying to cut down the cost here by not buying 16-slot RAID cards, etc.

What I am trying to achieve here is to use the server as DAS for my HTPC, and as NAS for copying DVD and Blu-ray rips from my desktop upstairs down to the basement, where the HTPC and DAS will be located in my home theater area.
Any input is appreciated.

Quote:

For me, I just set up a Windows Home Server. I used an internal five-port multiplier and an internal 2-port SATA card, which gives me the option of using 10 drives. It's in a Lian-Li case with no DVD drive, an OS hard drive, and a 650-watt PSU. I stream BD fine over my wired, non-gigabit network. My pictures, docs, music, movies, and recorded TV are on the server. It's totally fine for my needs.

EDIT: For example, in kapone's post above, he shows the copying speed on his large file. No way will I get to those speeds with WHS on a 100Mb connection (if I should be, please let me know why I'm not!). But for some folks it may not be that big a deal. As long as movies, TV shows, and BD can be streamed, maybe it's not a deal-breaker. I mean, don't get me wrong, if it's easy enough to set up (and purchase) to get 85MB/s speeds, then I have some reconfiguring to do.

I get a sustained 50-60MB/s transfer speed on my WHS. I have a gigabit network and I also had to ditch the onboard ethernet and instead got an Intel Pro 1000GT card for a few bucks. Oh I could stream HD fine with the onboard ethernet. But the Intel just offered that much better speed.

Quote:

Originally Posted by honeybrain

Trying to build the server using Windows Home Server. I heard there is no software RAID working with WHS yet? So better to go with Win2K8? or better to stick with WHS?

There is no RAID at all. And it's not needed. It has duplication to protect against drive failure. It's easy to get up and running, and you don't need to use drives of the same size. People have talked about running RAID within WHS, but it's not supported. And doing anything that isn't supported may result in data loss and absolutely no support from Microsoft should something go bad.

Quote:

There is no RAID at all. And it's not needed. It has duplication to protect against drive failure. It's easy to get up and running, and you don't need to use drives of the same size. People have talked about running RAID within WHS, but it's not supported. And doing anything that isn't supported may result in data loss and absolutely no support from Microsoft should something go bad.

All my video is on my WHS. Including HD video. It works great.

That sounds good. All I need is to make sure the data is protected, and that I can replace disks without losing data. I am not interested in RAID either - I only asked the question because I am not really aware of these RAID concepts, nor the WHS concept either.

I'm just about ready to begin ordering parts for my home server. I was planning to use the 2008 version of the 15 TB (16 HDD) Home Media Server, recommended by renethx in the 'Guide to Building a HD HTPC' stickied thread as a starting point and adjust it given current offerings:

Quote:

Originally Posted by renethx

15TB (16 HDD) System

The first system is a 16 HDD system in a tower case which is also rack-mountable by removing foot stand and top cover and attaching rackmount bracket (optional accessory).

If you build multiple servers in future, choosing a rack-mountable chassis from the beginning may be a better idea.

Supermicro AOC-SAT2-MV8 is a PCI-X card. However it works fine with a PCI slot and the performance is good (at least as good as a single disk; it's natural considering the bandwidth of PCI is 133MB/s, higher than most single disks).

Is there anything obvious about that build that is in definite need of changing, either because of better hardware or more cost-effective alternatives? It would be used mainly for storing media files to stream to a Vista Premium PC and MC extenders. I expect to use Windows Home Server, but would consider a Linux-based solution (for the server or even the PC, ala MythTV) if that could work better/cheaper in this situation. Thanks in advance!

While WHS has its strengths, it's a colossal waste of space as far as storing any decent-sized media collection is concerned. Duplication is about the worst thing you can do for large media collections. While the ability to use different-sized drives is great, you'll probably be better off just selling or donating those drives and upgrading to larger ones.

Why? Spindle count. The more spindles you run, the more power it takes. 10TB built from twenty 500GB drives is half as efficient as ten 1TB drives, and a third as efficient as seven 1.5TB drives, as far as power consumption is concerned.

Take an example of a simple 7TB target of usable space.

- With WHS and duplication, you'll need 14 1TB drives, and will still need a controller with 14 ports, even if it's dumb. Cost = 14 * $130 ≈ $1,820, plus the cost of a 14-port controller - at the very least $180 or so, whether that's a single controller or two 8-port cards. Total cost ≈ $2,000.

- Even with software RAID, take eight 1TB drives and a cheapass Supermicro MV8 eight-port SATA controller, and you have 7TB of usable space. Total cost = 8 * $130 + $94 for the controller = $1,134.
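The comparison above, as a quick shell sketch (the drive and controller prices are the approximate figures quoted here, not current ones):

```shell
drive=130                         # approx. price of a 1TB drive
whs=$(( 14 * drive + 180 ))       # duplication: 14 drives + ~$180 of controller
raid=$(( 8 * drive + 94 ))        # RAID-5: 8 drives + one $94 MV8 card
echo "WHS with duplication: \$$whs, software RAID: \$$raid"
# WHS with duplication: $2000, software RAID: $1134
```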

Quote:

While WHS has its strengths, it's a colossal waste of space as far as storing any decent-sized media collection is concerned. Duplication is about the worst thing you can do for large media collections. While the ability to use different-sized drives is great, you'll probably be better off just selling or donating those drives and upgrading to larger ones.

Why? Spindle count. The more spindles you run, the more power it takes. 10TB built from twenty 500GB drives is half as efficient as ten 1TB drives, and a third as efficient as seven 1.5TB drives, as far as power consumption is concerned.

Take an example of a simple 7TB target of usable space.

- With WHS and duplication, you'll need 14 1TB drives, and will still need a controller with 14 ports, even if it's dumb. Cost = 14 * $130 ≈ $1,820, plus the cost of a 14-port controller (at the very least $180 or so). Total cost ≈ $2,000.

- Even with software RAID, take eight 1TB drives and a cheapass Supermicro MV8 eight-port SATA controller, and you have 7TB of usable space. Total cost = 8 * $130 + $94 for the controller = $1,134.

Yes, but when you already have a bunch of drives of different sizes lying around it works out well. I have nowhere near that kind of money to buy that many 1tb drives. And WHS has numerous other features that I also find useful such as the ability to do bare metal restores quickly, the remote access via a web browser, it's great. And right now I only need 3tb of usable space which I have and I still have 5 empty drive bays.

Sure, duplication is not as efficient as RAID, but that's fine. I don't duplicate everything - not the stuff that I can replace easily. RAID has its own faults, such as bit rot.

I also buy drives in certain price ranges. Back when 160gb drives were in my price range I bought some of those, then 250gb drives came down in price, then 500gb, then 750gb and now 1tb are down around what I would spend. So I have a bunch of all these drives. If I don't use them for this they would sit in a closet and collect dust. And I certainly can't sell used drives and get any amount of money from them that would actually help me.

So whatever storage solution I use it must use drives of different sizes and it must pool them together.

"I don't duplicate everything, not the stuff that I can replace easily."

Understood, but what defines "replace easily"? All of my DVDs, HD DVDs, and BDs (among lots of other things) are ripped to my server. If the server goes up in blue smoke, can I replace the rips? Sure, but it's gonna be a pain in the butt and is gonna take a LONG time to rip them again. Doesn't mean I don't have the "ability" to do it, I just don't have the time or the inclination to spend that much effort again.

That's the whole point of "fault tolerant" storage, isn't it? To not have to re-rip/copy/move stuff again, if a hard drive dies (and they will). So, if you're using WHS and NOT duplicating your data, you're at the mercy of the life span of a single disk. While that may be acceptable to folks, it's not acceptable to "me". I have seen too many HDD failures over the years.

Yes, ZFS "seems" to be a very, very good solution as well, but it has its drawbacks too. And a lot of them.

Quote:

Originally Posted by JimsArcade

I'm just about ready to begin ordering parts for my home server. I was planning to use the 2008 version of the 15 TB (16 HDD) Home Media Server recommended by renethx in the 'Guide to Building a HD HTPC' stickied thread as a starting point and adjust it given current offerings.

Is there anything obvious about that build that is in definite need of changing, either because of better hardware or more cost-effective alternatives? It would be used mainly for storing media files to stream to a Vista Premium PC and MC extenders. I expect to use Windows Home Server, but would consider a Linux-based solution (for the server or even the PC, a la MythTV) if that could work better/cheaper in this situation. Thanks in advance!

Welcome to the thread, JimsArcade. Yes, there is something VERY obvious as not cost-effective: WHY spend over $600 on a tower case and 3 x 5-in-3 expanders to get 15 drive bays, when you can buy a Norco RPC-4020 case for $289 and it comes with 20 drive bays built in? If you need rack rails, add the Norco RL-26 rail kit for $37.

I hate to sound like a Norco cheerleader, but they've got a ridiculously low price point relative to other ways of achieving that number of drive bays. I think this case has become the easiest decision when putting a parts list together. Back in January, when I first bought a Supermicro 24-bay case for around $1000, I thought *that* was the deal of the century, since prior to that, cases with as many bays cost multiple thousands and usually came only from bigger-name vendors.

As for the rest of the parts list compiled by renethx, I think it's a bit dated and there are better alternatives - I'm working on a new list right now. One of the things that's important to me is a motherboard with as many PCIe slots as possible (at least 3 or 4) for multiport SAS/SATA cards. I also have a personal preference for Intel-based CPUs over desktop-class AMD chips when it comes to running server OSes (unless we're talking the more expensive AMD Opteron chips, which are good).