Posted
by
CmdrTaco
on Monday January 16, 2006 @05:14PM
from the what-do-you-recommend dept.

It happened again- a machine on my home network died. Taking with it tons of data. It's mostly backed up. No huge loss. But I finally think it's time to get some sort of network raid disk. A unified place to safely store data accessible to the numerous machines on my home lan. So now I pose to Slashdot readers- what are your recommendations? I'm looking for something with RAID and SMB sharing. At least a quarter TB, probably a half, but with some room to grow. What have you used? What works? What fails?

As CmdrTaco, I'm sure you have money coming out of your ears that you've harvested from the pseudo-religion that is Slashdot.

But for those of you with fewer fiscal resources, I will tell you the stories of my friend and me, a.k.a. The Master Rebaters.

My story is a simple one. I love music. I have over 1,000 CDs and have spent a lot of time meticulously ripping them with my friend CDex [sourceforge.net]. So, I have some 350-400GB of data that I would like to archive. There are a multitude of possibilities but, since I'm short on cash, I opted for a simple $13 RAID 1 controller [geeks.com]... I know, I know, I'm going to catch hell for using such a crappy generic product. And I know many people who will tell you that VIA is crap when it comes to RAID controllers. Maybe you're one of them. If you are, I hear that the brand Promise provides excellent RAID controllers, you'll just pay a whole lot more for them. A couple of these babies [newegg.com] in RAID 1 [wikipedia.org] and you're set.
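For scale, that 350-400GB figure is easy to sanity-check. A back-of-the-envelope sketch (the per-disc size is my rough assumption for a lossless rip, not a measured number):

```python
# Back-of-the-envelope check on "~1,000 CDs = 350-400 GB".
# MB_PER_DISC is an assumption: lossless rips (FLAC/APE) of a
# typical audio CD tend to land somewhere around 400 MB.
CDS = 1000
MB_PER_DISC = 400

total_gb = CDS * MB_PER_DISC / 1024
print(f"~{total_gb:.0f} GB")  # roughly 391 GB, right in the quoted range
```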

My friend, however, opted for a huge and expensive RAID 6 array controller made by Promise. Then he waited and waited until there was a 250 GB Maxtor rebate at CompUSA [compusa.com] or Outpost [outpost.com] and went in and bought five with cash. Then he filled out the rebates for relatives and played the waiting game. Huge initial investment, but he received a lot of money back slowly. The result: a 1.1-1.2 TB RAID array. He got a lot more storage and more efficient use of the disks, since RAID 6 stripes data with dual parity, so a failed drive can be rebuilt in place and the array even survives two simultaneous failures.

What he wasn't planning on was the logistics of what he would have to do to his Antec case as a result of all these drives. Fans. Airflow. Heat. These all became huge issues for him--especially in the summer. I'm not sure what your situation is with a case but I made no alterations to my case.

Now, there's a lot of things I skipped over that you can take into consideration, like SATA or ATA? 7,200 RPM or 10,000 RPM? 8MB or 16MB buffer? Striping size? etc. Honestly, those issues aren't worth my time to mess with. Sure sure, I'm losing precious ms seek/read time on my disks but I'm not that motivated.

In the end, if you're only looking for half a TB, do what I did. Those 500 GB drives will only get cheaper and if one blows, just pop another in. And if you really need that room to grow, grab the nice RAID controller that supports RAID 0-6 and just use two 500GBs leaving the other three slots open for the future when you might buy them and RAID 6 it.

First of all... almost nobody sells RAID 6 devices. I'm aware of only one company that does, and it's not Promise. It's an oddball configuration and I can't see where it would be all that useful. The common RAID configurations are 0, 1, 5, 0+1, 10, and 50. Second, Promise can never be considered a "nice" controller. It works, it's fairly cheap, but it's consumer-grade stuff.

I wouldn't bother with a cheap RAID controller. Go with md raid in Linux. It's free, you never have to worry about finding the same

RAID6 [wikipedia.org] is not at all an oddball configuration. It uses dual distributed parity to ensure that there is no data loss in the array even if any 2 drives fail simultaneously.
Although I agree that I've never seen RAID6 controllers from Promise, Newegg has some [newegg.com] from Areca, though you'll pay quite a bit for them.

Lots of people seem to mention the 3ware cards, but at that price I'd rather get the nice Areca ARC1220 instead (which is also PCIe - no PCI-X req'd)

I'm looking for a similar solution, but even though these cards look very nice, I'll definitely go with software RAID5 too; those controllers are too expensive... I'd rather spend the extra money these controllers cost on more storage (that $500 will buy around 1TB).

By the time your $3000 tape drive "pays off in the long run", I'll be buying 5TB disks for $150. Redundancy and mirroring is the way to go these days. It doesn't much matter if your drive fails if it's just one drive in a RAID that is mirrored in other places on other completely independent RAIDs. In any case, ATA/SATA drives don't fail all that much more than SCSI, at least in the first 3 years*.

Even at work where we store multiple terabytes of business critical data, we use SATA. We just keep 3 independ

Whatever you do, you MUST be protected from accidental deletion and corruption. That means you need a backup, which RAID is not. Now, assuming you maintain a separate backup, why waste disk space on a separate "hot" backup, which RAID (other than 0) provides? If this is home use, you don't care about the downtime required to restore from backup in the event of a disk failure.

If you're like me, you don't want to buy a bunch of identical disks at once for home use. Instead, you have a range of larger newer disks and smaller older disks. This means the disks you want to use are NOT all the same size, as required by RAID (AFAIK). Instead, you can use LVM [tldp.org] with linear mapping to combine smaller drives into one larger one, even if the physical drives are mismatched sizes. Create one logical volume for live data and one for backup, and do nightly updates of the backup. You probably don't want/need to compress the backup if the bulk of your files are already compressed media files.

I have mod points, but I feel it's more important to just correct you. He already has everything backed up and the LVM idea doesn't do anything to help his situation.

He does care about downtime. Downtime = time spent restoring. With a RAID level > 0, all he has to do is replace a drive and tell the raid to rebuild. He's done in 5 minutes. It would take that long just to queue up a restore job for the tape.

There are some great posts on this topic in a past Slashdot discussion (Taco should've done his Googling ffs, it was only 2-3 months ago that the discussion in question was on Ask Slashdot). The discussion in question: http://ask.slashdot.org/article.pl?sid=05/11/26/0337226 [slashdot.org]

The basic idea:

Split drives into small partitions, say 20-25 GB each. Since most drives available now are a multiple of 50GB, I suggest going with 25GB or 50GB per partition. Make software RAID devices out of sets of these partitions, one on ea

Why is this so hard?

Problem 1: Disks die.
Problem 2: They aren't free.
Problem 3: I can't lose any data at all, ever, not even the stuff from the last 24 hrs.

Oh wait, home users and 99% of businesses don't have problem 3, even if they think they do.

So you get yourself a nice cheap low-power box with a big disk (or disks). Since it's on 24x7, maybe even consider some of the 2.5 inch disks, or if you need speed (which you don't) get the 10k 2.5 inch disks. Then you get a few external cases with FireWire and/or USB2. Set up cron jo

LinkSys NSLU2 [linksys.com]. Plugs into your home network (10/100). Then you get yourself 2 IDE drives and 2 USB 2.0 enclosures and plug them in. Then you can set it to periodically back up one drive to the other. Sure, it's not as bullet-proof as RAID5. But it's dead simple, cheap, and it just works. Failure recovery is dead simple. Also, the system has some of the same flexibility as the Buffalo TeraStation. (Plug in your friend's USB 2.0 drive when he comes over.)

Also, with this scheme, you can delete a file and change your mind. (Recover from the back-up before the weekly copy job.)

And, if this is too simple for your geek quotient, it's Linux-based [batbox.org] and hackable [tomsnetworking.com]!
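That drive-to-drive scheme is basically a one-way mirror on a schedule. A minimal sketch in Python, as a toy stand-in for whatever copy job the NSLU2 actually runs (the function and paths are made up for illustration):

```python
import filecmp
import shutil
from pathlib import Path

def mirror(src: Path, dst: Path) -> None:
    """One-way mirror of src onto dst. A file deleted from src by
    mistake can still be recovered from dst until the next run."""
    dst.mkdir(parents=True, exist_ok=True)
    # Copy anything new or changed.
    for item in src.rglob("*"):
        target = dst / item.relative_to(src)
        if item.is_dir():
            target.mkdir(parents=True, exist_ok=True)
        elif not target.exists() or not filecmp.cmp(item, target, shallow=True):
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(item, target)
    # Prune entries that no longer exist in src (deepest paths first).
    for item in sorted(dst.rglob("*"), reverse=True):
        if not (src / item.relative_to(dst)).exists():
            if item.is_file():
                item.unlink()
            else:
                item.rmdir()
```

Run it nightly or weekly from cron; the window between runs is exactly your "change your mind" window.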

I had a similar experience with the YellowMachine. It was advertised as 1TB RAID, but the fine print reads "on some models" and the ones I could actually find only gave 650G in RAID5 configuration. No big deal. But it's slow. SLOW. It's got an ARM processor that runs at 100 bogomips and it has 64M of RAM. Mounting is slow. ls is slow. I got it to store media files but I find I can't play mp3s from it unless I tell my player to cache the whole song (and forget about crossfade). It came with telnetd running an

Hardware vs. software RAID: The $13 card you purchased is software RAID. Promise cards are mostly hardware RAID. I recently purchased a Promise FastTrack S150 SX4-M, a hardware RAID5 card, for less than $100, compared to the $30-50 software RAID5 cards. I'm pretty satisfied with the purchase, but unfortunately there isn't room for much upgrade. I currently have 4x160GB in a RAID5 configuration, giving me 480GB of space and one disk of redundancy.

Some useful links to tell you the difference between software raid and ha

Interestingly enough, you'd spend more on your electrical bill over the course of 2-3 months than you've "saved" by not buying some new gear. You do get an alternate heating source out of the bargain...

Not to mention that the cheapest floppy drive I found on quick Newegg and Froogle searches was $5.00 (and that's not even USB, but let's go with it). Let's just assume that with that sort of volume you can pick them up for $1.00 a unit. That's still $150,000 just for the drives, and you still have to worry about cables, floppy media, hubs and controllers. I believe one USB root hub can only control 127 devices, including hubs, so let's say about 100 floppy drives and 30 hubs per controller; that means 1,500 USB controllers plus 45,000 hubs (powered, of course). This means you have to run power to every floppy drive, or at the very least to every hub, so you have quite possibly several miles of cable to deal with, and some way to generate all this electrical power. (Hint: your standard home fusebox probably won't handle it. Assume half an amp per device at 12 volts; that's 6 watts per drive, or almost a megawatt total assuming good efficiency, so expect on the order of 7,500 amps at standard household 120V just to power the drives.) Then you have to dissipate the extra heat. And a place to put a square of 400x400 floppies, plus all the auxiliary equipment, won't be free.

So, we're easily talking on the order of... a million dollars in equipment, labor and other expenses.
Oh, and this is just talking about RAID 0. If any of those 150,000 floppies fails, the whole array fails. Even with massive redundancy you will still need at least a full-time employee going around swapping in floppies when one fails. Not to mention you'd need to multiply all the original costs by the amount of redundancy, plus overhead (we're talking having to hire managers and middle managers to coordinate the whole process).
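For the record, the back-of-the-envelope arithmetic above holds up (the per-drive current draw is the parent's assumption):

```python
# Redo the floppy-RAID numbers from the post above.
DRIVES = 150_000             # $150,000 at $1.00 a unit
FLOPPY_MB = 1.44
WATTS_PER_DRIVE = 0.5 * 12   # half an amp at 12 V = 6 W

capacity_gb = DRIVES * FLOPPY_MB / 1024     # what you get for all that
total_kw = DRIVES * WATTS_PER_DRIVE / 1000  # "almost a megawatt"
amps_at_120v = DRIVES * WATTS_PER_DRIVE / 120
controllers = DRIVES // 100                 # ~100 drives per USB controller
hubs = controllers * 30                     # ~30 hubs per controller

print(f"{capacity_gb:.0f} GB, {total_kw:.0f} kW, {amps_at_120v:.0f} A,"
      f" {controllers} controllers, {hubs} hubs")
```

All that misery buys you barely 211 GB; a single 250 GB disk wins on every axis.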

Why choose a card (and the requisite set of drivers and/or other software) instead of a box that manages the RAID for you and presents a single drive to the host (like Raidweb [raidweb.com] boxes)? I don't work for Raidweb, but I know some of their customers and the people I know are satisfied with the devices.

If a home media jukebox drive fails, who will be at home to replace a drive with a cold spare? Do people normally build their card-based systems with fallback power supplies and a hot spare?

> Yes, but he was saying that it has 2 SATA ports, and RAID is not supported on the PATA ports. So how do you get 4 drives into the array?

Partitioning, man, partitioning. I have a nice 4G drive partitioned into 4 1G disks and am running RAID 5 on it. Not only are read speeds increased, but if one partition fails I'll only have to replace that partition. Amazing! :)

You have 4 extra ports to expand your RAID if you need to, or you could get bigger hard drives. I think 3Ware cards can support up to 2TB of HD space, so that gives you some expandability. Plus you have a RAID5, which has fault tolerance built in.

Matched drives give you better performance but they are not technically required. Some RAID cards might have checked for this, but none that I have worked with. 3ware specifically leaves a chunk of each drive unused so that different drive sizes can be accommodated. I have a 4-drive RAID 5 (2 Maxtor, 2 WD) on a 3ware 95xx and it works fine. Cheap Windows mirror-and-stripe software "RAID" controllers probably have this issue, but it should work fine putting a larger drive in to replace the failing unit, as their logic

Write performance: insignificant. He said it was for archival use, so presumably it's a lot of reading and not so much writing. Besides, any reasonable RAID should be faster than a single disk, and with just two or three drives you'll be fast approaching the upper limit of gigabit Ethernet (I'm presuming that Taco's house isn't wired with InfiniBand, though I suppose it might be).
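The GigE-limit claim is easy to check with rough numbers (the per-drive throughput here is an assumed 2006-era figure, not a benchmark):

```python
def effective_read_mb_s(n_drives: int,
                        per_drive_mb_s: float = 60.0,  # assumed 7200 RPM rate
                        link_mb_s: float = 125.0) -> float:
    """Sequential read of a striped array as seen over the network:
    aggregate disk bandwidth, capped by the link (GigE ~= 125 MB/s)."""
    return min(n_drives * per_drive_mb_s, link_mb_s)

print(effective_read_mb_s(1))  # one disk: the disk is the bottleneck
print(effective_read_mb_s(3))  # three disks already saturate GigE
```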

Multi-disk failure: Well, you can still lose your RAID-10 if two disks from the same mirrored pair fail, so you're spending a lot of money and not really gaining a whole lot; the 33.3% figure only applies to a 4-disk RAID-10.

If you've got 4 disks and are concerned about 2 failing, go RAID-6. You get the same capacity as the RAID-10 would get you (disk capacity * (n-2)), and you also have a 0% chance that 2 failed disks will take the array down. To increase capacity, you just need to add one disk at a time, too (after the initial 4), as opposed to the RAID-10, where you have to add in [at least] pairs.
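The capacity claims in this sub-thread are easy to sanity-check; here's a quick sketch of the standard usable-capacity formulas:

```python
def usable_capacity(level: int, n: int, disk_gb: int) -> int:
    """Usable space for n identical disks of disk_gb each."""
    if level == 0:
        return n * disk_gb         # striping, no redundancy
    if level == 1:
        return disk_gb             # everything mirrored
    if level == 5:
        return (n - 1) * disk_gb   # one disk's worth of parity
    if level == 6:
        return (n - 2) * disk_gb   # two disks' worth of parity
    if level == 10:
        return (n // 2) * disk_gb  # mirrored pairs, striped
    raise ValueError(f"unsupported level: {level}")

# Four 250 GB disks: RAID-6 and RAID-10 tie on capacity (500 GB each),
# but RAID-6 survives *any* two failures, RAID-10 only some pairs.
print(usable_capacity(6, 4, 250), usable_capacity(10, 4, 250))
```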

ReadyNAS [infrant.com] is reported to be a better choice than Buffalo. There is a Tom's Networking review of the ReadyNAS 600 [tomsnetworking.com] that compares the two fairly well. It costs a bit more (~$1,100) for the same amount of storage, but it's worth it if the quality is that much better. Also, I've been told you can have two of them where one remotely backs up the other... which allows for disaster recovery if the physical location of the original is destroyed.

I have the ReadyNAS x6 [infrant.com], and I love it to pieces. It just sits there and serves my media (runs SlimServer out to my Squeezebox, no more PC involved). It's been up a couple of months with no problems at all, although I'm starting to fill up.

For backups I run some nice Plan 9 [bell-labs.com] magic - the Venti [bell-labs.com] archiving file server. No-hassle incremental backups, snapshots of previous days, identical-block compression, and so on. It's been ported to Unix (and so runs on my Mac), and provides more peace of mind (coupled with the RAID) about my data than I thought possible.

I don't think CmdrTaco submits his entries on his own blog to other editors for review. That said, I think he specifically wants the latest and greatest from the Slashdot crowd, probably because he values the opinions of those here more than those of the random internet as a whole.

Personally, I bought a lower-grade PC a few years ago, stuck a big drive in it, and installed Xandros 2.0 (because I wanted to try Linux, and Xandros was easy for this hardware-not-software-not-computer-tech engineer to install).

I've been looking on-line trying to find this sort of possibility and the only prefab system I've found that has configurable RAID in a consumer NAS is the Buffalo Terrastation [buy.com]. I've seen lots of NAS devices but basically they are all just a single hard drive with a network connection.

I have not used one of these and do not know if it's any good, but like I said, I haven't seen any other options for a prefab system. I've priced out what it would cost to roll my own system like this and it ends up being only a tad more expensive to get a prefab device. Actually, I think the price dropped on the terrastation so I'm not sure that's true anymore.

Also, if you get something like this, you should seriously consider upgrading to gigabit Ethernet if you haven't already. I have a network mounted share for most of my files and it works pretty well, but when I try to do things like synchronize my ipod against it, it totally crawls. Having a networked file server works better if it doesn't feel like your files are on a network.

1. SMB only? NFS is faster and plain better, but only for Mac/Linux.
2. Noise/size/power constraints.
3. Price.

SMB only, moderately cheap, quiet and small: go for a TeraStation from Buffalo. Easy to set up, runs decently, 4x250 drives that can be RAID-5'd into a 750GB array. Costs about $800.

A good midlevel solution is an nForce4 motherboard with 4x250GB SATA drives; total cost around $600 with CPU, memory, etc. You need a decent case though, and it will be bigger and noisier. The plus side is better performance, full customization, and the ability to use it as a router or such. You will have to configure it yourself, and likely throw Windows on it, because the nForce RAID support is tricky on Linux for a novice.

I use a heavier 2TB solution myself with a HW RAID card, but for most purposes a SW RAID is better, and the performance difference is almost never noticeable. Personally I recommend the Buffalo if you don't need NFS, just for the size, quietness, and convenience.

I'll skip the whole "you forgot about the inventors of NFS [sun.com]" whining and just point out that "better" is highly subjective. First of all, you can do password authentication with Samba. With NFS it's by uid only. While that's convenient if you're exporting home directories where all the machines can trust each other and are running UNIX, if you want users to be able to mount or browse shares themselves then Samba is the way to go. Also, Samba allows you to share print

Have you tried SFU? It's really crappy. Troublesome to setup and use with major reliability problems. It's workable in a pinch but clearly designed to discourage using it. It just says "We have this feature but look how crappy it is compared to a pure Windows platform." all over it.

Figure out what your data's worth, come up with a budget for protecting it, and go from there. Without a reasonable budget, nobody can intelligently recommend specific solutions.

For myself, I'm using a VIA Epia motherboard (quiet, extremely low power consumption) with an Areca 4-channel SATA RAID controller with four Seagate 7200.9 250GB drives in a hot-swap enclosure with extra cooling, configured in a RAID-5 array (all of my data), and two WD 160 GB drives on the IDE channels in a RAID-1 configuration (

As seen on Saturday over on RootPrompt [rootprompt.org],
Inventgeek [inventgeek.com] is running an article
The Poor Man's RAID array [inventgeek.com], written by Jared Bouck.
It's built out of SCSI drives and a RAID controller card.
The appliances that the company I work for ships use dual SATA drives,
the Linux MD driver and LVM2, though. I still haven't worked out whether
the rumours that SCSI drives are better built and have a greater MTBF
are true - they certainly cost a lot more for smaller capacities.

What self-respecting geek doesn't get the warm fuzzies at the mere mention of RAID? With the rising GB-to-dollar ratio, we felt it was a good time to feature a project that takes Pure Geekiness(TM) and mixes in a good helping of do-it-yourself. Where else are you going to store all those MP3s (legally obtained, of course)? On a single 200 GB drive? Or a RAID 5 array? Take your pick, I know where I will be storing mine.

As for the solution, the cheap and easy option nowadays is to simply use stock motherboards - most will accommodate 4 SATA drives and up to 4 PATA drives with no extra work - and run Linux with software RAID on them. It's still a problem to boot from a RAID disk, so one can be set aside for that purpose. Motherboards have GigE nowadays, so speed is not limited by the network link. 300 GB drives are cheap, making a 1.5 TB server affordable if you acquire it piecewise over the course of a year or two.

It happened again- a machine on my home network died. Taking with it tons of data. It's mostly backed up. No huge loss. But I finally think it's time to get some sort of network raid disk. A unified place to safely store data accessible to the numerous machines on my home lan.

RAID could help with downtime, but it is not a substitute for backup, really. Tape backup is still very expensive (high initial cost), and DVDs are limited in both quality and storage capacity. Well, I use both, but then my storage needs are slight, since I burn my most important data to a DVD-RAM disc every night.

RAID (Redundant Array of Inexpensive Disks) gives an opportunity to use multiple drives to give better performance, capacity and/or redundancy than one can get out of a single drive alone. While a full discussion of the benefits and risks of RAID are outside the scope of this article, there are a couple points that are important to make here:

* RAID has nothing to do with backup. * By itself, RAID will not eliminate down-time.

If this is new information to you, this is not a good starting point for your exploration of RAID.

No, RAID is not backup- but RAID does let you not need your backup (read this carefully before you flame me for saying you don't need backups; if your computer explodes and takes out all the drives in your RAID array, you're SOL without backups, but if you are reverting to backups because one hard drive failed, RAID will/would have fixed your problem.)

Your assumptions are too narrow. Again, RAID is not backup. What do you do if the RAID controller card goes bad? Or a defective PSU toasts the hard disks? Or you delete the wrong file? (RAID won't help you here...)

Most home users are better served by having an extra hard disk that they back up to (which lets them recover accidentally deleted files) than by RAID. There are many programs to do that automatically. Of course, burn (high quality) DVDs regularly of the most important data.

I have the same situation. At least one of my hard drives dies every year. They just get worn out. So what I've done is build a server out of an old Pentium-233 I had laying around. It's not a speed demon, and only has UDMA-33. But my network is only 100mbps anyway, and it does the job perfectly fine for things such as a Subversion repository/Samba server. It runs Ubuntu 5.10 without an X environment, and has a single 300GB hard drive that I fully intend to beef up with an identical drive in RAID-1 at a

The requirements need to be a bit clearer. Do you want something that sounds like a small mouse? A riding lawn mower? How about a two-story jet engine turbine fan? Are you willing to spend $500? $1000? Important things to consider.

The only non-obvious thing (i.e. a lot of people are telling you to do the wrong thing) is that you should use software RAID instead of hardware RAID. The cheapest CPU that you can buy, will still be 99% idle.

A less non-obvious thing (but some people still forget it) is that you want a well-cooled machine, because heat is what kills hard disks. Get a nice case; pretend you're building a machine that you wanna overclock like a 31337 h4xx0r, but then of course, don't really overclock it.

Oh yeah, and keep an eye on /proc/mdstat -- when your first disk dies, you want to know it happened, instead of finding out a year later when your second disk dies. (I use a lil' Python script that displays the array status on a VFD using lcdproc. But there are lots of other ways to deal with it. Just make sure you deal with it somehow.)

I use a 3ware 7810 and 4 250GB IDE disks in a RAID 5 configuration. The controller can be had for $200 or so on Ebay and works quite well (though you may want to use a 7850, somewhat better RAID 5 performance).

Buffalo makes a product called "TeraStation" (or something similar in naming). It can do RAID5, and can expand using USB 2.0. The only problem is that it uses Western Digital hard drives, which are in my experience prone to sudden death syndrome, more suddenly than newborns. (I've had about 9 out of 17 WD drives die, mainly when the partition table mysteriously disappears... of course, IBM Deathstars have the all-time record... 7 out of 9, and not just partition table death... but entire drive death.) I dunno i

I have used Promise and 3Ware controllers on server setups (and they worked great), but now that software RAID has matured in Linux, I plan to save some money for my home setup and use software RAID. I found an external case for 4 drives, without hot swap (which I'm told doesn't work that well with Linux software RAID anyway), for like $150. A 4-channel SATA controller is like $60, and the multichannel bracket that breaks the external case's output back into 4 single SATAs is like $80 or so.

Recently I was also shopping around for a storage solution. At the store, I saw a promising looking device called the Netgear SC101. You pop any two IDE drives into it, plug an Ethernet and power cable in the back, and you have yourself a NAS. Because you can pick out your own drives, you can even do a terabyte in a cheaper and much smaller unit than 4 x 250 GB units like the Buffalo Terastation.

Unfortunately, where this device failed for me was that it doesn't just share the stuff as a SMB share like a real NAS box does. It uses some weird proprietary protocol, and only machines with the right drivers installed can talk to it at all. Such drivers aren't available for Linux, or Mac, or BSD... even versions of Windows that are old (98, ME, etc.) or 64-bit won't work. It has to be a 32 bit version of Win 2k3, XP, or 2k with the right service pack level for the drivers or no data for you.

No self-respecting geek would want a device with such limited compatibility. If a piece of network equipment only lists Windows in its compatibility, that normally means the manufacturer only officially supports Windows, or maybe you need Windows to set up and administer the thing. When even many versions of Windows can't access the device, it's a junker. I took it back the next day, and will start researching hardware purchases more carefully in the future.

In short, Netgear's short-sighted decision to use some strange proprietary protocol instead of SMB turns this unit from something I would have strongly recommended into something that gets a definite thumbs down.

Infrant Technologies [infrant.com] has two great products, the ReadyNAS 600, and the ReadyNAS X6. The difference is that the X6 does all of the configuration for you and the 600 is more user controlled.

I own the X6 and love it.

- Its GbE is very fast.
- It supports RAID-5 with up to 4 drives (mirroring on 2 drives).
- You can just keep adding bigger drives, so it'll be highly expandable down the road.
- Supports SMB, NFS, FTP, etc.

What exactly are you worrying about, and will RAID protect against it all? I think maybe not. Some things RAID will *not* help with:
1) Theft of the machine
2) PSU failure in the machine (this happened to me, and fried every single drive with 240V on the 12V rail!)
3) Lightning (could kill every machine in your house)
4) Fire
5) Simultaneous multi-drive failure
6) Catastrophic OS failure (filesystem corruption, conveniently mirrored), or a worm/trojan/virus

RAID does give you convenience, slightly better performance, and ease of repairing the

ReadyNAS X6 [infrant.com] is very nice. It has support for up to 4 SATA drives and can grow the RAID array if you want to start with only 2 drives. I would recommend this with the 400GB Western Digital SATA RAID drives.

I went with a 3ware 7506-12 PCI-64 card. It supports up to 12 parallel IDE drives. For drives, I did the rebate thing: kept buying Seagate 300GBs over a few months whenever Frys or some other shop had them for under $100 (they can be had for that now without rebate). I have a dual A2200 motherboard, a Tyan Thunder K7 S2462 with 64-bit PCI slots. Right now I've got a 6-drive array that dupes all my individual drives; I'm testing the system for reliability and stability before I move live data to it. I will have to mo

We just got a couple of Buffalo TeraStation [buffalotech.com] units at work. The software that comes on the CD is a piece of junk, but the unit itself seems good. The major drawback I've heard about is that it's really slow in RAID5 mode. Not too big a deal for us, as it's a cache sitting in front of tape, so it's still a faster backup medium. It's obviously running Samba in the background, but it doesn't support NFS mounts. I don't know if that's a big deal for you or not.

The other company I've heard about is Infrant [infrant.com]. Similar setup to Buffalo, only instead of being mistaken for a Bose subwoofer, it looks like a small radio circa 1920. It claims an impressive set of awards, but I don't know if it's any faster in the RAID5 department than Buffalo.

But, for home backups that are occurring overnight, and if you're not pushing 100+GB at a time, you're probably good with either. They're both, depending on capacity, between $800-$1,500.

This might not be perfect for the original poster's needs, but it works great for mine, which are somewhat similar. Basically, the Linksys NSLU2 is a little box with an Ethernet port and two USB 2.0 ports. It runs a variety of Linuxes; mine runs Debian. You can learn about the open source side of the device here: http://www.nslu2-linux.org/ [nslu2-linux.org]

You can hook up several hard drives (or other usb toys) via a usb hub. Performance is not great, but totally fine for storage of music and movies if you only have a few users on your network. It supports samba, ftp, nfs, http, probably any other way you'd like to access the files. You could do software raid or some other type of mirroring/backup if you'd like.

The main reasons I really like this thing for an at home server:

Silent operation, no fans in the nslu2 and you can get fanless enclosures for the HDs

Takes very little space away from your home office

Very small power draw

Easy to add/remove drives without any reboots

Can power off drives that aren't used frequently, then turn them on when needed

I was amazed at how quiet my office became after replacing my PC file server with this guy and PC firewall with a wrt54g. I could actually hear the gf talking again, which is the only downside so far.

I can't believe anyone would recommend anything else for a geek besides the NSLU2!!

It runs Linux, so you can replace the firmware [nslu2-linux.org]. Not only do you have a NAS device, on which you can mirror disks, but you can then basically add on whatever you want, e.g. firewall, web/mail/file server, music center, VOIP PBX, NFS as well as Samba, etc.

So I have an NSLU2 at home. Had it for about a year. The length of time the thing has been actually useful is maybe two days. Let me give you the counterpoint...

- Silent operation, no fans in the nslu2 and you can get fanless enclosures for the HDs

Make sure it's an aluminum case at least. And be prepared to try several different ones until you find one that works well.

- Takes very little space away from your home office

No, other than the six thousand cords you've got hanging off the back of it to plug in these external drives.

Oh, and don't accidentally disconnect a cord. The NSLU2 doesn't support anything approaching Plug and Play. You'll likely damage data on the drive, but the most annoying thing is you've got to shut down and restart the whole thing.

- Very small power draw

True.

- Easy to add/remove drives without any reboots

Not in my experience.

- Can power off drives that aren't used frequently, then turn them on when needed

Again, not in my experience. This is most likely going to lock up the whole thing so it stops responding.

The other problem with the NSLU2, besides the speed (might as well hook it up to a 10baseT hub, 'cause it can't fully utilize 100baseT), is that if you do try to transfer a large amount of data (say 15 gigs of MP3s), more likely than not the whole thing will lock up on you.

In short... The NSLU2 is unreliable, for a variety of reasons mostly having to do with software, but also having to do with the external drives and the lack of support for hot plugging USB devices. The NSLU2 is slow. The NSLU2 is a pain to manage on the table because of all the cords hanging out of the thing. The NSLU2 is not well supported by Linksys, they periodically release firmware updates but 9 times out of 10 they don't help. The NSLU2 is particular about what type of USB enclosure you use, as well as even what drive, so it's hit or miss whether it will work.

To be fair, I did look at buying a Netgear SC101, and everything I have read indicates that it's even worse.

I ended up just taking my drives and sticking them on my computer and leaving them there. I thought it would have been nice to have this running all by its lonesome in another room with some batch scripts periodically replicating data over to it. But it's simply not reliable enough.

I've been meaning to try to sell my NSLU2 on eBay. Maybe someone who wants to install their own copy of NSLU2 Linux on it can have some fun. But it's not a good device for a SOHO server, that's for certain.

The difference between what you experienced and what the parent poster experienced is that you were running the stock firmware, and the parent poster was running Unslung or Debian or any one of the other replacement firmwares for the slug.

The linksys firmware might suck, I don't know, having scraped off the linksys dreck immediately upon plugging the device in.

You might just give Unslung a shot. It's easy, and it fixes most (all?) of the complaints you've mentioned.

> Make sure it's an aluminum case at least. And be prepared to try several different ones until you find one that works well.

I have four: two are Adaptec and two are different CompUSA house-brand cheapo things. Never had any problems; I didn't know that it mattered or I might have skipped the cheap ones.

> No, other than the six thousand cords you've got hanging off the back of it to plug in these external drives.

Well... it's not 6,000, but it does take enough to be a bit of a hassle. I have 4 HDs connected, so of course there are 5 USB cables (one per HD and one from the slug to the hub). Then you have power cords for the slug, the hub, and the HDs: that's 6 more, or 11 cords all together, and then the Ethernet out from the slug for 12 total, I think. 6000 > 12, but still, I guess if you don't like cords, 12 could really freak you out.

> Oh, and don't accidentally disconnect a cord. The NSLU2 doesn't support anything approaching to Plug and Play. You'll likely damage data on the drive, but the most annoying thing is you gotta shutdown and restart the whole thing.

Not on my slug. ReiserFS avoids any nasty problems with damaged data, and I have only restarted my slug once (new kernel) since I installed Debian over 6 months ago. Of course, I haven't tried disconnecting the root drive; that would probably not work out well. But all the other drives are regularly turned off or disconnected, with no problems at all. You need to use disk labels since the device numbers can move around a bit; otherwise it's been perfect.
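For anyone who wants to copy the disk-label trick, a rough sketch (the label, device name, and mount point here are made up; substitute your own):

```shell
# Label an existing ReiserFS volume once:
reiserfstune -l music /dev/sda1

# Then mount by label instead of device name, e.g. with this /etc/fstab line:
#   LABEL=music  /mnt/music  reiserfs  defaults  0  2
mount LABEL=music /mnt/music
```

The point is that `LABEL=music` keeps working no matter which /dev/sdX the drive shows up as after you unplug and replug it.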

> The other problems with the NSLU2 besides the speed(might as well hook it up to a 10baseT hub, cause it can't fully utilize 100baseT), is that if you do try to transfer a large amount of data(say 15 gigs of MP3s) more likely than not the whole thing will lock up on you.

Man, you must have gotten the crapmaster slug from hell. I literally filled the first 300GB drive I connected in a single FTP session, with not one error or problem. I regularly dump large amounts of data to/from the device and have never seen any problems. Sure, it's not exactly fast, but plenty fast for 2 users to watch DivX off of at the same time. The only time I saw a performance problem was when I tried to do a native compile of a new kernel on the box itself... it took several hours due to excessive swapping, and during that time video was choppy every once in a while. Still better than I thought it would be.

> it's not a good device for a SOHO server, that's for certain.

All I can say is "sorry about your luck". This thing rocks as a SOHO server!!

1. Buy a large tower case. Or use an old one. Whatever. Make sure there are lots of 3.5" drive bays.
2. Put in some kind of crappy, low-heat motherboard and CPU. Use the Celeron 300A you bought back in 1998. Whatever. Pop in 128MB RAM or so.
3. Buy a large, name-brand PSU. Enermax, Seasonic, PCP&C, something like that.
4. Put in some kind of crappy boot drive. The 10GB drive that probably went with the Celeron 300A will be fine. Load Linux or Windows Server. Whatever makes you happy (yes, Windows Server will run on 128MB, especially if it's not doing anything but serving files).
5. Install a multiport IDE or SATA controller. Sil, Promise, Via, whatever. They're all OK. You want to be able to handle at least four drives. I prefer SATA at this point, 'cause I like big drives.
6. Speaking of big drives, 250GB disks are dirt cheap. Buy four of those. I prefer Samsung and Hitachi drives. We're using spanned 250GB drives 'cause 500GB drives by themselves cost four times as much.
7. Configure a nice spanned, mirrored volume (RAID10 or the like). Two copies of a 500GB volume will be just fine. I prefer to use software RAID, in case I have to move the disks to another machine that doesn't have the same controller, but if you have a hardware option for RAID10, more power to you. Remember that RAID mirroring doesn't protect you from your own stupidity, and cheapo PCI disk controllers never do RAID volume management.
8. Or don't mirror, and just use the second volume as a backup destination for the first.
9. Stick the resulting PC in a closet someplace. Administer with VNC or SWAT or RDP or whatever makes you happy.

Total cost for this project is probably $500 or $600, almost all due to the hard disks.
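For the software-RAID route in step 7, the Linux version looks roughly like this (a sketch only; the device names and mount point are assumptions, so check yours with `fdisk -l` first, and note these commands need root and will destroy whatever is on those disks):

```shell
# Four 250GB disks -> one mirrored, striped 500GB array:
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

mkfs.ext3 /dev/md0                 # put a filesystem on the array
mkdir -p /srv/share
mount /dev/md0 /srv/share          # this is the directory you export via Samba

# Record the layout so the array reassembles itself on boot:
mdadm --detail --scan >> /etc/mdadm.conf
```

The upside of doing it this way, as the poster says, is that the array is a kernel construct: plug the disks into any other Linux box and `mdadm --assemble --scan` brings it back, no matching controller required.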

Alternatively, you could use an NSLU2 + a 500GB drive in a USB enclosure. That would also be a $500 setup, and there's no redundancy there.

A few months ago I finally came across Infrant's ReadyNAS X6 box. The specs read like just what the doctor ordered - everything I wanted seemed to be there. I got it, and after 3 months of use I am not disappointed. I purchased 4 300GB Maxtor MaxLine drives and got about 850GB of NAS disk space. I use it as primary storage for MythTV, for backing up two laptops [rsync], and (obviously) for the rest of my data, which is now much safer on RAID. The box runs Infrant's custom Linux distro and (I think) a 350MHz Motorola CPU. It has a dedicated XOR chip. Array upgrades are seamless - you can start with just a single disk, then go to RAID 1 (add another disk), then RAID 5 (3 or 4 disks).

The only thing that I was hoping would be better was write speed - I get about 15MB/s sequential write and 25MB/s sequential read. After some digging, I get the feeling this is actually a problem with the network card not being able to keep up with packets. If that's the case, I might be able to pop another network card into the one available PCI slot.

As far as price goes, Infrant's box and 4 300GB drives cost me under $1K USD which seems quite reasonable. I highly recommend taking a look at this unit if you are considering purchasing NAS.

BTW, I am in no way affiliated with Infrant, just a satisfied customer:)

The ratings and reviews on their homepage http://www.infrant.com/ [infrant.com] say it all. This thing blows a Terastation away in terms of ease of use, supported protocols, and goodies. Buy an empty ReadyNAS X6 from http://www.eaegis.com/ [eaegis.com] for $579 (no tax, free shipping). Fill it with two of whatever drive is dirt cheap this week (cough-newegg-cough). Here's the kicker... the ReadyNAS will expand the drive array automatically each time you add a drive. So buy a couple of 300GBs for $100 each and you'll have 300GB of mirrored storage. A few months from now, when you run out of room, you just drop in another 300GB drive and now you've got 600GB of redundant storage. Add another drive and you'll have 900GB with redundancy. Still need more room? Replace those 300GB drives one at a time with higher-capacity drives and watch it automatically resize the set to use the extra space. Without ever having to rebuild the array! Trying to back up a TB of data so you can move your NAS from 300GB drives to something higher really sucks the big one.

Of course it does CIFS (SMB). But it is one of the only NAS products to support the Apple Filing Protocol (AFP), which is a must for networks with Mac OS X users who insist on using filenames with colons, slashes, question marks, and other things that make CIFS/SMB explode. It also supports NFS and rsync for the UNIX/Linux crowd and both FTP and HTTP for the web browser crowd (hi, grandma). It also streams in both flavors of home media server protocols (UPnP and HMS) so you can buy a $100 Linksys media extender and watch anything you have stored on your RAID. It also has a SlimServer plugin for streaming music to those SlimServer devices that you can hook up to your stereo or a cheap pair of speakers.

It also supports Gigabit with jumbo packets (write only, currently) so you can copy 200GB of HD camera footage to the NAS in a couple of hours instead of a couple of days. The RevB case is cable-less, with just thumbscrews between you and swapping a drive. It also holds the drives vertically, because who is the idiot who thinks stacking heat factories horizontally on top of each other is a good idea? Also, I can't tell you how many RAID products only let you specify an alert SMTP server name but no authentication information, which means e-mail alerts don't get delivered (boo Promise, boo 3ware). The ReadyNAS has its own MTA so the mail gets through without a problem, and it can also let you set a login/password to authenticate to your ISP's SMTP server. It looks nice and clean, and it's certainly not the noisiest thing I've had in my room, although I will be happy when future firmware lets you put the drives to sleep so the case fan can be completely turned off when you aren't using it.

I spent three weeks shopping for a NAS for my network, and I'm glad I looked past everyone telling me Terastation. I've had this ReadyNAS X6 for a few weeks now and I love it. I'm already shopping for a second so I can recycle the old drives from all my other rag-tag household systems into one nice neat package.

Oh, I almost forgot... just about the coolest feature is that the thing has a PCI slot and two USB ports. This means you can add a wireless card, or a FireWire card if you want to use FireWire storage devices, or a wireless USB adapter, or even USB storage devices and printers!

With a USB printer connected to the back, the ReadyNAS works as a print server. If you add USB storage (almost everyone already has a USB drive kicking around somewhere), then that storage is available as a volume on the ReadyNAS. You obviously can't use it as part of the RAID, but it is fantastic for loading up a drive of movies to take over to someone's house or for bringing data from other homes/offices to back up on the RAID.

The ReadyNAS can also be configured to automatically copy data from any flash storage to a specified directory. So you have a camera with a CF or SD card, right? Get a USB card reader, and every time you plug your camera's flash card into it, it will copy the pictures over to your /Pictures volume so you can pull them up on your Media Center in the living room.

Since the underpinnings are all Linux, it's a sure bet that the PCI and USB ports will provide all sorts of cool amazing things as time progresses. I fully expect that you'll someday be able to add a second NIC and have the ReadyNAS function as a firewall... sorta like that big ugly yellow banana slug NAS that was reviewed on here a few months ago.

Once you understand that RAID is a reliability strategy, and are prepared to have appropriate backup measures in place, RAID 5 becomes an attractive option for the home network. I've recently looked at several options.

LaCie Biggest Disk [lacie.com] - Cheap but of questionable reliability. Since RAID systems should be reliable above all else, I would rule this out.

Buffalo TeraStation [buffalotech.com]: An interesting product but again reviews are pretty mixed.

In my case, a three-disk RAID 1 solution proved more appropriate than RAID 5. I value high reliability on the home system and wanted to use a rotating third disk as a backup in the event of catastrophic data loss (e.g., the house burns to the ground). FWIW, I also use a DAT drive for differential backups. For many users this may be overkill -- sacrificing three disks plus fixed hardware costs to greatly reduce potential data losses -- but for priceless coding projects and digital pictures, this might be good for you as well.

For some users working with video or with large audio collections, much larger disk systems may be desired. First make sure that you have an appropriate mechanism for backing up a terabyte or three. Then the Vanguard V5 may be an excellent solution, if the $2-3k price is acceptable.

get a 1u or 2u rack mount case, a couple of samsung 250gb drives, a mini-itx mobo, openbsd, and roll your own. i started out in '98 using linux for my personal home server (redhat>suse>debian>redhat>fedora>openbsd), and without a doubt openbsd has been the most stable and the least problematic... i've been using it for the last 18 months, and the only reasons for rebooting were when i experienced an extended power outage, when i moved, and when i added a new hard drive (because of a noisy case fan). http://www.doink.org/geeklog/public_html/article.php?story=20051212224355152 [doink.org]

BIG TIP!!! get a frackin' UPS! i'm currently using an ancient APC smart 2200, but i've had fewer flakey problems the last three years i've been running with a UPS, and i think a lot of it is just having clean power... of course my sysadmin chops might have gotten better as well, but i'm pretty sure clean power goes a loooonnnng way.

finally, as far as file sharing is concerned, i prefer netatalk cause i'm a long time mac user (as is my wife) and i've been a sysadmin in the graphic arts for a long time. netatalk 2.0.x works very nicely on openbsd. but you should run whatever file sharing (netatalk, smb, nfs) is most conducive to your client OS.

i can't tell you which backup/archive is gonna be the best for you... if i could run legato networker on bsd cheaply, i would. i'm leaning towards bru for the time being, but i'd like to explore amanda some more.

I've been putting together the specs for such a beast. I decided to go with SATA for cheap drives and "SATA-II" (or whatever you want to call it, since there isn't a standard name for NCQ and 3.0Gbps support) for future-proofing.

1) The natural first choice was 3ware: the 12-port SATA-II controller [3ware.com] (9550SX-12), for about $800. 3ware products are very well supported on Linux. The only downside is that it's a PCI-X device (this is NOT "PCI Express"!), and PCI-X buses are generally only found on very high-end motherboards for servers and workstations. Any Athlon motherboard or single-processor Opteron board claiming to have PCI-X is lying; they really mean PCI Express (AMD chipsets did not support PCI-X at all until around the time dual-Opteron motherboards were being created).

So since I didn't want to spend $500 on a motherboard that had built in scsi raid, support for 16GB of ram and dual opteron processors just to use that $800 card, I looked around some more...

2) And found a serious contender, the 12 port Areca 8x PCIe ARC-1230 [areca.com.tw] (also about $800). While most low end motherboards don't provide an 8x PCI Express slot, they DO provide a 16x slot which will work just fine for this card (after all, this will be the fileserver, so a motherboard with crappy built in video will do, we're not playing Doom 3 here). Linux drivers are provided as source, even including a kernel tree patch which will build the driver into the kernel rather than as a module, making booting directly from the RAID controller easy.

Slap the Areca into Tom's Hardware's 37-watt computer [tomshardware.com] (the motherboard has built-in GigE, but Pentium Ms are 32-bit processors, making giant files/filesystems a pain; an Athlon 64 plus a cheap micro-ATX board can be had cheaper, but uses more power), add in a stack of 10-watt 400GB WD Caviar RAID Edition 2 [wdc.com] drives, and you're set for a very low-power fileserver with a lot of storage.

Now, my turn to "ask slashdot":

Where do I get a 250-300 watt powersupply with 12 SATA power connectors?

Alternatively, do the SATA drive cages (like 3ware's RDC-400-SATA [3ware.com] (PDF)) have their own SATA power connectors built in and use standard molex connectors on the outside? Do I need special cages to support 3Gbps drives (OK, not a serious problem for now, but futureproofing)? 3ware's website says it'll work; their product PDF doesn't.

> Where do I get a 250-300 watt powersupply with 12 SATA power connectors?

You don't need one. All the current drives have molex power connectors too, right? If you are unsure, check the specs. Hitachi's OEM data sheets are great in that regard, since they tell you everything.

Then get a bunch of molex Y-adaptors, they're really cheap. I haven't seen SATA power Ys yet, but hopefully that's just a matter of time.

Take a good look at the current requirements for the drives though. At 12 drives you're heading into the region where most PSUs won't supply enough current. The startup current for 12 current hitachi sata drives is 1.8*12=21.6A at 12V, and most PSUs are only rated at 12-18A.

Watch the 5V rail too: the current draw at "max r/w" load is 1.3A on both 5V and 12V (on those Hitachi drives). Even beefy PSUs in the 600+W range usually only have 20-30A at 5V, even when they have 3x18A at 12V. That's probably enough for 12 drives, but if you want to scale up you can run into stability problems.

I know this, since I just put together a machine with 18 drives in it, and had lots of power trouble at first.
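The arithmetic above is worth doing for your own drive count before buying a PSU. A quick sketch, using the Hitachi figures quoted (1.8A spin-up per drive at 12V, 1.3A per drive at 5V under max r/w load):

```shell
# Per-drive figures from the Hitachi OEM data sheet quoted above
drives=12
startup_12v=$(awk -v n="$drives" 'BEGIN { printf "%.1f", n * 1.8 }')
max_rw_5v=$(awk -v n="$drives" 'BEGIN { printf "%.1f", n * 1.3 }')

echo "12V startup current: ${startup_12v}A"   # beyond most PSUs' 12-18A 12V rating
echo "5V max r/w current:  ${max_rw_5v}A"     # within a typical 20-30A 5V rail
```

Swap in the numbers from your own drives' spec sheets; spin-up current is the figure that catches people out, since all the drives draw it at once at power-on unless your controller supports staggered spin-up.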

..I'm getting an old Compaq rackmount server with a boatload of disk for nothing. Apparently, it's no longer useful because it isn't a multi-gigahertz platform. LOL!

My plan is to slap Solaris 10/x86 on it, fire up SVM, and do a RAID 10 disk set with two hot spares. Hopefully, that will last me long enough that Sun T3s will come into affordability for home users.

Why SVM? Well, simple -- I use it all the time at work, and it will require minimal effort to make it work. Assuming, of course, that SVM on Solaris 10 x86 works the same as it does on Solaris 9/SPARC. The last time I ran Solaris x86 (version 7), I don't think it had the option to run DiskSuite (now called SVM).

In other words, "I want one really good basket to keep all of my eggs in." What... did they stop teaching problem-analysis in the CS dept after I graduated from Hope? {smile}

Might I humbly suggest that you buy/build/salvage a pair of inexpensive computers, each with a fair amount of RAM, a hard drive (or RAID 0) of your desired capacity*, and the fastest NIC your switch can handle. (Forget the fancy RAID controllers, and of course anything better than a PCI VGA card is wasted.) Install the open-source OS of your choice on both. Turn on Samba on one of them: that one's your file server. On the other one, set up a nightly cron job to sync (without deletion) the shared directory on the first machine to its local copy of that data: that's your redundancy.

This solution effectively protects you from fried electronics, accidental deletions, and even small fires if the boxes are in different parts of the house (and if the whole house is going up, you get your choice of which box to run back in and rescue), scenarios in which the really-good-basket approach will still scramble your eggs.

*Consider getting different brands to reduce the likelihood of near-simultaneous failure. I've had multiple drives from the same lot start failing within months of each other, and you don't want to have a second drive failure while you're still browsing for a replacement for the first.

I don't know about all you wackos with the $600 CPUs and $600 RAID controllers at home, but I have better things to do with my money. Like invest it rather than spend it on useless trinkets, theoretical seek-time figures, and unused gigaflops.

Here's my brew:

1. Old PC. Any one will do, probably even a good ol' P1. 128MB RAM is more than enough. I consider this FREE. I run a dual PIII-450MHz that I had lying around.
2. 4x [BIG-SATA-DRIVE]. How big? When I built mine, the highest bang-for-buck was 250GB. So I went with 4 of those.
3. 1x PCI SATA controller.
4. 1x PCI GbE NIC.

[3] and [4] are peanuts. [2] is worth, what, $500?

The entire rig will easily give you ~10-25MB/sec, which is plenty for any home use I can think of, including pumping 10GB files over the network.

1. Plug in any crap old 2GB-or-greater IDE hard drive for sport (or two, and do yourself a RAID 1 configuration).
2. Install Linux.
3. Install Samba.
4. Configure RAID.
5. Set up health checks that email you if something in /proc/mdstat is wrong.
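The /proc/mdstat health check in the last step can be a tiny script. A sketch (the email address and cron wiring are up to you; the sample text is the format the md driver actually uses):

```shell
#!/bin/sh
# In /proc/mdstat a healthy two-disk mirror shows "[UU]"; an underscore,
# as in "[U_]", marks a failed member. So a degraded-array check is just
# a grep for underscores.
check_mdstat() {
    if printf '%s\n' "$1" | grep -q '_'; then
        echo "RAID DEGRADED"
    else
        echo "RAID OK"
    fi
}

# Example: a degraded RAID 1 as it would appear in /proc/mdstat
sample='md0 : active raid1 sdb1[1] sda1[0]
      244195904 blocks [2/1] [U_]'
check_mdstat "$sample"

# From cron, you'd run something like:
#   status=$(check_mdstat "$(cat /proc/mdstat)")
#   [ "$status" = "RAID OK" ] || mail -s "RAID trouble" you@example.com < /proc/mdstat
```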

[OPTIONAL]

1. Grab several old IDE drives. Not necessarily the same sizes.
2. Stick them in some other box (I did it on my windoze box cuz that's where I had case space).
3. Configure a RAID 0, or better yet, a spanned volume. Use windoze dynamic disks, use LVM, whatever makes your boat float. Set up a compressed filesystem if you think that would help any. Usually, with the kind of things people store on huge arrays at home, it won't.
4. Do a daily dump of everything from your RAID to your backup array.

Maybe these are too big for your needs, but EcoByte [ecobyte.co.uk] makes very nice black-box storage boxes based around Linux and 3ware controllers. They offer excellent performance, SMB etc file sharing, web configuration etc etc. We use them at work and they are great. I guess initially you could just buy an empty box and populate it with the hard drives you need and then expand it further as you need more storage space.

Yeah I always worry about that too: if your RAID card dies, it could hose the disks attached to it as well (not likely, but possible). For a cheap solution I simply use a drive in an external drive case (something like $30 at CompUSA) and connect it via USB. Just mirror what I care about, the hard part being that I need to remember to bring the drive home (I leave it at work normally) and mirror now and then.