Posted
by
timothy
on Friday May 14, 2010 @07:59PM
from the watch-and-release dept.

i_ate_god writes "I download a lot of 720/1080p videos, and I also produce a lot of raw uncompressed video. I have run out of slots to put in hard drives across two computers. I need (read: want) access to my files at all times (over a network is fine), especially since I maintain a library of what I've got on the TV computer. I don't want to have swappable USB drives, I want all hard drives available all the time on my network. I'm assuming that, since it's on a network, I won't need 16,000 RPM drives and thus I'm hoping a solution exists that can be moderately quiet and/or hidden away somewhere and still keep somewhat cool. So Slashdot, what have you done?"

What you want is a cheap 5U rack server with either OpenFiler [openfiler.com] or FreeNAS [freenas.org]. Personally, I like OpenFiler better. iSCSI is going to be the way to go unless you want a thick OS on the server and all the other admin issues that come with that. Plus, with OpenFiler you can still do block-level snapshotting and change replication. Also, I've heard good things about Open-E [open-e.com] as well. And if you want to mess with ZFS, there's OpenSolaris.

What you do is get yourself a huge (4 or 5U) barebones server from Newegg or a cheaper place. Make sure to get a couple of good SATA RAID controllers. Not FakeRAID! SAS would be better, but the drives cost a lot more, even for the nearline drives that are basically SATA drives with a SAS interface. Adaptec makes some real SATA RAID cards, and there's 3ware as well. You don't have to worry a lot about the cache, but if it isn't battery backed, you're going to write through it anyway. Who cares, you have 16 spindles! Load it with a bunch of drives. They don't have to be the biggest; more spindles means more performance anyway. 16 500GB drives is going to be fine, for instance, because then you can give up a third of that to RAID 6 parity, hot spares, etc. and still have plenty left. Get the slowest drives you can, and maybe a little SSD to use as a boot drive (there are small ones for around $100). You could even boot from a USB key if you feel like the hassle. You don't need a ton of processor. A Celeron would probably work, but you probably do want something 64-bit so you can put a bunch of RAM in it as you get more advanced.

Also check out Storage Search [storagesearch.com]. Not a very well designed site, but tons of good info under iSCSI, SAN and NAS. If you're rich, you might try an EqualLogic; they are around $28,000 for 8TB but pretty slick.

Does using RAID controllers actually provide superior price/performance compared to software RAID? Last I checked, the processors on most cheap RAID controllers were slower than dogshit, and md under Linux would outperform basically any of them, at the cost of some CPU. And since CPU is cheaper than RAID hardware, software RAID probably makes sense. For example, going from a Phenom II X3 720 to a Phenom II X6 chip at the same clock rate takes the CPU from $100 to $200. How much would it cost to go from four crappy RAID controllers to four good ones? Probably at least $400.

The answer is probably to just install Debian on a machine with as many CPU cores as you want to blow money on, and use software RAID. Put lots of system RAM in it, which the OS will automatically use for disk buffers. Current versions of GRUB work fine with USB keys, because they can identify the root device by UUID, and the UUID never changes. If you want it to boot quickly, find a motherboard with coreboot support. If you want external disks, FireWire can be cheaper than eSATA, if you get the external disks or just some enclosures at a good price. It makes maintenance a lot easier, but involves substantial power waste from all those inefficient wall warts.
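For anyone who hasn't done it, the software-RAID part is only a few commands with md. A sketch, assuming hypothetical device names (/dev/sd[b-e]) and mount point; substitute your own hardware:

```shell
# Build a RAID 6 array from four disks (any two can fail without data loss).
mdadm --create /dev/md0 --level=6 --raid-devices=4 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Put a filesystem on it and mount it.
mkfs.ext4 /dev/md0
mount /dev/md0 /srv/storage

# Record the layout so the array assembles automatically at boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Check sync/rebuild progress any time.
cat /proc/mdstat
```

These commands are destructive to whatever is on those disks, so triple-check the device names before running them.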

P.S. OpenSolaris is circling the drain, please don't suggest it to anyone for anything.

I've heard one too many sad stories about old on-disk RAID structures not being compatible with the new version of the old failed RAID card. I prefer the md device since it has been consistent for quite a while and the on-disk format is well documented.

When a drive fails, SW RAID doesn't allow one to boot the system unless one has (1) Used RAID-1

True enough, but (honestly) how hard is it to use RAID-1 with hot spare(s) for your boot partition, and RAID5/6 for everything else? (answer: not very, I'm doing this at home.)
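As a sketch of that scheme (partition and device names here are hypothetical): give every drive a small first partition, mirror those for /boot with a hot spare, and build RAID 6 across the large second partitions:

```shell
# /boot: RAID 1 across two small partitions, with a third as hot spare.
mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 \
      /dev/sda1 /dev/sdb1 /dev/sdc1

# Everything else: RAID 6 across the big second partitions
# (RAID 6 needs at least four members).
mdadm --create /dev/md1 --level=6 --raid-devices=4 \
      /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
```

Since the BIOS can boot from any of the mirrored /boot partitions, a failed data drive doesn't leave the box unbootable.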

Set up the BIOS to try the two mirrored disks in succession while booting.

Most modern motherboards already do this. Contrariwise, even if you have to go out of your way, it's *still* much easier than screwing around with driver disks for HW cards when installing.

SW RAID often exhibits very poor performance when a drive fails, because the underlying drivers try to make operations against the failing disk succeed: they will often expend a fair amount of effort retrying operations and waiting for extended periods to force each disk operation to complete.

Never seen this happen. "bad" drives on SW RAID mark out just as quickly as those on cheap HW controllers.

Standard disk controllers do not support hot swap. So when a drive fails, replacing the drive involves shutting down the server, swapping the drives, and then bringing it back up.

I'd consider 1TB small today, and this guy probably would as well; you can't put a lot of 1080p movies on one 1TB disk.

My own setup is a box that has two Thecus N5200Pro NASes NFS-mounted. One has 5x1TB, the other 5x2TB. Both are RAID-6 arrays. I know I throw away 6TB of storage, but I'd rather spend a couple extra bucks than lose my episodes of Dharma & Greg.

If something goes wrong on the 5x2TB array I'm up for a 2 day array rebuild though, praying no other disk fails as well. The newer Thecus NASes

You'd have to have one hell of a BitTorrent hobby/debilitating movie watching problem to need more than 2TB of video on tap on a hard drive for entertainment purposes. Unless you're doing HD video editing, or you like to keep a copy of every picture ever taken by your 8+ MP DSLR in RAW format, few people actually need that space. You might be able to fill 100GB with installed video games, but the average person who is buying a 1TB drive is probably upgrading granny's computer and thinking "well hey,

250 movies that you watch every year, in addition to the ones you rent, or go see with friends, or simply non-movie stuff you watch like sitcoms and/or live events like the news/sports? You must only work 2 hours a day to keep up with your busy viewing schedule and still have time to sleep, shower and spend time with other humans (they exist outside of movies, you know). 10 movies that you re-watch year after year I can understand, but 250 just blows my mind. Do you schedule that a year in advance? What hap

When I hear a question like this, I usually recommend heading over to the NCIX forums. There's some crazy guy over there - death_hawk - building a 100TB array. [ncix.com]

What I did was a bit less ambitious. A regular old NAS running off a cheap non-RAID SATA card in a case with lots of HDD bays.

For interest, I'll throw up a build that easily scales to 12TB. Since you mentioned noise, I'll prioritize that instead of capacity. I'll use a case geared for silence, a fanless mobo/cpu, a quiet PSU, WD Green HDDs, and a ridiculously cheap SATA card.

*1 Only six will be filled. 6 SATA ports.
*2 Case still requires fans/airflow.
*3 A NAS probably only needs 512MB, but 1GB is cheap. A Win7 NAS may benefit from 2GB.
*4 Must be capable of spinning up 6-8 HDDs at once.
*5 Must be flashed with a new non-RAID BIOS to avoid silent data corruption on >1.0TB HDDs; disk read/write speeds around 30MB/sec, in my experience, on ext2 (but running with a VIA CPU, not a dual-core Atom).
*6 Must be specially formatted under Windows and Linux. (Most distros only support 4KB sectors when the drive reports 4KB - these report 512b to maintain XP compatibility.)
*7 May have longevity issues. (Too early to say right now - lots of complainers, which reminds me of the 7200.10 days. A heck of a lot of those chirping Barracudas perished early.)
*8 Please verify SATA card support first. Ubuntu and FreeNAS work fine with this card, but I've never checked if Win7 has drivers. Do note that you'll have to flash it.
*9 If that's a problem, buy a more expensive card (which may give better performance, and SATA2 support). Promise [ncix.com] makes nice non-RAID SATA cards.

Please note: A solution like this will take 12+ hours to set up. It's highly likely you'll blow a whole weekend, even if you know what you're doing. You may have to try multiple distros to get proper Atom D510 support, unless you go with Windows. When I put mine together, atoms weren't available affordably, so I went with a cheap VIA board. Ironically, Ubu

I've got room for 30TB of data storage in each of two machines, for a total of 60TB. However, I have only populated them to around 12TB right now; I don't add drives till I'm out of space! :-) Not what I would call massive yet, but getting there!

I have a lot of storage for the same reason that the OP does, and I PREFER 5400 RPM drives. They run cooler and are still faster than what I need. I prefer WD Greens, but Samsung EcoGreen works well too. I buy the green ones because, again, they run cooler.

1.5TB drives have been cheapest $/GB for a while now, though I suspect 2TB will take its place, especially after the 3TB drives hit the shelves.

Yes, but using NTFS would be a bad idea for this, far from the "best solution" territory the submitter asked about. For a massive home storage system I wouldn't recommend using Windows for the server. Set up a gigabit LAN and a Samba file server with the multi-TBs of locally attached storage drives. If you want RAID, use software RAID. Add a UPS, configure NUT, and set up hardware monitoring, smartmontools and RAID monitoring (if you have RAID). Yes, it's a fair bit of work that seems unnecessary when you cou
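For the Samba part, a minimal share definition is only a few lines of smb.conf (the share name, path, and group here are made up for illustration):

```ini
[media]
   path = /srv/storage
   browseable = yes
   read only = no
   # restrict writes if the box faces more than your LAN
   valid users = @media
```

After a `service samba restart` (or `smbd reload`, depending on distro) the share shows up to every Windows and Linux box on the LAN.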

If a program opens a new file and then immediately seeks to the end of it to set its size, NTFS will look for a contiguous block of free space to save it in. NTFS caches all writes so it can wait to see what the program actually does with a file before committing it to disk.

It also has a system designed to reduce the fragmenting effects of small files by being able to store their data in the same block as their metadata.

The only caveat about that particular solution is the lack of redundant power, poor serviceability in the rack (may not apply like you said), and slow speed.

Their solution achieves the density it does because they are using SATA port multipliers, but that effectively creates bottlenecks and lowers overall speed. It works for Backblaze's application requirements, but YMMV.

Protocase.com makes the enclosure and will sell it to you for a pretty reasonable price. Getting all the parts is not such a big issue. I think we estimated we could build one without drives for less than $3k.

If you don't have it in a rack, then serviceability will be a lot better for sure. Rackmount solutions require cable management and heavy duty slide rails, and wide aisles, in order to gain access to the drives. The backplanes are parallel to the ground, facing up, and require taking the top off to access. Not exactly IT friendly.

Since the person in the article is not using this in a datacenter, cooling is going to be an issue. I suspect Backblaze survives due to hot/cold aisles and plenty of airflow. Sticking one of those enclosures in a closet without ventilation/cooling is a recipe for disaster.

I second this. I just bought my 2nd DNS-323 after the first one got full. (It's cheaper to buy two of the 2-bay DNS-323s than to get the 4-bay version.) I shoot a lot of pics, and at 20-25 MB per RAW file, it adds up quickly.
The first is stacked with 2x1.5TB drives, the 2nd with 2x2TB drives. I've got them running in RAID 1 for hardware redundancy. Great little devices and cheap ($150 Canadian). With the latest firmware you can even run NFS and a native BitTorrent client.

Personally I'm using a Synology solution at the moment for my NAS. They offer relatively low-cost, feature-rich hardware with, depending on the HDDs you use, lower power consumption than an always-on PC. I've been thinking about upgrading later on, since as a general rule of thumb you can never have too much storage. For HD Blu-ray images I would recommend making sure the network isn't the bottleneck and using gigabit ethernet, as I'm finding my aging 10/100 switches aren't cutting it

Agreed on Synology. A little more expensive than build it yourself, but they have solutions that scale easily to 20TB. Synology + 1TB or 2TB WD Green drives + GB Ethernet and you can have 20+ TB of media on line very quickly.

My personal storage solution consists of a 4U rack case with a computer with a c2d CPU, gig-E NIC, a few gigs of ram, a bunch of 7200 RPM disks and FreeBSD on the system disk (I also have the system disk mirrored just in case). All the storage disks are then pooled using RAIDZ. Pretty simple yet powerful. Just don't expect too much in the way of performance.

I did something similar. I made a 4U machine, 4GB ram, basic cpu, gigE, 6x1TB HDDs and an old 60GB system drive. No RAID (as I wasn't going to mirror and didn't want to lose storage) but set up samba and access my drives from my windows box.

I keep looking for the old school full height 'desktops' at a bargain store or so. Search newegg. 3.5" external works just as well as internal.

This [newegg.com] has 11 3.5" bays and 3 x 5.25" bays. With a 4 in 3 linked above you could have 15 hard drives in a case for $100. Or if you care about hot swappability This one [newegg.com] has 20 hot swap bays (at 3x the cost).

If you want more performance, get some SSDs to work as the ZFS "cache".

..., are the movies you download compressed at all? You say you've run out of slots; how big are the drives you're putting in the slots? Personally, I let Netflix do the storing for me. I have a few TBs but never come close to filling them up.

It occurred to me once that a person could write a FS driver that did something like that, but my immediate next thought was "Naaah.. You're insane, man." The fact that somebody has actually done it makes me giggle uncontrollably. I have to stop letting sanity get in the way of ambition -- I coulda been there first!

Hmmm. With a couple thousand "reflector bots" I wonder how much data you could store in the form of IRC messages flying back and forth.

I used to work for ABC news and we never kept archive footage always accessible like you want. If we wanted something that was really old we'd have to dig it off a tape, an unplugged hard drive or powered off computer, or we'd have to find another news agency that had the footage and grab it off of a satellite feed. And this was a 24/7 TV news station responsible for national news programming where we would be tracking stories for years. If we didn't need a system where everything was instantly accessible then you needing it on an individual level might be overkill in my opinion.

I have over 30TB of music, movies, and raw video footage on my home computers, and I just keep everything on separate external hard drives. I label the drives, back them up twice each, and then keep an index in a .txt file that is easy to search through. So if I want a 1080p backup copy of Blade Runner, I search 'Blade Runner' in the .txt file, see it's on drive 'A', and then plug in drive 'A' and dump the movie on my computer. I also keep an external drive that has backups of every TV show I own on DVD. So if I want to watch The Wire, I plug in the external drive labeled 'TV' and have at it.
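That plain-text index is easy to automate. A sketch (the mount-point paths are hypothetical; point it at wherever each drive mounts when plugged in):

```shell
#!/bin/sh
# Build a plain-text index of every file under the given mount points,
# one full path per line, then grep the index to find which drive holds a title.

build_index() {
    index=$1; shift
    : > "$index"                                   # start with an empty index
    for mount in "$@"; do
        # Skip drives that aren't currently plugged in.
        find "$mount" -type f >> "$index" 2>/dev/null || true
    done
}

# Example mount points -- substitute your own drive labels.
build_index /tmp/media-index.txt /mnt/drive-A /mnt/drive-B

# Later, to locate a movie:
#   grep -i "blade runner" /tmp/media-index.txt
```

Since `find` records the mount point as part of each path, the grep hit tells you which drive to plug in.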

First off, judging personal wants and needs by the way a giant corporation acts is hardly reasonable. ABC has cost/benefit to consider when trying to keep data available, and it's probably easier/cheaper to do it the way you say they do, rather than implement a fully digital, fully available storage system.

That being said, the solution is SIMPLE. If you have a bunch of hard drives with data you want, you put together a low-end PC, install it into a server case, and fill it with hard drives and SATA controllers. When it's full, you build another one. You have 30TB of data, mostly not accessible. I have 10TB of data accessible from any internet-connected computer on earth, and it's twice as much storage as I actually use. It cost me about $500 to build and deploy a personal storage server, and it doubles as an HTPC. (I already had most of the drives, and some parts.) It's likely most people here have enough hardware lying around to implement a basic storage server. There really isn't any reason not to do it. As a bonus, since it's not a machine you need to access directly most of the time, you can hide it in a closet and forget all about it.

Sure, you could buy a premade NAS/SAN or standalone data box. However, they are costly and not any more suited for the job than an old machine or a low-end new system; at least, not in a personal environment. If you actually require robust data storage, I'd suggest a NAS from any number of sources. But now we are talking about $4K worth of hardware, plus proper power systems if you really want longevity out of it. That's overkill for a home storage solution, no matter how much data you have, simply because you don't need enterprise-class data serving when only one or two computers are accessing the data.

If you don't know how to build and deploy a system with lots of drives accessible over a network, then you probably started at the wrong website for help. You want DELL/HP/IBM small office sales line.

For his "downloaded" 720p/1080p movies, it's reasonable to assume that these are most likely re-encoded MP4/MKV files or TS streams, probably between 2-15 GB each. An external USB2 hard drive should be able to keep up with the transfer rates. As such, you could probably go with something like a USB hub and tons of external hard drives.

But I agree with you. I have a DishNetwork DVR with the external hard drive option. I currently have three external hard drives filled with movies. I keep a spreadsheet on th

I'm fairly sure you can tunnel SATA over IP. Not sure where you'd get an IP-to-SATA adapter, though, or how much such a device would cost. But it would give him fully networked storage (ANY box on the network would see all drives as though locally connected) without having to use a network filesystem and potentially be as extensible as an IPv4 private network range. But it's heavily dependent on price as to whether it's worth it.

Well, from what I understood there are two modes - one which will give you only time slots so 5 drives each get 1/5th of the time. That's the cheap variety. The other variety is traffic based, you can't exceed 3 Gbps but you can get the cumulative read/write speed up to that point. The SATA spec site has more [serialata.org].

That's only accurate under the assumption that a single drive can max out a 3 Gbps line. I'd like to see a reasonably-priced consumer-grade HD that can pull THAT off.
It doesn't really matter anyway, as the ultimate bottleneck here will be the network at 1 Gbps. Five drives evenly sharing a 3 Gbps channel (about 300 MB/s usable after 8b/10b encoding) would still get roughly 60 MB/s each, and that's still pretty good for network transfer.
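Accounting for SATA's 8b/10b line encoding, the per-drive share works out like this:

```shell
# SATA II signals at 3 Gbps, but 8b/10b encoding means only 8 of every
# 10 bits are payload: 2400 Mbps = 300 MB/s of actual data on the channel.
payload_mbs=$(( 3000 * 8 / 10 / 8 ))
# Five drives sharing the channel evenly get a fifth of that each.
per_drive=$(( payload_mbs / 5 ))
echo "${payload_mbs} MB/s usable, ${per_drive} MB/s per drive"
# prints "300 MB/s usable, 60 MB/s per drive"
```

Still comfortably above what a single gigabit link can deliver to any one client.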

I'm in the process of building a 5-bay SATA port-multiplier solution right now. What I've learned thus far is:
* Most commodity motherboard chipsets don't support port multipliers. You'll need an expansion card.
* If you have this much data, look into ZFS and RAIDZ2 for reliability. Avoid RAID5.
* The bigger the disk, the longer it takes to rebuild a degraded array
* FreeNAS is at an inflection point. If you're not scared, use PCBSD directly instead to serve your data.
* Y
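The ZFS/RAIDZ2 route mentioned above is short on ceremony. A sketch, with hypothetical pool and device names:

```shell
# Double-parity pool (RAIDZ2): any two of the five disks can fail.
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Optional: an SSD as an L2ARC read cache, as another poster suggests.
zpool add tank cache /dev/sdg

# Verify layout and health.
zpool status tank
```

As with mdadm, these commands will happily eat whatever was on those disks, so check the device names first.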

I personally set up a download server that also functions as a media server to stream content to other devices. I put in a couple of 5400 RPM 1.5TB drives; they use less power and generate less noise and heat than a regular 7200 RPM drive, but since you're not running any applications off them, you won't really notice the difference in performance. Prices have gone down a bit, so the sweet spot for $/GB might be at the 2TB mark now. If you don't want to build an entire computer, maybe a NAS solution would be best for you, with the same 5400 RPM drives. A NAS will have less room for disks if you really want *massive* amounts of storage, and you usually must purchase the unit plus the disks, whereas the PC you can build from spare parts lying around. I personally put Gentoo Linux on mine, but you also don't exactly need top-of-the-line equipment for a nice Windows XP install. The NAS, however, will have outputs directly for your TV and will take up less room and power.

Linux software RAID5 has worked very well for me. Performance is decent (perfectly fine to play back and transfer 1080p video). I built one way back when 3x320GB was enormous, and had a terabyte before 1TB drives were remotely available.

Now I'm seriously considering 6x2TB for a 10TB RAID for my next server replacement. No need for an SSD for booting either; just set aside a tiny RAID1 partition (mirrored across all drives) for /boot and you're set. It boots and operates fast enough.

The only issue I see with that is in order to access any of your data all drives must be spinning - that's a waste of power. This is caused by data being striped. What happens to your data if two drives die at once? Or three? Do you lose it all or just those drives? Can you use standard recovery software on the drives?

Windows Home Server, 1TB 7200 RPM main drive with Seagate LP 5900 RPM storage drives; lock it away and never have to think about it till you need to drop another drive in.

The reason for the fast main drive is that with WHS when you copy data to it, it stores it on the main drive first, then schedules it to be distributed out to the storage drives the next time a "storage balance" is done.

Works fairly well. It's based on Windows Server 2003 at the moment, but if you can wait till the end of the year, they have a Server 2008 R2 version coming out soonish.

This. Even ready-made resellers have pretty small devices built on Windows Home Server that can take a LOT of drives. Mine supports 4, but there are many models that can take 8, 12, or more drives. The OS is rock solid and has a lot of neat features, like being able to access your network from an SSL-secured web app (built in) from anywhere, with indexed search, and it's easy to develop plugins for (though there's a ton available already) to extend it.

That's a waste of drive space. The system I use puts parity on one drive, the OS on a USB, and the rest of the drives are standard format drives that can be mounted under Linux should I need to try data recovery (ReiserFS). Data is not stored redundantly and I can use any size drive I want so long as the Parity drive is bigger or of equal size. With 16 2TB drives I get 30TB worth of storage per server...

unRAID - worth looking into at least. Won't have some of the ability of the Home Server to run apps on it

Another happy WHS owner here. I do recall reading that one of the service packs (there have been three) fixed the requirement for a big first drive - files now copy directly to the storage drives.

That said, I still use a fast system drive, and the rest are a mix of 7200 and 5400 rpm drives (depending on what was cheapest at the time).

Bought the original Cooler Master Stacker case. The front of the chassis is solely 5.25" drive bays - eleven of them - technically twelve if you mod the case to move the power+USB front panel elsewhere. :)

Oh, and despite being based on Server 2003, one of the nice things about WHS is that unlike the former it doesn't cost an arm and a leg.

Hehe, I use a thermaltake armor full tower, with an extra 2 "icage" units installed, 10 drives in the front and 3 in the back, all cooled directly with fans (except for one of the front bays).

Currently only have 11 drives in it, but there is still room to grow.

It's survived 3 drive failures (data was redundantly stored) and a motherboard failure (24/7 constant operation over 3 years managed to kill the caps on the motherboard, with 2 months to spare on the warranty), so it is indeed rock solid :)

More than that, you might not need even 7200 RPM drives. There are large capacity "green line" drives from some manufacturers, 5400 RPM, that might be perfectly enough.

I'm sure other posters will have much better recommendations as to what the overall setup should look like, but for whatever it's worth from me - stay away from consumer NAS solutions; they usually have quite low transfer rates (and I guess that's important to you, with files being rather big). Large tower with plenty of space inside + Atom motherbo

Correct, 5400 RPM drives are fine for viewing 1080p video, and actually so is 100 Mbit ethernet, but transferring data is slow, so go GigE. Consumer NAS boxes are indeed junk; a friend just emo-raged and pitched two Drobos onto Amazon's sales board. He stormed down to Fry's, bought $600 worth of hardware and drives to build an unRAID box, and is now quite happy with his new appliance that no longer needs care and feeding nor smokes interfaces. Atom systems can be done, but finding a board with enough slots and enough SATA is n

Even if you "need' a 16000 RPM drive, just make it for your local drive that you play your videos directly off of. Use 5400 for all the other ones. Just move your file before watching it. Sure, if you're an impatient baby and want to watch something within 5 seconds of it entering your mind, then you might have to wait 5 minutes if the file is 4.5G. Then again, it's the type of waiting you can go pee or make your snack during.

There are large capacity "green line" drives from some manufacturers, 5400 RPM, that might be perfectly enough.

Do they work in RAID? or do they randomly stop responding to the raid controller and then get dropped from the raid, triggering a rebuild, to show up a few minutes later, to trigger another one? http://en.wikipedia.org/wiki/TLER [wikipedia.org]

I download a lot porn, and I also record a lot of masturbating videos. I have filled two computers with porn already. I want access to my porn at all times, especially since I maintain a porn site. I don't want to have swappable USB drives, I want all my porn available all the time on my network. I'm assuming that, since it's on a network, I won't need 16,000 RPM drives and thus I'm hoping a solution exists that can be disguised or stashed away and not overheat. So Slashdot, what have you done?

I've tried a variety of approaches, but overall I've been happiest with just buying a NAS box.

I have a Synology DS209 [newegg.com], and I've been very satisfied. It's a relatively cheap way to get 2 TB RAID 1 storage with really simple backup to an external USB drive. If you need more storage, you can buy NAS devices with more than just two bays.

A contrary opinion. I have had a Drobo since the original release and it has been nothing but a disappointment. Drive incompatibilities, an extraordinarily high drive failure rate (at least one per quarter), and a very confused partitioning scheme. Not something I'll repeat in the future. Oh, and data loss that had to be corrected via a firmware update. In short, if I'm spending the money for RAID, I don't want to lose data. Period.

I looked at a Drobo - but being on a budget, I kept on looking elsewhere. I don't doubt they deserve those reviews, but they are not cheap. And if the Drobo itself dies... good luck getting the data off those drives without another Drobo handy.

I suppose if you like fiddling and want to tweak, then building your own is fun and all but if you just want something that works, is most likely quieter and uses less power than one you build yourself, then I say a standalone NAS unit.

I have a QNAP which I love - Synology, D-Link, Thecus, Buffalo, etc. - there are a lot of choices out there in 1/2/4/8+ drive-bay sizes. They will typically have various RAID options, spiffy web management interfaces, etc. that make 'em pretty plug-and-play.

I have two of these servers now. Each server can hold as many as 16 disks (possibly more, actually, as the programmer keeps bumping that up) with one disk reserved for parity. Data is NOT striped, and parity is ONLY stored on the one drive. If a disk fails I lose no data; if two fail I lose those two disks of data but nothing else. No hot spares or any other crap. If a disk isn't being used it goes to sleep and saves me heat and power. Disks can be ANY size, but the parity disk must be as big as or bigger than any of the data disks. Runs on a pretty decent selection of hardware, although keeping the list of what works and what doesn't up to date is apparently tough since hardware changes so fast. It's Linux-based but pay-for-play; yes, he's followed the GPL. It's not super expensive, and it boots from a USB drive to be web-administered. I use full tower cases with SuperMicro 5-in-1 trays, 2GB of memory, a Celeron CPU, a power-saving PSU, and a supported mobo with onboard video and GigE, which you WILL need.

Their forums are a big help and active; users are working to expand the capabilities of these NAS boxes and the programmer is working on making that easier too. Check it out; I've not found anything better yet, and with some of the newer versions of Samba in the code it's pretty fast too! Perfect for an HTPC, but not so great for a big transactional database.

Preferring Western Digital drives (for no particular reason), I have a pair of 1TB My Book Essential Edition external USB drives as well as a 2TB My Book World Edition network drive (which I got from a guy for like half the price). Anyway, the World Edition has a USB port that allows me to connect the other two drives to it using a USB hub, and it shows them as network shares in addition to its own folders. Another nice thing about the World Edition is that it runs Linux, so there's neat stuff you can do with it

This is something I've always wanted to play with. It's a little expensive (for a home user) to get into, but it's extremely scalable. If I moved all my DVDs and such to on-line storage, I think this is what I would opt for. It can be run in all sorts of RAID configurations, doesn't require matched sized hard drives, and it can all be racked up very nicely.

4 drive bays, USB, FW400/FW800 and eSATA. Will take 2TB drives; RAID 0, 1, 5 and 10. Comes pre-populated or unpopulated; the latter is what I got, and I added my own drives. http://www.macsales.com/ [macsales.com] No financial connection, just a satisfied customer (they have great tech support!)

This is obviously not a build-it-yourself storage array, but is a good option if you want a commercial out of the box solution.

Really, that's all you need. A good map-drives script that maps all your drives to all your computers so everybody has access to everything. I'm almost out of drive letters, but I basically have your goal. No solution. Just using what's in the operating system already.

Have another script that you run to index things. Basically, a dir /s command [add file sizes to the end if you can]. There's your index of where everything is. Use grep to access it quickly, or load it all up in a text editor and find to acc

2) Standalone NAS device. Everyone so far seems to recommend different makes so I'll carry on the trend and suggest Thecus [thecus.com]. Just slot in the drives and you're ready. Install the SSH module and you also have a Linux server too.

It should be obvious that you need a NAS. Buffalo do devices called LinkStations that can house a single drive storing up to a terabyte (or put in your own disk), or they have TeraStations, I think they're called, with network-attached RAID storage. These are all extremely quiet and unobtrusive compared to doing this with PC hardware. They can be accessed over SMB, NFS, FTP or other file transfer protocols.

What you do after that depends on how geeky you want to be. I have Freelink running on my LinkStation.

I'd highly suggest you check out unRAID [lime-technology.com]. It's an online-expandable RAID system which allows up to one drive to fail without losing data. I've been using mine for quite a while and I love it!

Get a bunch of cheap 1.5TB drives, for up to 18TB total. Since you say home, I assume you don't mean 99.9% redundancy: you can buy a new PSU or motherboard or whatever, have it delivered, and that's okay. Software-RAID pairs of drives in RAID 1, at the cost of 1.5TB of storage per mirrored pair. If you need more protection, then upload to some offsite backup; any external disk or second machine is still vulnerable to theft, fire, and whatever else. It works for me, though I only have ~10TB due to a few old low-capacity disks.

You specified quiet and hidden (small), not cheap, so I'd go for a NAS device. The Synology DS1010 can take 5x2TB drives (8TB with redundancy), and if you need more it can be expanded with 5 more bays. A cheaper option would be to take some old hardware and toss a NAS distro on it, but I'd expect more hassle and noise from that solution.

Any sort of network-accessible drive is going to be relatively slow. If you are copying large files, that will be important. If you expect to use the large drive for your working sets, as opposed to just for storage, it will be crucial.

The truth is that you probably won't be happy with anything less than an eSATA interface.

I've been running off an Acer EasyStore H340 for about a month. I'm very happy with it; it's very cool, quiet (if anything else is making noise, you won't hear it; in a closet, you definitely won't hear it) and plenty fast for most households. 4 hotswap SATA bays, eSATA, USB, and GigE. I can push 75MB/s via NFS to and from it (reading and writing from RAM), which is plenty for streaming video. It comes with Windows Home Server and it's headless, but I popped the drive into another box and installed Ubuntu with an SSH server. Worked like a charm. I'm also running a 2TB software RAID1, mt-daapd (iTunes) and squeezebox servers. I'll probably put Samba on it too.

The only thing I'd change is that a dual core Atom would be better. I actually haven't run into a bottleneck yet, but I wouldn't try reencoding videos on it. I believe the dual core model will be out this month. No affiliation with Acer; I'm just geeked because this is just the quiet, cheap server I've wanted for years. Sounds like sharing your other computers via NFS (automount) or CIFS plus one of these would address your needs; if not, maybe the info will help somebody else.

I have a similar setup to the previous post: a server running Linux with a 3ware RAID card running RAID 5, and eight 500GB disks giving ~3TB of usable space. This has been running flawlessly for over 5 years. I have a movie collection, music, and a network share for 3 HTPCs in the house. It works very well, but I wish my RAID card supported powering down the disks to save power/heat when not in use.

I am almost at capacity on the RAID volume, so to expand I can either put another RAID card in the server and create a new volume, or replace the 500GB drives with 1TB or now 2TB disks. Replacing the disks would save power and heat, but I would need to back up and restore 3TB of data. Adding another RAID card is easy, but it creates more drives that I can't turn off, which eats up power.

I am actually thinking about building a new server with the idea of adding as many SATA ports as possible (via SATA cards) and then using port multipliers, then running a software-based file system or RAID that allows me to add drives of different sizes to the volume, similar to ZFS but more open. This would make growing the system much easier and allow me to power down drives when not in use. I would still get plenty of performance for my needs.

The other thing that I am doing that most people don't think about is that I back up my entire NAS to another server. I took another old PC that I had, put a 4-port SATA card and four 1TB disks in it, and run Linux and software RAID on it. Each night it powers up and runs a script to back up the primary NAS, just in case something catastrophic happens to the primary; I also used it when I moved to larger disks on the NAS previously. I use rsnapshot to look for changes on the primary NAS' file system and only back up data that has changed. It also keeps the last 3 months of files that have been changed or deleted, in case I need to recover something. When the script is finished, it powers down the backup NAS and waits until the next night to run again.

Having done that in the past, I'll say that buying a Drobo was worth the cost. Granted, I hunted around a bit to get a good sale price (it's not too difficult... though the FS is brand new so maybe not on that model yet), but unless you really enjoy tinkering with getting samba shares set up and working properly, sometimes it's just easier to buy your sanity.

Don't get me wrong - I wish they were cheaper. But their system worked better and more reliably than anything I ever put together, and I'm by no means incompetent. And their BeyondRaid tech, while proprietary, is pretty damn cool and works incredibly well. Being able to mix drives and not waste tons of storage space is a huge advantage that (as far as I know) I'm not going to get anywhere else.

Unless you're on the cheap and/or like tinkering with homebrew solutions, you're better off using a professional product like Drobo. For most people with simple storage requirements, "set it and forget it" is the mantra they want to adhere to.

I have a Drobo and a DroboShare. The DroboShare runs a slimmed-down version of Linux, so a network-attached Drobo typically uses Samba. The benefit of having one is NOT the ease of software setup; the reason I love it is the ease of drive management and the small hardware size.

Due to the small size and slick style, I keep mine in my TV cabinet. I've done the measurements, and no PC case on Newegg can fit in this same space, never mind something that can house 4 hard drives.

The other thing that is so valuable about a Drobo is how well it manages its RAID array. They call it BeyondRaid, but I hear it's really just as many normal RAID arrays as it needs, organized across the drives to both optimize space and maintain redundancy. Also, you can pop hard drives in or out while it's on and it will automatically restructure the RAID on the remaining drives to stay redundant, without any need to shut down or stop sharing data. I recently needed to test this out for myself: I popped out my 4th drive, plugged it into my PC, formatted it, and started moving data from my Drobo onto the hard drive I had just removed from it, while the Drobo was still restructuring. I expected a huge mess, but everything worked exactly as advertised. I was kind of shocked.

FYI, the reason I did that swap-out was that I had foolishly formatted my Drobo as NTFS. This worked OK, but I had one too many problems talking to it from my Linux PC: the permissions were all messed up over Samba, and new folders and files I created on the Drobo were root-access-only for some weird reason. So I decided to format it as ext3. Since the DroboShare runs Linux, this is the best option for a shared drive, and it works fine talking to Mac and Windows as long as you do so over the network.
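For anyone hitting the same root-only-files problem on a Samba-based NAS, the usual fix lives in the share definition in smb.conf. This is a hypothetical example (the share name, path, and "media" account are my inventions, not the DroboShare's actual config): "force user" makes everything created over the network owned by one local account, and the masks keep permissions group-writable.

```ini
; Hypothetical smb.conf share for a Linux-based NAS like the DroboShare.
[drobo]
   path = /mnt/drobo
   read only = no
   guest ok = yes
   force user = media
   create mask = 0664
   directory mask = 0775
```

With something like this in place, files written from a Windows or Mac client come out owned by the "media" user instead of root, so a Linux client on the same share can edit them too.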

www.qnap.com/pro_detail_hardware.asp?p_id=127 gets you 4 drive slots for $600, with a better feature set than the Drobo: dual GbE NICs, 2 eSATA ports, and 26W active with 4 drives vs. 56W with the Drobo. Anyway, thought I'd point out that the Drobo is quite a bit overpriced, like you mention.

In fact, a friend of mine DID! His Drobo fucked up so often he nearly threw it out a window! He pinged me one night raging about it yet again. I told him to head for Fry's, and when he got back with $600 worth of hardware he was good to go. Luckily his hardware booted unRAID, and once the parity was all set up and the disks built, he had a solid system that replaced TWO Drobos for $600. He sold his Drobos on Amazon and came out ahead in the money dept! The software he's running will support 15 data drives.

Plus, I recommend the Drobo as well, with its cool "BeyondRaid" system that lets you just pull out the smallest disk in your array and plug in a new one. Anyone who's ever rebuilt or updated a traditional RAID array knows what an improvement that is.