gadian:First thing I ever learned how to replace on my computer was the memory. The second was the hard drive. These are the basics, like being able to change your own tires. If you can't change a hard drive or memory yourself, you don't need to own a computer.

American Decency Association:BumpInTheNight: finnished: BraveNewCheneyWorld:If you backup that often, then you'll only be out of action for some time while the data is transferred. If you went with a raid 5 array, then you'd have no downtime when a drive fails. You'd just want to be sure to replace the failed drive as soon as possible. You'd need a raid 5 controller if your motherboard doesn't support it, and another drive to make it work.

With 1 TB SATA drives, and RAID 5, you are pretty much guaranteed data loss. Never use RAID 5. RAID-1 for two drives or RAID-10 for more is the gold standard.

I am curious as to where that opinion comes from? Got any links about that?

it is anecdotal. i can confirm though, ppl in my circle will never ever use 5 again after bad experiences.

Ahh kk, I'd just never heard of the technology being genuinely flawed in any way like that. I can accept bum controllers, or worse, badly written software RAID doing stupid things (or the CPU they rely upon doing it on their behalf), and I can accept catastrophic failure like a PSU spiking your entire array through the goal posts of life, but if the parity algorithm vs drives larger than 1 TB was a documented problem I'd be forced to make some very expensive changes around work and home :P

BumpInTheNight:American Decency Association: BumpInTheNight: finnished: BraveNewCheneyWorld:If you backup that often, then you'll only be out of action for some time while the data is transferred. If you went with a raid 5 array, then you'd have no downtime when a drive fails. You'd just want to be sure to replace the failed drive as soon as possible. You'd need a raid 5 controller if your motherboard doesn't support it, and another drive to make it work.

With 1 TB SATA drives, and RAID 5, you are pretty much guaranteed data loss. Never use RAID 5. RAID-1 for two drives or RAID-10 for more is the gold standard.

I am curious as to where that opinion comes from? Got any links about that?

it is anecdotal. i can confirm though, ppl in my circle will never ever use 5 again after bad experiences.

Ahh kk, I'd just never heard of the technology being genuinely flawed in any way like that. I can accept bum controllers, or worse, badly written software RAID doing stupid things (or the CPU they rely upon doing it on their behalf), and I can accept catastrophic failure like a PSU spiking your entire array through the goal posts of life, but if the parity algorithm vs drives larger than 1 TB was a documented problem I'd be forced to make some very expensive changes around work and home :P

I have a RAID 5 on six 1TB drives, and have had it running for years without a single byte of lost data. Just to add to the "anecdotal" data.

Of course, I picked drives that work well with RAID and don't go into sleep or low-power modes or any stuff like that.

Thinking of going to 2 TB drives and RAID 10 though, to get rid of the parity overhead and maybe get some more speed for the video editing. Do that with your "no upgrade or changes after purchase" machines...

BumpInTheNight:k, I'd just never heard of the technology being genuinely flawed in any way like that. I can accept bum controllers, or worse, badly written software RAID doing stupid things (or the CPU they rely upon doing it on their behalf), and I can accept catastrophic failure like a PSU spiking your entire array through the goal posts of life, but if the parity algorithm vs drives larger than 1 TB was a documented problem I'd be forced to make some very expensive changes around work and home :P

It's not necessarily flawed; it's just that it doesn't necessarily give the kind of protection you think it does. That, in addition to its downsides, like slow writes due to the parity calculation, makes RAID-1/RAID-10 the better option, especially now that big hard drives are cheap.

RAID-5 was great years ago, when hard drives were small but redundancy and capacity were still needed. It was a compromise.

The quick rundown is this: besides failing catastrophically, hard drives can experience read errors. In fact, this is more likely than a hard drive completely dying. So, when a RAID-5 array loses a drive, and starts rebuilding it, if it encounters a read error on the remaining drives, the entire array is lost.

Now you might say "but the chances of the read error happening must be very small". And it is, kind of. Looking at the datasheets, it can be very small, on the order of 1 error per 10^14 bits read for consumer SATA. But remember that the rebuild operation needs to read the entirety of every remaining disk. And when disk sizes are in the TB range, all of a sudden we start reaching probabilities that are actually probable. Not to mention the long rebuild time required.
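That back-of-the-envelope risk can be sketched directly (the 1-in-10^14 URE rate is the consumer SATA datasheet figure mentioned above, and read errors are assumed independent, which is an idealization):

```python
# Probability that a RAID-5 rebuild hits at least one unrecoverable
# read error (URE), given the datasheet error rate per bit read.
URE_RATE = 1e-14  # errors per bit read (typical consumer SATA spec)

def rebuild_failure_probability(surviving_drives, drive_size_tb):
    bits_to_read = surviving_drives * drive_size_tb * 1e12 * 8
    # P(at least one URE) = 1 - P(every bit reads cleanly)
    return 1 - (1 - URE_RATE) ** bits_to_read

# 3-drive RAID-5 of 1 TB disks: a rebuild reads the 2 surviving drives.
print(f"{rebuild_failure_probability(2, 1.0):.0%}")  # roughly 15%

# 7-drive RAID-5 of 2 TB disks: a rebuild reads 6 surviving drives.
print(f"{rebuild_failure_probability(6, 2.0):.0%}")  # roughly 62%
```

The point is just that the exponent does the damage: the per-bit rate is tiny, but a TB-scale rebuild reads on the order of 10^13 bits.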

But regardless of all this, what if you think "Well, I'm just a home user, I don't need the hardcore redundancy." Ok, say you want to create a 2 TB array. With RAID-5 you could do three 1 TB WD Black drives, $119.99 each at Newegg. Or do two 2 TB drives in RAID-1 for $179.99 each. The RAID-5 array ends up costing you $0.01 less. What's the benefit of saving that $0.01? What's the downside?

Kazan:Darth_Lukecash: I think most computer problems stem from stupidity of do it yourself people.

no

This. My machine is going on 5 years old, still runs like a new one. Then again, I hand-picked the parts and built to my specification. Your home build crew are the ones that take the time to research out their parts and only buy decent equipment. Your PC hardware problems are mainly from your Ma and Pa Kettles that buy the cheapest $250 eMachines off the shelf at Wal-Mart because the kids said they need a computer. Now, with the parts I buy, I can build a starter rig for about $450 that can handle even some 3D games with not too much trouble (Not Skyrim on full detail 1600x900, but it'll handle most games at a playable framerate). That machine will last 5-8 years on average. I have some machines I have built that are going on 12 years and no failures.

That $300 Wal-Mart special is using the cheapest, slowest RAM that can be bought; hard disks, usually from IBM or Hitachi, which are cheaper but have proven to be far less reliable; and most often sub-standard power supplies that fail in just over a year and can take down a motherboard once they start to flake out.

The problem is that probably 70% of home users are the ones buying the crap systems off the shelf, and that's what gives PCs such a stigma of being poor hardware choices. It's also why Apple users think Apple's hardware is so superior, even though now it's the exact same components that any hobby system builder is already building machines with, and they're usually better-quality components than Apple's.

finnished:BumpInTheNight: k, I'd just never heard of the technology being genuinely flawed in any way like that. I can accept bum controllers, or worse, badly written software RAID doing stupid things (or the CPU they rely upon doing it on their behalf), and I can accept catastrophic failure like a PSU spiking your entire array through the goal posts of life, but if the parity algorithm vs drives larger than 1 TB was a documented problem I'd be forced to make some very expensive changes around work and home :P

It's not necessarily flawed; it's just that it doesn't necessarily give the kind of protection you think it does. That, in addition to its downsides, like slow writes due to the parity calculation, makes RAID-1/RAID-10 the better option, especially now that big hard drives are cheap.

RAID-5 was great years ago, when hard drives were small but redundancy and capacity were still needed. It was a compromise.

The quick rundown is this: besides failing catastrophically, hard drives can experience read errors. In fact, this is more likely than a hard drive completely dying. So, when a RAID-5 array loses a drive, and starts rebuilding it, if it encounters a read error on the remaining drives, the entire array is lost.

Now you might say "but the chances of the read error happening must be very small". And it is, kind of. Looking at the datasheets, it can be very small, on the order of 1 error per 10^14 bits read for consumer SATA. But remember that the rebuild operation needs to read the entirety of every remaining disk. And when disk sizes are in the TB range, all of a sudden we start reaching probabilities that are actually probable. Not to mention the long rebuild time required.

But regardless of all this, what if you think "Well, I'm just a home user, I don't need the hardcore redundancy." Ok, say you want to create a 2 TB array. With RAID-5 you could do 3 WD Black Drives, $119.99 each at Newegg. Or, do 2 2TB drives with RAID-1 for $179.99 each. The RAID-5 array ends up costing you $0.01 ...

Interesting set of articles; the one from 2007 seems to have several doomsayer parrots, but then also several that explain away the false assumptions that went into the statistics he used to come up with his theory, the biggest one being that a RAID controller would decide to nuke an entire rebuild over one lost bit rather than just flagging that set corrupt and moving on with its life.

2TB array? No, 12TB arrays are your magic mark at this point, my friend. Btw, raiding with Black drives? Heh, I guess you don't know about their little feature called TLER and why not having it on Black drives is somewhat of a problem?

I build every computer I buy. No reliability issues requiring the replacement of the entire computer, ever. One motherboard failure in the last 12 years, 2 video cards, and 1 HDD. I have five computers for four family members (one file server) and all are up and running.

I had ONE iMac, failed after 3 years. Had a macbook, failed after 3.5 years. Not going down the apple path after I gave them two (2) chances to build a better PC than I can.

Don't buy a Mac Pro, man. Just buy a spec'ed out Mini. Mac Pros are for people who need to be convinced that a computer is "industrial-strength" by looking at it. It's one of the worst desktop Macs ever designed. Everyone I know who has bought one is unhappy with it. I'm not surprised, you can tell Apple hates the product and regrets making it by the way they treat it. It's going the way of the X-Serve, which had way more reason to exist than the Mac Pro ever has.

BumpInTheNight:Interesting set of articles; the one from 2007 seems to have several doomsayer parrots, but then also several that explain away the false assumptions that went into the statistics he used to come up with his theory, the biggest one being that a RAID controller would decide to nuke an entire rebuild over one lost bit rather than just flagging that set corrupt and moving on with its life.

2TB array? No, 12TB arrays are your magic mark at this point, my friend. Btw, raiding with Black drives? Heh, I guess you don't know about their little feature called TLER and why not having it on Black drives is somewhat of a problem?

Link to WD's big fat warning about using black drives in raids

I'd say if that wasn't on your radar I really suggest you reconsider.

Yes, with RAID-5 that's exactly what's going to happen. The controller will drop the entire array when the second drive is unreadable. Ironically, the array is

And about the drives, that's exactly what I mean. People will go ahead and buy whatever drives, WD Greens even, put them in RAID-5 thinking that now they're covered against data loss. When they're not.

But what it boils down to is what do you gain by using RAID-5 instead of RAID-1 (Or -10)? Again, RAID-1/10 is the gold standard of RAID.

Surool:PsyLord: Obligatory Apple Fanboi retort: Why would you need to upgrade something that is already perfect?

Above: Obligatory unhinged iHater post. Nobody says that but you guys.

I actually own a few iProducts. I just wish Apple would make them friendlier to upgrades or connectivity, such as a microSD slot, a non-proprietary power/sync port, etc. Just take cell phones, for instance. I can charge/sync my Samsung S3 using any micro USB cable. Motorola and HTC also use micro USB for power/data transfers.

t3knomanser:Does anyone buy an all-in-one computer because they expect it to be upgradeable? I will never, ever understand why anyone gives a shiat about the fact that products obviously designed around a certain form-factor aren't user-serviceable.

I really don't understand why anybody cares about this, or why anybody pretends to be surprised.

StoPPeRmobile:t3knomanser: Does anyone buy an all-in-one computer because they expect it to be upgradeable? I will never, ever understand why anyone gives a shiat about the fact that products obviously designed around a certain form-factor aren't user-serviceable.

I really don't understand why anybody cares about this, or why anybody pretends to be surprised.

BumpInTheNight:Btw, raiding with black drives? Heh, I guess you don't know about their little feature called TLER and why not having it on black drives is some what of a problem?

The amount of time that a WD drive spends trying to recover a bad block can be changed using a disk utility tool (versions exist for both Windows and Linux). So you could set that time to a sane value for RAID if you want. The only catch is that you're changing the value in volatile memory, not in ROM, so you have to reset it each time the drive powers up from a cold boot.

So you could use Black and Green drives for RAID in PC-based systems if you could get that value changed very early during bootup. You wouldn't be able to use them in stand-alone RAID boxes unless they include firmwares that could make the same change to your drives.

/just went with WD Red 3TB drives instead of messing with hacks
//drives are whisper quiet, which is great since they're in my HTPC/NAS box in the living room

I think that the glued-together computers are an environmental disaster. There's no reason to attach a screen inextricably to a computer that will be nonfunctional if the screen fails. If they're going to do it, at the very least they should warranty the machine for 5 years from date of purchase at no additional charge. After all, with no moving parts, what do they have to lose? It should NEVER fail unless it encounters liquids or drops (for laptops). I think we need a right-to-repair law for computers like there is for cars.

The issue is: what's the MTBF. For memory, it's already pretty high. And with SSDs, you're getting into that neighborhood.

The only time I've ever had a RAM stick fail was when I gave it a good static shocking. It's been a long time since I've had a HDD failure of any stripe. Just going on raw probabilities: the chances of these parts failing when the product is outside of warranty and isn't due for replacement in some fashion is pretty slim.

Some of us buy a tower and keep it for decades, gradually upgrading parts like Theseus's ship. Most of us change over computers entirely every 2-5 years. I keep myself on a 3-ish year upgrade cycle. The MTBF for most parts is much larger than that.

As long as you have a decent case (helps if it's a full tower), your upgrade costs over time are almost negligible. I pay about a hundred bucks for an 18-month-old 'best video card on the market' every couple of years. Whenever I feel the need to reinstall the OS I put in a new HDD, but I store all my important stuff in a Drobo with Carbonite running on it. My chip is one of the first-gen 2.4 quad cores from an off-the-shelf HP I bought after my first big HDD crash; I put it in a new motherboard at some point, maybe to get SATA or 64-bit, I don't remember. My optical drives are old; they cost maybe $20 apiece. Parts almost never break, so upgrading is just a question of what I want to do. I think the most expensive thing I ever did was the upgrade to 64-bit with the 8 gigs of RAM.

The point here isn't to say I'm good with tech, because I'm not. The point is that a relative dunderhead like me can continually upgrade his computer for less than a quarter of the cost of matching the capability in Macintosh parts.

I bailed out on Mac when they licensed the clones. I bought one thinking it was all the advantage of a Mac with the expandability of the PC world. I was so wrong. As long as I didn't have anything to compare it to, it was fine... but after I used a PC at work I realized it wasn't actually necessary to sit and wait for a computer to do things. That's when I realized my entire computer life I had been making excuses for the limitations of the Macintosh line. It's like being an abused spouse: you make excuses for your fear of change.

FinFangFark:t3knomanser: downstairs: Because hard drives and memory never fail?

The issue is: what's the MTBF. For memory, it's already pretty high. And with SSDs, you're getting into that neighborhood.

The only time I've ever had a RAM stick fail was when I gave it a good static shocking. It's been a long time since I've had a HDD failure of any stripe. Just going on raw probabilities: the chances of these parts failing when the product is outside of warranty and isn't due for replacement in some fashion is pretty slim.

Some of us buy a tower and keep it for decades, gradually upgrading parts like Theseus's ship. Most of us change over computers entirely every 2-5 years. I keep myself on a 3-ish year upgrade cycle. The MTBF for most parts is much larger than that.

So you've never experienced a HDD failure in all those years?

My last bad HDD fail was a month before I bought my Drobo. Lost everything. I'm reasonably paranoid about it now, with one of those uploader backups and a Drobo. I would not trust a laptop with anything important. Cloud computing, in my mind, is just an admission that you're OK with someone else owning all your crap.

finnished:BumpInTheNight: Interesting set of articles; the one from 2007 seems to have several doomsayer parrots, but then also several that explain away the false assumptions that went into the statistics he used to come up with his theory, the biggest one being that a RAID controller would decide to nuke an entire rebuild over one lost bit rather than just flagging that set corrupt and moving on with its life.

2TB array? No, 12TB arrays are your magic mark at this point, my friend. Btw, raiding with Black drives? Heh, I guess you don't know about their little feature called TLER and why not having it on Black drives is somewhat of a problem?

Link to WD's big fat warning about using black drives in raids

I'd say if that wasn't on your radar I really suggest you reconsider.

Yes, with RAID-5 that's exactly what's going to happen. The controller will drop the entire array when the second drive is unreadable. Ironically, the array is

And about the drives, that's exactly what I mean. People will go ahead and buy whatever drives, WD Greens even, put them in RAID-5 thinking that now they're covered against data loss. When they're not.

But what it boils down to is what do you gain by using RAID-5 instead of RAID-1 (Or -10)? Again, RAID-1/10 is the gold standard of RAID.

We both agree that RAIDs are not a substitute for backups, but what you're arguing for is like saying that the hammer is the gold-standard tool of a tradesman; every tool has its purpose.

BumpInTheNight:We both agree that RAIDs are not a substitute for backups, but what you're arguing for is like saying that the hammer is the gold-standard tool of a tradesman; every tool has its purpose.

No, that's not what I'm saying. As far as tools go, RAID-5 is more like the tool made for cutting holes in floppy disks so you can use the reverse side. At one point it might have been very useful, but not today.

There is no situation where RAID-5 would be a better choice than RAID-1/-10.

finnished:BumpInTheNight: We both agree that RAIDs are not a substitute for backups, but what you're arguing for is like saying that the hammer is the gold-standard tool of a tradesman; every tool has its purpose.

No, that's not what I'm saying. As far as tools go, RAID-5 is more like the tool made for cutting holes in floppy disks so you can use the reverse side. At one point it might have been very useful, but not today.

There is no situation where RAID-5 would be a better choice than RAID-1/-10.

Speed, and a higher ratio of usable storage capacity to drives spent on integrity?

No, RAID-5 has an expensive parity calculation that slows it down compared to straight mirroring.

Less lost disk space was certainly a factor in the past, and that's why it was popular. But with today's hard drive prices, it's not a reason any more. And actually, the cheap large drives are a reason NOT to use RAID-5.
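The write-speed gap finnished describes comes from RAID-5's read-modify-write parity cycle. A rough sketch of the standard rule of thumb (the 100-IOPS-per-drive figure is an assumed ballpark for a 7200 RPM SATA drive, not a measurement):

```python
# Small-random-write throughput rule of thumb for RAID-5 vs RAID-10.
DRIVE_IOPS = 100  # assumed ballpark for one 7200 RPM SATA drive

def raid10_write_iops(n_drives):
    # Each write lands on both copies in one mirror pair, so the array
    # sustains half its aggregate IOPS for writes.
    return DRIVE_IOPS * n_drives // 2

def raid5_write_iops(n_drives):
    # Each small write costs 4 I/Os: read old data, read old parity,
    # write new data, write new parity.
    return DRIVE_IOPS * n_drives // 4

print(raid5_write_iops(4))   # 100
print(raid10_write_iops(4))  # 200
```

So for the same four drives, the mirror-based layout sketches out to roughly twice the small-write throughput, independent of how fast the parity XOR itself is.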

finnished:No, RAID-5 has an expensive parity calculation that slows it down compared to straight mirroring.

Less lost disk space was certainly a factor in the past, and that's why it was popular. But with today's hard drive prices, it's not a reason any more. And actually, the cheap large drives are a reason NOT to use RAID-5.

Expensive is true, but which do you tend to run out of first: disk bandwidth or processing power? With today's quad- and hex-core CPUs, let alone dedicated RAID controllers' abilities, it's really not a problem to dial up the calculations and still maintain full-bore write speeds.

BumpInTheNight:Expensive is true, but which do you tend to run out of first: disk bandwidth or processing power? With today's quad- and hex-core CPUs, let alone dedicated RAID controllers' abilities, it's really not a problem to dial up the calculations and still maintain full-bore write speeds.

Of course it depends on the amount of read/write, and if you have enough activity, it'll bog down any system. RAID-5 will bog down earlier. This will be especially apparent during a rebuild, which will take hours longer to complete, while leaving your system unprotected.

But now you're just trying to figure out ways to make RAID-5 as good as RAID-1/-10. Why not use RAID-1/-10 to begin with?

finnished:No, RAID-5 has an expensive parity calculation that slows it down compared to straight mirroring.

Doesn't RAID-5 just use an XOR? With a physical controller, aren't you just talking about a few gate delays here and there? (A 4096-bit XOR in discrete ICs has a settle time of what, 24 ns? Even at 6 Gbps, you'd only receive 19 bytes, or 1/26 of a block, and an ASIC is going to beat cascaded discretes.) Call me crazy, but it doesn't seem like you'd have to worry that much about the parity calculation being the bottleneck. Maybe if SATA were running higher than 150 Gbps...
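For what it's worth, the parity arithmetic really is just XOR. A toy sketch of RAID-5-style parity and single-disk reconstruction (byte strings standing in for disks; parity rotation and real controller behavior omitted):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Three data "disks" plus one parity block, RAID-5 style.
d0, d1, d2 = b"hello!", b"world!", b"raid-5"
parity = xor_blocks([d0, d1, d2])

# Disk d1 dies: rebuild its contents from the survivors plus parity.
rebuilt = xor_blocks([d0, d2, parity])
print(rebuilt)  # b'world!'
```

Which supports the point: the bottleneck in a rebuild is reading every surviving disk end to end, not computing the XOR.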

finnished:BumpInTheNight: Expensive is true, but which do you tend to run out of first: Disk bandwidth or processing power? With today's quad CPU hex cores let alone dedicated raid controller's abilities its really not a problem to dial up the calculations and still maintain full borne write speeds.

Of course it depends on the amount of read/write, and if you have enough activity, it'll bog down any system. RAID-5 will bog down earlier. This will be especially apparent during a rebuild, which will take hours longer to complete, while leaving your system unprotected.

But now you're just trying to figure out ways to make RAID-5 as good as RAID-1/-10. Why not use RAID-1/-10 to begin with?

Not trying to figure out, my friend; trying to explain where I use RAID-5s to leverage surplus processing to increase write speed vs using mirrors. Besides, the unprotected status only lasts until the processes are shifted to a different server (usually a few seconds), and then the one with the dead drive is tasked to rebuild with the hot spare before taking the reins again. I'll admit that URE thing is something I'm very curious about, and that'll shift my opinion about what I do at home, but my gut is still thinking that where it'd truly come into play (disk A dies and then during rebuild disk B errors out too), the controller can handle it, or whatever knocked out disk A likely slew disks B, C & D etc. as well.

(Sorry, "the controller can handle it" meaning that it'll mark the sector bad and move on rather than spoiling the whole rebuild; so far, from random searching, the "spoil the rebuild" bug was exterminated many years ago)

ProfessorOhki:Doesn't RAID-5 just use a XOR? With a physical controller, aren't you just talking about a few gate delays here and there?

The real world difference depends hugely on the implementation, so there's no hard and fast numbers to give there. But even a small added delay gets obviously multiplied especially during a rebuild. Or if the array is in use.

The rebuild part is especially problematic, since with RAID-5/RAID-1, if you lose another disk during the rebuild, you're dead in the water. With RAID-10, not necessarily so.
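That RAID-10 advantage can be counted directly. An idealized sketch, assuming the second failure strikes a uniformly random surviving drive:

```python
# After one drive has already failed, what fraction of possible
# second-drive failures destroy the array?

def raid5_second_failure_fatal(n_drives):
    # RAID-5 tolerates exactly one failure: any second loss is fatal.
    return 1.0

def raid10_second_failure_fatal(n_drives):
    # RAID-10 with n/2 mirror pairs: only the dead drive's mirror
    # partner is fatal, i.e. 1 of the n-1 survivors.
    return 1 / (n_drives - 1)

print(raid5_second_failure_fatal(4))   # 1.0
print(raid10_second_failure_fatal(4))  # 0.333...
```

The gap widens with array size: on a 10-drive RAID-10, only 1 of the 9 survivors is a fatal second loss.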

BumpInTheNight:Not trying to figure out, my friend; trying to explain where I use RAID-5s to leverage surplus processing to increase write speed vs using mirrors. Besides, the unprotected status only lasts until the processes are shifted to a different server (usually a few seconds), and then the one with the dead drive is tasked to rebuild with the hot spare before taking the reins again. I'll admit that URE thing is something I'm very curious about, and that'll shift my opinion about what I do at home, but my gut is still thinking that where it'd truly come into play (disk A dies and then during rebuild disk B errors out too), the controller can handle it, or whatever knocked out disk A likely slew disks B, C & D etc. as well.

So now we're up to enterprise applications with virtualization then? Ok, so say you have a server that has a hard drive that is about to fail. The server has a RAID-5 with a hot spare. Since it's a hot spare, how does the server process get moved to another VM host? The storage might not even be on the same host, even if you're not using a SAN or something. The host has no idea that the rebuild has started.

So, anyway. The operator gets notified, but he doesn't need to respond since the rebuild starts automatically with the hot spare. It churns for a while, everything looks good, the server is still available. But uh-oh, now there's a problem. You get a read error on one of the disks. The storage array disables the volume, the server is down.

Operator gets notified, only to find out the array is down. Bad news. But that's OK, there's a good backup from earlier today. But the backup was made 8 hours ago. You just lost 8 hours of data.

Compare this to a scenario WITHOUT a hot spare.

The hard drive fails. No hot spare. Operator gets notified. The server and data are still available, though. The operator then can either a) make a backup or b) move the VM to another array completely, or both. Rebuild still fails, but it doesn't matter since the data was moved off. No data lost. Life goes on.

Since the thread is going to close soon, anyone who's actually interested in continuing conversation, you can find plenty of professionals at Spiceworks' Storage forum. I'll probably be here till then, though.

finnished:ProfessorOhki: Doesn't RAID-5 just use a XOR? With a physical controller, aren't you just talking about a few gate delays here and there?

The real world difference depends hugely on the implementation, so there's no hard and fast numbers to give there. But even a small added delay gets obviously multiplied especially during a rebuild. Or if the array is in use.

The rebuild part is especially problematic, since with RAID-5/RAID-1, if you lose another disk during the rebuild, you're dead in the water. With RAID-10, not necessarily so.

Yeah, RAID-10 wins if you can swing the cost/GB. I just didn't think the parity calculation for RAID-5 was as massive a penalty as suggested.

Of course, then you have my use case. My chassis had spots for 4 drives. One is an independent SSD for the OS. The other 3 are an array; can't run RAID-10 on that :P

finnished:BumpInTheNight: Not trying to figure out, my friend; trying to explain where I use RAID-5s to leverage surplus processing to increase write speed vs using mirrors. Besides, the unprotected status only lasts until the processes are shifted to a different server (usually a few seconds), and then the one with the dead drive is tasked to rebuild with the hot spare before taking the reins again. I'll admit that URE thing is something I'm very curious about, and that'll shift my opinion about what I do at home, but my gut is still thinking that where it'd truly come into play (disk A dies and then during rebuild disk B errors out too), the controller can handle it, or whatever knocked out disk A likely slew disks B, C & D etc. as well.

So now we're up to enterprise applications with virtualization then? Ok, so say you have a server that has a hard drive that is about to fail. The server has a RAID-5 with a hot spare. Since it's a hot spare, how does the server process get moved to another VM host? The storage might not even be on the same host, even if you're not using a SAN or something. The host has no idea that the rebuild has started.

So, anyway. The operator gets notified, but he doesn't need to respond since the rebuild starts automatically with the hot spare. It churns for a while, everything looks good, the server is still available. But uh-oh, now there's a problem. You get a read error on one of the disks. The storage array disables the volume, the server is down.

Operator gets notified, only to find out the array is down. Bad news. But that's OK, there's a good backup from earlier today. But the backup was made 8 hours ago. You just lost 8 hours of data.

Compare this to a scenario WITHOUT a hot spare.

The hard drive fails. No hot spare. Operator gets notified. The server and data are still available, though. The operator then can either a) make a backup or b) move the VM to another array completely, or both. Rebuild still fails, but it doesn't matter since the ...

ProfessorOhki:Yeah, RAID-10 wins if you can swing the cost/GB. I just didn't think the parity calculation for RAID-5 was as massive a penalty as suggested.

The only real-world cost is write speed, and because most users don't write nearly as often as they read, that's beyond acceptable for the more typical user. I only made the suggestion because the original person I was responding to had a RAID 0 for everything, which I assumed was for cost efficiency per GB. I probably should have clarified from the start exactly why I made that suggestion.

BraveNewCheneyWorld:ProfessorOhki: Yeah, RAID-10 wins if you can swing the cost/GB. I just didn't think the parity calculation for RAID-5 was as massive a penalty as suggested.

The only real-world cost is write speed, and because most users don't write nearly as often as they read, that's beyond acceptable for the more typical user. I only made the suggestion because the original person I was responding to had a RAID 0 for everything, which I assumed was for cost efficiency per GB. I probably should have clarified from the start exactly why I made that suggestion.

Nah, not a server, just a desktop. Only reason I even bothered with an array is because I occasionally toss around uncompressed video files and didn't want to get caught with having to fragment something near the ends. For my purposes, I might as well have gone 0, but the controller could do 5 and a bit of redundancy for the overhead seemed like a reasonable trade off. Thanks for the suggestion though.

Depends on what you're working with, I'd think. If my guess about controller implementation holds up, a massive sequential write would have the same latency penalty as a one-block write. If you were handling discretely large data, maybe something like a render farm, you'd be talking about nanosecond-scale latency on a multi-terabyte read/write. If you were talking about tons of small writes, then it would definitely get multiplied.

/Not an IT guy
//Closer to a chip guy, hence the curiosity
///RAID-0
////More like AID-0.
//Never again