So the UPS man stops by today and drops off a couple boxes. In one I have some boring stuff that I bought for my part-time job; nothing that I get to tinker with so let's just toss it into a corner for a couple days and forget about that for now.

In the other box are 4, yes 4, 1TB drives. ...sorry about that, I had to wipe my chin again.

Anyway, I decided the other day to set up a software RAID set on an old PC for backups. I plugged in the drives and booted up the box. One quick command later

Code:

dmesg | grep TB

and I find out that the drives are /dev/sd{b,c,d,e}. Well, now we are in business.

I used fdisk to create one big partition on each drive and set the partition type to "fd" (Linux raid autodetect). This took about 2 minutes per drive; the first one took longer because I had to print the help menu to remember the options. Once that was done it was time to create the RAID set.
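
For anyone wanting to follow along, the commands were something like this (from memory, so treat it as a sketch; your device names will vary, and RAID 5 is what turns 4x1TB into roughly 2.75TB usable):

Code:

# per drive in fdisk: n, p, 1, <enter>, <enter>, t, fd, w
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1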

Now I have a new device, /dev/md0. I created a directory and mounted it, added it to fstab so it will mount on reboot, and I am done. 2.75TB of free space. All told, it took me more time to get the drives in the case and run the wiring than it did to set up the RAID set and get it working.
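
The tail end looked something like the following; ext3 and the mount point here are stand-ins, so substitute your own filesystem and directory:

Code:

mkfs.ext3 /dev/md0                  # any Linux filesystem would do
mkdir /mnt/backup
mount /dev/md0 /mnt/backup
echo '/dev/md0  /mnt/backup  ext3  defaults  0  2' >> /etc/fstab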

I think back to the days of recompiling a kernel just to add a new piece of hardware, or banging my head against the desk trying to get XX software configured correctly. I think I might enjoy the direction Linux is headed.

I was nervous about it as well. I was talking to a couple people in the local LUG and one of them has been doing this for backups for the past several years.

He has replaced drives that have failed, upgraded in place by replacing the drives one at a time and then expanding the volumes, and all has gone swimmingly.
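
From what he described, the whole replace-and-grow dance is plain mdadm. Roughly (device names are placeholders, and the resize2fs step assumes an ext2/3 filesystem on the array):

Code:

mdadm /dev/md0 --fail /dev/sdc1     # mark the dying member as failed
mdadm /dev/md0 --remove /dev/sdc1   # pull it out of the array
# swap the physical drive, partition it type fd, then:
mdadm /dev/md0 --add /dev/sdc1      # the array rebuilds onto the new drive
# after every member has been swapped for a bigger one:
mdadm --grow /dev/md0 --size=max    # let md use the new capacity
resize2fs /dev/md0                  # grow the filesystem to match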

There is a rumor that as long as you get the drives to ID in the same manner on a second machine you can even move the array to another Linux machine and get the volume working. I have not tested that theory and have my doubts; but hey, who thought setting this up would be this easy?
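
If anyone wants to test it, my understanding is that md stamps a UUID into each member's superblock, so the second machine should be able to find and assemble the set no matter what the drives enumerate as:

Code:

mdadm --examine --scan     # prints the array UUID found in the members' superblocks
mdadm --assemble --scan    # assembles any arrays it can find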

I may need your expertise later this week, Lodis.

I'm planning on adding a second RAID array to the file server (running Ubuntu Server/KDE 9.04 Jaunty). I was going to use the spare Areca 1210 I took out of my gamer after I built the file server (got rid of a fair amount of heat in my gaming rig by doing that! 7 drives put out some serious firepower) for the second array, but the mainboard has 6 unused SATA ports on it and a RAID chip... Long story short, I think I would rather try a software RAID on the Intel ICH10R chip than add in the second Areca card. Getting two of those cards to work in the same box for different arrays could be a bit tricky, but I'm thinking that one hardware and one software RAID should be a better workaround. Thoughts?

Maybe you could shed some light on Urmumsacow's cross-platform RAID desires... I am pretty sure there isn't a good way to do it WITHOUT using hardware, and he doesn't want to spend $100 on a card.

/sigh

Urmumsacow and I thrashed it about in this thread, Crash, with me explaining why fakeraid and software RAID solutions sometimes weren't the most efficient means to an end. However, since this IS doable under Debian-like distros, I would think that both Urmumsacow and I could benefit from the thread.

I just don't think he is going to get the Linux/Windows crossing he wants without hardware.

I agree. Hardware RAID is so much more reliable and controllable than fakeraid. There are too many ways to implement fakeraid across different OSes, controller designs, and chipsets; it's like untying Medusa's locks.

It's why I suggested he byte (sorry, couldn't resist) the bullet and buy a RAID card. They really are not that expensive compared to buying a (usually proprietary) network storage solution that requires you to run some piece of software that is poorly written and not user friendly. And most halfway decent low-end RAID controllers for PCI Express are compatible with a host of OSes and NOSes.

Wasn't there a SATA RAID card reviewed on the front page a couple weeks ago? As I recall it was only like $40.

Yep, the HighPoint RocketRAID 2640x4.

Here is a quote from the front page article, tho:

"As expected, the RocketRAID 2640x4, which has four SAS/SATA 3Gb/s ports but no onboard processor or memory, performed better than our test bed motherboardâ€™s onboard RAID controller but couldnâ€™t match the performance of the $450 Adaptec 5405, which boasts an onboard 1.2GHz processor and 256MB DDR2 cache."

That quote says it is still fakeraid, relying on the CPU to do the XOR processing. The Adaptec 5405 and the Areca 1210 are true hardware RAID devices and do all of their calculations on the card. Really, all you are buying yourself here is a few more SATA/SAS ports.
