I'm trying to set up hardware RAID with Gentoo. I have 5 SCSI disks and an Adaptec AAA-130U2 adapter. I believe there is no native Linux support for this device, but also that you can use the generic aic7xxx driver and it will work just fine. After I set up the array and boot from the Gentoo install CD, there are 5 devices in /dev (sda, sdb, sdc, sdd, sde). Should I be seeing only one logical device? Do I need to do something else?

There is a lot of info on software RAID, and this post would seem to indicate I should use software RAID instead. But I'm not sure.

Hi and welcome to the forums.
If you had working hardware RAID support, you would see only one drive, not five. Have you tried the Adaptec aacraid driver? I can't tell you whether it supports your card, though.
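
From the LiveCD you could at least test whether that driver binds to the card, something like this (aacraid is the module name in the kernel tree; whether it claims an AAA-130U2 I honestly don't know):

    modprobe aacraid    # load the driver
    dmesg | tail        # see whether it found and claimed the controller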

My personal preference is to go with software RAID over hardware RAID in Linux. Mostly because:

- lots of flexibility in how you configure the disks
- ability to monitor arrays in linux without special software
- reuse scripts from others who monitor their /proc/mdstat status (see the example below this list)
- no dependency on particular hardware
- easy to move software RAID disks to other controllers in the system
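
For example, monitoring needs nothing beyond the base system (array names will differ on your box):

    cat /proc/mdstat        # lists each md array, its member disks, and sync state
    grep '_' /proc/mdstat   # a degraded mirror shows [U_] instead of [UU], easy to alert on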

The biggest fear that I have when running on a particular brand of hardware RAID is that the card will get fried and I'll have to go hunting for a new controller that is compatible. (The expensive answer, of course, is to buy (3) controllers. Install one, keep one on-site as a spare and keep the other one off-site as a backup spare.) This gets easier if you have multiple machines that all use the same RAID card, then you might only need 2 spares for every dozen machines. But when every machine you buy is unique, it's worrisome.

Granted, I haven't done software RAID on top of SCSI yet.

But to give an example of disk portability: while troubleshooting an issue with my AMD64 unit, I was installing and uninstalling numerous add-in IDE cards and moving the (2) RAID1 disks around to different ports in the system. The mdadm software RAID simply didn't care when the drive identifiers changed; it used the UUIDs on the individual components of the RAID arrays and assembled them into the proper RAID volumes. (I eventually went back after I was done and touched up mdadm.conf to reflect the proper drive identifiers, mostly so I wouldn't confuse myself a few years from now.)

I went ahead and did software RAID since Gentoo was seeing 5 drives. After the first failed attempt, everything went smoothly, except that for some reason the last hard disk does not appear in /dev on the mounted RAID drive (though it did appear in the LiveCD environment), so I couldn't install a boot image on that last drive using GRUB. Looking at mdstat, though, all 5 drives appear to be working in a RAID. I also couldn't shut down without forcing it; reboot couldn't find /dev/initctl.
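
For reference, this is roughly the grub shell procedure I was repeating for each of the drives that did show up (disk and partition names per my setup):

    grub --no-floppy
    grub> device (hd0) /dev/sda    # map hd0 to the disk being set up
    grub> root (hd0,0)             # the /boot partition on that disk
    grub> setup (hd0)              # write the boot image to its MBR
    grub> quit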

After reboot, the boot screen did not display at all (but I know it was booting). Once it booted, some info was quickly displayed and then replaced by barely readable text. One line appeared to say something about not being able to find /dev/md2, which is the root mount, so I'm not sure what's wrong there. The other really frustrating thing is that in order to mount my RAID drive from the boot CD, I first have to set up the RAID, which means typing in the entire raidtab file each time until I figure out what the problem is. I think I'll burn the file to a disc (no floppy drive). Is there another way I can deal with this?

Semi-OT... but why do you need a raidtab file at all, instead of using mdadm and relying on the RAID superblocks that get written in the partitions? I've been able to mount RAID systems from the LiveCD using something like this (from memory; your md device names will differ):
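
    mdadm --assemble --scan      # read the superblocks and bring up every array they describe
    mount /dev/md2 /mnt/gentoo   # then mount as usual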

No need for me to type in a raidtab file at all. (And I can dump the output of mdadm --detail --scan into /etc/mdadm.conf as a backup configuration. Creating the arrays in the first place was done using the "--create" option of mdadm.)
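
Roughly what that looks like end to end (the device names and RAID level here are just an illustration, not my actual layout):

    # create a RAID1 array from two partitions
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    # record the resulting configuration as a backup
    mdadm --detail --scan >> /etc/mdadm.conf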

I'm sure other folks have opinions that hardware RAID is better.

Yeah, here are a few:

- More reliable
- Doesn't use any CPU time
- Usually has a battery-backed cache
- Possibility of hot-swap drive bays

And I don't mean those crappy integrated (soft-)hardware RAID controllers.

Nothing complex in my grub.conf files (although I'm tempted to look into the fallback options, which let the system drop back to a previously good kernel automatically). The "root=" option simply tells the kernel where to look for the root partition.
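
For illustration, a minimal grub.conf along those lines (the kernel file name and md device are assumptions based on this thread, not copied from my machine):

    default 0
    timeout 5

    title Gentoo Linux
    root (hd0,0)                          # /boot on the first disk's first partition
    kernel /boot/vmlinuz root=/dev/md2    # point the kernel at the root array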

P.S. And I'll agree that hot-swap capability is one of the nicer things about hardware RAID. If I had a shop where 24x7 uptime was required, I'd lean more towards hardware RAID.

These disks are already on a SCSI bus, and without RAID there's no telling whether they will act as a single channel or dual channel. The card is going to add overhead that will hurt a software RAID.

It turns out I was missing some RAID support in the kernel. I can boot now. The only problem is that it's unable to mount the boot partition. It fails at fsck, which says that it's not a valid ext2 partition and that it can't be repaired. I tried mounting it myself and had the same problem. I tried using mdadm to recreate md0, but that didn't work.
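
For reference, the options I had been missing were along these lines (exact names vary a bit by kernel version, so treat this as a sketch):

    CONFIG_MD=y            # "Multiple devices driver support" under Device Drivers
    CONFIG_BLK_DEV_MD=y    # the md driver itself; built in, not a module, if you boot from RAID
    CONFIG_MD_RAID1=y      # plus whichever RAID level(s) the arrays actually use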

However, I have no trouble creating md0 from the LiveCD and mounting boot, so it is a valid partition. In my fstab (the one on the hard disk, that is) I have /boot labelled as ext2. I'm pretty sure it is ext2. Is there any way I can check?
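
I'm guessing something along these lines would show it, but I'm not sure (assuming the array is assembled as /dev/md0 from the LiveCD):

    file -s /dev/md0      # prints the filesystem signature it finds on the device
    tune2fs -l /dev/md0   # dumps the ext2 superblock, if there is a valid one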