For the purposes of this HOWTO, I am assuming you have only a
Linux system running. Also, note that I've only tried this out with
the DPT Smartcache IV PM2144UW and PM3334UW controllers, with DPT
(SmartRAID tower) and Wetex enclosures, and I have no experience with
other setups. So things may be different for your setup.

One well-supported host-based hardware RAID controller (i.e., a
controller for which a driver exists under Linux) is the one made by
DPT. However, there
exist other host-based and SCSI-to-SCSI controllers which may work
under Linux. These include the ones made by
Syred,
ICP-Vortex, and
BusLogic. See the
RAID solutions for Linux page for more info.

If, in the future, there is support for other controllers, I will
do my best to incorporate that information into this HOWTO. Please
send me any such information you think is appropriate for this
HOWTO.

ICP Vortex has a complete line of disk array controllers that
support Linux. The ICP driver has been in the Linux kernel since
version 2.0.31. All major Linux distributors (S.u.S.E., LST Power
Linux, Caldera, and Red Hat) support the ICP controllers as
boot/installation controllers. The RAID system can easily be
configured with their ROMSETUP (you do not have to boot MS-DOS for
configuration!).

ICP is transitioning the entry-level RS series from Ultra2 SCSI to
Ultra160 SCSI. The drivers, firmware, features, capabilities, etc.,
remain the same. They are still 32-bit cards with the i960RS
processor running at 100MHz. The only difference is that they will
work at Ultra160 (a data transfer rate of 160MB/sec) rather than
Ultra2 (80MB/sec).

Effective immediately, the GDT7523RN units will become GDT8523RZ
and the GDT7623RN units will become GDT8623RZ. The transition from
33MHz on the PCI bus to 66MHz represents a huge potential performance
increase. The new cards will have the new Intel 80303 "Zion"
processor, allowing bus master transfer rates of up to 528MB/sec, and
will take up to 256MB of ECC RAM on PC133 SDRAM DIMMs.

Given all these options, if you're looking for a RAID solution,
you need to think carefully about what you want. Depending on what
you want to do, and which RAID level you wish to use, some cards may
be better than others. SCSI-to-SCSI adapters may not be as good as
host-based adapters, for example. Michael Neuffer
(neuffer@uni-mainz.de), the author of the EATA-DMA
driver, has a nice discussion about this on his
Linux High Performance SCSI and RAID page.

The enclosure type affects the hot-swappability of the drives, the
warning systems (i.e., whether there will be indication of failure,
and whether you will know which drive has failed), and what kind of
treatment your drive receives (for example, redundant cooling and
power supplies). We used the DPT supplied enclosures which work
extremely well, but they are expensive.

Refer to the instruction manual to install the card and the
drives. For DPT, since a storage manager for Linux doesn't exist yet,
you need to create an MS-DOS-formatted disk with the system on it
(usually created using the command "format /s" at the MS-DOS prompt).
You will also be using the DPT storage manager for MS-DOS (available
from
the Adaptec website),
which you should probably make a copy of for safety.
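If your only working system is Linux, the mtools package offers one
way to get files onto the DOS floppies (a sketch; the bootable system
files themselves still have to come from an MS-DOS "format /s", and
dptmgr.exe is an assumed filename for the storage manager executable):

```shell
# Copy the (assumed) DPT storage manager executable onto a DOS floppy
# with mtools; "a:" is mtools' name for the first floppy drive.
mcopy dptmgr.exe a:
mdir a:               # list the floppy's contents to verify the copy
```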

Once the hardware is in place, boot using the DOS system
disk. Then replace the DOS disk with the storage manager disk, and
invoke the storage manager using the command:

A:\> dptmgr

Wait a minute or so, and you'll get a nice menu of options. Configure
the set of disks as a hardware RAID (single logical array). Choose
"other" as the operating system.

The MS-DOS storage manager is a lot easier to use with a mouse,
and so you might want to have a mouse driver on the initial system
disk you create.

Technically, it should be possible to run the SCO storage manager
under Linux, but it may be more trouble than it's worth. It's probably
easier to run the MS-DOS storage manager under Linux.

You will need to configure the kernel with SCSI support and the
appropriate low level driver. See the
Kernel HOWTO for information on how to compile the kernel. Once you choose
"yes" for SCSI support, in the low level drivers section, select the
driver of your choice (EATA DMA or EATA ISA/EISA/PCI for most EATA DMA
compliant (DPT) cards, EATA PIO for the very old PM2001 and PM2012A
from DPT). Most drivers, including the EATA DMA and EATA ISA/EISA/PCI
drivers, should be available in recent kernel versions.
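For a 2.0-series kernel, the rebuild might look like the following
sketch (the make targets and the image path are typical for that era,
but treat the exact option names as assumptions for your kernel
version):

```shell
# Rebuild the kernel with SCSI support and the EATA driver enabled.
cd /usr/src/linux
make menuconfig               # turn on "SCSI support" and the EATA driver
make dep && make clean        # regenerate dependency information
make zImage                   # compile the kernel image
make modules modules_install  # build and install modular drivers, if any
cp arch/i386/boot/zImage /boot/vmlinuz-raid
lilo                          # re-run LILO so the new image is bootable
```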

Once you have the kernel compiled, reboot, and if you've set up
everything correctly, you should see the driver recognising the RAID
as a single SCSI disk. If you use RAID-5 with N drives, the usable
size will be (N-1)/N of the raw capacity; with three-drive groups, for
example, you will see 2/3 of the actual disk space available.
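The RAID-5 overhead is easy to compute: parity consumes one drive's
worth of space per group. A quick sketch of the arithmetic (the drive
count and size here are example figures, not from the setup in the
text):

```shell
# RAID-5 keeps one drive's worth of parity per group, so the usable
# capacity of N drives of S GB each is (N - 1) * S.
disks=3
size_gb=9
usable=$(( (disks - 1) * size_gb ))
echo "$usable"   # prints 18 (out of 27 GB raw, i.e. 2/3)
```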

You can now start treating the RAID as a regular disk. The first
thing you'll need to do is partition the disk (using fdisk). You'll
then need to set up an ext2 filesystem. This can be done
by running the command:

% mkfs -t ext2 /dev/sdxN

where /dev/sdxN is the name of the SCSI partition. Once you do this,
you'll be able to mount the partitions and use them as you would any
other disk (including adding entries in /etc/fstab).
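Put together, the sequence might look like this sketch (/dev/sdb and
/raid are example names, and the fstab line is illustrative). To see
what mkfs does without touching a real disk, you can also point it at
a small file-backed image:

```shell
# Real-disk sequence (example device names -- substitute your own):
#   fdisk /dev/sdb                # partition the array interactively
#   mkfs -t ext2 /dev/sdb1        # build an ext2 filesystem
#   mount /dev/sdb1 /raid         # mount it
# Illustrative /etc/fstab entry so it mounts at boot:
#   /dev/sdb1  /raid  ext2  defaults  1 2

# Harmless practice run against a file-backed image
# (-F tells mkfs not to complain that it isn't a block device):
dd if=/dev/zero of=/tmp/ext2test.img bs=1024k count=8
mkfs -t ext2 -F /tmp/ext2test.img
```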

We first tried to test hot swapping by removing a drive and putting
it back in the DPT-supplied enclosure/tower (which you buy for an
additional cost). Before we could carry this out to completion, one
of the disks failed (as I write this, the beeping is driving me
crazy). Even though one of the disks failed, all the data on the RAID
drive was accessible.

Instead of replacing the drive, we just went through the motions
of hot swapping and put the same drive back in. The drive rebuilt
itself and everything turned out okay. During the time the disk had
failed, and during the rebuilding process, all the data was
accessible, though it should be noted that if another disk had failed,
we'd have been in serious trouble.

Here's the output of the Bonnie program, on a 2144 UW with a
9x3=17 GB RAID-5 setup, using the EATA DMA driver. The RAID is on a dual
processor Pentium Pro machine running Linux 2.0.33. For comparison,
the Bonnie results for the IDE drive on that machine are also given.

Some people have disputed the above timings (and rightly so---I've
been unable to try it out on our machines since they're completely loaded)
because the size of the file used may have led to it being cached
(resulting in an unusually good performance report). Here are some
timings with a 3344 UW controller:

This section describes some of the commands available under Linux
to check on the RAID configuration. Again, while references to the
eata_dma driver are made, these commands can be used to check up on
any driver.

To see the configuration for your driver, type:

% cat /proc/scsi/eata_dma/N

where N is the host id for the controller. You should see something
like this:

This could be due to several reasons, but it's probably because
the appropriate driver is not configured in the kernel. Check and make
sure the appropriate driver (EATA-DMA or EATA ISA/EISA/PCI for most
DPT cards) is configured.

The RAID has not been configured properly. If you're using a DPT
storage manager, you need to configure the RAID disks as a single
logical array. Michael Neuffer (neuffer@uni-mainz.de) writes: "When
you configure the controller with the SM, start it with the parameter
/FW0 and/or select Solaris as the OS. This will cause the array setup
to be managed internally by the controller."

As stated in the DPT manual, this is clearly a no-no and might
require the disks to be returned to the manufacturer, since the DPT
Storage Manager might not be able to format them. However, you might
be able to perform a low-level format using a program supplied by DPT
called clfmt, available on their utilities page. Read the
instructions included in the clfmt.zip file on how to use it (and use
it wisely). Once you do the low-level format, you might be able to
treat the disks like new. Use this program carefully!

and this might end up causing the machine to freeze. I (and many
others) have been able to fix this problem by simply reading one or
two hundred MB from the RAID array with dd, like this:

% dd if=/dev/sdX of=/dev/null bs=1024k count=128

During a format, a rapid burst of requests for chunks of directly
accessible memory is made, and sometimes the memory manager cannot
deliver them in time. The dd command is a workaround that simply
creates the requests sequentially instead of in one huge heap at once,
as the format tends to do.

Read the SCSI-HOWTO again. Check the cabling and the termination.
Try a different machine if you have access to one. The most common
cause of problems with SCSI devices and drivers is faulty or
misconfigured hardware. Finally, you can post to the various
newsgroups or e-mail me, and I'll do my best to get back to you.