Seth Mos writes:
>There recently was a message about building a $5000 dollar IDE raid on
>slashdot IIRC.
>Don't know the URL anymore.
http://staff.sdsc.edu/its/terafile/
>Basic configuration was purchasing a large case which can house 16 disks
>Purchase 16 100GB Maxtor IDE disks (the 540DX uses less power)
>Purchase 2 3ware Escalade 6800 controllers
>A PC for booting the machine.
I was disappointed by this article. It was way more than $5k (probably
that was Slashdot's mistake), which is to be expected for 16 disks.

They had 3 different models. The one they built was $6000 or so.

However, their performance bar chart (which looks impressive)
bears little relation to the numbers in their spreadsheet.
In particular, their RAID5 write numbers are rather low at ~18MB/s.
The newer 7000-series 3ware cards might do better?

writes. Alternatively, we can double their 50-60MB/s RAID0 reads to around
110MB/s(!). We were pretty happy to see those numbers :)

But you probably can't go much faster than that: the PCI bus itself does
not go faster. Otherwise, use a ServerWorks chipset with a dual PCI bus.
Then you can get past the elusive 133MB/s.
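The 133MB/s figure follows directly from the bus width and clock; a quick sanity check, assuming conventional 32-bit/33MHz PCI:

```shell
# Peak throughput of conventional 32-bit, 33 MHz PCI:
# 4 bytes per transfer * 33 million transfers/s = 132 MB/s
# (commonly rounded up to "133MB/s")
echo $((4 * 33))
```

Real-world throughput is lower still, since the disk controller and the NIC share that same bandwidth.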

I believe there are other motherboards out there that have two PCI buses.

Even a Dell PowerEdge 1300 with a ServerWorks chipset had a dual PCI bus
(a $2000 machine).

We're still setting up the machine (these numbers are from late last
night, so should maybe be taken with a grain of salt) and are moving to
faster memory and a faster CPU soon, so we hope to see them improve.
Also, one of our disks is misbehaving and is probably dragging the
numbers down.

How did you find that out? A bad cable, perhaps (I've seen this with IDE before).
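A flaky IDE cable usually announces itself in the kernel log as repeated CRC errors; the `hdc` lines below are a made-up sample of what that typically looks like:

```shell
# Count BadCRC errors in a (made-up) sample of kernel log output;
# on a real box you would pipe `dmesg` through the grep instead:
printf '%s\n' \
  'hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }' \
  'hdc: dma_intr: error=0x84 { DriveStatusError BadCRC }' \
  | grep -c 'BadCRC'
```

A drive throwing these will often fall back to a slower transfer mode, which would drag the whole array's numbers down.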

I should also try bonnie++ with 4x memory size, not just the default 2x;
that may lower our numbers a bit and remove more caching effects.
We haven't played with chunk sizes, mkfs options, mount options, or
external logs yet.
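A sketch of what that run might look like, using bonnie++'s `-s` flag to set the file size; the 512MB RAM figure, mount point, and user are placeholders:

```shell
# Size the bonnie++ working set at 4x physical RAM to defeat the
# page cache (512 MB RAM assumed here as an example):
RAM_MB=512
SIZE_MB=$((RAM_MB * 4))
echo "$SIZE_MB"   # 2048

# Then run bonnie++ against the array (-d and -u are placeholders):
# bonnie++ -d /mnt/raid -s "$SIZE_MB" -u nobody
```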

External logs make a huge difference, and using more logbufs at mount
time will probably help if you have a busy filesystem. Make sure you
have more than 256MB of RAM.
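For reference, a sketch of setting that up with XFS's `logdev` and `logbufs` options; the device names and mount point are placeholders, and the log device should ideally be an otherwise idle disk:

```shell
# Make an XFS filesystem with an external log on a separate disk
# (device names here are just placeholders):
mkfs.xfs -l logdev=/dev/hde1,size=32m /dev/md0

# Mount with the external log plus the maximum number of in-core
# log buffers (8), which helps busy filesystems:
mount -t xfs -o logdev=/dev/hde1,logbufs=8 /dev/md0 /mnt/raid
```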

I also tried the SDSC folks' 'fastest' 2xRAID5 + RAID0 configuration,
but it was no quicker than just a RAID5 over all 8 disks. I guess
they're seeing some artifacts from their 3ware cards/drivers.

2 cards, 2 different RAID sets, no problems. I think the cards are
having a hard time living together on the bus.
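For comparison, the two layouts can be sketched with mdadm (device names are placeholders, and raidtools' mkraid was the other common route at the time):

```shell
# One RAID5 across all 8 disks:
mdadm --create /dev/md0 --level=5 --raid-devices=8 \
  /dev/hde1 /dev/hdf1 /dev/hdg1 /dev/hdh1 \
  /dev/hdi1 /dev/hdj1 /dev/hdk1 /dev/hdl1

# Versus the SDSC-style 2xRAID5 + RAID0 (two 4-disk RAID5 sets,
# striped together with RAID0 on top):
mdadm --create /dev/md1 --level=5 --raid-devices=4 \
  /dev/hde1 /dev/hdf1 /dev/hdg1 /dev/hdh1
mdadm --create /dev/md2 --level=5 --raid-devices=4 \
  /dev/hdi1 /dev/hdj1 /dev/hdk1 /dev/hdl1
mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/md1 /dev/md2
```

The nested layout wastes two disks' capacity on parity instead of one, so it only pays off if it actually delivers more throughput.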

XFS seemed slightly slower (5%?) than ext2 for the tests we've run so
far. We're running some NFS tests over 100Mbit now. Gigabit will follow
once we have another cat5 gigabit card somewhere to write from! :)

It might be slightly slower, but it is possible that you are getting
bitten by the internal log on the RAID device. How many Ethernet cards
does the machine contain, and what network are you connected to?
It might well be that you are not hitting the stalls yet, because you
are not pushing harder than 10MB/s over NFS.
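The 10MB/s figure is essentially wire speed for 100Mbit Ethernet, as a quick check shows:

```shell
# 100 Mbit/s divided by 8 bits per byte = 12 MB/s raw, and protocol
# overhead (IP/UDP/NFS) eats some of that, so ~10 MB/s is the
# practical ceiling for NFS over 100Mbit:
echo $((100 / 8))
```

So the network, not the filesystem, is the bottleneck until the gigabit card goes in.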

Cheers
Seth
--
Seth
Every program has two purposes: one for which
it was written and another for which it wasn't.
I use the last kind.