Why are file writes slower than the hdparm test shows?
Are there any kernel limits that should be tuned?

I have an Areca 1680 adapter with 16x1TB SAS disks, Scientific Linux 6.0.

EDIT

My bad, sorry: all units are in MB/s.

More on hardware:

2 Areca controllers in a dual quad-core machine, 16GB RAM.
The firmware for the SAS backplane and the Areca is the most recent one.
The disks are Seagate 7,200 rpm, 16x1TB, in two RAID boxes.
Each set of 8 disks is RAID 6, so 4 volumes in total, with LBA=64.

Two volumes are grouped into a striped LVM and formatted as ext4.

The stripe size is 128.
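
For reference, a striped LVM like this is typically built along these lines (device names below are placeholders, not the exact ones used):

    # /dev/sda and /dev/sdb stand in for the two Areca RAID-6 volumes
    pvcreate /dev/sda /dev/sdb
    vgcreate vg_data /dev/sda /dev/sdb
    # -i 2: stripe across both volumes, -I 128: 128KB stripe size
    lvcreate -i 2 -I 128 -l 100%FREE -n lv_data vg_data
    mkfs.ext4 /dev/vg_data/lv_data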

When I format the volume, I can see with iotop that it writes at 400MB/s.

iostat also shows that both LVM member drives are writing at 450MB/s.
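
For anyone reproducing this, per-device and per-process throughput can be watched while the test runs (standard sysstat/iotop invocations, shown only as an example):

    # Extended per-device statistics in MB/s, refreshed every 2 seconds
    iostat -m -x 2
    # Only show processes that are actually doing I/O (run as root)
    iotop -o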

FINALLY WRITING at 1600MB/s

One of the RAIDs was degrading performance due to a bad disk.
Strangely, that disk in JBOD mode gives 100MB/s with hdparm, just like the others.
After heavy I/O it was reporting Write Errors in the log files (now it has 10 of them).
The RAID still was not failing or degrading.
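
A quick way to confirm a suspect drive is to check the kernel log and SMART data directly; the device name is a placeholder, and disks behind an Areca controller may need smartctl's -d areca,N option:

    # Kernel-reported I/O errors
    dmesg | grep -i -E 'error|fail' | tail
    # SMART health and error counters for the suspect disk
    smartctl -a /dev/sdX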

5 Answers

Check if your FS is aligned with the RAID dimensions. I'm getting 320MB/s on a RAID-6 array with 8 x 2TB SATA drives on XFS, and I think it is limited by the 3Gb/s SAS channel rather than by RAID-6 performance. You can get some ideas on alignment from this thread.
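
As a rough sketch, assuming a 128KB chunk and 6 data disks per RAID-6 volume (8 disks minus 2 parity), the alignment parameters would look like this; the device path is a placeholder:

    # ext4: stride = 128KB / 4KB block = 32, stripe-width = 32 x 6 = 192
    mkfs.ext4 -E stride=32,stripe-width=192 /dev/vg_data/lv_data
    # XFS equivalent: stripe unit and stripe width
    mkfs.xfs -d su=128k,sw=6 /dev/vg_data/lv_data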

450Mb/s = 56MB/s, which is about on par with what you're seeing in real life. They're both giving you the same reading (but one is in bits, one is in bytes). You need to divide 450 by 8 to get the same measure for both.

(In your question you've got the capitalisation the other way around; I can only hope/assume that this is a typo, because if you reverse the capitalisation you get an almost perfect match.)

Unfortunately™, you're deeply wrong. There's no reason for hdparm to speak in terms of bits at all, so it doesn't. It reports in mebibytes per second, and you can check it for yourself.
– poige Apr 3 '11 at 0:07


@Poige - you'll have to forgive me then; I was simply working off the fact that the OP talked in Mb/s and then MB/s, between which there is a large difference.
– Mark Henderson♦ Apr 3 '11 at 4:56

hdparm does not test write performance; it's read-only. Moreover, it actually tests block I/O read performance, whereas the way you invoke dd makes it test write and filesystem performance as well (and RAID-5/6 writes are noticeably slower than reads by design). If your FS is ext3, for example, you can easily get poor performance by not formatting it properly (not taking the full stripe size of your RAID into account).
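
To compare like with like, run a raw read test and a write test that forces data to disk; the mount point and sizes below are placeholders:

    # Raw sequential read from the block device (what hdparm -t measures)
    hdparm -t /dev/sdX
    # Write 8GB through the filesystem, flushing data before reporting
    dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=8192 conv=fdatasync
    # Or bypass the page cache entirely
    dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=8192 oflag=direct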

Also, quite a large number of people tend to use rather small stripe sizes, which leads to suboptimal disk I/O. What stripe size did you choose when creating this RAID?

Another question is how dd's numbers differ as you vary the bs parameter. Have you tried using the full stripe write size for it?
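
For example, assuming a 128KB chunk and 6 data disks per RAID-6 volume, a full-stripe write is 128KB x 6 = 768KB, so something like this would show the difference (the path and counts are placeholders; both runs write about 3GB):

    # Full-stripe-sized writes
    dd if=/dev/zero of=/mnt/test/bigfile bs=768k count=4096 conv=fdatasync
    # Small writes for comparison
    dd if=/dev/zero of=/mnt/test/bigfile bs=4k count=786432 conv=fdatasync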