I have added minimal hdparm -t results above. Do you have any benchmarking suggestions?
– c card, Dec 15 '11 at 16:26

Common disk I/O benchmarks are bonnie++, iozone, and fio. Make sure the size of your test file(s) is at least 2×RAM, so the page cache can't serve the reads and inflate the numbers.
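A hedged sketch of what that looks like in practice — the target directory and fio job parameters below are assumptions for illustration, not from this thread:

```shell
# Size the test file at 2x RAM; MemTotal in /proc/meminfo is in kB.
ram_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
size_mb=$(( ram_kb * 2 / 1024 ))
echo "test file size: ${size_mb}M"

# Hypothetical fio runs against the disk under test (path is an assumption);
# --direct=1 bypasses the page cache as an extra safeguard:
#   fio --name=seqread  --rw=read     --bs=1M --size=${size_mb}M \
#       --direct=1 --directory=/mnt/target
#   fio --name=randread --rw=randread --bs=4k --size=${size_mb}M \
#       --direct=1 --directory=/mnt/target
```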
– janneb, Dec 16 '11 at 6:12

After running hdparm and bonnie++, the results showed the disks were the bottleneck, as you suggested. I upgraded the source disk to a solid-state drive and the destination to two disks in RAID 0, and throughput rose to about 6 GB/m -- thanks!
– c card, Dec 19 '11 at 14:21

In my experience iSCSI is the lowest-overhead of the bunch, and jumbo frames do end up mattering. I have seen iSCSI saturate a GigE connection using the LIO-Target iSCSI framework with a ramdisk as the target; that thing flew. The older version of the Linux iSCSI stack had some performance issues and couldn't deliver full-bore throughput even from a ramdisk. I'm not sure what FreeNAS is running these days; the LIO-Target stack is fairly recent.
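If you go the jumbo-frame route, a minimal sketch of setting and verifying the MTU might look like this — the interface name and target address are placeholders, and every NIC and switch port on the path must support the larger MTU or frames get dropped:

```shell
# Raise the MTU on the storage-facing interface (needs root; "eth0" is
# a hypothetical interface name):
#   ip link set dev eth0 mtu 9000

# Verify end-to-end with a don't-fragment ping. The payload must leave
# room for the IP (20 byte) and ICMP (8 byte) headers:
mtu=9000
payload=$(( mtu - 28 ))
echo "ping -M do -s $payload <target>"
```

If the ping fails with "message too long", something on the path is still at the default 1500-byte MTU.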

One of the bigger limits on such throughput is the storage backend itself. As I mentioned, I got the above speed with a ramdisk (the server had 32GB of RAM, so it was worth trying). When I ran the same test with storage striped across 48 disks, I could still saturate GigE during the sequential tests, but the random I/O tests came in well below that; in the 65-80 MB/s range, as I recall.