9 Answers

80 MB/second is actually pretty good! That's about 640 Mbps, which is pretty darn close to the gigabit capacity of the NIC. If you take the TCP/IP overhead and disk speed into consideration, you're probably at your maximum speed.

Correct, the rule of thumb when using TCP is about 20% overhead.
– pauska Jun 17 '09 at 14:21


As he talks about 1 GiB and 80 MiB, I guess it's 80 Mbps and not 80 MB/s.
– radius Jun 17 '09 at 14:22


Though he does say "real megabytes" in his original post. Even if it is 80 Mbps, he could have slow disk performance as a bottleneck.
– Russ Warren Jun 17 '09 at 14:27


I assume "real megabytes" refers to mebibytes, not the fake megabytes that disk manufacturers like (1024^2 bytes, rather than 1000^2).
– David Pashley Jun 17 '09 at 14:57


I see you corrected 1 GiB to 1 Gbps; for network speeds you shouldn't use mebibytes but megabits. Anyway, your 80 MiB/s works out to about 671 Mbps. Now we need to know how you measured that: if it's the speed at the Ethernet layer it's poor for a gigabit card, but if it's the speed at the application level it's quite good, though you could probably do a little better. Either way, it depends on how you measured it.
– radius Jun 17 '09 at 15:35

Each connection we make requires an ephemeral port, and thus a file descriptor, and by default this is limited to 1024. To avoid the "Too many open files" problem you'll need to modify the ulimit for your shell. This can be changed in /etc/security/limits.conf, but that requires a logout/login. For now you can just sudo and modify the current shell (su back to your non-privileged user after calling ulimit if you don't want to run as root):

ulimit -n 999999
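
To make the higher limit persist across logins, entries along these lines in /etc/security/limits.conf should work (the wildcard and the value here are just illustrative; scope it to a specific user if you prefer):

# /etc/security/limits.conf — raise the open-file limit for all users
*    soft    nofile    999999
*    hard    nofile    999999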

Another thing you can try that may help increase TCP throughput is to increase the size of the interface queue. To do this, do the following:
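
(A minimal sketch, assuming a Linux box and an interface named eth0; the queue length is just an illustrative value:)

# raise the transmit queue length on the interface (the default is often 1000)
ifconfig eth0 txqueuelen 5000
# or, equivalently, with iproute2
ip link set dev eth0 txqueuelen 5000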

Gigabit Ethernet is just over 1 billion bits per second. With 8b/10b encoding this gives you a maximum of around 100 MB per second. A 32-bit PCI bus should be able to put 133 MB/s through and you should be able to saturate it (I can demonstrate saturation of a PCI bus with a Fibre Channel card and get a figure close to the theoretical bandwidth of the bus), so it is unlikely to be the cause of the bottleneck unless there is other bus traffic.
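
For reference, the rough arithmetic behind those two figures (assuming 8b/10b coding, i.e. 80% efficiency, and a 32-bit bus clocked at 33 MHz):

# 1 Gbps line rate at 80% coding efficiency, expressed in MB/s
echo "1000 * 0.8 / 8" | bc    # 100 MB/s of payload
# 32-bit PCI bus at 33 MHz, expressed in MB/s
echo "32 * 33 / 8" | bc       # 132 MB/s (usually quoted as 133 MB/s)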

The bottleneck is probably somewhere else unless you have another card using bandwidth on the bus.

Wikipedia says that twisted-pair-based 1000Base-T doesn't use 8b/10b. Is that right?
– SaveTheRbtz Jun 18 '09 at 2:25

Not sure. I was under the impression it did. Maybe I'm thinking of Fibre Channel.
– ConcernedOfTunbridgeWells Jun 18 '09 at 10:06

1000Base-T has a line rate of 125M symbols/s, the same as 100MbE. It uses all 4 pairs however, and 5 signal levels (simplified: 2 level signals per direction and a neutral), to produce 1 Gbps. The first 8 bits are encoded to 12 transmission bits to prime the DC-balance algorithm, but otherwise transmissions occur at line rate. 1000Base-SX (and other fiber variants) do use 8b/10b line coding and operate at a line rate of 1.25Gbps.
– Chris S Nov 17 '11 at 22:32

Disk subsystem: It takes at least 3-4 hard drives in a RAID array of some sort to be able to hit GigE speeds. This is true on both the sending and receiving ends.

CPU: GigE can use a lot more CPU than you would think. Given that it's in a 33 MHz PCI slot, I'm going to go out on a limb here and say that this system is fairly old and may have a slower CPU.

TCP/IP overhead: Some of the bits sent over the wire are not data payload but protocol overhead. That said, I have had a system that consistently hit and sustained 115 MB/s with a single GigE link.

PCI Bus: Is the NIC the only thing on that PCI bus, or is it being shared with another device?

Other factors: There are too many other factors to mention them all, but some of the biggest would be what other disk I/O activity is happening. Is it a mix of reads and writes, lots of small I/O requests, etc.?
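
One way to rule the disks in or out is a memory-to-memory network test with something like iperf (hostname is hypothetical; iperf must be installed on both machines):

# on the receiving machine
iperf -s
# on the sending machine, replacing receiver-host with the other machine's address
iperf -c receiver-host -t 30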

While writes are more expensive than reads in terms of performance, there's still no way a single hard drive is going to keep up with a gigabit network connection. Even a 15k rpm SAS drive isn't capable of sustaining gigabit speeds across the entire surface for reads or writes.
– 3dinfluence Jul 8 '09 at 18:18


Striping two SATA drives should be able to fill a gigabit link, even if done in software.
– Roy Oct 20 '09 at 9:28

How sure are you that it's the card that is the bottleneck? It might simply be the best speed it can negotiate with the device on the other end, so it is stuck waiting. The other device might be stuck running at 10/100 speeds, so 80 would be about right with a bit of overhead.

Erm, no. If the other end was running at 10 Mbps, you'd be unable to push more than about 1.2 MB/s (slightly less, in fact) and if it was running 100 Mbps, you'd be looking at a peak somewhere in the 11.9 MB/s range. "Gig ethernet" is not "gigabyte", it's "gigabit" (and it's a "base 10" gig, at that, so 10^9, not 2^30).
– Vatine Nov 15 '10 at 13:25

In my experience 80 MiB/s is pretty good. I've not seen much higher speeds no matter what combination of NICs and switches is being used. I remember 100 Mbps showing much the same behaviour: 70-80% utilization was pretty much all you could ask for, though I see gigabit equipment running above 90% in 100 Mbps mode these days.

By comparison, my very first gigabit configuration at home, based on SMC switches and integrated Broadcom NICs, could barely manage 400 Mbps. Now, years later, using Netgear managed switches along with Intel and Marlin NICs, I usually find myself in the range of 70-80 MiB/s sustained transfer.