Replies

Besides decreasing serialization delay, there's also a decrease in the delay of sending multiple packets that are part of a burst. The first bit likely won't arrive sooner, but subsequent ones likely will.
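A quick back-of-the-envelope sketch of this (my numbers, not from the thread): serialization delay for a single 1500-byte frame, and for the last bit of a 10-frame burst, at FastE vs. GigE rates.

```python
# Serialization delay: time to clock a frame's bits onto the wire.
# Frame size and burst length are illustrative assumptions.

FRAME_BITS = 1500 * 8  # one 1500-byte Ethernet frame

def serialization_delay_us(bits, rate_bps):
    """Microseconds to serialize `bits` at `rate_bps`."""
    return bits / rate_bps * 1e6

for rate_bps, name in [(100e6, "FastE"), (1e9, "GigE")]:
    per_frame = serialization_delay_us(FRAME_BITS, rate_bps)
    burst_10 = per_frame * 10  # when the last bit of a 10-frame burst finishes
    print(f"{name}: {per_frame:.0f} us/frame, {burst_10:.0f} us for a 10-frame burst")
```

At 100 Mb/s each frame takes about 120 us on the wire; at 1 Gb/s it's about 12 us, so the tail of a burst arrives roughly an order of magnitude sooner even though the first bit doesn't.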

Another way of looking at this: you write there's no network congestion, but how do you define network congestion? To me, any time there's more than one packet enqueued, you have network congestion. Then it's not a question of whether you have congestion, but how much. ([edit] More bandwidth does tend to reduce this congestion.)

Yes, many do, but that doesn't mean you don't have transient congestion or that there's no advantage to having additional bandwidth.

Simple example: opening/loading a huge spreadsheet or PowerPoint presentation from a server. Even if the link shows very low average utilization, the additional bandwidth decreases the time to transfer the file to the end host.
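To put rough numbers on that (a hypothetical 50 MB file, ideal conditions, ignoring protocol overhead): the file open completes much faster on the bigger pipe even though the link is "idle" either way on a 5-minute utilization graph.

```python
# Ideal-case transfer time for a large file at FastE vs. GigE line rate.
# File size is an assumption for illustration; real transfers add protocol
# and server overhead on top of this.

FILE_BYTES = 50 * 1024 * 1024  # hypothetical 50 MB spreadsheet/presentation

def transfer_seconds(size_bytes, rate_bps):
    """Seconds to move `size_bytes` at `rate_bps`, line rate only."""
    return size_bytes * 8 / rate_bps

print(f"FastE: {transfer_seconds(FILE_BYTES, 100e6):.1f} s")
print(f"GigE:  {transfer_seconds(FILE_BYTES, 1e9):.2f} s")
```

Roughly 4 seconds vs. under half a second; the user notices that, the utilization graph barely does.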

The advantage is clear. What's not clear is whether this advantage is worth the additional cost.

Conversely, I've worked with circuits showing 100% utilization for hours while performing backups. As long as the backup completed within its allocated time window, the fact that many would consider the link heavily congested didn't, alone, merit a bandwidth upgrade.

So, again, there's benefit to moving from FastE to GigE, but it's a slightly different question whether the benefit's value outweighs the cost.

Hopefully you'll see some improvement for larger data transfers; of course, this also assumes there's supporting bandwidth between the two hosts.

Something else you might notice if running XP hosts: XP adjusts its TCP receive window based on the NIC's connection bandwidth. If you have any long-distance, high-speed WAN connections, you might see a jump in WAN transfer performance too. Again, the increase isn't due to the increase in LAN bandwidth; it's because of XP's TCP stack adjustment.
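A sketch of why that matters on a long-fat WAN path: TCP throughput is capped at roughly window / RTT. The window sizes below are commonly cited XP defaults (about 17 KB for a 100 Mb/s NIC, about 64 KB for a gigabit NIC), and the RTT is an assumed cross-country figure; treat both as illustrative.

```python
# Window-limited TCP throughput: one window per round trip.
# Window sizes and RTT are illustrative assumptions, not measured values.

def max_throughput_mbps(window_bytes, rtt_s):
    """Upper bound on single-stream TCP throughput, in Mb/s."""
    return window_bytes * 8 / rtt_s / 1e6

RTT = 0.050  # assumed 50 ms round-trip time

print(f"17520-byte window: {max_throughput_mbps(17520, RTT):.1f} Mb/s")
print(f"65535-byte window: {max_throughput_mbps(65535, RTT):.1f} Mb/s")
```

So on a 50 ms path, the smaller window caps a single transfer near 2.8 Mb/s regardless of link speed, while the larger one allows around 10.5 Mb/s; that jump shows up even though nothing on the WAN itself changed.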