Strange HTTP Performance Variations Between OSes

[2013-12-19: I let latentsee.com lapse, but you can still install LatentSee on your own webserver]

In a recent talk at
VelocityConf, John Rauser explained the effect of TCP Slow Start and Congestion
Control on web performance. He pointed out that RFC 1122 states:

Recent work by Jacobson on Internet congestion and
TCP retransmission stability has produced a transmission algorithm
combining “slow start” with “congestion avoidance”. A TCP MUST implement this
algorithm.

While examining the impact of these algorithms with my new HTTP performance testing tool
(LatentSee), I noticed that the charts
generated on my Mac and Windows machines didn't seem to match the theory. With a typical
initial congestion window of three segments, we would expect to receive at most about
4,500 bytes in the first round trip.
Instead I am seeing up to 67KB on the Mac and around 35KB on Windows 7.
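To see why those numbers are surprising, it helps to estimate how many round trips a transfer of a given size should take under an idealized slow-start model (congestion window doubling every round trip, ignoring delayed ACKs, receive-window limits, and loss). This is a rough sketch, not LatentSee itself; the 1460-byte MSS and the window sizes are assumptions:

```python
def rtts_to_deliver(total_bytes, mss=1460, initcwnd=3):
    """Round trips needed to deliver total_bytes under idealized slow start:
    the congestion window starts at initcwnd segments and doubles each RTT.
    Ignores delayed ACKs, receive-window limits, and loss."""
    cwnd, sent, rtts = initcwnd, 0, 0
    while sent < total_bytes:
        sent += cwnd * mss   # bytes delivered in this round trip
        cwnd *= 2            # slow start: the window doubles each RTT
        rtts += 1
    return rtts

print(rtts_to_deliver(4000))                      # → 1 (fits the 3-segment window)
print(rtts_to_deliver(67 * 1024))                 # → 5 round trips expected
print(rtts_to_deliver(67 * 1024, initcwnd=47))    # → 1 round trip
```

On this model, fetching 67KB with no extra round-trip delay implies an initial window of roughly 47 segments (67KB / 1460 bytes ≈ 47), far larger than the three-segment window the classic slow-start description suggests.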

I’m very curious about these differences. It takes the same time for my Mac to retrieve any file up to 67KB in size from Slicehost. Have they tuned their TCP stack differently? Why then does Ubuntu behave similarly against both Slicehost and Brightbox? Is everyone conforming to the RFCs?