At the moment we're trying to decide whether to move our datacenter from the west coast to the east coast.

However, I am seeing some disturbing latency numbers from my west coast location to the east coast. Here's a sample result, retrieving a small .png logo file in Google Chrome and using the dev tools to see how long the request takes:

West coast to east coast:
215 ms latency, 46 ms transfer time, 261 ms total

West coast to west coast:
114 ms latency, 41 ms transfer time, 155 ms total

It makes sense that Corvallis, OR is geographically closer to my location in Berkeley, CA, so I expect the connection to be a bit faster... but I'm seeing an increase in latency of +100 ms when I perform the same test against the NYC server. That seems excessive to me, particularly since the time spent transferring the actual data only increased about 10%, yet the latency roughly doubled!

That feels... wrong... to me.

I found a few links here that were helpful (through Google no less!) ...

Any measurement across networks you do not control seems almost pointless. Too often in these types of network discussions we forget there is a temporal component associated with every packet. If you ran the test repeatedly, 24 x 7, and arrived at some conclusion, that is one thing. If you ran the test twice, then I suggest you run it some more. And to those advocating the use of ping as some measure of performance: don't. On every major network I ever worked on, we set ICMP traffic to the lowest priority. Ping means only one thing, and it ain't ;) about performance.
– dbasnett May 9 '10 at 11:03

From where I live, Jefferson City, MO, the times are similar.
– dbasnett May 9 '10 at 12:46


As a side note: light itself takes ~14 ms to travel from NY to SF in a straight line (considering fiber all the way).
– Shadok Jul 16 '12 at 8:54

Light in fiber travels with a velocity factor of about 0.67 (the reciprocal of the refractive index), roughly 201,000 km/s, so it's at least 20 ms.
– Zac67 Sep 25 '17 at 6:38

9 Answers

Speed of Light:
You are not going to beat the speed of light; as an interesting academic point, this link works out Stanford to Boston at ~40 ms best possible time. When that person did the calculation, he concluded the internet operates at roughly "within a factor of two of the speed of light", so expect about ~85 ms of transfer time.
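As a rough sanity check on those figures, the propagation delay is simply distance divided by the speed of light in fiber. Here is a back-of-the-envelope sketch (the ~4,300 km path length and 0.67 velocity factor are assumptions for illustration, not measured values):

```
# Back-of-the-envelope propagation delay: distance / (speed of light in fiber)
C_VACUUM_KM_S = 299_792          # speed of light in vacuum, km/s
VELOCITY_FACTOR = 0.67           # typical for optical fiber (assumption)
fiber_speed = C_VACUUM_KM_S * VELOCITY_FACTOR   # ~200,000 km/s

path_km = 4_300                  # assumed Stanford-to-Boston fiber path length
one_way_ms = path_km / fiber_speed * 1000
round_trip_ms = 2 * one_way_ms

print(f"one way:    {one_way_ms:.1f} ms")     # ~21 ms
print(f"round trip: {round_trip_ms:.1f} ms")  # ~43 ms, close to the ~40 ms figure above
```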

TCP Window Size:
If you are having transfer speed issues, you may need to increase the TCP receive window size. You might also need to enable window scaling if this is a high-bandwidth, high-latency connection (called a "Long Fat Pipe"). So if you are transferring a large file, you need a receive window big enough to fill the pipe without having to wait for window updates. I went into some detail on how to calculate that in my answer Tuning an Elephant.
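To make the "fill the pipe" point concrete, here is a minimal bandwidth-delay product calculation (the 100 Mbit/s bandwidth and 85 ms RTT are assumed example values, not measurements of your link):

```
# Bandwidth-delay product: how much data must be "in flight" to keep the pipe full.
bandwidth_bits_per_s = 100_000_000   # assumed 100 Mbit/s link
rtt_s = 0.085                        # assumed 85 ms coast-to-coast round trip

bdp_bytes = bandwidth_bits_per_s * rtt_s / 8
print(f"Required receive window: {bdp_bytes / 1024:.0f} KiB")  # ~1038 KiB

# The classic (unscaled) TCP window field maxes out at 64 KiB, so window
# scaling (RFC 1323) is needed to keep a long fat pipe like this one busy.
```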

"First, even though most clients are served by a geographically nearby CDN node, a sizeable fraction of clients experience latencies several tens of milliseconds higher than other clients in the same region. Second, we find that queueing delays often override the benefits of a client interacting with a nearby server."

BGP Peerings:
Also, if you start to study BGP (the core internet routing protocol) and how ISPs choose peerings, you will find it is often more about finances and politics, so you might not always get the 'best' route to certain geographic locations, depending on your ISP. You can look at how your IP is connected to other ISPs (Autonomous Systems) using a looking glass router. You can also use a special IP-to-ASN whois service.
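For example, you can script the IP-to-ASN lookup by talking to a whois server directly on port 43. This is only a sketch: the Team Cymru whois.cymru.com service and its plain-IP query format are assumptions about a third-party service, so check its documentation before relying on it, and the IP below is just a placeholder.

```
import socket

def whois_asn(ip, server="whois.cymru.com", port=43):
    """Ask an IP-to-ASN whois service which Autonomous System announces `ip`."""
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall((ip + "\r\n").encode())
        response = b""
        while chunk := sock.recv(4096):
            response += chunk
    return response.decode(errors="replace")

print(whois_asn("203.0.113.1"))  # placeholder IP; expect "AS | IP | AS Name"-style output
```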

In a vacuum a photon can travel the equator in roughly 134 ms. The same photon in glass would take around 200 ms. A 3,000 mile piece of fiber has 24 ms of delay without any devices.
– dbasnett May 9 '10 at 12:33

Measure with ICMP first if at all possible. ICMP tests typically use a very small payload by default, do not use a three-way handshake, and do not have to interact with another application up the stack like HTTP does. Whatever the case, it is of the utmost importance that HTTP results do not get mixed up with ICMP results. They are apples and oranges.
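One way to keep the two kinds of measurement separate is to collect them with separate tools and report them side by side. The following is only a sketch (it assumes a Unix-like `ping` on the PATH, and example.com is a placeholder host); it is illustrative, not a benchmark:

```
import re
import subprocess
import time
import urllib.request

host = "example.com"          # placeholder target; substitute your own server
url = f"http://{host}/favicon.ico"

# ICMP: shell out to ping and pull the average RTT out of its summary line.
ping_out = subprocess.run(["ping", "-c", "5", host],
                          capture_output=True, text=True).stdout
match = re.search(r"= [\d.]+/([\d.]+)/", ping_out)   # min/avg/max/... summary
icmp_avg_ms = float(match.group(1)) if match else None

# HTTP: time a full GET, which includes the TCP handshake and server work.
start = time.perf_counter()
urllib.request.urlopen(url, timeout=10).read()
http_ms = (time.perf_counter() - start) * 1000

print(f"ICMP avg RTT: {icmp_avg_ms} ms")
print(f"HTTP GET:     {http_ms:.0f} ms  (apples, meet oranges)")
```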

Going by the answer of Rich Adams and using the site that he recommended, you can see that on AT&T's backbone, it takes 72 ms for ICMP traffic to move between their SF and NY endpoints. That is a fair number to go by, but you must keep in mind that this is on a network that is completely controlled by AT&T. It does not take into account the transition to your home or office network.

If you do a ping against careers.stackoverflow.com from your source network, you should see something not too far off of 72 ms (maybe +/- 20 ms). If that is the case, then you can probably assume that the network path between the two of you is okay and running within normal ranges. If not, don't panic and measure from a few other places. It could be your ISP.

Assuming that passed, your next step is to tackle the application layer and determine whether there is anything wrong with the additional overhead you are seeing in your HTTP requests. This can vary from app to app due to hardware, OS, and application stack, but since you have roughly identical equipment on both the East and West coasts, you could have East coast users hit the West coast servers and West coast users hit the East coast. If both sites are configured properly, I would expect all the numbers to be more or less equal, demonstrating that what you are seeing is pretty much par for the course.
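One simple way to run that cross-coast comparison is to time the same small request repeatedly against each site and compare the medians. A sketch (the two URLs are placeholders for your actual east- and west-coast endpoints):

```
import statistics
import time
import urllib.request

# Placeholder endpoints; point these at the same small asset on each datacenter.
endpoints = {
    "east": "http://east.example.com/logo.png",
    "west": "http://west.example.com/logo.png",
}

def time_request(url):
    start = time.perf_counter()
    urllib.request.urlopen(url, timeout=10).read()
    return (time.perf_counter() - start) * 1000  # milliseconds

for name, url in endpoints.items():
    samples = [time_request(url) for _ in range(20)]
    print(f"{name}: median {statistics.median(samples):.0f} ms, "
          f"min {min(samples):.0f} ms, max {max(samples):.0f} ms")
```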

If those HTTP times have a wide variance, I would not be surprised if there was a configuration issue on the slower performing site.

Now, once you are at this point, you can attempt some more aggressive optimization on the app side to see if those numbers can be reduced at all. For example, if you are using IIS 7, are you taking advantage of its caching capabilities, etc.? Maybe you could win something there, maybe not. When it comes to tweaking low-level items such as TCP windows, I am very skeptical that it would have much of an impact for something like Stack Overflow. But hey - you won't know until you try it and measure.

Several of the answers here are using ping and traceroute for their explanations. These tools have their place, but they are not reliable for network performance measurement.

In particular, (at least some) Juniper routers send processing of ICMP events to the control plane of the router. This is MUCH slower than the forwarding plane, especially in a backbone router.

There are other circumstances where the ICMP response can be much slower than a router's actual forwarding performance. For instance, imagine an all-software router (no specialized forwarding hardware) that is at 99% of CPU capacity, but it is still moving traffic fine. Do you want it to spend a lot of cycles processing traceroute responses, or forwarding traffic? So processing the response is a super low priority.

As a result, ping/traceroute give you reasonable upper bounds - things are going at least that fast - but they don't really tell you how fast real traffic is going.

In any event -

Here's an example traceroute from the University of Michigan (central US) to Stanford (west coast US). (It happens to go by way of Washington, DC (east coast US), which is 500 miles in the "wrong" direction.)

In particular, note the time difference between the traceroute results for the wash router and the atla router (hops 7 & 8). The network path goes first to wash and then to atla. wash takes 50-100 ms to respond, atla takes about 28 ms. Clearly atla is farther away, but its traceroute results suggest that it's closer.

To add some specific relevance to the original question... As you can see, I had an 83 ms round-trip ping time to Stanford, so we know the network can go at least that fast.

Note that the research & education network path that I took on this traceroute is likely to be faster than a commodity internet path. R&E networks generally overprovision their connections, which makes buffering in each router unlikely. Also, note the long physical path, longer than coast-to-coast, although clearly representative of real traffic.
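If you want to look for the wash-versus-atla effect on your own path, run traceroute and compare the per-hop round-trip times; an intermediate hop that answers slowly while later hops answer quickly usually indicates de-prioritized ICMP handling, not extra distance. A rough sketch (assumes a Unix-like `traceroute` on the PATH; the destination host is a placeholder):

```
import re
import subprocess

host = "example.edu"   # placeholder destination
out = subprocess.run(["traceroute", "-n", host],
                     capture_output=True, text=True).stdout

for line in out.splitlines()[1:]:          # skip the header line
    fields = line.split()
    hop = fields[0] if fields else "?"
    rtts = [float(ms) for ms in re.findall(r"([\d.]+) ms", line)]
    if rtts:
        print(f"hop {hop}: min {min(rtts):.1f} ms")
    else:
        print(f"hop {hop}: no reply (* * *)")

# A middle hop with a higher minimum than the final hop is almost always the
# router de-prioritizing its ICMP replies, not extra distance on the path.
```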

right, it is expected that the east coast datacenter would be more friendly to our European audiences -- you're seeing about +200ms time taken to traverse the width of the USA. Should only be ~80ms per the other answers though?
– Jeff Atwood Apr 30 '10 at 11:49

it looks like it is consistent at around 200 ms. I've hit refresh about 20-30 times now on both (not at the same time though), and the serverfault site looks like it hovers around 200 ms (give or take) more than the other one. I tried a traceroute, but it comes up with stars on everything, so perhaps our IT admins have blocked something.
– Lasse Vågsæther Karlsen Apr 30 '10 at 11:59

Everyone here has some really good points, and each is correct from their own point of view.

And it all comes down to this: there is no exact answer, because there are so many variables that any answer given can always be proven wrong just by changing one of a hundred of them.

Take the 72 ms NY-to-SF latency as an example: that is the PoP-to-PoP latency of a single carrier. It does not take into account any of the other great points some have made here about congestion, packet loss, quality of service, out-of-order packets, packet size, or network rerouting, even within the perfect world between PoP and PoP.

And then when you add in the last mile (generally many miles) from the PoP to your actual location within the two cities, where all of these variables become much more fluid, things start to escalate well beyond reasonable guess-ability!

As an example, I ran a test between New York City and SF over the course of a business day. I did this on a day when there were no major "incidents" occurring around the world that would cause a spike in traffic, so maybe it was not an average day in today's world! But nonetheless it was my test. I measured from one business location to another over this period, during the normal business hours of each coast.

At the same time, I monitored the circuit provider's numbers on the web.

The results were latency numbers between 88 and 100 ms from door to door of the business locations. This did not include any inter-office network latency.

The service provider network's latency ranged between 70 and 80 ms, meaning the last-mile latency could have ranged between 18 and 30 ms. I did not correlate the exact peaks and lows between the two environments.
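If you want to reproduce that kind of day-long measurement, the simplest approach is to sample the ping RTT on an interval and keep the raw samples to compare against the provider's published numbers afterwards. Here is a rough sketch (it assumes a Unix-like `ping` on the PATH; the host, interval, and output file are placeholders):

```
import csv
import re
import subprocess
import time
from datetime import datetime

host = "example.com"        # placeholder: the far-end business location
interval_s = 300            # one sample every 5 minutes

with open("latency_samples.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "avg_rtt_ms"])
    for _ in range(8 * 3600 // interval_s):          # roughly one business day
        out = subprocess.run(["ping", "-c", "10", host],
                             capture_output=True, text=True).stdout
        m = re.search(r"= [\d.]+/([\d.]+)/", out)    # min/avg/max summary line
        writer.writerow([datetime.now().isoformat(),
                         m.group(1) if m else ""])
        f.flush()
        time.sleep(interval_s)
```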