It sounds obvious that a faster connection lowers latency... but I wonder: I am working remotely on a host on the other side of the world. Light can only travel so fast (about one foot per nanosecond), and we both have broadband connections in excess of 1,000 kbps upload and 10,000 kbps download:

Will a higher-bandwidth connection lower the time it takes to ping? Since a ping is very little data, how would a faster connection help? Currently a ping takes 450 ms; is there any way I can improve it?

Random side note: if you filled an Airbus with 3 TB hard drives and flew it across the Atlantic, the throughput would probably be in the tens of gigabits per second, but the latency would be hours.
–
sam, Aug 9 '11 at 11:03

450 ms smells like satellite to start with. I am going nearly halfway around the world (Chicago -> Berlin) and I have about 125 ms. If you scale that linearly, 450 ms would be nearly all the way around the world. Something is odd here.
–
TomTom, Aug 30 '12 at 20:16

@TomTom - the OP is from Australia according to his profile, a country notorious for pathetic latency even within our own borders. My bet is that the majority of that latency occurs before his packets even leave the country. If he's with someone like TPG, it probably happens before it leaves his ISP.
–
Mark Henderson♦, Aug 30 '12 at 21:41

12 Answers

First, bandwidth is not the same as latency. A faster connection won't necessarily reduce your latency. 450 ms does seem a little slow, but not that far off if you are going halfway around the world. As a frame of reference, a high-speed, low-latency link takes ~70-80 ms to cross the US. You might be able to eke out a bit less latency by changing your provider, assuming they have a more optimal peering path, but I can't promise anything.

In other words, no. The only way to improve response times would be using a different provider which may have a better path. Is that correct?
–
user55029, Jul 12 '11 at 2:34

We would need to see traceroutes (in both directions) to comment more. Knowing the first-hop latency (aka the last mile) can also help in determining whether another provider will actually help or not.
–
Wim Kerkhoff, Jul 12 '11 at 4:09

Traceroutes are deliberately slowed over the Internet, it seems, and take ridiculously long...
–
user55029, Jul 18 '11 at 1:46

Ah, no, traceroutes work fine from all the locations where I have servers, sorry.
–
TomTom, Aug 30 '12 at 20:14

I have a bad connection, and if I limit my bandwidth using NetLimiter, I get a higher ping in tests... For example, if I cut 50% of the bandwidth, my ping increases drastically.
–
Bluedayz, Dec 28 '14 at 10:21

A "faster" connection (as you're referring to it) doesn't lower latency. A "faster" connection allows more data to be placed on the wire in a given period of time.

Bandwidth is a measure of capacity.

Latency is a measure of delay.

EDIT

Here's an example of the difference between bandwidth and latency. Imagine two internet connections, one 10 Mbps and the other 1 Mbps, both with a latency of 50 ms. Now imagine that I'm sending keystrokes to a remote terminal on the other end of those connections. For the sake of simplicity, let's say that each keystroke consumes 1 Mbps of bandwidth.

On the 10 Mbps connection I'm able to send the letters A, B, C, D, E, F, G, H, I, J at the same time, so they all arrive at the remote terminal 50 ms later and are echoed on the screen... at the same time. On the 1 Mbps connection, each keystroke is sent independently, because each keystroke consumes all of the available bandwidth. So the letter A is sent, then 50 ms later it's received by the remote terminal and echoed on the screen, followed by the letter B 50 ms after that, then the letter C... all the way to the letter J. It would take 500 ms for all ten letters to be received on the remote terminal and echoed to the screen.

Is the 10 Mbps connection faster? No, it isn't. Its latency is 50 ms, just like the 1 Mbps connection. It appears faster because it has higher throughput (bandwidth) and more data can be placed on the wire at one time. That's the difference between bandwidth (capacity) and latency (delay). In the strict sense, a "faster" connection (in the way you're referring to it) will not reduce latency.

Connections are measured by two primary factors: latency and bandwidth. There is no such thing as "high speed" or "faster"; those are marketing doublespeak, meaningless in the context of professionally managed connections.
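To see why bandwidth barely moves a ping time, here is a minimal sketch (the numbers are assumptions for illustration, not measurements): one-way delivery time is roughly serialization delay (size divided by bandwidth) plus propagation latency.

```python
# Minimal model of one-way packet delivery time:
# delivery time ~= serialization delay (size / bandwidth) + propagation latency.

def one_way_time(size_bytes: int, bandwidth_bps: float, latency_s: float) -> float:
    """Seconds until the whole packet has arrived at the far end."""
    serialization = size_bytes * 8 / bandwidth_bps
    return serialization + latency_s

# A 32-byte ping payload over links with 50 ms of propagation latency:
t_1mbps = one_way_time(32, 1_000_000, 0.050)    # 0.050256 s
t_10mbps = one_way_time(32, 10_000_000, 0.050)  # 0.0500256 s
# Tenfold more bandwidth saves ~0.23 ms; the 50 ms of latency dominates.
```

For a packet this small, serialization is a rounding error next to propagation, which is why a "faster" plan leaves ping essentially unchanged.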

OK, same answer as the others. Could you answer, though: will a higher-bandwidth connection lower the ping response time? If not, is there any way I can get a better response time?
–
user55029, Jul 12 '11 at 2:32

Bandwidth and latency are independent. Latency usually depends on three things: connection medium (wireless is slow, modems are slow; cable modems are fast, as are T1s and fibre), distance (signals travel near the speed of light, which is slower than you'd think), and congestion (waiting your turn adds time). The first factor is the only one you'll really have control over.
–
Chris S, Jul 12 '11 at 16:10

True, but it is easier to say "high speed connection" than "we move more packets per second than the other guys!"
–
JYelton, Jul 12 '11 at 18:14

Understood - but that's not quite right - see what silver fire has said. Take the case of a 32-byte ping: you are saying that so long as the bandwidth at each peer is greater than 32 bytes per second and neither peer is doing any other communication, it would only take the latency time to reach the other peer. Actually it would take 1 second + latency, as the peer has to download the 32-byte ping, whereas if the connections had a bandwidth of 320 bytes per second it would take 0.1 second + latency. Admittedly, once you get to connections in excess of 1 MB/s, this download time is small. But silverfire is right.
–
user55029, Jul 14 '11 at 2:42

The total transmission time is not the same as latency. A ping test tries to approximate latency by sending a very small transmission. The fact that bandwidth, especially in extreme examples, affects total transmission time is not lost on me or on the people who decided a standard ping test would be 32 bytes. Latency is the time from the start of transmission to the start of reception at the other end. Hyperbole makes a good example: if you were to test a connection with a 100 MB file and it took 3 hours, your connection probably doesn't have a 3-hour latency.
–
Chris S, Jul 14 '11 at 12:39

Transmission delay is the time to push the packet bits on the wire. Propagation delay is related to the medium and is the time to reach the destination. Processing delay is related to the receiving and sending machines/routers.
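Those components can be put into numbers; the packet size, link speed, distance, and light-in-fibre constant below are assumptions chosen for illustration.

```python
# Illustrative calculation of the delay components described above.

SPEED_IN_FIBER_MPS = 2.0e8  # light in fibre travels at roughly 2/3 c (assumption)

def transmission_delay(packet_bits: int, link_bps: float) -> float:
    """Time to push all of the packet's bits onto the wire."""
    return packet_bits / link_bps

def propagation_delay(distance_m: float) -> float:
    """Time for a bit to travel the length of the physical medium."""
    return distance_m / SPEED_IN_FIBER_MPS

# A 1500-byte packet on a 10 Mbps link across 1,000 km of fibre:
tx = transmission_delay(1500 * 8, 10_000_000)  # 0.0012 s
prop = propagation_delay(1_000_000)            # 0.005 s
```

On long paths the propagation term grows with distance while the transmission term stays fixed, which is why distance, not bandwidth, dominates intercontinental ping times.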

Often it will, yes. But the two aren't the same thing and aren't directly linked. It just happens that typically connections with more bandwidth also have lower latency due to the technology being used.

But it's not always true. Consider a fast method of transferring massive amounts of data: filling up 12 2 TB hard drives with data and sending them by courier. The data transfer rate is VERY high (over 2,000 Mbps, given that you can send 24 TB in 24 hours). The latency is also very high (24 hours). Dialup has a much lower latency than that, but it'd take years to send 24 TB over dialup.
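The courier arithmetic checks out; here is a quick sketch using the drive count and transit time from the example above:

```python
# Sanity check on the courier example: 12 x 2 TB drives delivered in 24 hours.
total_bytes = 12 * 2 * 10**12      # 24 TB of data
transit_s = 24 * 3600              # one day in transit
throughput_bps = total_bytes * 8 / transit_s
print(f"throughput: {throughput_bps / 1e9:.2f} Gbps, latency: {transit_s} s")
# Roughly 2.2 Gbps of sustained throughput -- paired with 24 hours of latency.
```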

It's not a good idea to directly equate the two. If you specifically need lower latency, you should ask about that specifically and not shop by bandwidth.

The only real solution for improving your latency is to reduce the number of hops between the two hosts in question.

If you are a big-enough corporate customer, you should be able to open a dialogue with your telecommunications providers on both ends about taking a shorter (possibly costlier) IP route between the two sites.

Number of hops doesn't necessarily mean anything. I would much rather use a path with 30 hops that transits purely through fibre than a 5-hop path that has a satellite connection in it.
–
Wim Kerkhoff, Jul 12 '11 at 4:05

You have done a lot of positing without gathering facts. Your best bet is to try to identify the source of high latency: where does it start? Then you can try to answer the question of: how do I fix it?

Run a traceroute, or better yet, mtr (mytraceroute). If you're on Windows, you can use winmtr. PingPlotter is also a good tool for this.

Find where your high latency starts, then work to fix it. Throwing more bandwidth at your problem isn't the answer.

Higher bandwidth will not help much, unless bulk data is drowning out interactive data. If both sides used fibre instead of xDSL/cable/wireless, that might shave 20-80 ms off your RTT.

Do a ping test using pingtest.net to determine the quality of each link. Latency is important, but /jitter/ can make a huge difference as well. I would much rather have a slower (3 Mbps) connection without jitter than a faster (e.g. 15 Mbps) connection with jitter.

For TCP connections (e.g. SSH, telnet etc), some TCP tuning can help.
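One concrete piece of TCP tuning is window sizing: on a high-latency path, the TCP window must cover the bandwidth-delay product (BDP), or throughput is capped regardless of link speed. A sketch using the asker's 450 ms RTT (the 10 Mbps link speed is an assumption):

```python
# Bandwidth-delay product: how much data must be "in flight" to fill the pipe.

def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bytes that must be unacknowledged at once to saturate the link."""
    return bandwidth_bps * rtt_s / 8

def window_limited_throughput_bps(window_bytes: int, rtt_s: float) -> float:
    """Max throughput when the send window, not the link, is the bottleneck."""
    return window_bytes * 8 / rtt_s

needed = bdp_bytes(10_000_000, 0.450)                  # ~562 kB in flight
capped = window_limited_throughput_bps(65_535, 0.450)  # classic 64 kB window
# Without window scaling, a 10 Mbps link at 450 ms RTT crawls at ~1.2 Mbps.
```

This is why enabling TCP window scaling (and raising buffer limits) helps bulk transfers on long paths, even though it does nothing for the ping time itself.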

You can also look at using a TCP accelerator; there are commercial ones, but pepsal can already make a difference.

There are many different answers to this question, and the correct answer (in my opinion) is "It depends".

It doesn't matter if you have a 1 Gbit/s connection if it's saturated. TCP (and other protocols) rely on transmission checks which, in 99% of cases, are not prioritized correctly with QoS or similar technologies.

Symmetrical lines (SDSL, fibre, etc.) are generally better suited for low-latency operations, as they do not share RX with TX (which means that TCP ACKs, ICMP replies, etc. won't get held up if you are downloading at full blast). You still need QoS to guarantee traffic for sensitive applications (VoIP in particular).

Surprisingly, the number (and quality) of hits on Google about prioritizing TCP ACKs is quite thin... talk to any networking expert, though, and they'll know why you need this.

Higher bandwidth means a packet takes less time to arrive in full before the next packet can be transmitted; the time the entire packet takes to download is a factor here as well, but in reality that only adds 10-20 ms in the worst cases.

Another possible cause of latency is wireless transmission. Almost every form of multi-access wireless takes a large toll on latency, whether it's normal home wireless or mobile wireless, as the wireless card must wait until everyone else has finished transmitting before it can send its own data. The more users transmitting on a wireless system, the worse both the speed AND the latency (again, not by much; it's mainly caused by waiting until it's clear to send).

The number one factor is the time a packet of data will spend being sorted at routers and other WAN infrastructure.

The theoretical minimum time a packet of data takes to travel halfway around the world is around 67 ms, and that's with the packet travelling at the speed of light.
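That floor can be computed directly; the distance and the light-in-fibre speed below are assumed round figures.

```python
# Physical lower bound on one-way delay to the far side of the world.
HALF_EQUATOR_M = 20_000_000   # roughly half of Earth's circumference
C_VACUUM_MPS = 3.0e8          # speed of light in a vacuum
C_FIBER_MPS = 2.0e8           # light in glass: roughly 2/3 c (assumption)

min_vacuum_s = HALF_EQUATOR_M / C_VACUUM_MPS  # ~0.067 s one way
min_fiber_s = HALF_EQUATOR_M / C_FIBER_MPS    # 0.100 s one way
# A round trip through fibre is therefore at least ~200 ms, so a 450 ms ping
# to the far side of the world is high, but only a few times the physical floor.
```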

Ask around and find out whether other people on faster connections and other ISPs are experiencing the same latency; it's entirely possible it's caused by the ISP or the connection, but it's very unlikely.

Bandwidth and delay are different, but not completely unlinked.
While that statement is basically true, things are not black and white; there are a lot of shades of grey in between.

It's true that having more bandwidth doesn't necessarily mean having lower delays, and it doesn't necessarily mean having the same or higher delays either.

We should remember that it depends on the infrastructure - the network devices and physical media on the path from our host to its destination.

An ISP can prioritize low- or high-bandwidth connections, and give more hardware and software resources to one or the other, which will result in a difference in delay.

And in the spirit of the different examples given here: yes, it's better to take a truck with 20 disks of the same capacity than to take the same truck with only 1 of those 20 disks.
However, what about the weight of the disks? More disks mean more fuel and slower acceleration. But what if I change the engine of the truck?

So, in short - bandwidth and delay are different, but not completely unlinked.