Bandwidth—the number of bits that a device or connection can transfer every second—is the number that everyone loves to talk about. Whether it's the gigabit per second your Ethernet card can manage, the fancy new FTTP Internet connection you boast about at 85 megabits per second, or the lousy 128 kilobits per second you suffer on hotel Wi-Fi, bandwidth gets the headlines.

Bandwidth isn't, however, the only number that matters for network performance. Latency—the time it takes for a message you send to arrive at the other end—is also critically important. Depending on what you're trying to do, high latency can make your network connections crawl even when your bandwidth is abundant.

Why latency matters

It's easy to understand why bandwidth is important. If a YouTube stream has a bitrate of 1Mb/s, it's obvious that to play it back in real time, without buffering, you'll need at least 1Mb/s of bandwidth. If the game you're installing from Steam is about 3.6GB and your bandwidth is about 8Mb/s, the download will take about an hour.
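
As a quick sanity check, the arithmetic for the Steam example works out in a few lines of Python (a sketch using the article's own figures; it ignores protocol overhead):

    # Back-of-the-envelope download time: size in gigabytes, link in megabits/s.
    def download_seconds(size_gb: float, link_mbps: float) -> float:
        bits = size_gb * 8e9              # gigabytes -> bits
        return bits / (link_mbps * 1e6)   # bits / (bits per second)

    print(download_seconds(3.6, 8))         # 3600.0 seconds
    print(download_seconds(3.6, 8) / 3600)  # ~1.0 hour, as the article says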

Latency issues can be subtler: some are immediately obvious, while others are harder to spot.

Nowadays, almost all international phone calls are carried over undersea cables, but not too long ago, satellite routing was common. Anyone who's made a satellite call, or seen one on TV, will know that the experience is rather odd. Conversation takes on a disjointed character because of the noticeable delay between saying something and getting an acknowledgement or a response from the person you're talking to. Free-flowing conversation is impossible. That's latency at work.

Some applications, such as voice and video chat, suffer in exactly the same way as the satellite calls of old. The time delay is directly observable, and it disrupts the conversation.

However, this isn't the only way in which latency can make its presence felt; it's merely the most obvious. Just as we acknowledge what someone is telling us in conversation (with the occasional nod of the head, "uh huh," "go on," and similar utterances), most Internet protocols have a similar system of acknowledgement. They don't send a continuous never-ending stream of bytes. Instead, they send a series of discrete packets. When you download a big file from a Web server, for example, the server doesn't simply barrage you with an unending stream of bytes as fast as it can. Instead, it sends a packet of perhaps a few thousand bytes at a time, then waits to hear back that they were received correctly. It doesn't send the next packet until it has received this acknowledgement.

Because of this two-way communication, latency can have a significant impact on a connection's throughput. All the bandwidth in the world doesn't help you if you're not actually sending any data because you're still waiting to hear back if the last bit of data you sent has arrived.
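
To put numbers on that, here's a minimal Python sketch of the limit an acknowledgement-gated sender runs into: at most one window of data per round trip, no matter how fat the pipe. The 64 KB window is a hypothetical figure (a historically common TCP default before window scaling), not something from the article:

    # Maximum throughput when at most one window can be in flight per round trip.
    def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
        return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

    # The same window over ever-longer round trips: LAN, domestic, transatlantic,
    # geostationary satellite. Note that link bandwidth never appears here.
    for rtt_ms in (1, 30, 100, 500):
        print(f"{rtt_ms:>4} ms RTT -> {max_throughput_mbps(64 * 1024, rtt_ms):8.2f} Mb/s")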

How latency works

It's traditional to examine networks using a layered model that separates different aspects of the network (the physical connection, the basic addressing and routing, the application protocol) and analyze them separately. There are two models in wide use, a 7-layered one called the OSI model and a 4-layered one used by IP, the Internet Protocol. IP's 4-layer model is what we're going to talk about here. It's a simpler model, and for most purposes, it's just as good.

Sometimes c just isn't enough

The bottom layer is called the link layer. This is the layer that provides local physical connectivity; this is where you have Ethernet, Wi-Fi, dial-up, or satellite connectivity, for example. This is the layer where we get bitten by an inconvenient fact of the universe: the speed of light is finite.

Take those satellite phones, for example. Communications satellites are in geostationary orbits, putting them about 35,786 kilometers above the equator. Even if the satellite is directly overhead, a signal is going to have to travel 71,572 km—35,786 km up, 35,786 km down. If you're not on the equator, directly under the satellite, the distance is even greater. Even at light speed that's going to take 0.24 seconds; every message you send over the satellite link will arrive a quarter of a second later. The reply to the message will take another quarter of a second, for a total round trip time of half a second.

Undersea cables are a whole lot shorter. Light travels slower in glass than in a vacuum (about 200,000 km/s rather than 300,000 km/s), but the much shorter path more than makes up for it. The TAT-14 cable between the US and Europe has a total round trip length of about 15,428 km, roughly a tenth of the round trip distance via satellite. Using undersea cables like TAT-14, the round trip time between London and New York can be brought down below 100 milliseconds, reaching about 60 milliseconds on the fastest links. The speed of light imposes a minimum bound of about 45 milliseconds between the two cities.
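
The propagation numbers above are easy to reproduce; a small Python sketch, assuming roughly 300,000 km/s in a vacuum and 200,000 km/s in glass fiber (about two-thirds of c):

    C_VACUUM_KM_S = 299_792   # speed of light in a vacuum
    C_FIBER_KM_S = 200_000    # rough speed of light in glass fiber

    def delay_ms(distance_km: float, speed_km_s: float) -> float:
        return distance_km / speed_km_s * 1000

    print(delay_ms(71_572, C_VACUUM_KM_S))  # satellite, up and down: ~239 ms one way
    print(delay_ms(15_428, C_FIBER_KM_S))   # TAT-14's full round trip: ~77 ms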

The link layer can have an impact closer to home, too. Many of us use Wi-Fi on our networks. The airwaves are a shared medium: if one system is transmitting on a particular frequency, no other system nearby can use the same frequency. Sometimes two systems will start broadcasting simultaneously anyway. When this happens, both have to stop, then wait a random amount of time for a quiet period before trying again. Wired Ethernet can have similar collisions, though the modern prevalence of switches (replacing the hubs of old) has made them far less common.
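
As an illustration of that retry rule, here's a toy Python sketch. Classic Ethernet uses binary exponential backoff, doubling the range of random wait times (in slot units) after each successive collision, and Wi-Fi's contention windows grow similarly; the function below is a simplification rather than any standard's exact rule:

    import random

    # After n collisions, wait a random number of slots in [0, 2**n), capped.
    def backoff_slots(collisions: int, max_exponent: int = 10) -> int:
        return random.randrange(2 ** min(collisions, max_exponent))

    random.seed(1)  # deterministic output for the example
    print([backoff_slots(n) for n in range(1, 6)])  # ranges double: 2, 4, 8, 16, 32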

The Internet is an internetwork

The link layer is the part that moves traffic around the local network, but there are usually lots of links involved in using the Internet. For example, you might have home Wi-Fi and copper Ethernet to your modem, VDSL to a cabinet in the street, optical MPLS to an ISP, and then who knows what. If you're unlucky, you might even have some satellite action in there. How does the data know where to go? That's governed by the next layer up: the internet layer. This links the disparate hardware into a single large internetwork.

The internet layer offers plenty of scope for injecting latency all of its own—and this isn't just a few milliseconds here and there as signals move around the world. You can suffer whole seconds of latency without the packets going anywhere at all.

The culprit here is Moore's Law, the rule of thumb that transistor density doubles every 18 months or so. One consequence is that RAM halves in price, or doubles in capacity, every 18 months or so. Where RAM was once a precious commodity, today it's dirt cheap, and systems that once had just a few kilobytes of RAM are now blessed with copious megabytes.

Normally, this is a good thing. Sometimes, however, it's disastrous. A widespread problem with IP networks tends more to the disastrous end of the spectrum: bufferbloat.

Network traffic tends to be bursty. Click a link on a webpage and you'll generate a burst of traffic as your browser fetches the new page, but then the connection sits idle while you read it. To handle this bursty behavior and smooth over the peaks, network infrastructure—routers, network cards, that kind of thing—has to have a certain amount of buffering, temporarily holding packets before transmitting them. The difficult part is getting those buffers the right size. A lot of the time, they're far, far too big.

That sounds counter-intuitive, since when it comes to memory, bigger normally means better. But consider: as a general rule, your local network, whether wired or wireless, is a lot faster than your connection to the wider Internet. It's not too unusual to have gigabit local networking with just a megabit of upstream bandwidth to the 'Net, a ratio of 1,000:1.

Thanks again to Moore's Law (which makes it ridiculously cheap to throw in some extra RAM), the DSL modem/router that joins the two networks might have several megabytes of buffer in it. Even a single megabyte of buffer is a problem. Imagine you're uploading a 20MB video to YouTube. Because the buffer sits on the fast gigabit side, it fills in about eight milliseconds. But at a megabit per second upstream, that same megabyte takes eight seconds to actually reach YouTube.
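
The fill and drain times follow directly from the two link speeds; a Python sketch with the article's figures (1 MB of buffer, gigabit LAN, 1 Mb/s upstream):

    BUFFER_BITS = 1_000_000 * 8   # 1 MB buffer in the modem/router
    LAN_BPS = 1e9                 # gigabit local network
    UPLINK_BPS = 1e6              # 1 Mb/s upstream to the Internet

    print(BUFFER_BITS / LAN_BPS)     # 0.008 s: the LAN fills the buffer in 8 ms
    print(BUFFER_BITS / UPLINK_BPS)  # 8.0 s: the uplink takes 8 s to drain it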

If the only traffic you cared about was your YouTube connection, this wouldn't be a big deal. But that's rarely the only traffic. Normally you'll leave that tediously slow upload churning away in one tab while you look at cat pictures in another. Here's where the problem bites you: each request you send for a new cat picture joins the back of the same buffer. It has to wait for the megabyte of traffic in front of it to be uploaded before it can finally get onto the Internet and retrieve the latest Maru. Which means it has to wait eight seconds, eight seconds in which your browser can do nothing but twiddle its thumbs.

Eight seconds to load a webpage is bad; it's an utter disaster when you're trying to transmit VoIP voice traffic or Skype video.

When I was designing VoIP systems for global companies, my catchphrase became: "you can't overcome the speed of light." VoIP and IP telephony opened a lot of people's eyes to the latency issue. Gamers had dealt with it daily, and some applications, real-time trading systems especially, had to as well, but most people didn't understand why they couldn't send 100 calls across a 10MB pipe from New York to Singapore. Even New York to Los Angeles can be a challenge without the right type of network in place. I think people got much smarter about network design after VoIP, and especially IP telephony, became commonplace.

While this is a pretty good article on networking basics, I think it was a mistake to tie it to the slow-YouTube headline. There has been a substantial change in the way YouTube serves data to visitors lately. Whether it's more aggressive ISP throttling or YouTube reducing the number of CDN servers, something is definitely going on beyond "that's just the way networks work".

While this article may not be incorrect, a lot of the recent issues people may be having with YouTube buffering could be due to CDN caching. I've found that if I set up an iptables filter to block 173.194.55.0/24, while it does take a few more seconds to resolve the main Google IP, it does improve my YouTube performance. I can view 1080p videos without buffering... it actually uses my full connection speed. I run sudo iptables -A INPUT -s 173.194.55.0/24 -j REJECT on my Linux box and it's like a brand new world.

I know it's outside the scope of this article, but it paints added latency as an insurmountable consequence of ballooning RAM sizes in network equipment. This simply isn't the case. While it IS a problem, and it's true that I hadn't actually considered it outside of an enterprise setting, there are solutions. Someone above mentioned traffic shaping, but a more descriptive term is different traffic queuing strategies. All it amounts to is maintaining several smaller buffers instead of one big one. Your notional router has 1MB of RAM, so instead of one 1MB buffer, you can have, say, ten 0.1MB buffers. That way no single traffic flow monopolizes the connection. The complexity ranges from simple round-robin, where each new flow gets assigned to the emptiest buffer and all buffers are given equal precedence, all the way up to enterprise QoS, where traffic from various sources, to various destinations, using various protocols is assigned to different buffers, which are then prioritized in different ways (see the sketch below).

Now, how pervasive that is in home networking equipment, I don't know. I'd suspect that dd-wrt and the like support it, however. When I get a free moment I'll be taking a look at my router at the house...
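
A toy Python sketch of the round-robin idea described above, with one small queue per flow instead of one big shared buffer. The flow names and packets here are made up, and real implementations (SFQ, fq_codel, and the like) are considerably smarter:

    from collections import deque

    # One FIFO per flow; service them round-robin so a bulk upload can't
    # strand a small interactive request behind its whole backlog.
    queues = {flow: deque() for flow in ("upload", "web", "dns")}

    def transmit_order():
        while any(queues.values()):
            for q in queues.values():
                if q:
                    yield q.popleft()

    for i in range(3):
        queues["upload"].append(f"upload-{i}")
    queues["web"].append("GET /cat.jpg")

    print(list(transmit_order()))
    # ['upload-0', 'GET /cat.jpg', 'upload-1', 'upload-2']: the web request
    # waits behind one upload packet instead of all three.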

> While this is a pretty good article on networking basics, I think it was a mistake to tie it to the slow-YouTube headline. […]

I agree. Looking at the title, I thought this was going to be an article about YouTube's server problems. I recently tried ripping a video that would not play, and it downloaded at about 60KB/s on a 100Mb/s connection. Clearly it was hosted on an overloaded server.

Thanks, a good brush-up on routing basics, and I learned a thing or two as well. I'm with you: latency has got to be snuffed out.

Making sure you're using the best DNS server is one thing you can do to decrease latency. I use an app called namebench (for OS X), and it determines the fastest DNS server near me; it's often not my ISP's DNS server here in Thailand.

> While this is a pretty good article on networking basics, I think it was a mistake to tie it to the slow-YouTube headline. […]

I find it interesting that when a YouTube video wants to show you a 30-second ad, it plays instantly in all its high-res glory without skipping a frame; then you get to the video you went there to watch, and it's buffer, buffer, buffer, even on a low-def version.

I haven't had time to read the whole article yet, but am I the only one that actually wants a watch like the one in the picture? I'm imagining the lines in the inner part being the hours and the outer bit being the minutes of course.

> When you download a big file from a Web server, for example, the server doesn't simply barrage you with an unending stream of bytes as fast as it can. […]

Look up the Sliding Window Protocol.

This was all figured out once before, in a different millennium, long ago swept away in the sands of time, back when there were these things called modems and dial-up.

There was a file download protocol known as XModem, and it worked as described: the server sends a packet and doesn't send the next until the client sends back an acknowledgement. Horribly inefficient. It wasn't only the speed of light; limited bandwidth made it worse. We were only talking 2400, then 9600, then 19200, then thirty-something thousand bits per second. Each packet crawled across the wire at those speeds, and on top of that you had the latency of the client processing it and of the returning acknowledgement.

Later protocols like YModem and ZModem were developed. These are called sliding window protocols. The server calculates a window size of, let's say, 8 packets. It starts sending you packets 1, then 2, then 3, without awaiting acknowledgement. Maybe by the time it is sending packet 4 or 5, the acknowledgement of packets 1 and 2 arrives in a single acknowledgement packet. The acknowledgement indicates the highest packet number successfully received, so if an ACK packet says ACK 2, the server knows that packet 1 is also implicitly acknowledged and can reclaim buffers 1 and 2 to load up with packets 9 and 10. Packets 3 to 8 are still unacknowledged. Maybe by the time the server is sending packet 7, the acknowledgement of packet 5 arrives.

A Negative Acknowledgement (NAK) can also be sent. If the client recognizes that packet 3 is garbled, it can send a single packet that says ACK 2, NAK 3. That way the server knows that the very next packet it needs to send is a retransmission of packet 3, followed by wherever it left off.

This type of protocol keeps the download channel fully busy without any interruptions.

The "TCP" part of TCP/IP is just such a sliding window protocol. Except it has a sliding window of byte numbers instead of "packet" numbers. The window is bytes n through m. The client can acknowledge receipt up through byte x, allowing the server to reclaim buffer space on the sending side. Intermediate routers could, in principle, notice the receipt of several intermediate TCP packets that contain sequential parts of the byte stream, and combine them into a single packet. Or in principle, split a single packet into several. TCP also has the capability to send a fragmented packet larger than the MTU (maximum transfer unit) of the links between the server and the client.

By the way, if you do want a server to "barrage you with an unending stream of bytes as fast as it can," you should look at Tsunami UDP, which does literally that: "A fast user-space file transfer protocol that uses TCP control and UDP data for transfer over very high speed". Very nice for very large one-time WAN transfers; it comes from academia, where people transfer huge datasets across the world.
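
For the curious, here's a toy Python sketch of the sliding window idea from this comment: up to one window of unacknowledged packets in flight, with cumulative ACKs sliding the window forward. The window of 8 matches the example above; garbled packets, NAKs, and timeouts are left out:

    WINDOW = 8    # maximum unacknowledged packets in flight
    TOTAL = 20    # packets to transfer

    base = 0      # oldest unacknowledged packet
    next_seq = 0  # next packet to transmit

    def on_ack(highest_received: int) -> None:
        """Cumulative ACK: everything up to and including this packet is confirmed."""
        global base
        base = max(base, highest_received + 1)

    while base < TOTAL:
        # Keep the pipe full: send while the window has room.
        while next_seq < TOTAL and next_seq < base + WINDOW:
            print(f"send packet {next_seq}")
            next_seq += 1
        # Pretend an ACK for the oldest outstanding packet arrives.
        on_ack(base)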

> While this article may not be incorrect, a lot of the recent issues people may be having with YouTube buffering could be due to CDN caching. […]

There's a write-up on Reddit about this, but it's wrongly presented as accurate. First, blocking parts of Google's IP range can cause problems with accessing other Google services. Not very smart for people who use multiple services. Additionally, YouTube doesn't just serve content based on server types; it has several cache sites that relay content across the world. These cache URLs have a hierarchy, and taking shortcuts through the structure can temporarily help or hurt your connection speed (how fast you receive YouTube data). HTTP arguments actually specify threshold speeds, quality type, etc., and some of these arguments are interdependent.

Check it out. I thought the same thing about blocking the IP ranges, but research suggests otherwise.

What YOU can do about it:
1. Turn on QoS on your router and set the up- and downstream bandwidth parameters to slightly less than your observed bandwidth. This will keep the buffers from filling up.
2. Prioritize ACKs, small packets, and DNS traffic. Your connection will "feel" faster.
3. Decide what the priority or latency-sensitive traffic is on your network. Prioritize that and de-prioritize bulk transfers.
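
Point 1 boils down to pacing your own traffic just under the uplink rate so the modem's queue never builds. A toy token-bucket sketch in Python shows the idea; the 900 kb/s rate and bucket depth are made-up values for a notional 1 Mb/s uplink (on Linux this job is usually handled by qdiscs such as tbf or HTB):

    import time

    RATE_BPS = 900_000    # shape to 0.9 Mb/s on a notional 1 Mb/s uplink
    BUCKET_BITS = 15_000  # burst allowance

    tokens, last = BUCKET_BITS, time.monotonic()

    def try_send(packet_bits: int) -> bool:
        """Send only if enough tokens have accumulated; otherwise hold the packet."""
        global tokens, last
        now = time.monotonic()
        tokens = min(BUCKET_BITS, tokens + (now - last) * RATE_BPS)
        last = now
        if packet_bits <= tokens:
            tokens -= packet_bits
            return True
        return False

    print(try_send(12_000))  # True: fits within the initial burst
    print(try_send(12_000))  # False: the bucket needs ~10 ms to refill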

And the bursty nature of web and email is why ISPs are getting all hissy about bandwidth hogs.

This is because, as long as usage was bursty, you could cram, say, 10x the users onto a single backbone connection, because they would rarely all saturate their connections at the same time.

Torrents, streaming, and the like change that usage big time, moving it closer to 1:1 between user capacity and backbone capacity.

Never mind that while users are sold flat rates these days, the network interconnects have agreements based on either metered billing or tit-for-tat upload and download. That is, the agreements expect users to download more than they upload. Again, torrents make a mess of this, as you upload while you download to help maximize the download speed for everyone in the swarm.

EDIT: With FTTH I get the following pings to YouTube:
Reply from 74.125.108.83: bytes=32 time=9ms TTL=59
Reply from 173.194.62.19: bytes=32 time=24ms TTL=55
Reply from 74.125.4.83: bytes=32 time=37ms TTL=53
Otherwise, pinging the ISP's webserver shows an average of 1.25ms.

I don't think the speed of light is generally the primary latency problem. It is the latency of all of the intermediate routers.

At the speed of light, a signal can go around the earth at the equator about seven and a half times per second. So even if the convoluted route from server to client were twice the distance around the earth (unlikely), that wouldn't fully explain the ping times. A lot of latency happens once a packet is received by a router: it's held in memory, the CPU figures out which outgoing port to send it on, it's queued for transmission on that port, and then it sits in the outgoing queue wasting time. Once it's being transmitted, there's still the question of how many bits per second can be clocked out onto the outgoing line (bandwidth). The same latency problem occurs in Ethernet "switches", though there the routing decision is vastly simpler, based on nothing more than MAC addresses, with no knowledge of whatever higher protocol (TCP, AppleTalk, DECnet, SPX/IPX, NetBIOS, etc.) is riding in the Ethernet frames.

> While this is a pretty good article on networking basics, I think it was a mistake to tie it to the slow-YouTube headline. […]

Agreed. I expected something much different after reading the title. At this point, I think most netizens understand how latency works and its effects on a network or the Internet.

I'm still not sure that ISPs are the reason YouTube speeds are slow. There are plenty of indications that YouTube itself could be the problem. The way YouTube delivers content is sometimes complicated. Beyond the HTTP arguments (threshold speed, encoding time, quality settings, etc.), its cache sites may have to mirror content to sometimes-distant cache sites. The cache sites are globally located and work based on rough DNS location, so if you're at a corporate office in California but the company's outside DNS servers (or ISP) are located in New York, your connection could suffer.

> The "TCP" part of TCP/IP is just such a sliding window protocol. […]

Bufferbloat messes with this, though, right? Packets can be sitting in the buffer but be seen as lost in transit by the protocol, so they get sent again, wasting bandwidth and stuffing the buffer even more.

The end result is that once the buffer starts filling, the observed bandwidth drops like a rock, because most of the traffic is pointless resends.

Seems everyone is dead set on re-posting the bad information that blocking IP ranges can help your YouTube experience. On the whole, this is wrong. First, you're blocking IP ranges used by other Google services. Not smart. Secondly, circumventing the cache-site hierarchy may or may not help you, and any gain is temporary, since YouTube's cache sites mirror and relay the most popular content to help ease overall performance. At that point, viewing an unpopular video would give baseline results (pre-IP-blocking speeds or worse).

We've changed the headline to make it clear that the article is about latency, not issues that people are apparently experiencing with YouTube. We just used YouTube as an example since it's so well-known.

Very minor issue and I hate to be nit-picky. Someone may have already mentioned it.

A 3.6GB download would actually take closer to 6 hours on an 8Mb/s connection. Data speeds tend to be rated in bits, while storage is typically in bytes.

It was a very informative article, so I'm going to chalk it up as a typo. It's just annoying when someone whines to me that they have a 50Mb/s connection and it takes them almost 10 seconds to download a 50MB file.

> I don't think the speed of light is generally the primary latency problem. It is the latency of all of the intermediate routers. […]

Sorry, but I've got to disagree with this. The actual time to process a packet, whether for routing or switching, is essentially instantaneous when discussing WAN traffic. Yes, a packet can spend time in a queue, but that's different. The time to look up the route, even including access lists, is minimal on all decent hardware platforms, and by decent I mean anything your packets are likely to hit once they leave your house or business. The forwarding lookup tables are optimized and stored in either fast SRAM (DRAM is generally considered too slow for forwarding tables) or Content Addressable Memory (CAM) in the case of Ethernet switches. Some Ethernet switches are actually cut-through, meaning they start transmitting the packet out the egress port before it's even fully received on the ingress port. At 10G speeds, that means you're measuring the latency in nanoseconds.

Buffers and distance generally add tens or hundreds of milliseconds. Switching and routing generally add hundreds of nanoseconds. That's around five orders of magnitude of difference in a lot of cases.
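
The magnitudes are easy to put side by side; a Python sketch of per-hop serialization delay for a 1,500-byte packet (a typical Ethernet MTU; the link speeds are illustrative):

    PACKET_BITS = 1500 * 8  # a full-size Ethernet frame, roughly

    for name, bps in (("10 Mb/s", 10e6), ("1 Gb/s", 1e9), ("10 Gb/s", 10e9)):
        print(f"{name}: {PACKET_BITS / bps * 1e6:7.1f} microseconds to serialize")

    # 1200 us, 12 us, and 1.2 us respectively, against ~40,000 us for one
    # transatlantic propagation leg and potentially millions of us in a
    # bloated queue. Forwarding lookups, at hundreds of ns, barely register.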

> We've changed the headline to make it clear that the article is about latency, not issues that people are apparently experiencing with YouTube. […]

Just an idea: maybe you could do an article about YouTube buffering problems? That would be an interesting read

> Just an idea: maybe you could do an article about YouTube buffering problems? That would be an interesting read

Huh? Guess it's just me, but Youtube seems to have resolved most of their issues around here.

> Bufferbloat messes with this, though, right? In that packets can be sitting in the buffer but be seen as lost in transit by the protocol and so it gets sent again […]

My understanding is that the TCP layer in your OS will acknowledge the incoming bytes, so no retransmission occurs. But the buffer in the TCP layer may not get consumed by the client software.

IIRC, the acknowledgement in TCP includes the range of buffer space available on the client. So if the client sends, for instance, an acknowledgement that it is ready to receive the byte range 306,281 through 306,280 (an empty range), the server will stop sending until the byte range available at the client becomes non-zero. But the server knows that everything up through byte 306,280 has been received and can free up resources on its side.

Once the client consumes some bytes out of the TCP buffer, the TCP layer can send a non-empty range to the server, causing it to resume transmission.
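
A toy Python sketch of that bookkeeping, assuming a hypothetical 64 KB receive buffer; real TCP advertises the free space in a header field (with window scaling), but the arithmetic is the same in spirit:

    BUF_BYTES = 65_536       # hypothetical receive buffer
    received = consumed = 0  # bytes arrived vs. bytes read by the application

    def advertised_window() -> int:
        """Free buffer space the receiver reports back to the sender."""
        return BUF_BYTES - (received - consumed)

    received += 65_536           # data arrives faster than the app reads it
    print(advertised_window())   # 0: a zero window; the sender must pause

    consumed += 16_384           # the application finally reads some bytes
    print(advertised_window())   # 16384: a window update lets the sender resume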

> Huh? Guess it's just me, but Youtube seems to have resolved most of their issues around here.

It's not that simple. Even if YouTube works well with your ISP, that doesn't mean it works well with another. As I said in my previous post, the issue is the limited peering bandwidth between the ISP and the local YouTube redirector. Who pays whom for the generated bandwidth?

I don't know if it is still true, but for a long time a CPU could not possibly sustain the transmission rates supported by the Ethernet cards plugged into its EISA/ISA slots, or later PCI slots. (Remember those?)

So why have a 100 Megabit / second ethernet link?

Capacity of the entire network, that's why.

Imagine a network switch that can send you several packets at high speed, even though they sit in your computer's memory briefly before being consumed by the software intended to receive them. The computer might not be able to absorb a large amount of traffic at that speed, but being able to blast out several packets at that speed frees up resources at the network switch.

Also remember that before "ethernet switches" there were "ethernet hubs", which were once common. A packet received at the hub was repeated out on all ports, even though it only needed to go out one port to get to its destination. You want a fast enough network to deal with the fact that the wire coming to your computer also carried the packets between Alice and her printer, even though you didn't care about any of those packets.

All of that said, bandwidth does play a role in speed, just not the only role. If you believe bandwidth plays no role, try lowering your transmission speed from 100baseT or 1000baseT down to modem speeds of, say, 38,000 bits per second.

> While this is a pretty good article on networking basics, I think it was a mistake to tie it to the slow-YouTube headline. […]

I think Google has done something to more aggressively provide bandwidth to popular clips than before, when it was more evenly distributed. I can often watch those 10 million+ view clips easily, but watching something with 10k views is pure torture.

Or maybe it is simply a reduction in server capacity, and the added pressure is exposing their priority system more than before.