YouTube buffering or slow downloads? Blame the speed of light

Sorry but I've got to disagree with this. The actual time to process a packet either for routing or switching is essentially instantaneous when discussing WAN traffic. Yes, a packet can spend time in a queue but that's different. The time to lookup the route, even including access lists, is minimal on all decent hardware platforms. And by decent I mean anything your packets are likely to hit once they leave your house or business. The forwarding lookup tables are optimized and stored in either fast SRAM (DRAM is generally considered too slow for forwarding tables) or Content Addressable Memory (CAM) in the case of ethernet switches. Some ethernet switches are actually cut-through which means they start transmitting the packet out the egress port before it's even fully received on the ingress port. At 10G speeds that means you're measuring the latency in nanoseconds.

Buffers and distance are generally adding tens or hundreds of milliseconds. Switching and routing are generally adding hundreds of nanoseconds. That's roughly five orders of magnitude of difference in a lot of cases.
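
To put rough numbers on that, here is a quick back-of-envelope sketch in Python; every figure below is an illustrative assumption, not a measurement from any particular box:

    FRAME_BITS = 1500 * 8           # a full-size ethernet payload, in bits
    LINK_RATE_BPS = 10e9            # 10G line rate
    FIBER_KM_PER_S = 200_000        # roughly 2/3 of c, typical for fiber

    serialization = FRAME_BITS / LINK_RATE_BPS   # store-and-forward cost per hop
    lookup = 300e-9                              # assumed forwarding lookup, "100s of ns"
    propagation = 1000 / FIBER_KM_PER_S          # one way across 1000 km of fiber
    queueing = 20e-3                             # assumed delay in one congested buffer

    for name, seconds in [("serialization per hop", serialization),
                          ("lookup per hop", lookup),
                          ("propagation over 1000 km", propagation),
                          ("one congested queue", queueing)]:
        print(f"{name:26s} {seconds * 1e6:10.2f} microseconds")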

Okay. You make a good point. Routers are a lot faster than once upon a time.

Also, very high end routers (not what you buy at Office Depot) even start sending the packet out the outgoing port before the packet has finished coming in on the incoming port. (eg, start clocking the packet bits on the outgoing line, before the packet is fully received at a fixed clock rate on the incoming line.)

I don't think the speed of light is generally the primary latency problem. It is the latency of all of the intermediate routers.

Well, both are surely contributors.

I'm on a fast network at the University of Michigan, connected to other universities via Internet2. So the routers in question are probably less loaded than typical commercial ISP routers.

On a relatively short distance (Ann Arbor, MI, to Georgetown in Washington, DC; 835 km by road, and I believe the fiber path is close to that distance), I'm seeing a 32ms ping RTT. This is 18 router hops away. At 200,000 km/sec (the speed of light in fiber according to Wikipedia), 835 * 2 km takes about 8.4ms.

You are correct that the time of the 18 router hops (plus whatever else is happening in the network layer that we can't see - optical regeneration, layer 2 switching, etc) matters a lot, too. It could be as low as 24ms or as high as 57ms - I imagine the majority of the difference is due to non-router stuff, but I don't know for sure.
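
For what it's worth, the propagation-only part of that RTT is easy to check with a trivial Python sketch (using the same assumed 835 km fiber path and 200,000 km/s):

    distance_km = 835            # assumed one-way fiber path, roughly the road distance
    fiber_km_per_s = 200_000     # speed of light in fiber, about 2/3 of c

    rtt_ms = 2 * distance_km / fiber_km_per_s * 1000
    print(f"propagation-only RTT: {rtt_ms:.1f} ms")   # ~8.4 ms, versus the ~32 ms measured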

We've changed the headline to make it clear that the article is about latency, not issues that people are apparently experiencing with YouTube. We just used YouTube as an example since it's so well-known.

Why are YT, video buffering, and downloads still being mentioned? They're not related to latency at all. Video buffering issues are caused by insufficient bandwidth somewhere between the client and the server.

I think his point was trying to include all the buffers, though. It's splitting hairs to say that the buffers in the routers are not "part of routing" - they don't have to be part of routing, but when it happens inside a router I'll call it part of routing.

Bufferbloat messes with this tho, right? In that packets can be sitting in the buffer but be seen as lost in transit by the protocol and so it gets sent again, wasting bandwidth and stuffing the buffer even more.

End result is that once the buffer starts filling, the observed bandwidth drops like a rock because most of the traffic is pointless resends.

My understanding is that the TCP layer in your OS will acknowledge the incoming bytes, so that no retransmission occurs. But the buffer in the TCP layer may not get consumed by the client software.

IIRC, the acknowledgement in TCP is a range of buffer space available on the client. So if the client sent, for instance, an acknowledgement that it is ready to receive the byte range 306,281 through 306,280 (which is an empty buffer range), the server will stop sending until the byte range available at the client becomes non-zero. But the server will know that everything up through byte 306,280 has been received and can free up resources at the server.

Once the client consumes some bytes out of the client TCP buffer, the TCP layer can send a non-empty range to the server, causing it to resume transmission.
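
Here's a toy Python sketch of that receive-window bookkeeping - not real TCP, just the flow-control idea described above, where a zero window pauses the sender without triggering any retransmission:

    class ToyReceiver:
        def __init__(self, buffer_size):
            self.buffer_size = buffer_size
            self.buffered = 0                # received but not yet read by the application

        def advertised_window(self):
            return self.buffer_size - self.buffered

        def receive(self, nbytes):
            accepted = min(nbytes, self.advertised_window())
            self.buffered += accepted
            return accepted                  # the ACK covers only what fit in the buffer

        def app_read(self, nbytes):
            taken = min(nbytes, self.buffered)
            self.buffered -= taken           # freeing buffer space reopens the window

    rx = ToyReceiver(buffer_size=65536)
    rx.receive(65536)
    print(rx.advertised_window())            # 0 -> zero window, sender must pause
    rx.app_read(16384)
    print(rx.advertised_window())            # 16384 -> sender may resume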

In one second, at the speed of light, a signal can go around the earth at the equator about seven and one half times. So even if the convoluted route from server to client is twice the distance around the earth (unlikely), that doesn't fully explain the ping times. A lot of latency happens once a packet is received by a router: it's held in memory, its CPU figures out which outgoing port to send it on, it gets queued for transmission on that port, and the packet sits there wasting time in an outgoing queue. Once it is being transmitted, there is still the question of just how many bits per second can be clocked out onto the outgoing line (bandwidth). The latency problem I just described happens also in ethernet "switches", but there the forwarding problem is vastly simpler, based on nothing more than MAC addresses and having no knowledge of what higher-level protocols (like TCP, AppleTalk, DECnet, SPX/IPX, NetBIOS, etc.) are riding in the ethernet frames.

In my original draft I actually included a paragraph to add that there's lots of little sources of latency such as router overhead, but in the end I removed it. The article went into enough detail about enough different things that I didn't want to go down another rabbit hole.

I think in general, the situation these days is that unless your packet can't be fast-pathed, routing overhead is basically negligible. You can see occasional odd corner cases (e.g. TTL expiration, fragmentation needed but DF bit set) where router delays do become significant, but the impact that has in normal usage is, I think, pretty negligible. I think in practice, even home routers are more than fast enough to avoid significant routing latency (though there can be significant bufferbloat latency).

While this is a pretty good article on networking basics, I think it was a mistake to tie it to the slow YouTube issues headline. There has been a substantial change in the way YouTube serves data to visitors lately. Whether it is more aggressive ISP throttling or YouTube reducing the number of CDN servers, something is definitely going on other than just "that's just the way networks work".

Indeed, it's been odd for me the last several months. I have plenty of bandwidth on my hands and my ping times are pretty low as well, but if I load up a 720p YT video, it loads differently on a seemingly random basis.

I'll open the video and it'll take 5 minutes to load the first 30 seconds. I'll close the tab, re-open the same video and the entire video will buffer in less than a minute.

Sadly, scientists haven't bothered to do anything to make the speed of light faster. It seems we're pretty much stuck with that one.

What we need is a link layer that uses quantum entangled bits rather than actually sending bits on pipes limited by the slow-as-molasses speed of light. Any phd students out there in search of a thesis?

I recall several times getting into arguments with people who insisted they were seeing "normal" (100ms or less) ping times over satellite connections. I tried explaining the whole "speed of light" thing to them, they were having none of it.

I didn't see specific mention of it, but latency is also why you'll always have issues with buffering, stuttering, etc. in live video. Because each dropped packet, etc., reduces the possible size of your buffer, until eventually you wind up at zero...what's on your screen is actually what is happening right now. Or at least right now minus transport time. So the next dropped/delayed packet means either garbled picture or a pause for rebuffering. Not a problem with prerecorded video, because they can buffer well ahead of the current viewing point.

My job involves doing network analysis, modeling, and simulation. Instead of "...can't change the speed of light...", I say "I can't drag Australia any closer to North America" to start to educate clients about the effects of latency.

I liked the article a lot but it mostly points at the network path and doesn't address what I see every single day, on every single engagement; application developers/owners with very little idea of how their code scales and performs over a long link.

When I show a client their app is chatty I typically get a response of "...but it was fine in Qual/Dev...", which is always a short, fat pipe with microsecond latency. Of course it smokes.

Yes, tune the pipe, but for God's sake tell your app guys to account for the network to be deployed on in their design and testing of the app code.

I haven't had time to read the whole article yet, but am I the only one that actually wants a watch like the one in the picture? I'm imagining the lines in the inner part being the hours and the outer bit being the minutes of course.

I live in a rural area of southern Colorado where high-speed Internet access is not available. Several years ago, I had a satellite-based ISP (now branded HughesNet). A latency issue I noticed came with the increasingly complex pages being created by web designers.

The problem is easily seen when using the Safari browser. As a page loads, along the bottom of the window is a message like 'Loading "http://arstechnica.com/", completed 52 of 69 items'. Then, after item 53 is loaded, it may say 'Loading "http://arstechnica.com/", completed 53 of 105 items', and later 'Loading "http://arstechnica.com/", completed 102 of 157 items'.

My assumption is that those "items" were mostly images, CSS files, and JavaScript files (mostly marketing spyware?). From a web designer's perspective, having frequently used components of a page in separate files makes good sense. But it seems it can cause the browser to repeatedly make more requests, with a resulting increase in total latency.

I wonder if it would be advantageous for the assembly of a web page from its component parts to be a server-side task instead of a client-side task, especially for clients on a high-latency connection. Of course, this raises the question of whether it is even possible for a server to deduce that a client is on a high-latency connection.

I understand that caching of web pages has long been used by ISPs as an attempt to improve performance, but I don't know if the cached page is a completely assembled page or just an image of the page as created by the designer, with all its latency-inducing links intact.

With my former satellite-based ISP, complex pages commonly might take 1-2 minutes to load. I now have a ground-based, wireless Internet connection (advertised bandwidth 0.5 Mbps; typical actual bandwidth 0.25 Mbps), and a page such as Ars Technica loads in 10-15 seconds.
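
A rough sketch of why the object count hurts so much on a high-latency link - made-up numbers, assuming six parallel connections and ignoring DNS lookups, TCP handshakes, and pipelining:

    def page_load_seconds(num_objects, rtt_s, parallel_conns, total_bytes, downlink_bps):
        round_trips = -(-num_objects // parallel_conns)     # ceiling division
        return round_trips * rtt_s + total_bytes * 8 / downlink_bps

    # The same hypothetical 1 MB page with 150 objects, on two different links:
    print(page_load_seconds(150, rtt_s=0.6, parallel_conns=6,
                            total_bytes=1_000_000, downlink_bps=1_000_000))   # satellite-ish: ~23 s
    print(page_load_seconds(150, rtt_s=0.06, parallel_conns=6,
                            total_bytes=1_000_000, downlink_bps=1_000_000))   # low latency: ~9.5 s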

I can't for the life of me find a consistent behavior.

Because of how YouTube serves content. First, they do rough geo-location based on your ISP DNS (if you're using ISP DNS), then you're sent to 1 of 5 cache URLs. The cache URLs are linked to server locations throughout the world. If a video is not at a particular location, it is relayed to the one you've accessed. Any follow-up views at that location will load more quickly.

That's just one example, though. It's not always after a follow-up view. I could re-load the video 5 times and on the 4th try it would load as expected, but on the 5th time it wouldn't. I have a basic understanding as to why it's loading slower than my connection should permit; what I don't get is the randomness of it. Sometimes it'll buffer perfectly on the first load. Other times it does some form of what I've described above. Sometimes it just plain refuses to load even after I let it sit for half an hour or more.

As a developer, whenever I have to work on things intended to function on slow networks (slower than our lab setup), I insist on getting a suitable traffic shaping box built; you can do it with Linux or one of the BSDs on decent PC hardware quite easily, or buy an off the shelf shaper.

Then set both a throughput throttle and an added latency per packet to match the expected network behaviour; most recently, I've had a box with 750ms added latency in each direction, 2000kbit/s in one direction, 20kbit/s in the other (to match a client's deployed satellite infrastructure).

This stops me making things work fine in qual/dev but not on the slower real-world network, as I get the same experience when I'm testing my code as I would in the field; and when I hit real problems, I can just remove the traffic shaper and get things working at all before worrying about making them work quickly.
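
As a rough illustration of why that matters, here's a small Python estimate of a chatty request/response protocol over that emulated satellite link (750 ms added each way, 2000 kbit/s down, 20 kbit/s up; the request and response sizes are just assumptions):

    def transaction_seconds(req_bytes, resp_bytes, up_bps, down_bps, one_way_delay_s):
        return (2 * one_way_delay_s            # request out plus response back
                + req_bytes * 8 / up_bps       # serialize the request on the slow uplink
                + resp_bytes * 8 / down_bps)   # serialize the response on the downlink

    one = transaction_seconds(req_bytes=500, resp_bytes=5000,
                              up_bps=20_000, down_bps=2_000_000, one_way_delay_s=0.75)
    print(f"one small transaction: ~{one:.2f} s")              # ~1.7 s
    print(f"100 sequential transactions: ~{100 * one:.0f} s")  # ~172 s - chatty apps die here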

I wonder if it would be advantageous for the assembly of a web page from its component parts to be a server-side task instead of a client-side task, especially for clients on a high-latency connection. Of course, this raises the question of whether it is even possible for a server to deduce that a client is on a high-latency connection.

In general, no, but there are some techniques that developers can use to help. JavaScript files can often be combined (and compressed) into a single file, and CSS Sprites are increasingly used to deliver icons and other small images in a more efficient way.

I really loved this discussion and enjoyed the additions in the comments!

One thing that hasn't stuck out is the idea of loading the pages before the user wants them. Much like Google Instant works, if a cached copy of Facebook and other popular pages could be kept on hand, this would save several megabits per hour. This may tax the user's RAM, however, but within a couple years many laptops will have 8GB and that will no longer be an issue.

There is a way to beat the speed of light: predict the future. And as your article points out, we only need a few seconds into the future. If the Youtube vid of Harlem Shake on your newsfeed were already being sent to you when you clicked Facebook, have we beaten light?

Does TCP really wait until confirmation of reception of a previous packet before sending another? That seems ridiculous! Keep sending packets on the assumption they're going to make it through. Since TCP can restitch out-of-order packets, it only has to re-request missed packets when the entire block is actually ready for delivery/use. I have to believe there's some sort of redundancy there.

I wish the cable companies would implement a variant of this on my damned digital tuner. It used to be that when you clicked a channel the analog filters would "instantly" change the program. Now with everything digital I've got to wait as long as a couple seconds (on premium encrypted channels) to see a picture. When I had 20 channels I could surf with a 0.1 s between channels and now that I've got 400 channels I've got to wait 1 s.

Instead, why can't the cable company locally buffer the channels I can reach from my remote? There are only a handful of buttons on my remote that can instantly change the channel: Ch+, Ch- and last channel. Why can't the cable company buffer these or at least some subset of the frames? Well, of course the answer is that that would take bandwidth and they're a monopoly so they're not interested in improving their customer experience. Alas.

I'm a huge proponent of reclaiming the decimal based use of the SI prefixes. When dealing with hard drive sizes (which have no bias towards base 2 or base 10) and network speeds and clock speeds (which tend to be tidy base-10 numbers) we should absolutely use base 10. However memory sizes, cache sizes, and 2^16 buffer sizes are obvious places to use base 2.

So why not say "IP packets can be as long as 2^16 bytes, 64 KiB" or "IP packets can be as long as 2^16 bytes, 65536 bytes"?
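
The arithmetic behind the two spellings, for anyone who wants to check it:

    n = 2 ** 16
    print(n)             # 65536 bytes
    print(n // 1024)     # 64     -> "64 KiB" with the binary prefix
    print(n / 1000)      # 65.536 -> what the decimal "k" prefix would give you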

It wouldn't take bandwidth. Cable is a broadcast medium: the cable connection always has all the channels pumped down it all the time.

What it would take, however, is multiple tuners, and tuners require actual hardware and cost actual money.

Even if you have the tuners, there's another problem. The data broadcast over cable is compressed. I don't know what algorithm it actually uses in the US, but the two used in other broadcast systems are MPEG-2 and H.264. Both of these use interframe prediction. Instead of broadcasting every frame of video in its entirety, they broadcast occasional whole frames, with a series of delta frames encoding the differences in between. This is great for efficiency, but it has one repercussion: until you receive one of those periodic whole frames, there's simply no way of reconstructing the picture.

As a consequence of this, even if you had enough tuners to speculatively tune into the other channels, fast channel hopping still wouldn't be instantaneous, because you can switch channels faster than the whole frames get broadcast. As a result, you'll still have to wait.

That's a different situation. Digital television is usually encoded with a 1 second GOP. Your decoder needs a slight delay to lock on to an Intra/key frame (I-frame) before the rest of the picture can be displayed, because every other frame in that second of video is derived, directly or indirectly, from the key frame.

Analog television was the equivalent of sending 30 intra frames per second which is why it was so much more responsive to channel tuning.
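
A back-of-envelope sketch of the resulting channel-change delay, assuming one I-frame per 1 second GOP and a uniformly random switch time (decode, demux, and any descrambling overhead would come on top of this):

    gop_seconds = 1.0
    average_wait = gop_seconds / 2     # on average you land halfway through a GOP
    worst_case = gop_seconds           # you just missed the I-frame
    print(average_wait, worst_case)    # 0.5 1.0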

I liked the summary, but it is kinda misleading to use video streaming and teleconferencing as an example of a problem scenario where latency could be caused by TCP ACKs, since both of those are better done over UDP: you don't care too much about dropping the occasional packet in a video or a voice conversation, since a dropped packet is just some pixels or a small loss of call quality. If you ever watch a TCP-streamed video it can be painful.

Also, TCP ACKs work through windowing; they don't always ack every packet. Usually the sender sends packets up to a receiver-specified window size, then waits for an ack. Sometimes this window size can be large enough, and the receiver can send acks fast enough, that a sender can continuously send data.

Additionally, congestion control is already built into the TCP protocol and is not just an area of research. A TCP connection will use latency and a few other things to determine if a connection is having problems, like overflowing buffers, and then send data slower, increasing the rate slowly to determine how much the connection can handle. If you think about all of the TCP connections doing this, it actually does a lot to limit congestion problems.
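
That window is also why latency caps throughput: a sender can have at most one window of unacknowledged data in flight per round trip. A quick sketch, assuming a 100 ms RTT and the classic 64 KB window (i.e., no window scaling):

    window_bytes = 65536        # maximum window without the window-scaling option
    rtt_s = 0.1                 # assumed 100 ms round-trip time
    ceiling_bps = window_bytes * 8 / rtt_s
    print(f"throughput ceiling: {ceiling_bps / 1e6:.1f} Mbit/s")   # ~5.2 Mbit/s, however fat the pipe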

I find it interesting that if a youtube video wants to show you a 30 second ad, it plays instantly in all its high res glory without skipping a frame, then you get to the video you went there to watch, and its buffer, buffer, buffer, even on a low def version.

I've had the exact opposite all too often as well. The video I want to watch streams perfectly, but the ad ahead of it keeps buffering. Worse, the "skip" option is tied to the playback index of the ad, not real-world time. So if the ad freezes, so too does the "skip ad in X seconds".

We were only talking 2400, then 9600, then 19200, then thirty-something thousand bits per second. So even bits at the speed of light didn't matter because of the limited number of bits per second (eg, bandwidth).

The good ol' days when downloading GTA took like 5-6 nights... and we downloaded individual songs from bands...

Indeed, but it's interesting how, "it's a monopoly so it must be bad" is so thoroughly beaten into people. Monopolies have to obey physics too.

I've had several problems with my FiOS service not behaving as expected. Since I'm on an older plan, my upload bandwidth is actually pretty slow (2Mbps). It appears that their POS router, the Actiontec MI-424WR suffers from buffer bloat. I'm fortunate in that it was installed with CAT-5 from the ONT, so I can set up a bridge when I get around to it.

However, there is one thing I found that greatly improved overall responsiveness. I had noticed that when navigating to a wide range of URLs, Chrome would sit and spin counter-clockwise (waiting for response from server) for several seconds, then suddenly spin clockwise and the page would load in a fraction of the time it spent waiting on a response.

This told me that the actual downstream bandwidth was fine, and I had no problem sending acks back to the server. But something was going wrong between the initial request to the server and getting the page and all other materials (CSS, scripts, images, AJAX data, etc.) I guessed that maybe it was something going on with the default DNS that FiOS was using. Since the router did not specify a DNS IP, it was using whatever service Verizon chose at their end.

I switched to OpenDNS and immediately saw an improvement. While I still experience outbound periodic connection flooding due to what is most likely buffer bloat, browsing the web has become a much more enjoyable experience (again). The frustrating thing is that when I first got FiOS, none of this was an issue. But I've been through 3 Actiontecs now and each one has been a slightly newer hardware revision.

Your thoughts? Am I onto something or just on something? Just my imagination or is it entirely plausible?

Any decent firewall, like PFSense, allows you to simulate latency, jitter, and packet loss.

My hatred for YouTube burns with the heat of one thousand suns. It sucks all the time, no matter what. On my home FiOS, at work, cellular, Windows, Mac, Linux, iOS, Safari, Firefox, IE, Chrome, it doesn't matter. It just. always. sucks. If the equivalent video is on Vimeo, it always loads quickly and plays great. What is the problem, Google. Get it together.

Does TCP really wait until confirmation of reception of a previous packet before sending another? That seems ridiculous! Keep sending packets on the assumption they're going to make it through. Since TCP can restitch out-of-order packets, it only has to re-request missed packets when the entire block is actually ready for delivery/use. I have to believe there's some sort of redundancy there.

To keep sending packets when they're getting dropped typically means a link is congested and sending more packets only makes it worse.

The Internet as a whole would crumble.

Out-of-order packets are VERY costly to process. Reordering does not scale to high bandwidth and requires lots of buffering.

Imagine a 10Gb link trying to buffer 0.5 sec of data, which is about 625MB, because one packet decided to take a hair longer.

Mind you, buffers to re-order packets are typically allocated up-front, so that means you need to allocate 625MB of memory PER CONNECTION.

Well, they don't have to be, but many implementations do because of simplicity, speed, and scalability. Usually a simple high performance ring buffer that blocks when it gets full.

If you don't allocate your memory up-front, you typically have to single thread your network or have locking, which are both prohibitive to high performance.
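
Sanity-checking the 625MB figure above - the buffer needed to hold half a second of a 10 Gb/s stream while waiting on one straggler:

    link_bps = 10e9
    hold_seconds = 0.5
    buffer_bytes = link_bps * hold_seconds / 8
    print(buffer_bytes / 1e6)    # 625.0 (MB)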

Thanks for the clarification. It seems to me that I need to catch the extra three key frames. Then when I change channels I can apply the deltas to that. The image won't be as clean but at least I'll know in general what I'm looking at and the picture will reset in one second. Thanks for the indulgence in the off-topic discussion.

I should soon(tm) be getting Fiber where the ISP uses IPTV. I was talking to a tech and got on the subject of video quality and he mentioned that when someone changes channels, the network can burst upwards of 1Gb/s to get the video-stream playing ASAP.

I assume it could be something similar to what you mentioned. If the ISP buffers, per channel, the current I-frame, then when someone changes channels, it could burst the I-frame and all needed delta frames really fast to get the endpoint in sync.

Another thing he mentioned was that most DOCSIS systems that claim to output 1080p really just upscale 720p, and that their fiber system has full native 1080p. Meh.. I don't really watch TV anymore anyway.

Thanks for the clarification. It seems to me that I need to catch the extra three key frames. Then when I change channels I can apply the deltas to that. The image won't be as clean but at least I'll know in general what I'm looking at and the picture will reset in one second. Thanks for the indulgence in the off-topic discussion.

Right, but that's not good enough. Assuming one I-frame (those are the periodic whole frames) every second: pressing channel up a second time within one second means you're still going to have to wait for an I-frame again. There's no way, in general, to do what you want to do, short of capturing all the channels all the time.

There are two models in wide use, a 7-layered one called the OSI model and a 4-layered one used by IP, the Internet Protocol. IP's 4-layer model is what we're going to talk about here. It's a simpler model, and for most purposes, it's just as good.

I've been doing networking now for about 5 years, and besides some very early studying, I've never used the TCP/IP model since.