No Bandwidth!

I couldn't reconnect to CBS. The website had too many people accessing it, and when I got to the video content it was streaming slowly. I don't think they expected so many people to watch the game over the internet.

This does raise a point, though. Flash (if that's what they're using) does have a built-in peer-to-peer system. Assuming most folks have a router with UPnP turned on, or some method of NAT traversal is implemented, couldn't they in theory use a P2P distribution method? I wonder how that would work on a mass scale. You'd get lots of crushed last-mile networks, but I believe it would help out a lot in a LAN environment, so you're not taxing the CDN further: one machine gets the 720p/1080p stream and sends it out to others, who do the same on the LAN.

That's why you don't deliver video as a unicast stream from anywhere near the Internet's core. That's not the way Akamai does it, that's for sure. Put nodes as deep as possible into ISP networks (as deep as, if not deeper than, Akamai's current nodes/Netflix OpenConnect), then push the stream to them (total bandwidth used: ~20 Mbps per node). Then anycast (or use DNS-based resolution, whatever floats your boat) over the last 50-250 miles, depending on how far away you are from an ISP core router.

Oh, and serve everything over UDP, with no ability to fast forward/rewind the stream other than what the client can handle on its own. What you want is a network of dumb pipes piping a data firehose to wherever it needs to go...the smartest piece of the puzzle should be bandwidth detection (serving 360p, 480p, 720p or 1080p), and that should be client-side.
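The client-side bandwidth detection described above boils down to picking the highest rung of a bitrate ladder that fits the measured throughput. A minimal sketch, with purely illustrative bitrates (the `LADDER` values and `headroom` factor are my assumptions, not anything CBS or any real player actually uses):

```python
# Hypothetical bitrate ladder: (label, required bandwidth in Mbps), highest first.
# These numbers are illustrative assumptions, not real encoder settings.
LADDER = [("1080p", 8.0), ("720p", 5.0), ("480p", 2.5), ("360p", 1.0)]

def pick_rung(measured_mbps, headroom=0.8):
    """Pick the highest rung that fits within a safety margin of the
    measured bandwidth; fall back to the lowest rung otherwise."""
    budget = measured_mbps * headroom
    for label, need in LADDER:
        if need <= budget:
            return label
    return LADDER[-1][0]

print(pick_rung(12.0))  # plenty of headroom for 1080p
print(pick_rung(3.5))   # only 480p fits within the 80% budget
```

The point is that all of this logic lives in the client; the server just firehoses whichever rendition is requested.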

Oh, and don't forget the dual 10 Gbit NICs on the edge nodes. So you can serve 1000 viewers from one system.
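A quick sanity check on the 1000-viewers-per-box figure, assuming an ~8 Mbps 1080p unicast stream (my assumption for the per-viewer rate) and some headroom for protocol overhead:

```python
# Back-of-envelope check of the "1000 viewers per box" claim.
# Per-viewer bitrate and overhead factor are illustrative assumptions.
nic_gbps = 2 * 10          # dual 10 Gbit NICs
usable = nic_gbps * 0.85   # leave ~15% for protocol/framing overhead
per_viewer_mbps = 8        # assumed 1080p unicast stream
viewers = (usable * 1000) // per_viewer_mbps
print(int(viewers))        # prints 2125, so 1000 viewers is conservative
```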

Is the scale of this event enormous? Absolutely. Is it a special case that, if handled correctly, requires significantly less hardware per streamer than you'd normally expect? Yep.

Yeah, the only thing P2P would do in this case is destroy the upstream capabilities of any network where there are a number of SB viewers...because most last-mile networks still are lacking in the upstream capacity department (I'm talking raw capacity, not provisioned speed). Never mind the fact that many folks couldn't stream more than 480p on the upload side anyway.

In a situation like this, you want a branching topology, where the data flows in exactly one direction: toward the end user. There will be branches along the way ("repeater nodes") but anyone serving the stream should have enough upstream bandwidth to serve several users (or, you know, 1000...16GB RAM, high-cpu, 2x10G NIC machine). And, for the next ten years or so, the only place where upstream bandwidth will be reliably available in that quantity is a data center.
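The branching topology above scales logarithmically: with each tier fanning out to ~1000 downstream consumers (the figure from the post), the number of repeater tiers needed to cover even a Super Bowl-sized audience is tiny. A rough sketch:

```python
import math

# How many tiers of ~1000-way fan-out does it take to reach N viewers?
# The fan-out figure comes from the "1000 viewers per box" estimate above.
def tiers_needed(viewers, fanout=1000):
    return math.ceil(math.log(viewers, fanout))

print(tiers_needed(100_000_000))  # three tiers covers 100M viewers
```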

I think large viewer counts are where we see the current flaw in IP video for single high-volume events.

Mainly that with traditional delivery of a live event, 100 million people tune into the one existing broadcast, while with IP you have 100 million streams to serve. CDNs help with this, but they still don't remove the underlying inefficiency: if my neighbor is streaming the event and then I open the stream to watch too, that's now two streams rather than me just joining the existing one.
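The "join the existing stream" model does exist at the IP layer: with IP multicast, a second viewer subscribes to a group and the network delivers one copy per segment no matter how many local receivers there are. A minimal receiver sketch (the group address and port here are made up for the example; multicast is rarely routable across the public internet, which is part of why unicast CDNs exist):

```python
import socket
import struct

# Illustrative IP multicast receiver. Joining the group means the network
# delivers one copy of the stream to this segment, shared by all joiners,
# instead of one unicast stream per viewer.
GROUP, PORT = "239.1.2.3", 5004  # made-up group address and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IGMP join: tell the local network we want this group's traffic.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
print("joined", GROUP)
sock.close()
```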

Makes me wonder how SDV works. If five homes on a node are watching channel 6, is that five streams of the same channel, or is the node smart enough to know that feed is active and just put the boxes on the existing feed?--[65 Arcanist]Filan(High Elf) Zone: Broadband Reports