Posted
by
simoniker
on Monday November 24, 2003 @04:30PM
from the tragic-leech-loss dept.

An anonymous reader writes "Renesys (the people who previously brought you cool animated graphs of the US/Canada power outage) has a new report out. It challenges the widely held belief that the Internet was largely unaffected by the power outage. Lots of important networks lost connectivity, including banks, hospitals, government organizations and investment funds. There's a cool appendix on the huge Italian power outage in September as well. They conclude that the Internet is not ready to be critical infrastructure."

It has always seemed to me that the internet isn't all that de-centralized; a few major companies run most of the backbones. Since it isn't a huge ad-hoc network, most of the data for an area probably goes out through no more than 5 connections. Especially in rural areas, I wouldn't be surprised if at least one routing station in each of those chains lacks good long-term backup facilities.

Well, duh. (Yes, I did read the article.) If 1/3 of the country goes out, we are sure as hell going to lose *some* connectivity.

It's pretty cool though that it can be observed in terms of routing activity.

Yes, ideally everyone would have backup power (and enough of it). If power outages were common, it might be a good selling point for ISPs, but they aren't, so not many people want to pay more $ per month just to have battery backup. (Especially residential customers, who won't have it at home anyway.)

I don't like big government either, but an FTC law (or whatever) mandating backup power for ISPs/backbones of sufficient size or type of service (business vs. residential) might be what's needed.

If phone companies have such a requirement, then the internet probably should too.

(Unfortunately, most phones are powered from the phone line, but I can't say the same about my cable modem...)

OTOH, did many businesses care to have backup power for a sufficient length of time? Just because some routers went out, it might not have mattered if their end users were already without power.

A robust internet is a great thing, but not nearly as great as a robust internet with robust users.

The ability to observe the outage (sharply) through routing activity is definitely the part that we thought was coolest.

People are saying two different things here: 1) well, duh, if power is out lots of people can't connect to the web; 2) if the core of the internet routes around that who cares. These are both interesting points. Here are some thoughts:

1) We agree. That's what I thought. But read the keynote press releases. Or just google 'blackout Internet' and you'll find glowing stories about how 'the Internet' didn't even blip during the blackout. We prove pretty conclusively that this is incorrect.

2) The core of the Internet did, indeed, route around the outage. This is good. What is less good is that thousands of networks within the outage area lost connectivity, either due to losing power themselves, or due to upstreams that lost power (or telcos who lost battery backup on CSU/DSU units, or whatever). These are *not* DSL customers (or that grade, anyway). All of these are BGP-speaking networks with their own Autonomous Systems and their own prefixes.
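Those withdrawals are straightforward to spot in routing data. Here's a minimal sketch of the idea, using made-up update records and RFC 5737 documentation prefixes (real analyses like Renesys' parse MRT dumps from route collectors such as RouteViews; everything below is invented for illustration):

```python
from collections import Counter

# Hypothetical, simplified BGP update log: (minute, msg_type, prefix).
updates = [
    (0, "announce", "192.0.2.0/24"),
    (0, "announce", "198.51.100.0/24"),
    (305, "withdraw", "192.0.2.0/24"),     # blackout begins
    (305, "withdraw", "198.51.100.0/24"),
    (306, "withdraw", "203.0.113.0/24"),
    (790, "announce", "192.0.2.0/24"),     # power restored for one network
]

# Track which prefixes were withdrawn and never re-announced:
unreachable = set()
for _minute, msg, prefix in sorted(updates):
    if msg == "withdraw":
        unreachable.add(prefix)
    else:
        unreachable.discard(prefix)

# A per-minute withdrawal count makes the outage edge visible:
withdrawals_per_minute = Counter(m for m, msg, _ in updates if msg == "withdraw")

print(len(unreachable), "prefixes still dark at the end of the window")
print(withdrawals_per_minute.most_common(1))  # the minute the lights went out
```

That sharp spike in withdrawals per minute is exactly what makes the outage observable "(sharply) through routing activity."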

The fact that so many networks went down is significant, given that many organizations are coming to rely on the Internet as a critical communications infrastructure.

Well, to think that this did not affect many, or that the internet didn't even blip, would be a dumb statement to make. First of all, with the news of the power outage, many geeks ran to their computers to see if they were still able to AIM their buddies! Or the online gamers who, regardless of the candles surrounding them, still could not believe that this could be happening.

There were many a young kid without the ability to hit the Maxum web site. So to say that there weren't any effects is just purely not true.

The thing that I think was a success about the internet in relation to the outage is that there was no loss of connectivity between Boston and California. I actually found out that a huge chunk of the middle of the country had no power from a site on the other side of the outage from me.

As for the people who lost power, I suspect that they were largely more concerned about other things than their communications. When the neighborhood down the street from me lost power, the banks and stores closed.

I don't like big government either, but an FTC law (or whatever) mandating backup power for ISPs/backbones of sufficient size or type of service (business vs. residential) might be what's needed.

I agree and disagree. I wouldn't make it a required thing that ISPs _HAD_ to do. I might come up with a scheme like they did with the emergency network dealy-bob. TV and radio stations don't have to broadcast at any specific wattage or have any backup at all. If they want to be a part of the emergency network...

(I've been using netmegs.com for a few years, and been really happy with them - and while they're located in the northeast, they didn't go down during the blackout... - i.e., if people know what they're doing, the system is fairly fail-safe against these temporary short-term emergencies).

I think every house should have a phone that will operate without local electrical power, just in case. I know that my fancy digital answering machine/phone unit requires power for the ringer or answering machine to work, but I can place calls just fine without power.

I still haven't dealt with my inadequate UPS coverage though. Must get more UPSs.

When I start to think about getting my own house, I intend to factor the price of a whole house generator into figuring what I can afford.

I think every house should have a phone that will operate without local electrical power, just in case.

Agree, and I do.

When I start to think about getting my own house, I intend to factor the price of a whole house generator into figuring what I can afford.

Depending on where you're headed, think green: solar, hydro, or wind! My sister lives off the electrical grid (hydro), and it's amazing. Just have to give a little more thought into what you're running.

I read about the outage first on *slashdot*. You can't tell me the "Internet" was knocked out. It's "parts of the Internet that did not have power" that were knocked out. I mean come on, do they expect the public sewage system to work when there's no water?

The great thing about the internet, and the whole reason ARPA (now DARPA) had it created in the first place, was that large chunks of it could go down, but as long as both yourself and your destination were still connected, it would re-route and get your data delivered.

This does NOT mean, as people seem to think (like the guy making the headline post), that this guarantees 100% uptime.

If things could be 100% relied upon, there would be no need for rerouting or redundancy. Since TCP/IP was made for the real world, rerouting and redundancy are part of the design.
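The route-around behavior described above can be illustrated with a toy reachability check (the topology and city names here are invented for the example, not a real map of any backbone):

```python
from collections import deque

# Toy mesh: Boston and California connect through two independent paths,
# one of which passes through the blackout region (nyc).
links = {
    "boston": {"nyc", "chicago"},
    "nyc": {"boston", "dc"},
    "chicago": {"boston", "denver"},
    "dc": {"nyc", "denver"},
    "denver": {"chicago", "dc", "california"},
    "california": {"denver"},
}

def reachable(src, dst, down=frozenset()):
    """BFS over the topology, ignoring routers that have lost power."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nbr in links[node] - set(down):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return False

print(reachable("boston", "california"))                # True
print(reachable("boston", "california", down={"nyc"}))  # True: rerouted via chicago
print(reachable("boston", "nyc", down={"nyc"}))         # False: can't route *into* the dead zone
```

Routing around a failure works as long as some alternate path exists; routing *into* a powered-down region never can, which is the distinction several comments here are circling around.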

As many have stated, the Internet is less resilient than POTS. I don't know if it is more reliable than the power grids, though.

The internet doesn't seem to have any cascading failure modes. Inter-router links tend to be much larger than any traffic that could go through them. TCP will backoff automatically if a path becomes bandwidth-constrained. Even routing loops will only affect a few paths, and tend to be corrected fairly quickly.
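The backoff mentioned above can be sketched in a few lines; this is a simplification of the RFC 6298 retransmission-timer rules, with illustrative constants rather than any particular stack's defaults:

```python
# TCP-style retransmission backoff: on each timeout the retransmission
# timeout (RTO) doubles, up to a cap, so a congested or broken path is
# probed less and less aggressively instead of being hammered.
def backoff_schedule(initial_rto=1.0, max_rto=60.0, attempts=8):
    rto, schedule = initial_rto, []
    for _ in range(attempts):
        schedule.append(rto)
        rto = min(rto * 2, max_rto)  # exponential backoff, capped
    return schedule

print(backoff_schedule())
# [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0, 60.0]
```

This self-throttling is one reason the internet lacks the cascading failure modes the parent describes: senders automatically ease off a constrained path rather than piling on.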

Ready or not, the internet is increasingly being used for critical infrastructure. At best, failures like the power outage should motivate governments and industry to bolster the internet up to where it needs to be for reliability standards.

Industry is more than willing to "bolster the internet up to where it needs to be for reliability standards"; it's called Spend the Money. You want 5 9's connectivity, you gotta pay. The government getting involved? I thought you were looking for MORE reliable? :)

The proper conclusion from the data would be that many businesses in the blackout area, despite handling large sums of money daily, did not have sufficient redundant power or connectivity.

Whether anyone could have anticipated such a large-scale blackout (and prepared accordingly) is another topic.

The internet is not the only thing being used as critical infrastructure. Look at cell phones. People use them every day, and they are becoming the norm. It is even becoming the standard, with number portability moving land lines to cell phones and not vice-versa. But are they reliable? One power outage and they fail; one emergency and the cell towers get overwhelmed. Oh well, just another piece of technology we are addicted to that could easily fail us.

While I know that was meant as a joke, it's important to point out that the power grid /isn't/ used for critical infrastructure. No hospital, air traffic control station, or power plant (oh, the irony) would be caught dead without a backup power system.

"We find that Internet connectivity in the blacked-out region was far more seriously affected than has been publicly revealed."

Pointing out that areas without power didn't have internet connectivity seems rather redundant to me. The big question is how did it affect people outside that area? The fact that the rest of the world just plugged right along seems contrary to the conclusion they seem to want to draw.

Pointing out that areas without power didn't have internet connectivity seems rather redundant to me.

For home users and small businesses, you are quite right. What about large businesses that invested in generators so they could stay online 24/7? They were prepared to remain online to conduct their business. They depended on the Internet and it failed them.

I work for a large bank. We were not hit by the power outage, but we were scrambling to find routes around the areas that were.

Haha. I am sorry, but anybody who thinks keeping power to the one building, while everything else lost power, would keep the infrastructure outside their building working is an idiot. They should be fired.

"We were not hit by the power outage"

I am confused: were you in the blackout area and using generators, or were you trying to connect to someone within the blackout area?

If a company promised you 24/7 internet reliability, and they were in the blackout area, sic your legal dogs on them.

I think the implied problem was the connectivity that was provided by ISPs and backbone segments running off the affected sections of the power grid.

If the Internet were more redundant and ad-hoc (less backbone-centric), it would recover from problems better. That's how it was originally envisioned; unfortunately, the commercialization of NSFNet has largely destroyed this approach, for better or worse.

We have a more organized network, but it's very dependent on critical points because of its multiplexing organization strategy, so when that fails...

To a certain extent you may be correct. But you have to look at it in a slightly different light. If the power goes out, hospitals, telephone networks, and other "essential" services tend to have backup generators and backup batteries. Now, for the internet to be ready to reach the legendary uptime of POTS, it will have to improve. This means that we should not be routing information on which lives depend exclusively over the internet. The so-called essential services must all be willing to accept that one or more of the essential services will fail (hence the amazing backup batteries, generators, etc. found at hospitals and telephone companies).

Bah, I could have told you that. I work for an ISP that serves 15 states. I get calls from people who put 100% of their business into a DSL line - with no backup to other carriers or mediums. When a hardware failure or trunk line failure occurs - they go postal.

Sorry, but uptime is not 100%; never was, never will be. Plan for it, or deal with it when your connection goes down.

Even though we have multiple connections to the backbone, local trunks can go down: backhoe attacks on buried fiber, or dove hunters blasting pole-run fiber (don't laugh - it happened last week). If you don't have a backup DSL, ISDN, or heck, even dialup connection for your business - then stfu and wait while we repair.

And don't even get me started on residential accounts that call in 'I use this for work I need it up now - send someone out today.' And it's Sunday evening... no - you didn't pay for a business account, so you get residential service levels which include 24-72 hour turn around on repairs.

This is certainly a topical comment, but it misses the point a little (I think).

A large number of organizations that were multi-homed, using BGP to announce routes out multiple upstream providers, lost connectivity. This speaks to the situation that people who have spent a bunch of money on network infrastructure may not have spent enough on power (or may not have carefully evaluated their upstream providers).

One of the organizations identified in the study had nine (9!) upstream providers and still went out. This is not a case of people on the far end of a DSL link; this is a case of people not being able to put together reliable network connectivity, even in the face of multi-homing.
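Some napkin math shows why nine upstreams can still fail: multi-homing multiplies availability only across components that fail independently, while anything in series (like the building's power) caps the total. All numbers below are illustrative assumptions, not measurements from the report:

```python
# Back-of-envelope availability: parallel links multiply up reliability,
# but a serial dependency (site power) bounds the whole system.
def parallel(*avail):
    """Probability that at least one of the given links is up."""
    p_all_down = 1.0
    for a in avail:
        p_all_down *= (1.0 - a)
    return 1.0 - p_all_down

link = 0.99    # assumed: each upstream is up 99% of the time, independently
power = 0.999  # assumed: site power availability without a generator

nine_links = parallel(*[link] * 9)  # 1 - 0.01**9: so close to 1 it rounds to 1.0
overall = power * nine_links        # series dependency drags it back to ~0.999

print(f"nine independent upstreams: {nine_links}")
print(f"with site power in series:  {overall:.6f}")
```

Nine redundant links buy essentially nothing if all nine terminate in a building with a single power feed: the weakest serial component wins.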

Interesting point about the cable TV system (and therefore cable modems) in my hometown. When they were installing the new system they installed UPS-like backup power supplies throughout the city to keep the cable system going for 60 minutes after commercial power fails. So, I basically can hang on the Internet for about the time my UPS has life on my computer in a blackout... the cable company arrived at the 60-minute figure because they believe that's as long as people will ever be able to power their own equipment anyway.

Sorry, but uptime is not 100%; never was, never will be. Plan for it, or deal with it when your connection goes down.

The problem with this is most SMEs can't afford to spend the money on a service merely as a 'backup', especially if they don't understand exactly how much they rely on it (and I'd guess there are a lot of PHBs that don't).

That said, I work at a small company, and we do have a router that automatically fails over to a modem (which is 16.8k or something - the only external one I had sitting around).

Speaking as someone who has recently been involuntarily annexed and is being *forced* and billed to have city water and sewer installed, I'd be damned pissed off if my water suddenly quit working and it would be 2-3 days before you would even send someone over.

Currently, if I lose power, I fire up my generator; I still have water. If the water pump has problems, I can usually get someone over that day (or the next at the latest) to fix it or replace it. With the city water system, I do not get that option

If power *is* a critical infrastructure, and lack of power is what caused these problems, how can that support a conclusion that the Internet is not ready to be considered critical?

Because the internet is communication, not power. So the correct comparison is the telephone company, not the power company.

Power can be backed up locally. Communication can not. So power only needs to be available MOST of the time, with backups on any critical services, to achieve its "critical infrastructure" level of reliability. Communications, on the other hand, requires an infrastructure with multiple links, routing around failures, and local power backup at the active nodes to achieve its own "critical infrastructure" service levels.

The telephone company HAS this level of backup power built in. Switching centers, for instance, run their equipment directly from TWO banks of 48v batteries suitable for days of operation, and run battery chargers continuously when there's power available. Repeaters on long copper trunks are powered from the endpoints - and can run with only one endpoint hot. Telephone instruments are powered from the central office switch via the copper wire. Active customer premises equipment has battery backup for critical features or is designed to connect at least one POTS phone directly to a copper pair to the switch in case of blackout, and so on. SONET nodes are wired as rings rather than trees, so you have to cut TWO fibers in different places to isolate them. Other trunks are redundant and switch over automatically in case of outage. I could go on. About the only place a single cable cut can cut you off is the line to your house - and if you pay (a lot!) extra (as some businesses do) you can get another run in by a different path, so no single backhoe or downed pole can isolate you.
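The two-cut property of a ring is easy to verify mechanically. A toy 6-node ring (a stand-in for SONET topology, not real gear) makes the point:

```python
from itertools import combinations

# A 6-node ring: each fiber span is an undirected edge between neighbors.
N = 6
ring_edges = [(i, (i + 1) % N) for i in range(N)]

def connected(edges, n=N):
    """Union-find connectivity check over the surviving spans."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(i) for i in range(n)}) == 1

# Any single cut leaves the ring connected (it degrades to a line):
single_ok = all(connected([e for e in ring_edges if e != cut])
                for cut in ring_edges)

# But a pair of cuts in different places partitions it:
double_fail = any(not connected([e for e in ring_edges if e not in pair])
                  for pair in combinations(ring_edges, 2))

print(single_ok, double_fail)  # True True
```

That is the whole appeal of ring topologies over trees: a tree loses a subtree to any single cut, while a ring needs two independent failures before anyone is isolated.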

The Internet was ORIGINALLY designed with this kind of redundancy built in. Individual links were via the telco's infrastructure, with its power-failure resistance. Routing was automatic, and would find a route between any two nodes if one still existed. (It WAS designed by people who were at least THINKING about surviving a nuclear attack, after all.)

But with the "inflation" of the commercial internet this robustness was lost. The explosion of active IP addresses made routing tables impossibly large, while most sites were connected via a local ISP rather than ad-hoc connection to two (or more) internet neighbors.

So the internet split into a "backbone" with SOME of the old routing redundancy, interconnecting ISPs, who in turn give you a default route JUST to their own servers. If your ISP fails you're cut off, and if the backbone connections to your ISP fail, ditto (even if you in principle COULD reach the rest of the net through somebody with a two-ISP feed.)

The ISP business has FIERCE price competition, and one BIG way to cut costs is to reduce redundant routing internally and neglect backup power.

At the backbone level the long-haul networks carrying the data had an even FIERCER price war, due to the excessive long-haul buildout of the internet bubble. Perhaps some of the upstarts powered their switches and repeaters with local power (on the assumption that they could slough any site that had a local power failure and that they'd have a path with all equipment powered between any two customers still live). A major blackout would violate that assumption, cutting off not just the dead area but others who could only reach the rest of the net by routing through it.

How about your DSL or cable IP feed? Did your cable company include battery backup power in the repeaters, pole-mounted routers, and fiber/cable bridges? Is your set-top box battery backed up? How about your DSL modem? If you're corporate, are all your routers, your VoIP bridge, and any desktops running a softphone on the UPS? Do your SIP phones run if the power fails? (Home users ditto for your PC.)

Until all these are fixed the internet is NOT running at "critical infrastructure" reliability levels. So you'll want to think VERY CAREFULLY before disconnecting your POTS line and depending on Internet-VoIP. B-)

The reason everybody said that the internet survived was that they were able to visit most of the sites they cared about during the blackout. The chart seems to show that many links and servers were down (presumably without power) during the blackout (including some major components of the internet), yet most people were basically unaffected. This seems to suggest that as long as the server itself isn't in the middle of a blackout, the Internet can survive rather well. How many of you learned about the blackout from Slashdot or some other online news source?

True, the internet did manage to "route around" affected areas - but obviously it couldn't "route into" them. It's a bit like a nuclear blast taking out Washington DC: the highway system would survive intact (well, mostly:) ) around the area and vehicles would be able to continue using it, but they couldn't drive into the area (unless, of course, you like mutations;) ).

It seems to me that this article is complaining that they couldn't drive to downtown D.C. after the nuke hit.

'Lots of networks/servers/etc in the blackout area were unreachable'

Well, duh.

The most we learn from this is that if you want to stay up in a blackout, invest in better backup power systems. It is not, however, pointing out a significant weakness in the worldwide network as a whole.

How many of you learned about the blackout from Slashdot or some other online news source?

I learned about it when my card wouldn't swipe me out of the parking garage. And then when all of the traffic lights were out downtown. And then after searching the dial and finding the one AM station still on the air.

I made it home as quickly as I could... traffic was a nightmare and I only had less than a quarter of a tank of gas. I had decided I would wait until that night to fill up instead of getting it at lunch as I drove past a gas station. Little did I know gas stations don't even have generators to run the pumps (which seems kind of odd... they have plenty of gas to run the generators, but no electricity for the pumps. Lack of planning on their part IMHO).

Air Canada lost its reservations/bookings/everything servers, and couldn't operate anything approaching normally, for one reason: the servers were based in the midst of the blackout. Out here on the left coast, there were no effects. So why don't international orgs and government departments have duplicate facilities on independent grids? That's always bugged me.

Well, look at it this way... they say "a UPS is good enough; if power goes out it will be out a few seconds or maybe a half hour", and don't plan for a worst-case scenario in which you have a few hours of power outage... so instead of saving everything, committing caches and so on, they just keep on hoping "in a few seconds power will be back on"... I just hope they DID learn their lesson now, and cut back on cutbacks (lol).

Given that Air Crapanada regularly cancels east coast flights if there's the merest hint of a thunderstorm that might leave their crews / planes stranded somewhere that would be expensive to keep them for the night, it doesn't surprise me that they don't spend money on a decent backup strategy.

In fact, at the rate that they were (are?) losing money [canada.com], having their operations shut down temporarily probably saved them a fortune. Sadly it looks like they have not been allowed to go bankrupt.

I live in Denmark and recently we had a blackout that lasted maybe 10 hours.

While I was unable to make any phone calls, I could get on the internet with GPRS and surf to our server with my laptop for as long as the laptop batteries lasted.

The server is hosted in a colo datacenter which was also in the middle of the affected area. We run a MUD on the server, and most of the players are from the USA. They never discovered the blackout, as the datacenter went on emergency diesel backup and apparently knew to do business with backbone providers that also knew their stuff.

So to the people saying that the internet can only route around blackout areas but not _through_ them: this is not true. Seems at least here in Denmark all the infrastructure on the backbones has backup power and just keeps working while everyone else is busy lighting candles.

I can verify this. Main Danish Internet access was totally unaffected by the blackout that encompassed all of Copenhagen - the Danish capital - and most of the rest of the island of Zealand.

I work right next to where the central Danish Internet Exchange (the 'DIX') is located. My company's servers are on a standard UPS so we had power for a couple of hours before we ran dry. While we still had power, our network connectivity was completely unaffected. The DIX and most major Danish ISPs have excellent power backup.

The people affected by the downed routers were people who were in the blackout and couldn't turn on their computers anyway, so it doesn't matter that those machines were down. People outside the blackout were able to route around it, and THAT is the relevant part of the statement that the internet did well during the blackout.

You seem to be assuming that most servers people connect to are physically located near them. I doubt this is the case, and I'm sure there were some major, national businesses with servers in the NE that people from across the country were trying to connect to. Ideally such large companies wouldn't keep all their webservers in one physical location, but I don't think that's usually the case.

I don't understand why critical systems were not backed up with UPSs and generators. Power failures happen everywhere. You should never be 100% reliant on utility power. A generator with an adequate supply of diesel (and a contract to keep it full for long-term outages) is a must-have for critical systems in my opinion.

You may have a backup battery for your servers, but you could be an island in a sea of dead hops.
I suppose if you were using the internet as a critical service you would want backup links to a major node - probably more than one - and/or a satellite relay.

I work in a hospital in Toronto. There were almost NO facilities or services that functioned in the early parts of the blackout. Would you claim phones are a critical infrastructure? It's true that they worked during the power outage, but very quickly all the phone networks were too congested to provide service - this lasted for several hours. Radio stations continued to broadcast until their backups ran out and we were left with dead air. Thankfully, the hospitals had sufficient emergency generation to keep running.

Radio stations continued to broadcast until their backups ran out and we were left with dead air

Just some thoughts about 1998's power outage during winter [aol.com]. Apparently, the air conditioning was not working in the most recent major power failure, which caused people to sweat more than they were accustomed to.

Given that, while major segments of the network were taken out by the blackout, other large parts of the Internet - including parts inside the area with the power failure - remained unaffected, as established by this report, one would likely conclude that the Internet is at least as reliable as the power grid, if not quite a bit more so.

Given that the power grid is already considered critical infrastructure, it doesn't make sense that they would conclude that the Internet is not suitable as such.

It worked for the whole duration of the 9-hour outage. The only problem I had was reaching local numbers; all the trunks for local numbers were filled up. FirstEnergy's outage reporting number gave a busy signal right after I hit send, or dialed it on a landline, after about 10 minutes (I tried to call it a few times because I was pissed off; I was missing a good episode of Jerry :(

Then my landline died like 3 hours later. Completely. No voltage whatsoever. But my mobile worked for the whole duration.

I'm just wondering what would constitute a good response to a wide-scale outage. I mean, if New York were destroyed by a meteor shower, I wouldn't count on being able to pull up the Times' server.

Frankly, people, the internet is run along the same backbone as the telephone system. Why? Cost. It is as reliable as your major long-distance phone carriers, because it's switched right alongside the long-distance phone network.

What bugs me far more than the internet going down is the fact that some morons

The data they present indicates that the blackout had a severe regional impact. I see nothing that shows that there was a significant global impact (meaning that I can't get data from AS 12374 to AS 553, for example).

The WTC collapse probably had more impact on global routing (some large carriers had primary and backup equipment in both basements).

The data they present indicates that the blackout had a severe regional impact. I see nothing that shows that there was a significant global impact (meaning that I can't get data from AS 12374 to AS 553, for example).

That's correct. In fact, our data showed that it clearly did _not_ have global impact. (Compare with various worm events, which do generally have global impact: Code Red II and Nimda report at http://www.renesys.com/projects/bgp_instability/index.html [renesys.com])

The vast majority of the networks that went dark were /24s (24-bit prefixes). That is generally either small to medium businesses, home offices, or a division of a larger business. I think we can all agree that outages at that level, though undesired, are not the end of the world. Small outfits and home office workers can afford the downtime in the case of a general crisis (i.e., the buses aren't running either, so go have a coffee and read the WSJ), and 4-8 hour outages on their DSL are not uncommon either. I know that is the case where I work, and we have a global presence too.

We invested in a very large portable battery backup system for our server room back when California was having its own blackouts. The stack would probably stay up an hour or so, which we figure is enough to manage most blackouts nicely, and anything longer than that is a "major cockup" that we need to wait out. But if we go down who will care? Just us, and not all that much.

I think that the general expectation regarding the internet is not that it will stay up 100% in a crisis, but that it will continue to operate in cells of functionality during most kinds of disaster, then recover quickly on its own as soon as it can build remote connections again. Compare that to the electric grid, where most or all cells of function were sucked empty and driven into the ground when the grid dried up, and engineers spent days coordinating their recovery so that the first cell to come online didn't have to feed the entire electric grid on its own. Tricky stuff.

TCP/IP is built to understand rolling outages and uncoordinated recovery. The electric grid still is not. That, I would submit, is the main issue and not that routers on the edge of small networks didn't have generator backup.

The Internet was a lot MORE capable of being infrastructure, before *.com happened. Since it has been commercialized, the backbones have become more and more important, and routing/re-routing less and less important.

"Error: No Route To Host" at one point in history literally meant that the computer directly connected to the computer you were trying to reach was offline. Now, "No Route To Host" means that there was a power failure somewhere in the world that just happened to be in the way of your provider routing through a few other providers, or that a janitor somewhere kicked out a plug in Minnesota, while you were trying to connect from Michigan to Texas.

The system used to be able to route around virtually ANY connectivity issue. Now, it can't route its way out of a wet paper bag.

Hell, the recent blackout pretty much means that the electrical grid isn't ready to be critical infrastructure, either.

Let's not forget that part of the justification for building the Interstate highway system was that the high-speed roads could be closed down and used for military transport, and possibly even as air strips, in case the USA is ever invaded. So, any civilian "in case of war" plan that depends on the highways being available is flawed, because those roads just might not be open.

But this surprises WHO, exactly? I work for a large telco (I won't mention AT&T by name) and I can assure you that even if YOU were up, THEY were down, which effectively made YOU down as well. Those few days SUCKED for people working at a carrier, lemme tell you...

Newsflash: the internet is already critical infrastructure, and the power grid that failed is critical infrastructure and has been for the better part of a century.

If you're saying that lack of failure defines whether something is critical or ready to be critical then I guess by that definition the electrical distribution grid isn't ready to be critical infrastructure. That is preposterous because it is and manages quite nicely for the most part. The rest is down to cost benefit.

Except that when the electrical grid breaks down, hospitals, banks, and everyone else running something important have generators to make up for it. They don't have a generator that will let them access a database on the Internet a few thousand miles away, though.

You seem to overlook that it was a lack of electricity and a lack of backup that brought the internet down. Backups tend to provide reduced capability for an hour or so, and not everyone has them. Nobody would argue that electricity isn't critical infrastructure, but going to local backup generation means that *infrastructure* has *failed*. Similarly roads are critical infrastructure, but they failed in the blackout too (thanks to signalling issues).

To me, life and death is critical. Not being able to read Slashdot is not critical. Are there functions at, say, hospitals where people's lives hang on being able to access the Internet? Backup power just takes a generator. How do you provide backup Internet to a hospital? I think that's the distinction between the electrical grid and the Internet. You can't really provision your own Internet for emergency purposes. At least not very easily.

Yes, you can provide redundancy; of course electricity is a prerequisite. There are satellite links and microwave and/or radio communications that you could use if you thought it important enough. Once again the cart is before the horse here: life and death is not the definition of critical infrastructure, and neither is redundancy. Critical in a context like this typically means critical for performing business functions (for example), and the Internet certainly meets that definition.

I live and work in the NYC metro area, and was at work when the blackout started. I didn't notice there was a blackout until I walked outside and saw our generators on. For the record, I work for a company that provides services to large internet datacenters. Any datacenter worth its monthly fee wasn't affected by the power outage. Yes, individual institutions including banks etc etc who weren't prepared did lose connectivity, but backbone providers and large carrier centers in the area didn't skip a beat.

I guess my definition of "the internet" doesn't really include the last-mile connection to users. Not that either one of us is right or wrong, mind you. But I guess if the users of "the internet" can't use it, then it's not really working in a worthwhile manner.

Just wondering... even though you had normal capacity through the blackout, did your site maintain normal usage? Having the datacenter up is nice, but datacenters only exist to store information generated in the "real world".

If a datacenter's up, but nobody's online to use it, do the servers still hum?

Considering this was a backbone facility and home to several hundred different colocated and managed customers, I can't speak for all of them, but I do know no links to the outside world were lost. But then again, maybe that's because this was a high quality facility. :)

A few weeks prior, they moved the datacenter to a brand new facility. The new feature was a generator attached to the datacenter rather than just a few hours of batteries; from what I was told, before the move it ran on batteries alone.

Found out afterward that there was power the whole time in the datacenter, and services were pretty much normal.

So not everyone did badly. It depends on how prepared they were (and, of course, some luck).

This article obviously overstates the problem significantly. Those who had power could get on the web; those who did not could not. Those who had power could, for the most part, communicate over the web with others who had power, even when those in between did not. The failure is with the power grid, not with the internet; the internet is reliant on that system, and of course when that system goes down in an area, the internet there will as well. However, the internet as a whole performed quite gracefully, barely noticing.

The internet impresses me every day. It's the ridiculous expectations of people that blow my mind.

When the World Trade Center went down, I worked at a major ISP. Verizon is right next door to the WTC, so not surprisingly all the main trunks were destroyed. Connectivity for much of the Atlantic, including Europe, was disrupted. Many carriers had cell towers on top of the building as well. Even from California I didn't need to be told the internet was going to be f#@*ed up in the east. Yet somehow people were surprised.

They conclude that the Internet is not ready to be critical infrastructure.

Huh? It would seem to me that the fucking power grid is not yet ready to be critical infrastructure but hey, here we are. Shit. There is nothing in the world (except for the sun, oceans, etc.) that is 100.00000% dependable.

They conclude that the Internet is not ready to be critical infrastructure.

Really? It already is a mission critical infrastructure for my company and most others, I suspect. When some idiot with a backhoe takes the region down for a few hours, we're in serious doo-doo (no second carrier where I am). We switch to ye ole spreadsheet as a backup, but we're crippled without Internet access.

I agree with the article - there are some serious architectural flaws that need to be addressed; however, the fact of the matter is that the internet has already become a mission-critical technology despite these shortcomings.