Re: Is the issue based on unregulated “information service” delivering a regulated service? Well, call it what you like, but anytime equipment fails in Denver, Colorado, it should probably not interfere with emergency services in Boston, Massachusetts... that’s not the right kind of redundancy. Still, as companies shift to third parties for cost savings, perhaps the branded LECs, CLECs, and wireless providers like Verizon should be held accountable for negotiating third-party contracts to deliver calls that ideally would have remained local in scope.

Still, in the wireless situation I mentioned in an earlier comment, they paid for a charter flight to get new line cards and routers. Three hours of downtime. Because CenturyLink took two days, they must have hired the late Uber driver I used in Miami last week. Twitter isn’t a redundant communications platform. When seconds count, CenturyLink took to Twitter and essentially told customers that 9-1-1 help is only a two-hour round-trip drive away. 😉👍

Re: Is the issue based on unregulated “information service” delivering a regulated service? I had an off-forum conversation with Duh! about a network I was in recently that had gear in it that was EOL before Cisco was formed. That’s what I was musing about in the other thread. In this case, I was thinking about X.25 gear that was in the OSS network of this carrier.

And just remember, lots of old voice equipment had real PROM-based firmware that was not changeable... (go back and look at the POTS card requirements in TR-57, for example).

Re: Is the issue based on unregulated “information service” delivering a regulated service? Well, one would think so. However, this isn’t the first time I’ve read about a line card going bad. Another issue is that many manufacturers assumed lasers and LEDs wouldn’t dim after thousands of hours of use, and they were marketed as products that never need replacement.

Still, something similar occurred with new Juniper equipment deployed in an Ericsson/wireless setting that also relied on Level 3 as a service provider.

After a lot of blame games and finger-pointing between light-wave service providers, the formal notes filed with the FCC placed responsibility squarely on “firmware issues”; however, the line cards were replaced too.

As line cards start to age, especially those that connect legacy networks, I expect to see these types of occurrences become more frequent, especially in companies that lack a robust CMDB (Configuration Management Database) and an operational policy that mandates a record of changes. Two days to find out why 9-1-1 isn’t routing correctly is a long time, and somewhat indicative of a lack of asset information.
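To make the CMDB point concrete, here is a minimal sketch of the kind of record keeping I mean. All names here (`Asset`, `record_change`, `past_eol`) are hypothetical illustrations, not any real CMDB product's API; the idea is just that a queryable inventory with EOL dates and a change log lets you find a suspect line card in minutes rather than days.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Asset:
    """One hardware asset in the inventory, with its change history."""
    asset_id: str
    model: str
    site: str
    eol_date: date                                # vendor end-of-life date
    changes: list = field(default_factory=list)   # (date, note) entries

    def record_change(self, when: date, note: str) -> None:
        # Mandated record of changes: every swap/firmware load gets logged.
        self.changes.append((when, note))

def past_eol(inventory: list, today: date) -> list:
    """Return assets already at or past their vendor EOL date."""
    return [a for a in inventory if a.eol_date <= today]

# Usage: flag a legacy line card that belongs on the replacement budget.
card = Asset("LC-0042", "legacy-linecard", "Denver-CO", date(2015, 6, 30))
card.record_change(date(2018, 12, 27), "suspected firmware fault; card swapped")
stale = past_eol([card], date(2018, 12, 28))
print([a.asset_id for a in stale])  # -> ['LC-0042']
```

Even a table this simple, kept current, answers the two questions that took two days here: what hardware is in the path, and what changed last.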

It’s not like Storey could blame a hurricane or the weather, and, by virtue of being an “Internet Wholesaler,” it’s not like Storey has the power to issue a bill credit to a Verizon customer who was affected by the outage.

When I was at Company X, they had a network based on legacy Cisco equipment. Eventually, the hardware was EOL’ed. Instead of making rip-and-replace of the EOL’ed network hardware a funded priority, the people at X started sourcing parts from eBay. On the day I was about to give a report, I learned of an FBI bust, where the DoJ and FBI had deployed parts from a vendor that was sourcing “new old stock” from eBay that was actually knockoffs. Suddenly every project was green-lit for rip-and-replace. Company X had “Gold Status” on Cisco RMAs, and Cisco didn’t question any return/repair.

Some of this early equipment will eventually give out. Replacing it should be a priority, handled as part of a normal operating budget and before the “rip” is mandated. Citation: “Departments of Justice and Homeland Security Announce International Initiative Against Traffickers in Counterfeit Network Hardware”

Is the issue based on unregulated “information service” delivering a regulated service? This is one of the reasons why voice services for 9-1-1 are generally provided on dedicated, redundant circuits. When an “information services” provider such as Level(3) or CL translates those calls to VoIP or data service, a comparable quality of service (including redundancy) should be required, matching that of the regulated telecom counterpart.

More clarity: The story has changed subtly since the day after the event. Now the malfunction was in one of the vendor's cards, not in a third party's. That, plus some additional bits, gives credence to Fred Goldstein's theory that it was a hardware problem that caused a GMPLS issue that propagated through the control plane.

It also narrows the field as to which vendor it was (and I'm not naming names).