biography: I am jobless and trying to see if I can go to grad school... update: I am now broke

location: NYC East Village and Queens

talents: photography and finding a lovely career I like

occupation: Parents do now...

Posted 14 March 2008 - 05:04 PM

I'd like to know what happened in the above accident. I'm guessing Winnie the Pooh knocked over the motorcycle, which had failed to yield when entering the intersection. Tigger, who was behind Pooh, rear-ended him, followed by the clueless postal worker who crashed into the pile. Poor dude had no seatbelt and flew straight out.

Something fishy going on with the network provider... not the sufu servers.
This was the reason for the crash last week, and I suspect it's similar to what happened a few hours ago. If anyone can decipher the following excuse given to us, then they must be very smart indeed.
++++++++++++++++++++++++++++++++++++++++++++++++++++++

Broadcast storm of undetermined origin caused link flapping, which in turn caused HSRP and spanning tree failures. The broadcast storm apparently began in the C2 data center, disrupting traffic on key corporate VLANs as well as hosted servers. The C2 core router's CPU became overloaded and inter-data center links were non-responsive, causing STP recalculations and HSRP failures. Key corporate infrastructure became inaccessible as multiple routers attempted to take over (or relinquish) gateway IPs while spanning-tree-calculated switching paths appeared and disappeared. The C2 router shares switching infrastructure with the C3 core, and the initial state of the data center interconnections had most traffic passing through the C5 data center. The broadcast storm cascaded through both the primary and backup C5 distribution networks, leaving access switches with no egress. The broadcast storm propagated through the shared switching infrastructure of the C3 data center facility. Both primary and redundant customer colocation access routers were affected, and the storm propagated to the customer access switches. As a result, many customer access devices (in the colocation cabinets) were left in a non-functioning state and required a reboot to restore services. Cisco engineers are on site to determine the root cause of the issue. In the interim we have taken steps to deploy additional equipment and to remove certain HSRP and redundant switch paths to reduce the severity of link flapping until 100% resolution is proven.
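
For anyone trying to picture what that excuse actually describes: the core problem is a physical loop in a layer-2 network with nothing blocking it, so one broadcast frame keeps getting re-flooded and multiplies until the switches and routers drown. Here's a toy Python sketch of that mechanism; the switch names and the little four-node topology are made up for illustration and are not the provider's real network.

# Toy model of a broadcast storm: in a looped layer-2 topology with no
# spanning tree blocking any port, every switch floods a broadcast out all
# ports except the one it arrived on, so copies multiply every hop.
# Switch names and topology are invented for illustration only.

links = {
    "C2-core": ["C3-core", "dist-A", "dist-B"],
    "C3-core": ["C2-core", "dist-A", "dist-B"],
    "dist-A":  ["C2-core", "C3-core", "dist-B"],
    "dist-B":  ["C2-core", "C3-core", "dist-A"],
}

def simulate(ticks):
    # Each in-flight copy is (switch it sits at, neighbor it came from).
    frames = [("C2-core", None)]  # one broadcast frame originates in C2
    for tick in range(1, ticks + 1):
        next_frames = []
        for switch, came_from in frames:
            for neighbor in links[switch]:
                if neighbor != came_from:  # flood out every other port
                    next_frames.append((neighbor, switch))
        frames = next_frames
        print(f"tick {tick}: {len(frames)} copies of the broadcast in flight")

simulate(8)

Run it and the frame count roughly doubles every tick, which is why the routers' CPUs got overloaded and the links started flapping; spanning tree is supposed to block one of those redundant paths so the loop never forms in the first place.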

weeelll... you see we have a big, big, big server in a big, big, big server facility and they are having big, big, big problems meaning we have big, big, big problems.
-> and if anyone knows what "# df" means, and what nasty log files await when you run that command, then you get the picture
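
For the non-unix folks: df just reports how full each mounted filesystem is, and the picture being painted is presumably a partition sitting at 100% because runaway logs filled it up. Roughly the same check in Python, if that's easier to read; the paths are only examples, not necessarily what's mounted on the sufu box.

# Roughly what "# df" tells you: how full each filesystem is.
# The paths below are examples only; on a real box you'd check whatever
# partition actually holds the logs (often /var).
import shutil

for path in ("/", "/var", "/var/log"):
    try:
        usage = shutil.disk_usage(path)
    except FileNotFoundError:
        continue  # skip paths that don't exist on this machine
    pct = usage.used / usage.total * 100
    print(f"{path}: {pct:.0f}% used, {usage.free // 2**20} MiB free")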