Meltdowns at NSA spy data center destroy equipment, delay opening

NSA struggling to identify cause of explosions that delayed facility for a year.

A massive data center being built by the National Security Agency to aid its surveillance operations has been hit by "10 meltdowns in the past 13 months" that "destroyed hundreds of thousands of dollars worth of machinery and delayed the center's opening for a year," the Wall Street Journal reported last night.

The first of four facilities at the Utah Data Center was originally scheduled to become operational in October 2012, according to project documents described by the Journal. But the electrical problems—described as arc fault failures or "a flash of lightning inside a 2-foot box"—led to explosions, failed circuits, and melted metal, the report states:

The first arc fault failure at the Utah plant was on Aug. 9, 2012, according to project documents. Since then, the center has had nine more failures, most recently on Sept. 25. Each incident caused as much as $100,000 in damage, according to a project official.

It took six months for investigators to determine the causes of two of the failures. In the months that followed, the contractors employed more than 30 independent experts that conducted 160 tests over 50,000 man-hours, according to project documents.

The 1-million-square-foot data center, filled with supercomputers and storage equipment to maintain surveillance information, is slated to cost $1.4 billion to construct. One project official told the Journal that the NSA planned to start turning on some of the computers at the facility this week. "But without a reliable electrical system to run computers and keep them cool, the NSA's global surveillance data systems can't function," the newspaper wrote.

Project officials are still trying to determine the cause of the meltdowns, and they disagree about whether proposed fixes will work. Backup generators have failed repeated tests, cooling systems "remain untested," and "there are also disagreements among government officials and contractors over the adequacy of the electrical control systems."

The Army Corps of Engineers is overseeing construction and promised to make sure the data center is "completely reliable" before allowing it to go online.

Promoted Comments

It's the circuit breakers. Datacenter power systems are complex because they have to switch between multiple power sources without discontinuing flow to the hardware. Dealing with unstable voltage is a tricky problem of electrical engineering, but it's the materials engineering side that generally causes the most grief. Simply, the breakers get hot. Really hot. Instantly.

Hot enough to explode; which is why in power substations all the parts are encased in armored steel shells.

A breaker in a commercial datacenter power management center is generally a ceramic brick about 2" on each side with contacts on the sides. When they explode they tend to crack and blow vaporized metal gunk out the fissure like the liquid metal penetrator of an anti-tank rocket. The metal contacts and the way they're bolted in tends to keep the ceramic parts intact once the molten innards escape, and when the metal remnants cool all the chunks are stuck together.

We had a main switch blow at my lab one time. Big old bang; we thought something blew up. Evacuated two buildings, had the fire department in for hours. The electrical guys tore down and rebuilt the system and never did find anything wrong. Not sure what happened; we were not even drawing much power at the time, and none of the high-power stuff was even turned on. Has not happened since. That was less than a megawatt system. A multi-megawatt power system losing it is probably exciting.
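The commenter's point that breakers "get hot, really hot, instantly" can be sanity-checked with a back-of-envelope adiabatic heating estimate (I²Rt energy dumped into the contact mass before the fault clears). All of the input numbers here are illustrative assumptions, not figures from the article:

```python
# Back-of-envelope adiabatic heating of a breaker contact during a fault.
# Every input value below is an illustrative assumption, not a number
# reported in the article or by the commenter.
FAULT_CURRENT_A = 50_000.0   # assumed 50 kA bolted-fault current
CONTACT_RES_OHM = 1e-4       # assumed 0.1 milliohm contact resistance
CLEARING_TIME_S = 0.05       # ~3 cycles at 60 Hz before the breaker clears
CONTACT_MASS_KG = 0.05       # assumed 50 g copper contact
CU_SPECIFIC_HEAT = 385.0     # J/(kg*K), specific heat of copper

def temp_rise_kelvin() -> float:
    """Temperature rise if all I^2*R*t energy stays in the contact."""
    energy_j = FAULT_CURRENT_A**2 * CONTACT_RES_OHM * CLEARING_TIME_S
    return energy_j / (CONTACT_MASS_KG * CU_SPECIFIC_HEAT)

# Even with these modest assumptions the contact heats by hundreds of
# kelvin in a few AC cycles -- fast enough to vaporize metal locally.
print(round(temp_rise_kelvin()))
```

With these placeholder values the rise is roughly 650 K in 50 ms, which is consistent with the commenter's description of breakers venting vaporized metal when they fail.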

Quote:

It's the circuit breakers. Datacenter power systems are complex because they have to switch between multiple power sources without discontinuing flow to the hardware. Dealing with unstable voltage is a tricky problem of electrical engineering, but it's the materials engineering side that generally causes the most grief. Simply, the breakers get hot. Really hot. Instantly.

Hot enough to explode; which is why in power substations all the parts are encased in armored steel shells.

A breaker in a commercial datacenter power management center is generally a ceramic brick about 2" on each side with contacts on the sides. When they explode they tend to crack and blow vaporized metal gunk out the fissure like the liquid metal penetrator of an anti-tank rocket. The metal contacts and the way they're bolted in tends to keep the ceramic parts intact once the molten innards escape, and when the metal remnants cool all the chunks are stuck together.

The variables I can think of in arc flashes are the voltage delta between the ends of the arc, air quality, and the distance between the arc points. If there's some persistent change reducing the potential needed for an arc due to some obscure air-handling condition, then that could be it. Perhaps some collateral damage to other parts? I've never seen an arc occur in my IT lifetime, but the electricians and facilities folks take the risk of them quite seriously.

*Unless static sparks count as arcs, but I think we're typically talking about transformer equipment stepping down line voltage to three phase or somesuch.
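The variables this commenter lists (voltage, air condition, and gap distance) are essentially the inputs to Paschen's law, the classic relation for the sparkover voltage of a gas gap. A minimal sketch, using textbook approximation constants for air (the constants and gamma value are assumptions from standard references, not from the article):

```python
import math

# Paschen's law sketch: breakdown voltage of an air gap as a function of
# pressure * distance. Constants A, B, and GAMMA are common textbook
# approximations for air in Torr*cm units -- assumed, not from the article.
A = 15.0      # saturation ionization coefficient, 1/(Torr*cm)
B = 365.0     # ionization energy term, V/(Torr*cm)
GAMMA = 0.01  # secondary-electron emission coefficient (cathode-dependent)

def breakdown_voltage(pd_torr_cm: float) -> float:
    """Approximate sparkover voltage for a given pressure*distance product."""
    return (B * pd_torr_cm) / (
        math.log(A * pd_torr_cm) - math.log(math.log(1.0 + 1.0 / GAMMA))
    )

# A 1 cm gap at atmospheric pressure (pd = 760 Torr*cm) comes out around
# 35 kV, in line with the rule of thumb of roughly 30 kV/cm for dry air.
print(round(breakdown_voltage(760.0)))
```

The shape of the curve is the interesting part: breakdown voltage has a minimum at small pressure-distance products and rises on either side, so changes in air density or humidity really can shift how easily a gap arcs over, as the comment speculates.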

Public service announcement: fellow Citizens, Schadenfreude is ok (and expected).

Quote:

ICARUS HAS FOUND YOU!!!!!>ICARUS HAS FOUND YOU!!!!!>>ICARUS HAS FOUND YOU!!!!!>>>ICARUS HAS FOUND YOU!!!!!>>>>ICARUS HAS FOUND YOU!!!!!>>>>>ICARUS HAS FOUND YOU!!!!!>>>>>>ICARUS HAS FOUND YOU!!!!!>>>>>>>ICARUS HAS FOUND YOU!!!!!>>>>>>>>ICARUS HAS FOUND YOU!!!!!>>>>>>>>>ICARUS HAS FOUND YOU!!!!!>>>>>>>>>RUN WHILE YOU CAN!!!!!!!!!!!>>>>>>>>RUN WHILE YOU CAN!!!!!!!!!!!>>>>>>>RUN WHILE YOU CAN!!!!!!!!!!!>>>>>>RUN WHILE YOU CAN!!!!!!!!!!!>>>>>RUN WHILE YOU CAN!!!!!!!!!!!>>>>RUN WHILE YOU CAN!!!!!!!!!!!>>>RUN WHILE YOU CAN!!!!!!!!!!!>>RUN WHILE YOU CAN!!!!!!!!!!!>RUN WHILE YOU CAN!!!!!!!!!!!RUN WHILE YOU CAN!!!!!!!!!!!

oh they should just hire google or amazon to do it for them. two hands for beginners...

Datacenters really are very hard. At the scale of this one (no, it doesn't hold a yottabyte like some claim, but it's still a lot of storage) they really should be relying on experts who have experience with giant datacenters. There are not many.

Quote:

oh they should just hire google or amazon to do it for them. two hands for beginners...

Several problems with this. First off, there likely isn't enough storage and/or processing power available from Google or Amazon. That in itself is kinda scary. Secondly, the NSA needs to have end-to-end security as well as chain of custody on the hardware to prevent any sabotage. This is something that the NSA should genuinely be handling itself.

On that note, the NSA does have other data centers. Their facility in Fort Meade is estimated to consume between 70 and 90 megawatts, and that's smaller than what is being built in Utah. I would wager that the Utah data center is being designed for over 100 megawatts of power consumption.
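The 100 MW figure above is easier to appreciate as an annual energy bill. A rough scale check, where the design load comes from the comment but the electricity rate is a hypothetical placeholder rather than a reported number:

```python
# Rough scale check on the comment's power figure. DESIGN_LOAD_MW is the
# commenter's estimate; RATE_USD_PER_KWH is a hypothetical placeholder
# rate for illustration, not a reported utility price.
DESIGN_LOAD_MW = 100.0
HOURS_PER_YEAR = 8760
RATE_USD_PER_KWH = 0.05  # assumed rate for illustration

annual_mwh = DESIGN_LOAD_MW * HOURS_PER_YEAR
annual_cost_usd = annual_mwh * 1000 * RATE_USD_PER_KWH
print(f"{annual_mwh:,.0f} MWh/year, ~${annual_cost_usd / 1e6:.0f}M/year")
```

At a continuous 100 MW that works out to 876,000 MWh per year, in the tens of millions of dollars annually even at a cheap assumed rate, which underlines why a reliable electrical plant is the make-or-break piece of the facility.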