Posted
by
Soulskill
on Friday August 27, 2010 @11:00AM
from the your-tax-dollars-at-work dept.

bswooden writes "Some rather important departments (DMV, Social Services, Taxation) in the state of Virginia are currently without access to documents and information as a technology meltdown has caused much of their infrastructure to be offline for over 24 hours now. State CIO Sam Nixon said, 'A failure occurred in one memory card in what is known as a "storage area network," or SAN, at Virginia's Information Technologies Agency (VITA) suburban Richmond computing center, one of several data storage systems across Virginia.' How does the IT for some of the largest departments in a state come to a screeching halt over a single memory card? Oh, and also, the state is paying Northrop Grumman $2.4 billion over 10 years to manage the state's IT infrastructure."
Reader miller60 adds, "Virginia's IT systems drew scrutiny last fall when state agencies reported rolling outages due to the lack of network redundancy."

How does a fault in a single SAN controller cause an outage of the entire data storage network? Expensive SAN solutions are expensive and highly redundant for a reason. This smells like a "Let's buy the cheaper solution" and/or an infrastructure design fail.

Also, this can happen when you hire an external firm to manage something that you should be managing yourself. External managers for projects like this are motivated by extracting as much money as possible from you. Internal departments of technology, by comparison, are motivated by convincing co-workers to not shout at them.

B-b-but you're saying that the bloated corrupt government that takes money from people at gunpoint and has no incentives for efficiency might have done a better job than a private contractor that works on the God-given free enterprise system that rewards efficiency and punishes waste! That's unpossible!

On the contrary, the free market did exactly as it was supposed to: it eliminated the inefficiency of redundant systems and a safety margin. Efficiency or the safety of redundancy, you can have one or the other but not both. That's why any important system should be managed by the government, and free enterprise should be limited to the role of logistical optimization it's actually good at.

Unfortunately, some people nowadays consider the free market their religion, so we got deregulation and the resulting financial crisis. Oh well...

First rule in government spending: why build one when you can have two at twice the price?

And sometimes that's exactly the right approach, except you should really build three or four or ten. One might argue that that's the very purpose of the government: to force inefficiency where short-term self-interest would result in long-term disaster - in other words, to avoid the tragedy of the commons.

Funny, I was unaware that Northrop Grumman were a scion of the free market. Could you name some of their non-government customers that provide more than 1% of their total revenue? It's called a Military-Congressional-Industrial Complex for a reason. But thanks for playing the strawman game.

If you're big enough that you're not just going to be scaling staff up and immediately down again, hire your people in-house. It's not a question of government vs private companies. It's a question of hiring your best people to be on staff, or outsourcing to someone who doesn't have the same motivations. This is true if you're a government, a corporation, a private entity, or a high school marching band. Plus the markup on external IT services is just obscene.

Poorly managed projects will be poorly managed internally or externally. But externally poorly managed projects are a lot more expensive, and harder to rein back under control.

Step 1) Design the system so a single SAN controller is the only thing keeping the network running.
Step 2) Use the money saved by not adding redundancy/designing the system correctly to give yourself money.
Step 3) Expect one component to last long enough for you to leave the job before it fails.
Step 4) ????
Step 5) Profit anyway, because they don't get the concept of failures == bad things and keep paying you.

Did the dude from the City of SF design this network so that if he wasn't there to SSH in with a modem he had hidden in his toaster oven, the RAM in a SAN would bring the whole network down?

No, he asked them repeatedly to buy a spare, which was denied; then he refused to yank it out of the live production system when another department's boss said to give it to the chick he was banging so she could be a computer expert too.

The system was built with redundancies and backup storage. It was hailed as being able to suffer a failure to one part but continue uninterrupted service because standby parts or systems would take over. But when the memory card failed Wednesday, a fallback that attempted to shoulder the load began reporting multiple errors, Nixon said.

Cheap solution problem? Possibly. Infrastructure design fail? Possibly, but not likely. Couldn't critique it without seeing their setup, but it sounds like they designed some redundancy in. I wonder what kind of "memory card" failed. From the description, it sounds like it might be a cache module.

Yeah really. Before we got away from traditional hardware (NAS, SAN, etc.) we had piece of crap Dot Hill arrays and they had redundant power supplies and redundant controllers. There must be more to this story.

On the plus side if the US government ever builds Skynet we know where to strike.

RTFA!

The problem was a dual (or worse) failure. What the article reveals is that while they may have had all of the right hardware in place and a mechanism for it to handle the most likely failures, they were missing the 'soft' components of a good HA system: routine testing of failover and a rapid repair plan.

In the auto industry, where failed systems can halt factories and rack up hundreds of thousands of dollars of cost per hour of downtime, it is the norm for HA systems to have frequent failover tests, to have on-site spares for critical components that can be replaced by on-site staff, and to have support arrangements that put a skilled human on-site with replacement hardware in a small amount of time. This is why traditional "enterprise class" systems are so expensive. They are designed for rapid diagnosis and repair, and a well-run enterprise that needs truly HA systems pays for expensive HUMAN support by their own staff and/or from IBM, Sun^WOracle, EMC, HP, etc., and monitoring systems on top of that.

If you fail over your HA systems every Sunday at 02:00 (or whatever time is safe...) and have the right staff, processes, and support contracts in place, you will find nearly all of the latent failures and have them fixed before a true production failure exposes them.

The most appalling thing about this to me isn't the failure. Some systems don't have safe times for testing failovers, and I know from personal experience that a component in an HA system that was working perfectly Saturday and has been idle since Sunday can go tits-up when needed on Wednesday. The real problem is the long outage. If the clowns in the VA state government were doing their jobs, they would not have a system like this without vendor support contracts to fix well-defined hardware problems (e.g. "bad memory card" ) within a few hours at most. This was something I always loved about working in a shop with the top-grade EMC contract. The Symmetrix and its associated gadgetry would call EMC about failures and we'd have a tech show up at the DC with parts before we even noticed anything unusual: costly, but nowhere near as expensive as killing all of the SAN-reliant systems for a random day every 3 years. The 4th 9 is not cheap or simple, because it always requires humans.
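The weekly failover drill described above can be sketched in a few lines. This is a minimal illustration of the idea, not VITA's or anyone's actual tooling; the array names, the pairing, and the probe are all invented for the example:

```python
# Hypothetical inventory of primary/standby pairs; names are made up.
PAIRS = [("array-alpha", "array-zappa"), ("san-ctl-1", "san-ctl-2")]

def drill(pairs, probe):
    """Fail each primary over to its standby and probe the standby.

    `probe` is any callable returning True when the standby serves I/O
    correctly. Latent faults surface here, on Sunday at 02:00, instead
    of during a real Wednesday production failure.
    """
    faults = []
    for primary, standby in pairs:
        if not probe(standby):
            faults.append((primary, standby))
    return faults

# Example: a probe that finds a latent fault in one standby.
bad = {"array-zappa"}
faults = drill(PAIRS, lambda unit: unit not in bad)
print(faults)  # [('array-alpha', 'array-zappa')]
```

The point is not the ten lines of code; it is that the drill runs on a schedule, and that someone owns fixing whatever lands in `faults` before Wednesday.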

If you fail over your HA systems every Sunday at 02:00 (or whatever time is safe...)

(voice of tech ignorant executive)

"We can't be down then. We have remote workers that want to do things at that time."

"The overtime for that window is too expensive, and we can't do it during production hours. We'll just assume you planned carefully."

"You just told me part of the reason that system is so expensive is that it is much less likely to fail. Well, we're not paying for a spare."

And after hearing that, I want to duct tape those fucking executives to their $1500 chair and let them watch while I take a powder-actuated nailer to their precious Mercedes S550.

Why the rage? Just spell out very clearly (and in writing) exactly what will happen if component X fails, and the cost to implement redundancy now. When component X fails and the company loses Y dollars of revenue and the CEO comes to you, just pull out the email and say "I tried to design redundancy but he wouldn't spend the money".

It worked for me when I tried to get money for a spare battery cabinet on our primary UPS. I told my boss that if a single battery in the string fails during a power failur

It is far worse than that. The summary says it is a meltdown! I don't know how IT could cause that, but terrorism must be involved. From what I've heard, they are evacuating New Jersey and calling in the National Guard.

No. Their IT infrastructure is so power-hungry that they co-located a nuclear plant with their main data center.

Does the government not have responsibility to:
1) Manage the delivery and implementation of the contracted items, and
2) Verify that what was contracted for is actually delivered?

Are you actually suggesting that a bunch of "average salary" mid-level IT drones would have done a better job at implementing a high availability / fault tolerant system than a private contractor that specializes in design and implementation of this type of system, and has done it dozens of times?

I'll tell you exactly how. Some manager somewhere said that it cost too much to add redundancy. It's happened over and over at my extremely large company, and it will continue to happen as long as money is the prime concern.

Heh, it shouldn't be about the money, though... they should have specified high availability from the very beginning. They often throw it out during the prototyping stage, saying they need to Keep It Simple, Stupid just to get things working, but then the software is never designed to handle redundancy, and shoehorning it in later becomes pretty much like starting again from scratch.

Also, designing in redundancy is usually worse than having no redundancy at all if it's never tested. There sh

This is a private sector failure. NG is the culprit here, not the government.

This is why you should be very wary of bidding out work to a 3rd party. They don't care about your city. They are not thinking about how their decisions impact the city in 10-20-50 years.

And while infrastructure is far more complex and expensive than people who don't deal with it realize, $2.4 billion over 10 years? $240 million a year? That is a price at which they should have a tested redundancy system. A single-point SAN failure? Shame on NG.

I hate to burst your preconceived bubble, but my years in the private sector and public sector have taught me that most government agencies are far better at keeping their own infrastructure. More reliable and longer-standing.

My 'preconceived bubble' is based on my current job for the US government, and the situation we have in our department.

It might be true on average that government agencies are better at keeping their own infrastructure, especially if they can manage to keep their accounting and design of that infrastructure at a lower level. However, once those decisions pass the level from the internal to the external (or: from those hired for the job, to those elected/appointed into it), that long-term planning appears to break down, in favor of political squabbles.

However, once those decisions pass the level from the internal to the external (or: From those hired for the job, to those elected/appointed into it), that long-term planning appears to break down, in favor of political squabbles.

As someone who's worked both sides of the public/private line, allow me to assure you that this is not unique to government. I've seen plenty of boneheaded design decisions made by upper management for obscure/bizarre/just-plain-wrong reasons in both private and government gigs.

It MAY be a government failure as well. When you write the impossible into a bid, make the bidding process tremendously complex and make the cost of even bidding too high for most potential contractors (by expecting a complex analysis up-front for free) you eliminate all but the largest contractors with a fat legal department. If you then require acceptance of the lowest bid with no allowance for confidence level you set up a perfect storm for a ripoff. You assure that each bid you receive will be a lie bas

Everyone seems to think that a network outage is no big deal, until the network goes down. That's when people start thinking about the burn rate of an entire organization sitting on their thumbs while that network of off-the-shelf Linksys routers is replaced by some kid at Best Buy. Or how that $5k per year for a backup external line suddenly pales in comparison to the $5k per hour your organization is wasting because you were a cheap bastard.
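The arithmetic in that comment is worth spelling out. A sketch using only the two figures given there (the $5k/year line and the $5k/hour burn rate; real numbers vary by organization):

```python
# Break-even for a backup line: annual cost of the spare circuit versus
# the cost of one hour of organization-wide downtime.
backup_line_per_year = 5_000   # dollars/year for the backup external line
outage_burn_per_hour = 5_000   # dollars/hour of idle-organization burn rate

hours_to_break_even = backup_line_per_year / outage_burn_per_hour
print(hours_to_break_even)  # 1.0 -- a single hour of downtime pays for it
```

One hour of outage per year and the "expensive" backup line has already paid for itself; everything past that is pure loss.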

What does 'mean' mean again? Oh, that's right. If you want an MTBF of 50 years, you can either get one unit and run it for 50 years to prove yourself, or you can get 100 units and run them for 6 months... To be sure, it doesn't automatically take into account mechanical wear, but any engineer worth their salt can extrapolate acceptable wear rates with 6 months of data (and that's only if you are talking about systems with moving parts)...

Which works if you presume a linear fail rate, which is bonkers. Systems always run better at the beginning of their lifecycle. Static buildup, electrical interference, repeated heating and cooling cycles, etc all take a toll on the electronics. Would you really personally estimate a real-world MTBF of off-the-shelf SATA drives at 70 years? No, because they work perfectly well for the first year, start having trouble the second, and are all dead by the 8th. But if you presume linear dropoff using just that first year of testing, they look pretty damn bomb proof because that's when they work best. It's a stupid system that's only valid if you replace all of your hardware every year.

And all systems have moving parts. Electrons move. The circuit boards expand and contract. Crap builds up on important components. Electroplating can move metals from one part of the design to another. Stuff gets plugged in and unplugged.

I realize that MTBF has a very technical definition that is different from the way marketing departments use it. I might agree with you that any engineer worth their salt can extrapolate a proper MTBF. But most of the MTBFs I've seen are just stupidly wrong. If people really believe those published fantasy numbers, no wonder they don't put enough redundancy in their systems.
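The two positions above can be put side by side with numbers. This is a toy calculation, and the wear-out probabilities in the second half are invented purely to illustrate the non-linear-failure argument, not measured drive data:

```python
# Naive MTBF from a short burn-in: 100 units run for 6 months with 1 failure.
units, months, failures = 100, 6, 1
unit_years = units * months / 12          # 50 unit-years of operation
naive_mtbf_years = unit_years / failures
print(naive_mtbf_years)  # 50.0 -- looks bomb-proof

# Same fleet with wear-out: hypothetical per-year failure probability that
# starts at 1% and doubles each year (a crude stand-in for a bathtub curve).
surviving = 1.0
for p_fail in [0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64]:
    surviving *= (1 - p_fail)
print(round(surviving, 3))  # 0.176 -- most of the fleet is dead by year 8
```

The first number is what the 6-month test "proves"; the second is what the same hardware does in the field once aging kicks in. Extrapolating year-one behavior linearly is exactly the mistake the parent is calling out.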

Probably involving executives vacationing in nice tropical locales by rewarding themselves with hefty bonuses. Meanwhile some poor IT guys weren't given the budget that reflected how much the State was paying out, and had to cobble together a SAN solution, or pick the cheapest one off the shelf. The IT guys will, of course, be the patsies for this whole episode, with the CEO and CTO all huffing and puffing and vowing to State officials and lawmakers that they're doing everything they can to get to the bot

Well, as Sherlock Holmes' greatest axiom goes, "When you have eliminated the impossible, whatever remains, however improbable, must be the truth." Using that logic, the answer is simple. They're not using a SAN. Somewhere along the line someone is bullshitting, and my gut tells me it's management. A lot of folks who get government contracts pretty much view them as an opportunity to skim off the top. Why, take what should be a $50,000 solution and mock something up for $10,000, and that's $40,000 profit.

You'd think they'd at least do RAID 1 Mirroring. Then they could just hot swap in another drive, sync it, and be on their merry way. Why centralize your data services if you're not going to do it right?
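The hot-swap-and-sync behavior the parent expects can be shown with a toy model. This is a Python sketch of the RAID 1 idea, not any real array's firmware; the disk and key names are invented:

```python
class Mirror:
    """Toy RAID 1 mirror: writes go to every healthy disk; reads are
    served from any healthy disk; a failed disk can be hot-swapped in
    and resynced from a survivor."""

    def __init__(self):
        self.disks = {"disk0": {}, "disk1": {}}
        self.failed = set()

    def write(self, key, value):
        for name, disk in self.disks.items():
            if name not in self.failed:
                disk[key] = value

    def read(self, key):
        for name, disk in self.disks.items():
            if name not in self.failed:
                return disk[key]
        raise IOError("both mirrors failed")

    def fail(self, name):
        self.failed.add(name)

    def hot_swap(self, name):
        # Replace the dead disk with a blank one and resync from a survivor.
        healthy = next(n for n in self.disks
                       if n not in self.failed and n != name)
        self.disks[name] = dict(self.disks[healthy])
        self.failed.discard(name)

m = Mirror()
m.write("dmv_records", "intact")
m.fail("disk0")               # one side dies...
print(m.read("dmv_records"))  # intact -- reads still served by the survivor
m.hot_swap("disk0")           # swap in a fresh drive, resync, merry way
print(m.read("dmv_records"))  # intact
```

Of course, as the VITA story shows, the hard part isn't the mirror logic; it's what happens when the survivor you fail over to starts throwing errors of its own.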

Depends on the SAN. The article (as most tech articles are) is very short on scope & details. So "one chip" went bad. Should that bring everything to a screeching halt? The answer should be "no" but in practice we can all say that it's more often a case of "not usually." From TFA:

It was hailed as being able to suffer a failure to one part but continue uninterrupted service because standby parts or systems would take over. But when the memory card failed Wednesday, a fallback that attempted to shoulder the load began reporting multiple errors, Nixon said.

So Array Alpha shits the bed. You follow your failover procedures and start running on Array Zappa. That immediately starts throwing errors. OK, armchair QBs, let me switch to my Keanu Reeves voice and ask "What do you do?" You built a pretty damned redundant system there and you're still down. Sure, it'd be nice if they had a backup in another DC they could fail to, but they don't. Doesn't matter; eventually you're playing the double / triple / quadruple hulled oil tanker game. Either way, redundant SANs aren't cheap and aren't all that easy (it's not exactly a "the boss's nephew who 'knows all about computers' set it up last weekend" level of complexity). TFA also has these points:

Full function may not be restored until Monday.

Experts who examined the system determined that no data were lost except for those being keyed into the system at the moment it failed, Nixon said.

Other than the fact that proofreading and the use of proper grammar are no longer requirements to work for a Virginia newspaper, what do those points tell us? Sounds to me like they hit the last line in the DR procedures: restore from backup. Depending on what their backup strategy is (maybe they're splitting several terabytes across a tape robot that only supports 200/400 GB tapes because that robot is the only device the vendor supports) and how truly important the affected system is (this may be a system where the powers that be said "fsck it, they can process renewals by hand and we'll bring everything back up on Monday after we test on Saturday"), a return to business on Monday might be SOP. But that wouldn't sell newspapers (or make talking points with the voters...) now, would it?

Maybe there was a major screwup here. Maybe they never tested their failovers, and maybe that 2nd SAN was bad out of the box. I'm a little more willing to cut some slack and say "man, that sucks. Glad it's not my ass on the line." Karma's a bitch like that. I like to take these stories as an opportunity to rethink where my own single points of failure are rather than point & laugh and tell everyone how I'll never lose any data because I'm running RAID 5...

When getting the bid, NG promised redundancy.
NG stalled and then was behind schedule.
The redundancy system became less 'important' due to time.
NG went live.
NG let a bunch of contractors go.
NG said their in-house staff would take care of it.
NG's new hires got stuck at the end of the project and did just enough to consider it 'done'. Several amateur mistakes were made.

What's happening right now: people who work for state IT are showing everyone the email they got from NG saying the system was

Guys, accidents happen. This "Northrop Grumman", whoever they are, will no doubt be fired and not receive any more contracts once word of this gets out. This will put pressure on them to provide better services, or be out-competed by other entrepreneurs. Our free market system works, you just need to expect this kind of thing when it's government doing the hiring.

The problem is that it's the government selecting the vendor. If the government would just get out of the vendor-hiring-business maybe the Free Market could fix this mess.

My company works on a project that N G lost on a re-compete bid. I can not go much into details, but suffice it to say: I am not at all surprised that they screwed up maintenance and management based on what I have had to deal with on the software they developed.

This is what you get for hiring a military contractor to do a civilian's job. All $2.5 billion gets you in the military is a manager and a toilet seat. You don't start getting functional hardware until the budget reaches $100 billion.

Funny that I should receive an email today inviting me to a Northrop Grumman Information Systems Hiring Event. The event occurs on the 25th of August and I received the email on the afternoon of the 27th. Failed there too!

Ok, in this case it probably is the bureaucracy at fault. But it isn't in all cases. In my previous job we had an architect who would take it upon himself to "value engineer" a vendor's solution, with unpredictable results. I'm not sure why -- we had budget. Maybe it was his way of seeming more valuable? This led to "solutions" like a SAN cobbled together from disk arrays, controllers and switches from three different vendors that were not meant to work together, had never been tested in the chosen co

What I don't fully understand is how NG could get what amounts to a quarter billion dollars a year to manage the state's IT infrastructure and still allow a situation like this to occur. I mean, I understand how it can HAPPEN, I don't understand why it's allowed to.

What makes you think Northrop Grumman had a choice? They still work for the state IT department at the end of the day. If the state IT department says "buy this POS because it's cheaper and don't build in redundancy because it's too expensive," t

> What makes you think Northrop Grumman had a choice? They still work for the state IT department at the end of the day.

There are typically very high penalties for not meeting your service levels. A 24 hour unplanned outage can blow a half year's profits for the contract. Like any outsourcing company, NG did have a choice -- don't take that contract under those conditions.
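Those service-level penalties come straight out of availability arithmetic. The actual terms of the VITA contract aren't public in TFA, so the 99.9% figure here is a hypothetical, but the math is the same for any SLA:

```python
# Downtime budget under a hypothetical 99.9% ("three nines") annual SLA.
hours_per_year = 365 * 24              # 8760
sla = 0.999
allowed_hours = hours_per_year * (1 - sla)
print(round(allowed_hours, 2))         # 8.76 hours of unplanned downtime/year

# The Virginia outage had already run 24+ hours when the story broke.
outage_hours = 24
print(round(outage_hours / allowed_hours, 1))  # 2.7 -- years of budget burned
```

Under three nines, a single 24-hour outage consumes nearly three years' worth of allowed downtime, which is why outsourcing contracts price these events so brutally.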

But $2.4 billion over ten years comes out to $240,000,000 per YEAR! With that kind of money they could replace their infrastructure a few times over every year.

This is a clear example of the malfeasance that happens when government gets corrupted by corporate interests. Taxpayers in VA should be up in arms about this one.

Here's my story of state agency screw-ups. Two jobs ago I was working for the Secretary of State's office here. We had the opportunity and funding to get our IT infrastructure in order when the Help America Vote Act (HAVA) became law. We were able to build out a secure and redundant room to house our critical infrastructure.

Physical access by key and alarm code only. Redundant power, which included an APC Symmetra UPS system backed up by a 125kW natural-gas-fired generator. We even made sure to extend tendrils from the redundant power out to the MDF so the ISP could use our power system. Also had redundant cooling tied to the generator.

The one Achilles heel of the operation was DNS. Ours was provided from outside our space. I suggested they build a zone locally, so that we'd have DNS services even if the state's went down. But they quashed it as being too difficult! Ut si!

Well, one day there was a massive power outage in the city. We were still up and running, lights on, air conditioning on, but we couldn't get in or out of the internal network even though the ISP circuits were still up. Yup, DNS!
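The missing piece in that story is a local fallback zone. A minimal sketch of the idea in Python; the hostname, address, and function names are invented for illustration, and a real deployment would of course use a secondary DNS server rather than application code:

```python
# A tiny local zone holding copies of the critical internal records.
LOCAL_ZONE = {"mail.sos.example.gov": "10.1.2.3"}

def resolve(name, upstream):
    """Try the outside resolver first; fall back to the local zone."""
    try:
        return upstream(name)       # the state's DNS servers
    except OSError:
        if name in LOCAL_ZONE:      # our own copy of the records we need
            return LOCAL_ZONE[name]
        raise

def upstream_is_down(name):
    # Simulates the day the state's DNS went away with the power outage.
    raise OSError("upstream DNS unreachable")

print(resolve("mail.sos.example.gov", upstream_is_down))  # 10.1.2.3
```

With even this crude fallback, the machine room in the story would have kept resolving internal names while the city was dark.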

* maintain tens if not hundreds if not thousands of proprietary (legacy) applications
* maintain the many, many workstations
* maintain the fabric for many, many workstations
* maintain the servers which provide services, many of which are interconnected and do not cope with modern technologies well
* maintain the storage for all of that
* SECURE all of the above
* make it as fault tolerant as possible

In my first job, I changed the boot-up message on the VAX to "If only my girlfriend went down as often as this computer!" I kinda assumed it would scroll up off the terminal and nobody would see it. It, uh, didn't. One of our female programmers, who was famous for overreacting, came into work and threw a hissy fit. We fixed the message and decided to tell everyone we couldn't figure out who put it there. This is why you shouldn't give all developers administrator privileges!

To anybody who feels incredulous at the notion of a single point of failure taking down a purportedly redundant system:I suspect you have limited experience with the issues and challenges of managing a very large system infrastructure. The complexity of such systems goes well beyond the knowledge of any individual, so notions of fault tolerance across the enterprise are highly theoretical. Even with extensive planning and testing, the gotcha is in what you don't know. Sometimes, one of those What-You-Don't-

I've had memory go bad in a "SAN device" (I use quotes because nobody in their right mind would actually think a single-pathed, non-redundant disk array is really SAN-grade hardware) from a fruit-flavored vendor before, so I can actually have some pity for the guys responsible/working on it. Debugging it is a great time too, because your filesystem rebuild generally works. As does copying small amounts of data. It is only once you try to copy a couple of terabytes that things go to hell.

My wife gets pissed when I have to stay late or go in on the weekend to replace a switch or move some wires around. "Plumbers don't do that... Electricians don't do that..." she says. "No, they don't, and everybody gets pissed off when you can't flush the toilet and all the lights are off."