In the data center and beyond, you're only borrowing time with quick and dirty fixes, not saving it

InfoWorld | Apr 30, 2012

Many IT projects start with an optimistic air, a feeling that no matter what's occurred in the past, this one will be different. It'll be done right the first time, no shortcuts will be taken, enough time will be available for proper planning and execution, and the result will be a shining example of IT done right. All those involved will be lauded by the rest of the company for a job well done.

It generally takes just a few hours before those lofty goals start to lose altitude. Unexpected problems arise, existing workloads eat into planning time, budgets shrink, and deadlines loom. An opportunity to escape the shackles of poor implementation gets stuck in the same slog that took down the last dozen projects.

At some point along the way, several admins realize there's no way the project is going to happen unless they bust out the Chainsaw of Reality and begin lopping off parts of the project. What remains is not nearly as ambitious as the original plan, but should be feasible in the time and budget allotted.

Those who have no visibility into the situation assume this downsizing is mere laziness or the "usual" IT jackassery and complain bitterly about the reduction of scope. Awkward meetings drag on, more time is wasted, and when all the posturing is over, the project limps forward in reduced form. All the joy is gone. The project becomes an albatross, one of many hanging around IT's collective neck.

After all the hue and cry, IT knows there's no going back for further reductions or reorganization, so the project is going to happen one way or the other. This is where the duct tape and sealing wax come into play. As with any project, unforeseen circumstances arise all the way down the line, but there's no time to deal with them properly, and everything becomes good enough, yet not really good at all.

In some cases, you get lucky: What seemed at the time like a shoehorned solution to a problem turns out to be just as good as a "proper" solution after the dust settles and the fix proves reliable. But in other situations, improper implementations of services and applications become wounds that dog IT continually until someone finally decides they need to be addressed properly, and the whole scenario plays out once more.

The truth is, what we think of as "modern" IT is just growing older, not more mature. The 1990s were the Wild West, before anyone had a grip on exactly how all these computers were going to work together. The 2000s proved to be the decade where IT developed a more formal mind-set, in which standard practices became well known and well worn.

But IT today is still haunted by the ghosts of the past. I'm sure that anyone reading this who is involved in the care and feeding of an IT infrastructure older than a decade knows precisely what I'm talking about. There are cracks in the foundations because nobody knew any better when they were laid, and now there's no interest in fixing them unless and until they finally collapse.

At a minimum, you need to document these dicey situations clearly. But you must also allocate enough time and resources to address the worst of them. It's like cleaning your room when you were a kid: You really didn't want to do it, but once it was done, everything seemed so much nicer.

I've seen many situations in which the lack of visible problems with those foundations created a false sense of security -- and eventually, the problem became so ingrained in the entire infrastructure that ripping and replacing it took far more time and money than if it had been fixed sooner. "A stitch in time saves nine" is fundamental to IT.

We may pride ourselves on our ingenuity in developing unique solutions to unique problems, and, truth be told, that's the only way anything works. But we should also make sure our solutions are as elegant and sustainable as possible. In most cases, the future aggravation we're saving is our own.