How much infrastructure integration should you allow?

There’s an old adage that the worst-running car in the neighborhood belongs to the auto mechanic. Why? Because they like to tinker with it. We IT pros love building and tinkering with things, too, and at one point we all built our own PC, and it probably ran about as well as the mechanic’s car down the street.

While the mechanic’s car never ran that well, it wasn’t a reflection on the quality of his work on your car, because he drew the line between what he could tinker with and what could sink him as a professional (well, most of the time). IT pros do the same thing. We try not to tinker with computers in ways that will affect our clients or risk the service-level agreements we have with them. Yet there is a tinkerer’s mentality in all of us. You can see it in our data centers, where the desire to configure our own infrastructure and build out our own best-of-breed solutions has resulted in an overly complex mishmash of technologies, products and management tools. There’s lots of history behind this mess and lots of good intentions, but nearly everyone wants a cleaner way forward.

In the vendors’ minds, this way forward is clearly one with more of their stuff inside, and their latest thinking is the new converged infrastructure solutions they are marketing, such as HP’s BladeSystem Matrix and IBM’s CloudBurst. Each of these products is its vendor’s vision of a cleaner, more integrated and more efficient data center, and there’s a lot of truth to that vision in what they have engineered. The big question is whether you should buy into it.

There is a clear inflection point in the market opening up an opportunity for change. Infrastructures are being abstracted by virtualization and by the move toward Infrastructure-as-a-Service cloud architectures, which let more workloads run on consistent hardware configurations. And configuring that hardware as uniformly as possible enables greater use and reuse. Technologies such as blade systems, 10GbE and network storage support the concept of wire once, virtually configure infinitely. Some vendors are packaging their solutions using these technologies plus a host of integrated virtualization, management and automation tools, so you can simply drop them in as a virtual pool and expand them with highly repeatable building blocks: Cloud Legos, essentially.

And the major hardware manufacturers have proven that their superior QA and integration capabilities can churn out known-good configurations in high volume and at lower cost than we can build them ourselves. This is why we don’t build our own corporate PCs or servers anymore. So who’s to say we shouldn’t let them integrate and drop-ship full infrastructures for us? Isn’t that the next logical step in this evolution?

These solutions pull on a fundamental tension in IT: lock-in versus standardization. It’s a core focus of the advice we provide to clients in the Sourcing & Vendor Management Role, and it sits in the back of the mind of every IT infrastructure architect. How much of a single vendor’s technology do you standardize on, at the risk of weakening your leverage with that vendor?

We know that every infrastructure and operations professional is under pressure to bring cloud technologies to their company, and fast. This is a fast path to doing so. The question is how far down this path it is safe to go.
