Justifying Your Datacenter Management Improvements

Closed: 14 Dec 2009, 11:59PM PT

Earn up to $100 for Insights on this case.

If you haven't noticed, we're starting an ongoing series of cases here to develop interesting, engaging and useful discussions for our new sub-site, IT Innovations. We're looking for insights that might help IT managers stay informed and keep their operations competitive.

For this case, we're looking for engaging content and experts to be featured who can help educate IT decision makers on the management of mission-critical applications in datacenters.

The topics for this case will focus on datacenter management services and solutions. We're looking for at least 300 words in the form of a blog post that can serve as a discussion starter, and we'd also like to encourage commenting on the submitted insights. Appropriate topics for these discussions include:

How do you effectively communicate datacenter priorities to non-technical managers?

What datacenter management services can you not live without? How do you justify the costs?

How do you assess datacenter priorities? Do you think there's a better way?

How often do you review your datacenter management tools and compare them to alternative solutions?

These topics are not exhaustive, and you do not need to address all of these suggested conversations. We welcome additional proposals for alternative subjects, and if you have any questions, please do not hesitate to ask.

I believe that datacenter justification isn't too difficult once you wrap your head around what matters most to the people who use your services in their day-to-day operations, and recognize that those services are more or less transparent to them. For example, email and web sites are only noticed when they react slowly or go down. Non-technical users pay no real attention to uptime, bandwidth, or anything else. So your focus should be on the negatives of passive technology whenever you advocate for datacenter improvements. Some of the questions you should be asking would be:

"Do you think the downtime for your email is acceptable?"

or

"If we run into some sort of problem such as [insert problem example], are you willing to have decreased productivity?"

In this day and age, most people understand the effects that technology downtime has on them, even if they are not technical. For example, most users never really think about their cell phone service until a problem happens, perhaps a dropped call or a bad connection; otherwise the technology is a passive part of their everyday life. So to justify improvements, one approach is to come at it from the passive end, in terms they would understand.
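Putting those "terms they would understand" into a dollar figure is often what closes the argument. Here is a minimal back-of-the-envelope sketch of a downtime cost estimate; every number in it is a hypothetical assumption for illustration, not a benchmark:

```python
# Illustrative downtime cost estimate. All figures below are
# hypothetical assumptions, not measured values.

def downtime_cost(affected_users, avg_hourly_rate, hours_down,
                  productivity_loss=1.0):
    """Estimate the productivity cost of an outage.

    productivity_loss: fraction of work blocked (1.0 = fully blocked).
    """
    return affected_users * avg_hourly_rate * hours_down * productivity_loss

# Example: 200 users at a $40/hour loaded cost, a 3-hour email outage,
# assuming email downtime blocks roughly 25% of their work.
cost = downtime_cost(200, 40, 3, productivity_loss=0.25)
print(f"Estimated cost: ${cost:,.0f}")  # → Estimated cost: $6,000
```

Even a rough figure like this turns "is our email downtime acceptable?" into a question a non-technical manager can weigh against the cost of the improvement.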

There is another component to this. To bridge the gap between technical and non-technical people, the key is often to listen to the specific question and respond in terms the other person can understand. Technical people tend to feel most comfortable with acronyms and jargon the other person may not know, and when asked a question, they answer in those terms. If this communication gap is not bridged, you end up with a frustrated non-technical manager and an equally frustrated technical person trying to secure a larger budget for passive technology. I believe that advocating for any sort of tool, especially datacenter tooling, requires this skill in communicating with others.

Communicating data center priorities to non-technical managers can be challenging, especially when many of these non-technical folks don’t usually concern themselves with day-to-day operations in the data center.

That’s when branding like a consumer vendor (think Budweiser or Coke) can come in handy.

Here’s a lesson from the world of higher education. Last year, after some identity theft incidents, the folks at Louisiana State University in Baton Rouge rolled out an in-house credit monitoring system designed to raise awareness among users (read: students) about phishing and the dangers associated with it. Admittedly, these are pretty dry topics. But the IT department got the word out in a fun and catchy way.

Their strategy revolved around an on-campus advertising campaign built on "Tad Ramey," a dolt who regularly did dumb stuff like replying to email with personal information from a public terminal and shouting out his Social Security Number in an open quad. The message of the campaign, "Don't be a Tad" (warning: PDF link), went out on bus ads and banners across campus.

Users heeded these warnings almost immediately, signing up for the credit monitoring service in droves. Reports from the school’s CIO indicate identity theft is down considerably (of course until the next big hack attack, but we digress).

Obviously, this example deals with security, not the data center. Still, the idea here is the same—by livening up complicated (and, to some, dull) concepts and priorities, it’s easy to build a consistent campaign that resonates with non-technical users. Once you’ve got the attention of these users, it’s that much easier to communicate with them.

Inside an enterprise, particularly concerning the data center, this might mean flyers in the lunchroom about new strides in datacenter energy management, ads on the company intranet touting a new commitment to virtualization, or even a microsite about the importance of changing passwords.

The bottom line: innovation in communication comes with approaching the everyday from a new perspective. Just because it's the data center doesn’t mean it can’t be sexy.

Of all the trends I have witnessed in my 10 years as a network administrator/manager, the one that has seen little if any argument is the need to reduce server sprawl. When the average server was in the 4U to 6U range, rack space and cooling needs quickly got out of control. It was common to have only 4 to 6 servers in a standard 19-inch rack. The racks themselves could cost upwards of $1,000, and then there were the KVM needs, maintenance on each physical box, etc. We all know/remember the pain points. Granted, as an admin, it was impressive to show off 10 racks full of servers with blinking LEDs (as long as they were all green!). But once that equipment aged and caused additional support headaches, we all wanted nothing more than to reduce the "head count".

So started the consolidation initiative. Consolidating file and print servers was easy: throw more horsepower into a single box, and you could put more onto it. You could even knock down some database servers that way. So what were the real challenges? The insistence from application vendors on dedicated systems for their apps. It seemed like every app my company purchased meant two more servers (app + db). This was probably my biggest fight, one I escalated all the way up to the CIO. We started insisting to vendors that we would not accept this as a requirement, and we probably turned away some really decent applications because their vendors wouldn't accept their app living on shared systems. And then there were existing apps that we simply migrated to shared systems, and I can tell you that doing so caused some support nightmares.

When virtualization and blade servers came on the scene, or should I say came down in price and went up in dependability, we saw drastic declines in space requirements, cooling needs, power consumption, etc. Between virtual servers and blades, I have roughly 100 servers housed in my first 2 racks, and that includes the storage and fiber switches. My general rule for deploying new servers is virtual first, blade second, and standalone as a last resort if deemed necessary. What I didn't give immediate thought to was reviewing my stance on the above-mentioned practice of consolidating my app servers, even when it was against the vendor's requirements.
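The "virtual first, blade second, standalone last" rule above can be sketched as a simple decision function. The criteria used here (vendor support for virtualization, a hard need for dedicated hardware, fit for a blade profile) are assumptions standing in for real inputs like vendor support matrices and capacity planning:

```python
# A minimal sketch of a "virtual first, blade second, standalone last"
# provisioning rule. The three boolean inputs are illustrative
# assumptions, not a complete capacity-planning model.

def choose_platform(vendor_supports_virtual, needs_dedicated_hardware,
                    fits_blade_profile):
    # Prefer a VM whenever the vendor supports it and nothing
    # about the workload demands its own hardware.
    if vendor_supports_virtual and not needs_dedicated_hardware:
        return "virtual machine"
    # Next preference: a dedicated blade in an existing chassis.
    if fits_blade_profile:
        return "blade server"
    # Last resort: a standalone rack-mounted server.
    return "standalone server"

print(choose_platform(True, False, True))    # → virtual machine
print(choose_platform(False, True, True))    # → blade server
print(choose_platform(False, True, False))   # → standalone server
```

Codifying the rule, even informally, also makes it easier to justify an exception: a standalone box only gets provisioned after the first two options have been explicitly ruled out.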

Now I'm not saying it's good practice to separate every single app, database, or infrastructure server. Many vendors (because of customers' insistence) fully support their apps running on shared systems now, and I do that whenever it is supported and causes no issues. There are still OS license and maintenance costs for each server instance, and of course each one has to be maintained, secured, patched, etc., even if it is virtual. But what I am saying is that virtualization and blade servers have given me the ability to add dedicated systems with little or no impact on some of the previous pain points. If there is ROI in deploying an app, or just a definitive need, I don't hesitate to provision a dedicated system for it if it is required. Doing so has helped tremendously with application support from the vendor and with maintenance windows (which did not always coincide when running multiple apps on one box), and it keeps one rogue application from taking down 2 or 3 others. Again, I am not promoting server sprawl, but rather suggesting that returning to dedicated application systems, in certain circumstances, is not as painful with the newer technologies available to us.

I would be interested in hearing how other admins feel on the subject.

I worked for a tiny department which handled more records than the IRS. Our datacenter had grown from a two-person operation to over a dozen employees in just a few years - and I was impressed with how successful they were at communicating their priorities.

For starters, there were ongoing efforts to educate the entire department on what the datacenter did. (One manager even gave tours of the datacenter facility to all of the new hires!) And a systems architect once sent around an article from Slashdot about a security breach -- just its URL, with a single-sentence warning that this is what happens when you don't secure your datacenter. There were also helpful e-mails about new privacy legislation that would affect both our datacenter and our managers. I think an occasional news article can reinforce datacenter priorities better than an abstract explanation - and it also starts a larger conversation, which is where real understanding can happen.

Yes, this falls under the general heading of "communication," but there are some very specific ways to make that happen. When our data team held their weekly meetings, they'd also encourage other managers to audit the discussion so they'd learn about the datacenter's issues. And though it sounds corny, we even tried creating a departmental newsletter that would familiarize the department with faces in the datacenter and news about the latest challenges. But in our case, the most important factor was support from the head of the department. When the top manager understands the importance of the datacenter's work, that helps set the conversation for the rest of the organization!

I think what helped us most was something I'd call "showing, not telling." The datacenter is in a unique position to cultivate goodwill by providing customized data read-outs and applications. Managers want, more than anything, good information -- they want fresh data, comprehensive data, and if at all possible, interactive data. Our top manager ensured that the datacenter had the resources and employees to provide this kind of service - which got the managers and the datacenter working together.

The standard way of assessing priorities is security, "preemptive redundancy," and then uptime. While these are paramount, I think there's also a case to be made for the depth of the data -- how far back it goes, how quickly it can be accessed. But here's a creative one: how about looking at how many employee-hours are spent maintaining the data? We had one application whose data only refreshed because one employee drove in at 5 a.m. every morning and manually launched the scripts that would clean up the raw inputs and update one crucial database. When the same operations can be performed more efficiently and effectively, it leaves more time for maintenance and troubleshooting!
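That 5 a.m. manual refresh is exactly the kind of job that can be handed to a scheduler. A minimal sketch, assuming a CSV of raw inputs and a SQLite database as hypothetical stand-ins for the real systems (the paths, table name, and cleanup logic here are all illustrative):

```python
# Sketch of automating a daily raw-input cleanup and database refresh.
# Scheduling would be handled by cron rather than a human, e.g.:
#   0 5 * * * /usr/bin/python3 refresh.py
# Paths, table names, and the cleanup rules below are assumptions.

import csv
import sqlite3  # stand-in for the real database driver

def clean_rows(reader):
    """Drop blank lines and strip whitespace from each field."""
    for row in reader:
        if any(field.strip() for field in row):
            yield [field.strip() for field in row]

def refresh(raw_path, db_path):
    with open(raw_path, newline="") as f, sqlite3.connect(db_path) as db:
        db.execute("CREATE TABLE IF NOT EXISTS records (id TEXT, value TEXT)")
        db.execute("DELETE FROM records")  # full refresh, as in the manual job
        db.executemany("INSERT INTO records VALUES (?, ?)",
                       clean_rows(csv.reader(f)))
```

Measured in employee-hours, replacing the daily drive-in with a scheduled script is one of the easiest wins to show a manager.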

A datacenter can fight for its priorities in lots of different ways - including education, communication, high-level support, and demonstrated value. But here's the bottom line: it's easier to get support for your priorities when managers understand what the datacenter does!

David Cassel documented data systems for a major corporation in Southern California.