Third Key to Brokering IT Services Internally: Create Your Own Menu of Services

Dick Benton, a principal consultant for GlassHouse Technologies, has worked with numerous Fortune 1000 clients in a wide range of industries to develop and execute business-aligned strategies for technology governance, cloud computing and disaster recovery.

In my last post, we outlined the second of seven key tips IT departments should follow if they want to build a better service strategy for their internal users: figure out what it costs. That means developing a cost model to determine the cost per deployable unit of your compute and storage resources (you can read about the first step, knowing what you’ve got, here). The next step, which we’ll explore today, is to create your own menu of services. You must identify which services you will offer, in small, medium and large packages or in standard, advanced and premium classes. Just like an L.L. Bean catalog, this is a service catalog of your offerings, including the sizes and styles available for each offering.

To compete with the public cloud provider (PCP), the internal cloud provider (ICP), or IT department, needs to be able to offer a Web-based capability for consumers to peruse the available offerings, and to select the offering and quantity of their choice. This requires two things of the ICP. The first is an understanding of what end users really want. The second is figuring out how to package these needs into easily recognizable and adequately differentiated service offerings.
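To make the second requirement concrete, here is a minimal sketch of what such a browsable catalog might look like as a data structure. The offering names, tiers and sizes below are illustrative assumptions, not real figures; your own catalog would be driven by the cost model from the previous post.

```python
# A hypothetical internal service catalog: differentiated offerings that a
# web front end could let consumers filter and select from.
from dataclasses import dataclass

@dataclass(frozen=True)
class Offering:
    name: str        # e.g. "virtual-server" (illustrative)
    tier: str        # "standard", "advanced" or "premium"
    size: str        # "small", "medium" or "large"
    cpus: int
    memory_gb: int
    storage_gb: int

CATALOG = [
    Offering("virtual-server", "standard", "small", 1, 4, 50),
    Offering("virtual-server", "standard", "medium", 2, 8, 100),
    Offering("virtual-server", "premium", "large", 8, 32, 500),
]

def browse(tier=None, size=None):
    """Filter the menu the way a self-service web page would."""
    return [o for o in CATALOG
            if (tier is None or o.tier == tier)
            and (size is None or o.size == size)]

print([o.size for o in browse(tier="standard")])  # → ['small', 'medium']
```

The point of the structure is the differentiation: every offering is recognizable by tier and size, so the consumer picks from a short menu rather than negotiating a bespoke build.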

Setting Up the Catalog

The first of these is often the hardest. Historically, IT has tended to build what it thought was needed, with marginal input from the user community. Few organizations have held discussions outside the developer community on growth projections, traffic analysis and response-time requirements. The result is an IT design founded on a “just in case” philosophy: the design team ensures, to the best of its ability, that the configuration it deploys carries a substantial safety factor, so it can meet the team’s own (limited) perception of the organization’s needs. This often produces a gold-plated configuration: high-end storage systems when SATA2 drives would suffice; dedicated servers instead of virtual machines; low-density virtualization instead of pooled high-density environments. The permutations (and expenses) are endless.

Fortunately, the internal provider is typically going to be serving the developer community rather than the legacy application environment. This means that the partnership is often within the borders of IT, albeit with some aggravation among the various contentious tribes internally. Unfortunately, the developer is drawn to the PCP by the quick availability of infrastructure, platform and storage. Developers invariably require (or would like to have) multiple generations of their test/development environment, and that need drives the internal IT storage administrator and the system admin staff to distraction. And to rub salt into the wound, the developers are probably already using Amazon, so you are now in the unenviable position of having to compete against Amazon with a MUCH smaller and far less scalable environment.

But there is hope. First, you will probably not be allowed to cross-charge for your services, giving the internal provider the advantage that services appear “at no cost” to the profit center’s bottom line. Of course, there may well be monthly reports of consumption and, at the end of the year, there will no doubt be a cross-charge; however, day-to-day, the internal provider doesn’t require a credit card to get resources, but the PCP does!

Tailoring Your Offerings

Secondly, the public cloud provider usually offers a wide variety of services to choose from; some might call it a dazzling array, others a confusing mish-mash. You can build an advantage here by tailoring your offerings to your organization’s immediate needs without worrying about how they will affect every other organization in North America. In fact, with virtualized compute environments, you can build your service offerings in a “Lego block” approach, letting the consumer select compute power by number of CPUs, memory (performance) in 4 or 8 gigabyte chunks, and storage in configured gigabytes. In addition, you can offer backup and disaster recovery services simply as a multiplier of the production CPU, memory and storage. Because the quantity attribute lives in the order and provisioning process, you can effectively configure-to-order while keeping the service offerings themselves simple.
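The “Lego block” pricing above can be sketched in a few lines. The unit costs and the 0.5 disaster-recovery multiplier here are purely illustrative assumptions, not rates from the article; the shape of the calculation is what matters.

```python
# Configure-to-order sketch: the consumer picks CPUs, memory in 4 GB or
# 8 GB chunks, and storage in GB; backup/DR is priced as a simple
# multiplier of the production footprint. All rates are hypothetical.
MEMORY_CHUNKS_GB = (4, 8)
UNIT_COST = {"cpu": 25.0, "memory_gb": 5.0, "storage_gb": 0.10}  # per month
DR_MULTIPLIER = 0.5  # assumed: a DR copy costs half the production footprint

def monthly_cost(cpus, memory_chunk_gb, memory_chunks, storage_gb, with_dr=False):
    if memory_chunk_gb not in MEMORY_CHUNKS_GB:
        raise ValueError("memory is sold in 4 GB or 8 GB chunks")
    memory_gb = memory_chunk_gb * memory_chunks
    production = (cpus * UNIT_COST["cpu"]
                  + memory_gb * UNIT_COST["memory_gb"]
                  + storage_gb * UNIT_COST["storage_gb"])
    return production * (1 + DR_MULTIPLIER) if with_dr else production

# Two CPUs, two 8 GB memory chunks, 200 GB of storage, with DR:
print(monthly_cost(2, 8, 2, 200, with_dr=True))  # → 225.0
```

Note how the quantity sits in the order, not in the catalog: there is one formula, not a separate SKU for every permutation, which is exactly what keeps the menu of offerings small.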

The public cloud provider often creates small, medium and large packages, and the internal cloud provider can do this, too; however, while this keeps you competitive with PCPs, it is not going to give the internal cloud provider a competitive edge. Configure-to-order can provide you with something few public cloud providers can effectively do. This approach can also dramatically simplify your monitoring and reporting efforts, because your basic deployable resource unit is inherently manageable through the native software and utilities provided by your platform vendor. While you may not need a layer of complex and expensive orchestration software for this task, such software can make pseudo-invoicing a feasible early objective.

In our next post, I will look at the terms and conditions needed in a service level agreement to formalize your “promise” to the consumer and their options for exception and recourse.
