Welcome to the cloud and service catalog blog. This blog and its LinkedIn community are dedicated to the creation, sharing and distribution of cloud management and service catalog best practices, ideas and trends.


The New York Times has a great article on Balthazar, one of my favorite restaurants in NY, titled "22 Hours in Balthazar".

There's a lot of goodness here, if you can read above, below and between the lines. Here's an excerpt.

When the first party sits down, a waitress takes their coffee order, turns around and heads straight to a computer terminal.

Balthazar’s waiters — like waiters at most restaurants — spend a lot of time doing the seemingly duplicative work of punching orders they have already written down onto touch-screen computers.

The point-of-sale terminals (or P.O.S., as the occasionally vexing machines are known) remove a certain degree of human error, but more important, they keep waiters on the floor, selling food and booze, turning tables as efficiently as possible.

Food runners bring the plates to the dining room; busboys take them back to the dishwasher. There is a coffee runner whose sole task is to bring coffee from the barista to guests. For the most part, a waiter’s job is to manage the flow of plates around the restaurant.

As Wendt explains, “Anything that we can do to keep the waiters on the floor more, we do.”

I love the breakdown of the work structure. And the last line is pure gold: keep the waiters on the floor! The P.O.S. is waiter self-service, and self-service means more time on the floor. If they are not on the floor, they are not selling and customers are not being served.

Basically, the German company (Deutsche Boerse) said on Tuesday that it will launch a global, vendor-neutral marketplace for compute and storage capacity in 2014.

When I dig deeper, it seems that, yep, they want to be a broker of sorts: impose fees, flatten service definitions so comparisons can be made across providers, and generally turn computing power into a commodity.

Remember Enron's bandwidth exchange? Allow me to remind you. Enron set up a subsidiary to exchange bandwidth between providers - which is not a bad idea, as bandwidth is more fungible than datacenter services and the telecom industry already had peering and interconnect practices.

"We were trading bandwidth a few months ago, but the market wasn't
there," said Al Butkus, vice president of energy trader Aquila (ILA).
"It didn't really have to do with Enron so much as the dot-coms.
There's just too much capacity out there and too little demand."

...

Currently, Butkus said, few ISPs and telecom firms want to use a commodity-style exchange to sell contracts. Deals tend to be negotiated over the phone, often with a broker who gets a fee for arranging the transaction.

...

Although Butkus believes that most bandwidth will eventually be sold through a commodity-type market, it won't happen overnight. His latest estimate is that it will take three to five years to develop a healthy and viable business for trading bandwidth contracts.

Three comments on the above quotes:

1) Compute moves at Moore's Law speed, much faster than bandwidth grows. There's no shortage of capacity.

2) As the man says, deals are negotiated. Contracts are in place.

3) The article was written in 2001 - over 12 years ago. The prediction that it would take three to five years isn't just wrong by 12 years; so far, it's wrong by forever.

Here's a choice quote from the GigaOm article from yesterday, July 2, 2013.

Like any exchange, a fee will be involved for both buyers and sellers. However, Baumann argued that this will still prove cost-effective and efficient for large buyers:

“A certain percentage goes to trade, but this is 10 times better than current procurement processes in cloud computing, which take one to one-and-a-half years. Procurement costs are enormous and time-to-market is very enormous.”

The Deutsche Boerse Cloud Exchange will begin operations in Frankfurt and New York around February or March next year, before adding Singapore for the Asian market 4-5 months later. “It will be a global play by the end of 2014,” Baumann claimed.

Clearly they haven't heard about Amazon's services - it takes a credit card and 5 minutes. As for being 10 times better than current procurement processes: as I said yesterday, I don't buy it.

There's a very nice academic paper from IBM Zurich (PDF) on this topic that makes some of these points, and more, in a more formal way. Basically, these are illiquid markets: insufficient commoditization leads to wide pricing spreads, which introduce high risk for the broker of services.

There may be some aspect of the offering that I'm missing that makes this scheme workable - but it hasn't become apparent.

Recently I've been involved in a number of conversations around the relationship between service brokers, service definitions, the service catalog and the design of service interfaces. I've encountered a lot of confusion and wrong assumptions, which have the potential to cost a lot of money.

So as a way to clear up my thinking, I'm going to note a few thoughts on this today. It's not a finished piece, by any means; wear a hard hat while reading it. Pieces will fall.

Let me start by saying I'm vexed by the phrase "Service Broker". I'm often vexed by people spray-painting new words on existing concepts.

One notion is that a service broker is the control plane and panel for obtaining online services from external vendors. Which is fine, but this is also what a good, modern service catalog (one not restricted to the service desk) provides. I have covered this topic in my last 837 blog posts.

The second meaning of "service broker" emphasizes the word "broker". The broker enables the customer to aggregate and arbitrage services, to select the service that is cheapest, best, compliant, etc. (name your favorite characteristic).

A common example used by proponents of "brokering" is the contracting of compute resources, where we may want to decide where to place a workload based on price. After all, a gigahertz is a gigahertz everywhere, right? Hmm, well, no. The network, storage, location, distance, latency, IOPS, applicable laws and many other factors (Will someone pick up the phone? Is there even a number to call?) matter.

I don't believe there's any commoditization of infrastructure services coming any time soon (as in, the next ten years). There are just too many facets to internet services - quality, latency, richness of offerings, laws, SLAs, data sovereignty and others - that prevent effective like-to-like comparisons on the fly. We will need procurement specialists to do these types of comparative evaluations.
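
To make that concrete, here's a minimal sketch in Python of what a "pick the cheapest compliant provider" broker boils down to. Everything in it - the provider fields, the constraint names, the data shape - is invented for illustration. The selection logic is trivial, and that's the point: the hard part is that no two providers define these fields the same way, which is exactly the commoditization problem.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    price_per_hour: float    # USD -- but billed per minute or per hour? rounded up how?
    latency_ms: float        # measured from where, by whom?
    iops: int                # peak or sustained?
    jurisdiction: str        # which data-sovereignty laws apply
    phone_support: bool      # will someone pick up the phone?

def pick_cheapest_compliant(offers, allowed_jurisdictions, max_latency_ms):
    """Naive broker: filter on hard constraints, then take the lowest price."""
    candidates = [
        o for o in offers
        if o.jurisdiction in allowed_jurisdictions
        and o.latency_ms <= max_latency_ms
        and o.phone_support
    ]
    # This min() assumes price_per_hour is a like-to-like number across
    # providers. In practice each provider defines the "service" (and the
    # meter behind it) differently, so the comparison is not this simple.
    return min(candidates, key=lambda o: o.price_per_hour, default=None)
```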

Also, if you drink coffee, you'll know that coffee, which used to be a commodity, is anything but today. And coffee is a lot simpler to acquire than datacenter services. I'll have my server half-caf, as a soy latte, with a twist of lemon.

But even if we could flatten service definitions so they were more comparable (they can be flattened -- sourcing specialists do it all the time), the acquisition process still doesn't match the way the enterprise procures today.

Enterprises engage in two classes of spend: contract and spot. The majority of spend is contract. Why? Risk, pricing, security, control, quality, availability and convenience.

By the way, that's why enterprises use e-procurement and constantly try to reduce the number of vendors under management. It's easier to control a small number of providers that represent a large part of the enterprise's spend than thousands of small vendors to whom that spend is not meaningful.

For example, take the issue of liability: AWS assumes no liability for anything. Most enterprise contracts have extensive sections about what happens when things fall apart and the network operations center cannot hold.

In my experience reviewing these types of contracts, a fair amount of their text goes into defining the "service" - and it's not an API call; it's the request process, approvals, support, training, incident management, remediation and service levels that define the "service".

By the way, I don't mean to imply these contracts are actually workable or actionable --most are not-- just that a lot of effort goes into creating them to try to define the "service."

I once spent a week with a team of 15 people trying to convert a massive outsourcing contract into a service catalog. It turns out to be surprisingly easy with 2 people, but impossible with 15.

Two recent examples help make the case for contract buying. One: Amazon, which offers compute pricing by the hour, now also offers one- and three-year leases. Why? By-the-hour pricing is like a by-the-hour hotel: very expensive if you plan to live there for a year. You are better off with a lease.
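
The hotel arithmetic is easy to check. Here's a back-of-the-envelope sketch with invented rates (real prices change constantly, so treat the numbers as placeholders):

```python
# Hypothetical rates for one server, purely to illustrate the shape of the math.
on_demand_per_hour = 0.10    # pay-as-you-go, "by the hour" rate
reserved_upfront   = 300.00  # one-year lease, paid up front
reserved_per_hour  = 0.03    # discounted hourly rate under the lease

hours_per_year = 24 * 365

on_demand_year = on_demand_per_hour * hours_per_year                    # $876
reserved_year  = reserved_upfront + reserved_per_hour * hours_per_year  # ~$563

print(f"on-demand for a year: ${on_demand_year:,.2f}")
print(f"reserved for a year:  ${reserved_year:,.2f}")
# If the workload runs all year, the lease wins comfortably; by-the-hour
# pricing only makes sense for short, bursty stays.
```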

So it does seem like the service broker concept staples a service catalog to a procurement system; but will it cross the cloud?

As for the idea that somehow an intermediary can establish a market place of services and "broker" between enterprise and service provider, I don't buy it - pun fully intended.

This approach did not work in the dot-bomb era with e-marketplaces. It turns out the buying power of large companies is significant, the internet flattens information brokerage, and margins are way too thin for these intermediaries to earn much money.

As for brokerages working in the SMB space, I'd say: yeah, sure. They are called resellers - like PC Connection, or TigerDirect, or yes, Amazon retail.

In summary, there are considerable transaction costs that need to be accounted for in the notion of service brokers. In the service catalog world we talk about policy, governance and automation as the way to get those contract terms implemented. In fact, most enterprise buying is contract buying, not spot buying.

To recap, I've argued that a service catalog already provides the functionality of a service broker, that a profitable marketplace of services is unlikely to emerge, and that enterprises won't change their buying behavior in the next ten years.

So is there anything new about this service broker concept that is worth considering? And the answer is YES. The advent of service interfaces, a.k.a. APIs, opens a new service channel.

So for the first time we have the opportunity to design services that are born-automated. How do we think about them? What are their characteristics?

That is worth exploring in its own blog post.

As I said at the beginning, these are my preliminary thoughts. Comments? Questions? Observations? Organic, heirloom tomatoes?

A story from the frontlines of cloud, service catalog, and self-service.

Since we got to Cisco, we have deployed the former newScale product to create our own Fortune 20 private cloud. Lots of lessons have been learned and incorporated into our products and practices.

This Information Week story is the latest to document the ongoing journey of success, and also of pain, that awaits on the cloud journey. Here's a quote.

"When we first made the self-service portal available, end users thought it was great," said Myers, adding that the addition of Oracle database schemas and middleware to the service catalogue made a huge difference in how useful the environment could be.

Some IT staffers took to the changed environment eagerly; others approached it with great wariness. The first set realized they could automate many time-consuming manual processes. The second set thought, "I've pushed this button [deploying a freshly configured application] for a long time, and now you're automating it."

"There's a very human aspect associated with a transformation like this … You had to get people into the mindset that automation didn't eliminate jobs but freed up workers to take on the task at hand. It was an education process," Myers said.

I've been writing on this topic for a long time, so it feels a bit like I've said all that I really wanted to say. Yet the cool thing is that there are many other points of view emerging at the intersection of cloud services and service management.

Setting Up the Catalog

The first of these is often the hardest. Historically, IT has tended to build what it thought was needed, with marginal input from the user community. Few organizations have had discussions outside the developer community on growth projections, traffic analysis and response-time requirements. This has resulted in an IT design founded on a “just in case” philosophy. The IT design team ensures that, to the best of their capabilities, the configuration they design and deploy will have a substantial safety factor to ensure they can meet their own (limited) perception of the needs of the organization. This often results in a gold-plated configuration: high-end storage systems when SATA2 drives would suffice; dedicated servers instead of virtual machines; low-density virtualization instead of pooled high-density environments. The permutations (and expenses) are endless.

My colleague Steve Watkins has written a good critique on pricing models.

Broadly speaking, there are two major approaches to creating a price model for IT. There is the Utility-based model, in which pricing is derived from actual consumption of CPU cycles, RAM, bandwidth, storage, etc. In this model, if you stood up a virtual machine for one week you would pay only for the actual CPU cycles and storage you consumed.

Alternatively, there is Service-based pricing, which advocates a fixed price based on either the service itself or some other unit of measure, such as hours. In this model, if you stood up a virtual machine for one week you would pay for however many hours the VM was active, whether you used it or not.
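
A toy calculation with invented rates makes the contrast concrete: the same one-week VM, billed both ways.

```python
HOURS_IN_WEEK = 7 * 24  # the VM is "active" the whole week

# --- Utility-based: pay for what was actually consumed (invented rates) ---
cpu_hours_consumed = 40   # the VM sat mostly idle
gb_storage_used = 20
utility_bill = cpu_hours_consumed * 0.05 + gb_storage_used * 0.10  # $4.00

# --- Service-based: fixed price per active hour, used or not ---
rate_per_active_hour = 0.04
service_bill = HOURS_IN_WEEK * rate_per_active_hour                # $6.72

print(f"utility-based bill: ${utility_bill:.2f}")
print(f"service-based bill: ${service_bill:.2f}")
# The utility bill swings with usage the customer can't easily predict or
# even meter; the service bill is the same every week, which is simpler to
# quote in a catalog and to budget against.
```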

I always counsel my customers to adopt service-based pricing. I think utility-based pricing is the wrong approach for IT departments, especially infrastructure teams. Here are my reasons: