Of private clouds and zero-sum games

If I interpret the comments on my last post correctly, both online and offline, a small number of you felt that I’d been unduly strong in my bias against the “private cloud”; it sounded like you thought I’d been drinking too much of the Kool-Aid since joining Salesforce.com a year ago this weekend.

Actually, my bias against the private cloud is around a decade old. And it stems from experiences I had during my six-plus years as CIO of Dresdner Kleinwort.

First and foremost, I think of the cloud as consisting of three types of innovation: technology, “business model” and culture. Far too often, I get the sense that people concentrate on the technology innovation and miss out on the remarkable value offered by the other two types of innovation. In this particular post I want to concentrate on the business model innovation aspect.

Shared-service models have been around for some time now; they’re not new per se. At Dresdner Kleinwort, we implemented shared-service models wherever relevant, sometimes within a business unit, sometimes across business units within a business line, and sometimes across the whole company. The principle was simple: investment and operating costs (the “capex” and “opex”) for the shared service would be distributed across all the consumers of the shared service according to some agreed allocation key. Sometimes it was a simple key, like headcount. Sometimes it was predefined each year at a central level, as was the practice with “budget” foreign exchange rates. Sometimes it was hand-crafted by service, involving long hours of painful negotiation. Sometimes it wasn’t even agreed, just mandated from above. One way or the other, there was an allocation key for the shared service.

Dresdner Kleinwort was part of Dresdner Bank, and Dresdner Bank was wholly owned by Allianz. There were shared services at the Dresdner level, and at the Allianz level. So there was a whole juggernaut of allocations going on, at multiple levels.

And God was in His Heaven, All was Well With the World.

Until someone wanted to leave the sharing arrangement.

At which point all hell broke loose.

Because the capex had been spent, and the depreciation tail had to be allocated to someone. If the number of someones grew smaller, the amount allocated grew larger. This wasn’t just about capex; not all of opex was adequately volume-sensitive, so similar effects could be observed.
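To make the arithmetic concrete, here is a minimal sketch of headcount-based allocation. The units and figures are entirely hypothetical, not taken from the post; the point is only the mechanics: when a participant exits, the (largely fixed) cost base barely moves, so the survivors absorb the leaver’s share.

```python
# Illustrative sketch: allocating a fixed shared-service cost across
# consuming units in proportion to a headcount key. All figures hypothetical.

def allocate(total_cost, headcounts):
    """Split total_cost across units in proportion to headcount."""
    total_heads = sum(headcounts.values())
    return {unit: total_cost * heads / total_heads
            for unit, heads in headcounts.items()}

annual_cost = 1_200_000  # depreciation plus opex, largely fixed

before = allocate(annual_cost, {"equities": 300, "rates": 200, "fx": 100})
# equities pays 600k, rates 400k, fx 200k

# fx exits the arrangement, but the cost base stays where it was:
after = allocate(annual_cost, {"equities": 300, "rates": 200})
# equities now pays 720k, rates 480k -- the survivors absorb fx's share
```

However the key is cut, the allocations always sum to the same total: the firm bears all of it, which is exactly the zero-sum property described above.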

“Private” models of shared services were fundamentally zero-sum games: the institution coughed up all the capex and opex, and the institution had to allocate all of it. Regardless of the number of participants. Sometimes there was scope for some obfuscation: there was a central pot for “restructuring”, and all the shared-service units ran like hell to reserve as much of it as possible every time the window opened for such a central pot. If you were lucky, you could dump the trailing costs left by the exiting business into the restructuring pool, thereby avoiding the screams of the survivor units. But it was an artificial relief: the truth was that the company bore all the costs.

A zero-sum game.

Shared resources have costs that have to be shared as well. If the only people you can share them with are the people in the company, then the zero-sum is unavoidable. Things are made more complicated by using terms like capex and opex, by choosing to “capitalise” some expenditures and not others, by having complex rules for such capitalisation. Such worlds were designed for steady-state, not for change.

We’re in a business environment where change is a constant, and where the pace of change is accelerating. So there’s always something changing in the systems landscape. Business units come and go; products and services offered come and go; locations and even lines of business come and go; and entire businesses also come and go within the larger holding company structure.

Change is a constant.

So with the change comes even more pain. Lists of “capitalised” assets have to be checked and cross-checked regularly, to validate that the assets are still in use; at Dresdner these were called impairment reviews. If not, the remaining depreciation tail of the “impaired” asset has to be absorbed in the next accounting period.
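As a rough illustration of what an impairment review can trigger, assuming straight-line depreciation and hypothetical numbers (none of these figures come from the post):

```python
# Hypothetical sketch: straight-line depreciation, and the "tail" that must
# be absorbed at once if the asset is impaired before the end of its life.

def remaining_tail(cost, life_years, years_elapsed):
    """Book value still to be depreciated after years_elapsed."""
    annual_charge = cost / life_years
    return max(0.0, cost - annual_charge * years_elapsed)

# A 5m system capitalised over 5 years, found to be unused after year 2:
tail = remaining_tail(5_000_000, 5, 2)
# a 3m depreciation tail hits the next accounting period in one go
```

The longer the capitalisation horizon relative to the pace of change, the larger these one-off hits tend to be.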

What joy. [Yes, dear reader, the life of a CIO is deeply intertwined with the life of a spreadsheet jock].

In many respects, the technology innovation inherent in the cloud was foreseeable and predictable. Compute, storage and bandwidth were all going down paths of standardisation, to a point where abstract mathematical models could be used to describe them. As the level of standardisation and abstractability increased, the resources became more fungible. That fungibility could be exploited to change the way systems were architected: higher cohesion, looser coupling, better and more dynamic allocation and orchestration of the resources.

The business innovation in the cloud was, similarly, also foreseeable and predictable. The disaggregation and reaggregation made possible by the standardisation and virtualisation would allow for different opportunities for investment and for risk transfer.

Now it was no longer a zero-sum game. The company that spent the capex and opex took the risk that there would be entrants and exits, high volumes and low; the technology innovations were used to balance loads and fine-tune performance; the multitenant approach often led to lower licence costs, and these could be exploited to defray some of the continuing investments needed in the balancing/tuning technologies.

Individual business units and lines and even entire companies no longer had to carry out impairment reviews for such assets. Because they didn’t “own” the assets: the heart of the cultural innovation was the change in attitudes to ownership.

The private cloud proponents have sought to blur the lines by bringing in arguments to do with data residency.

Data.

Not code.

Data will reside where it most makes sense. Sometimes there are regulatory reasons to hold the data in a particular jurisdiction. Sometimes there are latency reasons to hold data within a particular distance limit. Sometimes there are cultural reservations that take time to overcome. The rest of the time, data can be held wherever it makes economic sense.

Serious cloud computing companies have known this, have been working on it, and will continue to work on it. The market sets the standard.

Code, on the other hand, particularly multitenant code, has no such residency requirement. Unless you happen to ask someone whose business model is to charge licences connected to on-premise processors.

Change is a constant in business life. The cloud is about change. The business model of the public cloud is designed to make that change possible, without the palaver of impairment reviews and capex writeoffs and renegotiation of allocation keys and and and


15 thoughts on “Of private clouds and zero-sum games”

Being a CIO must be one of the toughest jobs in the world at the moment with so much change going on. Balancing the numbers, optimising costs, adjusting to the falling rocks and shifting sands of internal restructuring and the reconfiguration of internal budgets, not to mention technology – I am sure any CIO would be pleased to outsource this complexity to the cloud. One thought I had is that the Cloud is not only a technical and outsourcing abstraction; perhaps it is also an accounting abstraction? If that creates more headspace for the CIO to do more business and technology innovation, I am sure they will be pleased.

I agree. To me the “private cloud” has always seemed more of a marketing-led construct for CIOs (Chief Inhibitor Officers) who fear the diminution of their freebies from big-iron vendors: a way to say they are doing something “in the cloud” but “more safely”. A hollow concept.

Really entrepreneurial businesses seem most expert at driving expenses, including capex, down to the business unit. Predictably, business units drive capex out and purchase Cloud services like SFDC and AWS, leaving the CFO to ask many people for reports to consolidate the figures. Later the figures become unreliable, the board replaces the CEO with the CFO, who hires a new CIO to get a grip. The CIO convinces the CFO to buy SAP… until… the board under pressure hires a new CEO to restore an entrepreneurial culture…

This is probably the first time I’ve read a convincing attack on the private cloud. And it is entirely a failure in the business model. I wouldn’t put up the hybrid cloud (i.e. in-house plus external) as any defence against the capex / opex issues you describe, but..

..I would hope that a hybrid cloud leaves the private bit just dealing with data that the enterprise has difficulty parting with, leaving all real work on public clouds. But perhaps the hybrid model just causes more head scratching.

So for a company such as Allianz the key question is whether they want to outsource the management of the impacts of change on IT. A private cloud is a statement that they have the capability to manage it themselves, whereas using a public cloud is a statement that they are willing to pay a premium for someone else to manage it. The fundamental cost is there anyhow; the question is only who is more efficient in managing it.

I think there is a need for the private cloud, since there are companies for whom the efficiency gain from the public cloud does not outweigh the costs (such as risks) related to it.

Change is constant, and Cloud Computing itself will eventually evolve or give way to something even better, or something that makes more business sense given the most pressing needs of that time. Today, for the long tail of startups just boarding the IT bandwagon, cloud makes the world flat, lowering entry barriers. For those who have already boarded the IT bandwagon with on-premise DCs, private cloud has its use in a limited capacity: it sort of helps wean on-premise believers onto something more efficient, eventually leading them to hybrid cloud and finally paving the way for public cloud. Cloud is a journey in itself. Think of the ‘power utility’ in the Big Switch context: there is private and public power available, and yet in countries such as India households and businesses set up an inverter or generator set if they can afford one, as backup power to take care of outages. The Cloud or Utility model ain’t no panacea after all.

Ah, now that’s a good line of argument. Much stronger than the first post, IMHO. Thanks very much for taking the time to write this up, JP.

Having set this context, I can now say to you: yes, but. I still stand by my prior assertion that there is some potential benefit to be gained from what The Vendors ™ are referring to as “private cloud” (which, in 99% of cases, is really just an on-premise IaaS landscape) for some customers. And that benefit is a function of a) the degree to which they can adopt a “private cloud” business model internally on top of the tech and thereby b) eliminate some portion of that waste that things like restructuring budget pools and impairment reviews represent.

Is that number likely to be relatively small, in the overall zero-sum scheme of things? Shrug. Yes (if very context dependent). In the same way that genuine public transportation is vastly more economical than commuter lanes are. But is that number zero? No, not at most enterprises, which is what a literal reading of your previous post would have led one to assume you were arguing. Is that number large enough to put a credible business case for the “private cloud” together? Shrug again — also entirely context dependent. Is this all a bit of a nit for picking? Perhaps, but given the rhetorical force you deploy here, I think it’s worth it. And it seems to have helped prompt you to write this post, so yay!

So my (admittedly very nuanced) stance on this is not “this categorically makes no sense” (as you seem to be arguing), but something more like “if you’ve run the numbers and this makes sense to you, whatever. But don’t think that it means that you won’t need to source an extraordinary amount of things from the public cloud, going forward, in order to sustain competitiveness, in addition to your lovely private cloud, and could we please have that conversation soonest, kthxbye.”

I also agree with Hannu’s implication, BTW; limitations of liability are an important aspect of “cost” that your analysis does not yet (explicitly) address, just in case you’re taking suggestions from this peanut gallery for another topic. :D

Perhaps I am in a lucky or unique situation at my 100+ year old railway, but we take a pretty simple approach to IT cost allocation – it’s all corporate overhead. There are no LOB chargebacks or allocations (there are some forms of showback, but they’re informal). And since opex hits the operating ratio as a primary measure of success, it actually is better for us to find ways to capitalize more infrastructure. This of course means that budgets are largely politicized and based on the whims and priorities of senior management, and has traditionally led to erratic IT expenditure levels, but these past few years IT spend has stabilized as they realize it’s core to reducing our overall transaction costs and optimizing our other assets (locomotives, track, equipment, yards, etc.)

We’re transitioning from a large scale traditional outsourced environment, plagued with huge unit costs and up-front provisioning costs (along with high lead times), to what I’m calling a “unified platform” environment – basically appliances like VBlock and Exadata, with production workloads managed by a combination of selective insourcing and managed-service out-tasking, and dev/test workloads managed a bit differently, using a commercial private cloud stack for provisioning & configuration. We project this will drop our costs by at least 70% and cut our lead times to provision and/or change environments by 90%+.

We use today and will still use public cloud resources for some workloads, mostly dev/test and virtual desktop, which brings us down to more like a 90% cost reduction from our current stance, but I can’t see taking the risk of moving most critical production workloads there for another 5 to 7 years. Reasons include: a) our legal department is sensitive to data residency issues for compliance and forensic reasons – and this isn’t even due to vendor FUD; b) most legacy code is not going away for a while and relies on the assumption of fast and reliable shared-storage SANs, which currently are not overly viable on public IaaS; c) we’re making a massive investment in SAP, and certification & support on public clouds is not really ready for prime time there. I’d much rather give the program leads a bit of comfort in their target environments than fight them to go to the bleeding edge.

So, in a nutshell, private clouds to me seem to be a natural transition away from traditional managed services outsourcing agreements – the bread & butter of the IBMs, HP/EDSs, CGIs, etc. of the world, into a much more transparent model that has potential for incremental improvements and innovations rather than pure unit cost contractual squeezing. And such private clouds in turn are a stepping stone towards public clouds as we work on modernizing our application development approaches and platforms to deal with the realities of public clouds… though I am not holding my breath there – most shops in my experience are stuck in the 90s. By the time our private cloud is due for a refresh in 5+ years or so, we likely will have leveraged public clouds enough that we’ll be comfortable transitioning our production workloads there. Just not now….

As I understand your point of view, from a management cost perspective both on-premise and private cloud represent two different ways of slicing the same economic pie. In contrast, public cloud shifts various costs and risks to the external provider, which changes key economic relationships.

However, what happens if we put organizational costs aside and look strictly from an end-user and cultural perspective. From this standpoint, what cultural differences are driven by private vs public cloud? I’m not pushing one side or the other here, just wondering about cultural dimension of change in each environment.

I think Gertrude Stein famously answered the question when she said “Cloud is a cloud is a cloud is a cloud.”
Jokes apart, the debate about public vs. private seems to be more driven by the cost/risk attributes of each kind rather than any technical characteristics/differentiators.
I think one key differentiator could be the public cloud’s ability to transform/evolve/ “shape-shift” more rapidly than a private cloud in response to emerging trends. What do you think?

Thank you for repeating the point that the cloud is as much about culture and business model change as it is about technology; it seems to me that this is often overlooked.

I also wonder if failure to understand these broader implications – or maybe because these implications are so uncomfortable to some – explains the attachment many have to private clouds. After all, what is a company for if it is not to provide central services, of which computing is just one?

I realise some will miss the opportunity to sit in their sandpit arguing about capex and opex allocations, because entire careers have been built on the skills required to succeed there. Losing the sandpit is properly scary though, because it leads to having to think far more broadly about developing personal reputations, creating one’s own opportunities, making connections across multiple firms (and none) and all the other changes and opportunities that cloud introduces.

In this context, private clouds seem to me to be like ‘change lite’ or ‘diet change’: thirst-quenching in so far as they fulfil the objective of ‘moving with the times’ in a way only a sandpit dweller would think, but ultimately pointless.