As I board the plane to head to Las Vegas for 2011's Interop and Enterprise Cloud Summit, predictably, yet somewhat rhetorically, the great AWS outage post-mortem continues to rumble on.

Exactly two weeks after the event, blog after blog and article after article continues to serve up a veritable range of delights, from the easily digestible to the downright unpalatable, each doing its best to provide levels of interpretation and added commentary on AWS' frankly excellent and honest explanation of what really happened on April 21. Quite why, I am not sure.

Here's the good news. I'm not going to add anything to the debate. As far as I (and a number of others I've spoken to on the topic) am concerned, it's water under the bridge, a closed chapter, time to move on. And for those affected by it, once they learn to love their provider again, I am sure they will be much wiser and better prepared for a recurrence, should there be one (and there likely will be), having understood the parameters in which their application or service operates and, if deemed necessary, designed to address the potential for such a failure.

In the fortnight (yes, fortnight – please excuse my Britishness) since the outage, I have had a number of interesting conversations with a range of folks across the industry about what effect, if any, the outage may have on the speed at which traditional enterprises adopt public cloud services and, indeed, whether this kind of outage serves to strengthen the case for private cloud adoption in those same enterprises. The conversations have hinted at the prospect of a shift, perhaps signaling a movement away from "security" toward "reliability" as the principal barrier to entry. Could it be more FUD, or could it be a true reflection of the general feeling at the CIO level?

Interesting question, but again, in my opinion, largely a rhetorical one and certainly not one that is easily answered. What it does do, however, is underline that despite the tremendous buzz around "cloud" and the undeniable success stories of those organizations that have embraced public cloud, we are at such an incredibly early stage in this massive shift that, as a true enterprise proposition, I believe no denomination of cloud has reached sufficient levels of maturity and understanding to be a strategy that more than a handful of CIOs will feel comfortable signing off on.

Dice or no dice? You decide.

One truly surprising undertone that comes up time and time again in these conversations is the presupposition that one day, all enterprises will operate exclusively on public cloud infrastructures, platforms and software delivery mechanisms (IaaS, PaaS, SaaS) and that any private cloud effort is merely a stepping stone. If I had a crystal ball, I would almost certainly be looking for next week's lottery numbers and not predicting cloud futures, but I have to say that even if an only-public-cloud deployment eventually became the norm, the tail until the very last enterprise crapplication(TM) is re-architected, abandoned, retired or otherwise eradicated from the "on premise" data center will be so long that it could almost be comparable to that of Halley's Comet *

* geek fact – the tail of Halley's Comet is approximately 24 million miles long. See what I did there?

So, that, in a rather labored way, brings me to the point of this post. Private Cloud. The great undead Private Cloud. Yep, here we go again.

If I were an enterprise CIO today, I would be basking in the sound advice of my good friend Chris Hoff – I’ve quoted this before and I will quote it again, today and in the future:

Use the right tool for the right job and the right time and the right cost

I have absolutely no doubt that, given the incumbent complexities, legacy application architectures, existing sunk investment and cautious risk aversion, Private Cloud will play a part, sometime and someplace, in enterprise technology strategy. It has, without any doubt, worked brilliantly for us (despite the incorrect conclusion others have jumped to) and, if planned and executed with diligence and commitment, will certainly work for others.

I don't usually "do" Top 10s, as I find them to be largely pointless (too high level and not prescriptive enough to have any value), but for once I will break with that tradition and provide my ten most important reasons why it just might make sense to consider a Private Cloud deployment.

1. Improve deployment time metrics for existing applications, whether individual servers or combinations of servers deployed as full application workloads, in a single data center or across multiple geographically dispersed data centers, by automating many of today's manual end-to-end deployment processes.

2. Remove the IT function from the critical path of service delivery by empowering self-service functionality for business users to decide when to deploy their applications and by allowing visibility into the cost of the components that comprise the LoB application (note: I never said "chargeback", but that could be applicable in certain businesses).

3. Reduce human error rate, complexity and variability by using application or server templates within the private cloud automation and orchestration platform to ensure consistency with each new application deployment.

4. Retain visibility into physical (and logical) resource allocation and line-of-business usage of those resources with comprehensive monitoring, measurement and reporting.

5. Eliminate vendor lock-in by choosing Private Cloud automation solutions that are open source and support multiple hypervisor technologies.

6. Automate previously labor-intensive tasks such as server builds and application configurations, enabling the IT organization to free up human resources to work on tasks with more business value-add (differentiators).

7. Use Private Cloud to leverage and extend existing investments in virtualized infrastructures that today simply provide better levels of utilization than traditional physical server deployments.

8. Use Public Cloud "pay as you use" paradigms to help drive flexible commercial discussions with traditional enterprise software vendors as you build the Private Cloud environment.

9. Free up committed opex or planned capex funding (not necessarily cost savings) by consolidating numbers of physical locations, standardizing on a common infrastructure platform and streamlining operations, allowing the redirected funds to be used for opportunities to drive innovation.

10. Don't think of Private Cloud as just "compute, storage and bandwidth"; think of how to consolidate, streamline and better provide any IT service, as if your IT organization were a multi-faceted service provider with your users (and the business) as your customers.
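To make the template idea from the list above concrete, here is a minimal, hypothetical sketch in Python. The template registry, its field names and the `render_deployment` helper are all inventions of mine for illustration, not any particular product's API; a real private cloud orchestration platform would drive hypervisor and provisioning APIs rather than return dictionaries. The point is simply that every build rendered from the same template is identical, which is where the error-rate and variability reduction comes from.

```python
# Hypothetical application templates: every deployment rendered from a
# template gets the same sizing and base image, so there is no per-build
# variability for an operator to get wrong.
APP_TEMPLATES = {
    "lob-web": {"cpus": 2, "memory_gb": 4, "base_image": "rhel-6-base"},
}

def render_deployment(template_name, app_name, datacenter):
    """Produce a concrete, repeatable server spec from a named template."""
    spec = dict(APP_TEMPLATES[template_name])  # copy; never mutate the template
    spec["name"] = "%s-%s" % (app_name, datacenter)
    spec["datacenter"] = datacenter
    return spec

# Self-service in the spirit of item 2: a business user picks a template
# and a location, and IT is no longer in the critical path of each build.
print(render_deployment("lob-web", "payroll", "us-east"))
```

The same rendered spec could then feed monitoring and cost reporting (items 2 and 4), since every deployed workload is traceable back to a named template and requester.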

Of course, I am not ruling out Public Cloud; why would I? The above is simply based on my experience and my assessment of where some other large organizations I talk to are in their thinking. I couldn't end a post without making the point that it is not only completely possible, but already proven, that a full-scale move to Public Cloud is in fact a winning strategy for those whose business lends itself to it.

Wondering how? Then check out this slide deck from Adrian Cockcroft, my friend and sparring partner on the other side of the fence. In it, he provides some excellent insight into how Netflix have become the poster child for bold Public Cloud adoption. Fortes fortuna adiuvat – in his case, for sure.

And there you have it. A post that raises some interesting questions and deliberately answers none. We all seek the "right" information as we try to quench our insatiable desire for knowledge, to be better predictors of the landscape. Well, I'm not a betting man, but I'd happily double down on the odds that more than one person this week is going to ask the public-versus-private question here at Interop…

Stay thirsty, my friends.


Christian currently serves as Manager of Product & Demand Management at Bechtel Corporation, working in a niche position between the business and technology delivery teams to help identify opportunities to drive worldwide innovation in the mobile and cloud computing areas. Prior to this, Christian was Principal Technology Architect and Manager of Global Systems Engineering at Bechtel. Having gained hands-on experience in 15 different countries designing and managing complex IT environments in support of worldwide project execution, Christian brings a wealth of enterprise experience and led a team that architected and deployed one of the world's first true private cloud infrastructures. Christian is one half of The Loose Couple Blog team and his disclaimer can be found here.