The Cloud For Clunkers Program…Security, Portability, Interoperability and the Economics of Cloud Providers

Cloud providers are advertising the equivalent of the U.S. Government’s “Cash for Clunkers” program:

“You give up your tired, inefficient, polluting, hard-to-maintain and costly data centers and we’ll give you PFM in the form of a global, seamless, elastic computing capability for less money and with free undercoating.”

The value proposition is fantastic: cost savings, agility, the illusion of infinite scale, flexibility, reliability, and “green.”

There are some truly amazing Cloud offerings making their way to market, and it’s interesting to see that the parallel economic incentives in both examples are generating a tremendous amount of interest.

It remains to be seen whether this surge of interest is a short-term burst that simply shortens the cycle for early adopters, or whether it will sustain attention over time and drive people to the “showroom floor” who weren’t considering kicking the tires in the first place.

As compelling as the offer of Cloud may be, incentivizing large enterprises to think differently requires an awful lot going on under the covers to provide this level of abstracted awesomeness: a ton of heavy lifting, plus the equipment and facilities to go with it.

To get ready for the gold rush, most of the top-tier IaaS/PaaS Cloud providers are building data processing MegaCenters around the globe in order to provide these services, investing billions of dollars to do so…all supposedly so you don’t have to.

Remember, however, that service providers make money by squeezing the most out of you while providing as little as they need to in order to ensure the circle of life continues. Note that this is not an indictment of the practice, as $deity knows I’ve done enough of it myself, but just because it has the word “Cloud” in front of it doesn’t make it anything other than a business case. Live by the ARPU, die by the ARPU.

Cloudiness Is Next To Godliness…

What happens then, when something outside of the providers’ control changes the ability or desire to operate from one of these billion-dollar Cloud centers? No, I don’t mean like a natural disaster or an infrastructure failure. I mean something far more insidious.

Like what, you say? Funny you should ask. The Data Center Knowledge blog details how Microsoft is employing the teleportation equivalent of vMotion by pMotioning (physically moving) an entire Azure Cloud data center to deal with changing tax codes, thanks to a game of chicken with a state government:

“Due to a change in local tax laws, we’ve decided to migrate Windows Azure applications out of our northwest data center prior to our commercial launch this November,” Microsoft says on its Windows Azure blog (link via OakLeaf Systems). “This means that all applications and storage accounts in the ‘USA – Northwest’ region will need to move to another region in the next few months, or they will be deleted.” Azure applications will shift to the USA – Southwest region, which is housed in Microsoft’s 470,000-square-foot San Antonio data center, which opened last September.

The move underscores how the economics of data center site location can change quickly — and how huge companies are able to rapidly shift operations to chase the lowest operating costs.

Did you see the part that said “…all applications and storage accounts in the ‘USA – Northwest’ region will need to move to another region in the next few months, or they will be deleted”? Sounds rather un-Cloudlike, no? Remember the Coghead shutdown?

Large-scale providers and their MegaCenters face some amazing challenges, such as the one presented above. As these issues become public and exposed to due diligence, they are in turn causing enterprises to take stock of how they evaluate their migration to Cloud. These aren’t particularly new issues; it’s just that people are having a hard time reconciling reality with the confusing anecdote of Cloudy goodness that requires zero touch and just works…always.

And while cloud computing is all the rage in Washington D.C., it seems the state of Washington doesn’t much care for cloud computing. Instead of buying cloud computing services from home-grown cloud computing giant Amazon (or newly emergent cloud player Microsoft), the state has opted to build a brand-new, $180 million data center, despite reservations from some state representatives. Microsoft is moving the data center that houses its Azure cloud services to San Antonio, Texas, from Quincy, Wash. — mostly because of unfavorable tax policies. Apparently, the data centers are no longer covered by sales tax rebates — a costly proposition for Microsoft, which plans to spend many millions on new hardware for the Azure-focused data center.

By the way, Washington is the second state that has decided to build its own data center. In June, Massachusetts decided that it was going to build a $100 million data center. The Sox Nation is home to Nick Carr, author of “The Big Switch,” arguably the most influential book on cloud computing and its revolutionary capabilities.

These aforementioned states are examples of a bigger trend: Most large organizations are still hesitant to go all in when it comes to cloud computing. That’s partly because the cloud revolution still has a long way to go. But much of it is fear of the unknown.

Some of that “unknown” is really “unsolved,” since we understand many of the challenges but simply don’t have solutions for them yet.

But I Don’t Want My Data In Hoboken!

I’ve spoken about this before, but while a provider may be pressured to move an entire datacenter (or even workloads within it) for their own selfish needs, what might that mean to customers in terms of privacy, security, SLA and compliance requirements?

We have no doubt all heard of requirements that prevent certain data from leaving geographic boundaries. What if one of these moves came into conflict with such regulations? What happens if the location chosen to replace the existing one creates a legal exception?

This is clearly an inflection point for Cloud and underscores the need to drive for policy-driven portability and interoperability sooner rather than later.

Even if we have the technical capability to make our workloads portable, we’re not in a position to instantiate policy, as an expression of business logic, to govern whether they should, can, or ought to be moved.

If we can’t/don’t/won’t work to implement open standards that provide for workload security, portability and interoperability, with the functionality for “consumers” to assert requirements and “providers” to attest to their capabilities based upon a common expression of such, this will surely add to the drive for large enterprises to consider either wholly private or virtual private Clouds in order to satisfy their needs under an umbrella they can control.
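To make the assert/attest idea concrete, here’s a minimal sketch of what such a matching step might look like. Every field and value below is invented for illustration; no real Cloud standard, provider, or schema is implied:

```python
# Hypothetical sketch: a consumer "asserts" requirements for a workload,
# a provider "attests" to its capabilities, and placement (or migration)
# is only permitted when the attestation satisfies every assertion.
# All field names and values are made up for illustration.

def satisfies(requirements, attestation):
    """Return True if a provider's attested capabilities meet every
    requirement the consumer asserts for a workload."""
    # Data must stay inside one of the allowed jurisdictions.
    if attestation["region"] not in requirements["allowed_regions"]:
        return False
    # Provider must attest to every compliance regime the workload needs.
    if not set(requirements["compliance"]) <= set(attestation["compliance"]):
        return False
    # Attested availability must meet or beat the required SLA.
    return attestation["sla_uptime"] >= requirements["min_sla_uptime"]

workload = {
    "allowed_regions": {"us-northwest", "us-southwest"},
    "compliance": ["PCI-DSS"],
    "min_sla_uptime": 99.9,
}

provider = {
    "region": "us-southwest",
    "compliance": ["PCI-DSS", "SOX"],
    "sla_uptime": 99.95,
}

print(satisfies(workload, provider))  # True: this move is permissible
```

The point isn’t the ten lines of Python; it’s that without a common expression of requirements and capabilities, there’s nothing for either side to evaluate before a data center picks up and moves.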

I’ll Take “Go With What You Know” For $200, Alex

In the short term, customers who are mature in their consolidation, virtualization, optimization and automation practices, and who are looking to utilize IaaS/PaaS services from third-party providers, will likely demand homogeneity from one or two key providers with a global footprint, potentially in combination with their own footprint, whilst they play the waiting game for open standards.

The reason for the narrowing of providers and platforms is simple: continuity of service across all dimensions and the ability to control one’s fate, even if it means vendor lock-in driven by feature/function maturity.

Randy Bias alluded to this in a recent post titled “Bifurcating Clouds,” in which he highlighted some of the differences in the spectrum of Cloud providers and the platforms they operate from. There are many choices when it comes to virtualization and Cloud operating platforms, but customers are becoming much more educated about what those choices entail and oftentimes conclude that cost isn’t always the most pressing driver. The Total Cloud Ownership* calculation is a multi-dimensional problem…

This poses an interesting set of challenges for service providers looking to offer IaaS/PaaS Cloud services: build your own or re-craft available OSS platforms and drive for truly open standards, or latch on to a market leader’s investment and roadmap and adopt it wholesale.

Ah, Lock-In. Smells Like Teen Spirit…

From the enterprise’s perspective, many are simply placing bets that the vendor they chose for their “internal” virtualization and consolidation platform will also be the one to lead them to Cloud, as service providers adopt the same solution.

This would at least — in the absence of an “open standard” — give customers the ability to preserve portability should a preferred provider decide to move operations somewhere that may or may not satisfy business requirements; they could simply pick another provider that runs the same platform instead. You get de facto portability…and the ever-present “threat” of vendor lock-in.
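De facto portability can be sketched in a few lines: filter the market for providers that run your current platform and can still satisfy your geographic requirements. The provider names, platform labels, and regions below are entirely made up for illustration:

```python
# Hypothetical provider catalog -- names, platforms, and regions are
# illustrative only, not real offerings.
PROVIDERS = [
    {"name": "AlphaCloud",  "platform": "vPlatformX",   "regions": {"us-east", "us-west"}},
    {"name": "BetaHosting", "platform": "vPlatformX",   "regions": {"eu-central"}},
    {"name": "GammaGrid",   "platform": "openstackish", "regions": {"us-west"}},
]

def portable_targets(current_platform, required_regions):
    """De facto portability: providers running the same platform that can
    also satisfy the workload's geographic requirements."""
    return [
        p["name"]
        for p in PROVIDERS
        if p["platform"] == current_platform and p["regions"] & required_regions
    ]

print(portable_targets("vPlatformX", {"us-west"}))  # ['AlphaCloud']
```

Note what drops out of the list: a provider on the same platform but in the wrong jurisdiction is no more useful than one on a different platform entirely — which is exactly why lock-in to a common platform only buys partial insurance.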

It’s what happens when you play spin the bottle with your data, I’m afraid.

So before you trade in your clunker, it may make sense to evaluate whether it’s cheaper in the short term to keep paying the higher gas tax and drive it into the ground, pull the motor for a rebuild and get another 100,000 miles out of the old family truckster, or go for broke and take the short-term cash back without knowing what it might really cost you down the road.

This is why private Cloud and virtual private Clouds make sense. It’s not about location, it’s about control.

Both hands on the wheel…10 and 2, kids….10 and 2.

/Hoff

*I forgot to credit Vinnie Mirchandani from Deal Architect and his blog entry here for the Total Cloud Ownership coolness. Thanks to @randybias for the reminder.