
July 29, 2013

Randy Bias, CTO of Cloudscaling, sparked an interesting debate around the need for OpenStack to align itself with the major public cloud offerings, namely the Amazon and Google clouds. According to an interview on GigaOM, the timing behind Randy's proposal stems from the concern that Rackspace is heading in a different, potentially opposite direction with its view of Amazon as a rival to OpenStack, and from the thought that relying on the AWS API could be too restrictive from an innovation perspective.

Portability Defined

While the debate was sparked around the use of the so-called native (Nova) API, judging by the Twitter thread between Randy Bias (@randybias), Dave Neary (@nearyd), and Simon Wardley (@swardley), it appears that all three look at portability in its broader sense, not just around the Nova API:

@nearyd: “@randybias user confidence that they can easily move work off one OpenStack to another”

@nearyd: “@randybias Also ability to burst off private cloud to a public cloud will be important.”

According to this definition, what we're really looking for is application portability from non-OpenStack environments to OpenStack.

API Interoperability != Application Interoperability

I think that most of the community would agree with the general sentiment behind Bias' call that OpenStack should align itself with the major public clouds. Now zero in on the Nova API as the means for portability: this is where the debate lies, and where I feel that Randy may be wrong in his assessment, for the following reasons:

The permutations and complexity of doing this across many vendors have sunk many previous attempts over the last decade.

Randy's reference to the CloudStack AWS API bridge is actually a reference to why API bridges aren't that useful after all. Based on my experience, most if not all CloudStack users do not really use the CloudStack AWS bridge: it doesn't expose many of the CloudStack features, it's not well tested, and it doesn't always work well, simply because it's not considered a main path. I also heard similar feedback from Adrian Cole, the founder of the JClouds project, a popular cloud abstraction framework.

API Portability != Application Portability - API portability covers only a small part (and not necessarily the important part) of what is needed to get true application portability, as noted in eBay Chief Engineer Subbu Allamaraju's post OpenStack is not Cloud, and in OpenStack COO Mark Collier's response:

"I do agree that APIs sometimes get more credit than they deserve. Sometimes people oversimplify the concept of a platform and the compatibility of a platform based on just saying, 'If you've got the API, you've got compatibility.' The reality is that an application architecture depends on the behavior of the whole system it's interacting with. The API is simply the way it talks to that system."

What We Can Learn from the Android vs. iOS Experience on Cloud Portability

Apple iOS and Google Android are actually good references for achieving fairly good application portability between two environments without necessarily agreeing on the same API.

“AWS API compatibility is what an operator should worry about above all these? Nah. You can fix API incompatibility with glue code.”

Jeremy Jarvis, Co-founder of Brightbox, suggested a similar idea in his comment to Randy's post:

"Anyone writing code to interact directly with APIs of popular clouds is probably doing something wrong (and certainly wasting effort). There are a number of cloud abstraction libraries which are doing a great job in providing a unified interface to multiple clouds. Fog (Ruby) is a great example - sponsored by Heroku no less - of how a community/ecosystem is developing around multi-cloud interaction."

“If the future is, as I suspect, made up of enterprises using lots of different cloud offerings (not so much in a hybrid/cloud migration model but more of a discrete multi cloud approach) then there is less of an urgent need to resolve to one common API set. So long as there are products that give visibility and management over these discrete resources (and this is where Dell, with its recent acquisition of enStratius and dropping of its own public cloud play, is thinking) the actual interoperability or commonality of the different products API sets is secondary.”
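To make the "glue code" point concrete, here is a minimal sketch of such an adapter layer, written in Groovy purely for illustration. The provider call shapes are hypothetical stand-ins, not real SDK signatures; only the facade pattern itself is the point:

```groovy
// Minimal "glue code" sketch: a thin facade hides two providers'
// differing "start a server" calls. The provider call shapes below
// are hypothetical stand-ins, not real SDK signatures.
interface CloudServers {
    String startServer(String image, String size)
}

// Adapter with an AWS-style call shape.
class AwsAdapter implements CloudServers {
    String startServer(String image, String size) {
        // real code would call something like runInstances(...)
        return "i-${image}-${size}"      // fake instance id
    }
}

// Adapter with an OpenStack Nova-style call shape.
class NovaAdapter implements CloudServers {
    String startServer(String image, String size) {
        // real code would call something like servers.boot(...)
        return "srv-${image}-${size}"    // fake server id
    }
}

// Application code depends only on the facade, never on a provider API.
def deploy = { CloudServers cloud ->
    println "started ${cloud.startServer('ubuntu-12.04', 'small')}"
}

deploy(new AwsAdapter())   // swapping providers never touches deploy()
deploy(new NovaAdapter())
```

Libraries like Fog and JClouds are essentially this idea, maintained at scale across dozens of providers.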

My Take

Given the dynamic nature of the cloud environment, there is no doubt in my mind that portability is important - not just between AWS, GCE and OpenStack, but also between different OpenStack providers, as well as between traditional data-center environments and OpenStack, as I noted in my previous post To OpenStack or Not to OpenStack? Moving Enterprise Applications onto OpenStack Today. Having a native cloud API that is compatible with both AWS and GCE is, unfortunately, wishful thinking and not practical at this point in time.

Application deployment is better described in recipes, which cover a wider spectrum of application deployment and behavior - such as the tiers of the application, the metrics, the scaling policies, the configuration, etc. The OpenStack Heat project, which is modeled on its AWS equivalent, CloudFormation, can get us much closer to the desired portability, even if the underlying cloud APIs don't conform to one another. If done right, this means that we could execute our application recipes in a similar way on both AWS and OpenStack.
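To illustrate, here is a minimal sketch of the kind of CloudFormation-compatible template that Heat was designed to consume, assembled in Groovy purely for illustration; the image id and key pair name are placeholder values:

```groovy
import groovy.json.JsonOutput

// A minimal CloudFormation-compatible template of the kind Heat can
// consume. The image id and key pair name are placeholders.
def template = [
    AWSTemplateFormatVersion: '2010-09-09',
    Description: 'Single web-tier instance',
    Resources: [
        WebServer: [
            Type: 'AWS::EC2::Instance',
            Properties: [
                ImageId     : 'ami-00000000',   // placeholder image id
                InstanceType: 'm1.small',
                KeyName     : 'my-keypair'      // placeholder key pair
            ]
        ]
    ]
]

println JsonOutput.prettyPrint(JsonOutput.toJson(template))
```

The same recipe shape can then be fed to CloudFormation on AWS or to Heat on OpenStack, which is exactly the portability seam described above.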

Having said that, in order for OpenStack Heat to become the solution for cloud portability, there is more work to be done in designing the framework so that it is more loosely coupled with the underlying OpenStack infrastructure, and in adopting richer deployment recipes that can cover more aspects of application deployment in a declarative way. It looks like there are already plans in this direction, but progress seems fairly slow. Therefore, I think it would be both simpler and less disruptive for the community to align its portability effort around the Heat project rather than rewrite the Nova API.

This is also aligned with what seems to be a general trend where enterprises use OpenStack to reduce their VMware costs. They often start with a hybrid approach in which they move the less mission-critical apps to OpenStack and keep the rest on VMware. I assume that this is only a transitional stage until we see almost all of the workload moving to OpenStack once the organization feels confident with the technology.

PayPal is not the only one moving away from VMware; Koby Holzer of LivePerson will be speaking on how LivePerson uses OpenStack internally to reduce its VMware costs at the upcoming OpenStack Israel event. I'm sure there are many others who have already taken that path but were not as public about it.

While the article focuses on the general trend, I thought that there is a bigger opportunity here worth noting.

The Opportunity

Clearly, a failure of one player (VMware in this specific case) is an opportunity for others.

The opportunity is for OpenStack providers, such as HP, Red Hat, Rackspace and IBM, to take on VMware directly rather than focus on winning against Amazon.

Why?

VMware has a huge enterprise footprint

Private and hybrid clouds are at the heart of enterprise cloud strategy

OpenStack is well positioned for the private cloud, given its open source foundation

Having organizations such as HP, Rackspace, Red Hat, Mirantis and, recently, IBM behind OpenStack should also provide enough comfort that the gaps that still exist will be bridged shortly, in a way that fits enterprise requirements.

Timing is Everything

OpenStack was initially created by Rackspace and NASA as a response to the strong dominance of Amazon as the leader in the cloud space. At that time, Amazon was the benchmark and the reference for the OpenStack project.

Many things have changed since OpenStack was first born that, IMHO, call for a change in some of the fundamental strategies of many of the OpenStack players.

Amazon was quick to realize the threat and moved remarkably fast up the stack, maintaining enough of a gap to preserve its leadership position.

VMware's response, on the other hand, was fairly poor, despite various attempts to move up the stack, including the major acquisition of SpringSource for $400M, followed by a few other failed attempts, such as VMforce, which was later forked into Cloud Foundry, which recently moved out to a new Pivotal spin-off under EMC. Now VMware seems to be taking another turn with its continuously changing cloud strategy, but without real substance. This creates lots of skepticism in the market over their ability to execute their plans, as noted in Bernard Golden's report: VMware Tantrum Shows It's Not Connecting With Cloud Buyers or Sellers

Like VMware, many of the traditional enterprise players are now threatened by Amazon and need to fight back to defend their current position in the enterprise space.

From a technology perspective, the gap between OpenStack and VMware's offering is much smaller than the gap with AWS. Most of those gaps can be easily bridged by the likes of HP, Red Hat, Ubuntu, IBM and Rackspace.

All this makes VMware the right target for OpenStack providers rather than Amazon, at least at this point in time. With that in mind, the right strategy for OpenStack providers is to offer a VMware alternative based on OpenStack and have both a private and public offering of OpenStack that can be easily plugged together as a single compute cloud.

March 14, 2012

Earlier this month, Zynga announced its move from Amazon AWS to its own private zCloud. Sony has also started to move increasing parts of its workload from Amazon to Rackspace OpenStack.

There isn't much in common between these different use cases, except for the fact that they may indicate the beginning of a trend (I'll get back to that toward the end) where companies start to take more control over their cloud infrastructure.

So what really brought Zynga and Sony to make such a move?

Zynga moves from Amazon to their own private cloud

Zynga ran their gaming services on Amazon for a while. It was noted that running these games on Amazon cost Zynga $63M annually. This cost, and the continuous increase of their workload, forced Zynga's management to look for ways to control their cloud operational costs. Zynga realized that at that scale, they were simply better off building their own cloud infrastructure. By doing so, they could also optimize the cloud infrastructure for their own needs and substantially reduce their operational cost margins as a result.

According to Zynga, their private cloud operation is reported to increase utilization by 3x, which means that they need only a third of the servers that they would need from Amazon for the same workload, as noted by CTO Allan Leinwand on Zynga's engineering blog:

For social games specifically, zCloud offers 3x the efficiency of standard public cloud infrastructure. For example, where our games in the public cloud would require three physical servers, zCloud only uses one. We worked on provisioning and automation tools to make zCloud even faster and easier to set up. We’ve optimized storage operations and networking throughput for social gaming. Systems that took us days to set up instead took minutes. zCloud became a sports car that’s finely tuned for games.

What's interesting in this case is the cost analysis. Quite often when we measure the cost of running machines, we don't measure the cost of running a particular workload, which is a combination of many factors, not just the cost of servers.
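A back-of-the-envelope sketch makes the point. All of the numbers below are made up for illustration; only the 3x utilization ratio comes from Zynga's post:

```groovy
// Hypothetical numbers; only the 3x utilization ratio is taken
// from Zynga's engineering blog.
def publicCostPerServerHour  = 0.50   // $/hour, assumed
def privateCostPerServerHour = 0.65   // $/hour, assumed higher per server
def utilizationGain          = 3      // zCloud reportedly runs 3x denser

def serversOnPublic  = 3000           // servers the workload needs, assumed
def serversOnPrivate = serversOnPublic / utilizationGain

printf('public:  $%.2f/hour%n', serversOnPublic  * publicCostPerServerHour)
printf('private: $%.2f/hour%n', serversOnPrivate * privateCostPerServerHour)
```

Even with a higher assumed per-server cost, the denser private cloud comes out at less than half the per-workload cost, which is why measuring servers instead of workloads can point you at the wrong choice.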

The other interesting thing about Zynga's move is that they didn't make it just to cut costs. The move was part of a more strategic direction to create a gaming platform for additional games. Apparently, when your service becomes a platform, controlling your infrastructure becomes more strategic than the cost factor alone. It's a big differentiator that can put Zynga in a completely different ball game than its competitors; it will also enable them to better control their dependency on Facebook and build their own independent ecosystem.

According to Dave Wehner, CFO at Zynga, the company will lower its cost of revenue over the next 18 to 24 months as third-party hosting costs decrease. Wehner said that Zynga plans to “roll off the majority of those third-party hosting arrangements.” Zynga’s capital expenditures in the fourth quarter were $50 million, down from $63 million in the third quarter. Most of that spending is focused on zCloud. For 2011, capital investments were $238 million.

Building its own infrastructure will also help the bottom line. Zynga can depreciate its gear and lower quarterly expenses. "We believe this investment will have a short payback period and enable us to expand gross margins in the long term," said Wehner.

The important thing to note is that Zynga's move wasn't because of an Amazon failure, as it may first read. It's more a natural step in the evolution and maturity cycle of the company.

Sony moves from Amazon to Rackspace/OpenStack

Sony's gaming arm faced a number of security breaches last year, as reported by NetworkWorld, which compromised the personal identity information of millions of players within Sony's gaming network. This event forced Sony to look for ways to gain better control over their infrastructure exposure. Their approach was to move some of their workload to Rackspace's OpenStack.

By splitting their operations between Amazon and Rackspace, they are better able to contain the failure of a particular cloud. They are also better positioned to control their cloud infrastructure costs by reducing their dependency on a particular cloud provider, thus being in a stronger position to negotiate their cloud operational costs.

My take:

We're often taught that infrastructure is a commodity that we shouldn't care much about, and that we should essentially outsource everything. It turns out that as the cost of infrastructure gets bigger, and as our needs become more unique, controlling the infrastructure becomes a critical part of our business. Controlling your infrastructure could mean having your own private cloud, as in the case of Zynga, or minimizing your dependency on a particular cloud provider, as in the case of Sony. The good news is that there are enough free OSS tools out there, such as Chef, Puppet, and Cloudify, that can help reduce the complexity that is often involved in such a move.

The shift in the thinking about the enterprise cloud consumption also poured water into the “DevOps” concept advocated by vendors and pundits with their foot in the IaaS world. When organizations embrace PaaS instead of infrastructure services, we don’t need the DevOps marriage and the associated cultural change (believe me, this cultural change is giving sleepless nights to many IT managers and some consultants are even making money helping organizations realize this cultural change). With PaaS, organizations can keep the existing distinction between the Ops and Dev teams without worrying about the cultural change. In fact, with cloud computing, the role of the Ops is not going away but it stays in the background offering an interface which developers can manage themselves.

Krishnan represents one of the common attitudes and subjects of debate between two main paradigms for developing and managing applications on the cloud:

PaaS -- PaaS takes a developer-centric, application-driven approach. A PaaS platform provides generic application containers to run your code, and deals with all the operational aspects needed to run it, such as deployment, scaling, fail-over, etc.

DevOps -- DevOps takes a more operations-driven approach. With DevOps, you get tools to automate your operational environment through scripts and recipes, as in the sketch below, while keeping full visibility and control over the underlying infrastructure.
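To make the DevOps side concrete, here is a deliberately tool-agnostic sketch; real tools such as Chef and Puppet add idempotence, templating, and dependency resolution on top of this basic idea of an ordered, repeatable list of provisioning steps:

```groovy
// Tool-agnostic sketch of a DevOps-style recipe: an ordered,
// repeatable list of provisioning steps executed on a node.
// Package and service names are placeholders.
def steps = [
    'apt-get -y install mysql-server',
    'apt-get -y install tomcat7',
    'service mysql start',
    'service tomcat7 start'
]

steps.each { cmd ->
    println "running: $cmd"
    def proc = cmd.execute()          // Groovy shorthand for Runtime.exec
    proc.waitFor()
    if (proc.exitValue() != 0) {
        throw new RuntimeException("step failed: $cmd")
    }
}
```

The operations team owns and versions these steps, which is exactly the visibility and control that a black-box PaaS takes away.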

The Difference Between PaaS and DevOps

Both PaaS and DevOps aim toward the same goal -- reducing the complexity of managing and deploying applications on the cloud. But they take a fairly different approach to deliver on that promise.

Developers may ask: "if I have a self-service portal for deploying applications (aka PaaS), do I need SysAdmins at all?"

SysAdmins may ask: "isn't PaaS just a monstrous black box that prevents me from provisioning the specific services we need to deploy real-world apps?" ... The typical SysAdmin thinks that they can get to 75% of PaaS functionality with DevOps tools like Chef without giving up any systems architecture flexibility.

I thought that Carlos Ble's post Goodbye Google App Engine (GAE) is a good example that illustrates why the initial perception of GAE as a simple platform that provides extreme productivity can be completely wrong.

...developing on GAE introduced such design complexity that working around it pushes us 5 months behind schedule.

Part of the reason Carlos went through that experience, IMO, is that in the course of trying to make GAE extremely productive, its owner made the platform too opinionated, to the point where you lose all the potential productivity gains while trying to adopt their model. In addition, with a platform like GAE you have very little freedom to leverage existing frameworks, such as your own database, messaging system, or any other third-party service, each of which can in itself be a huge contributor to productivity.

Instead, you're completely dependent on the platform provider's stack and pace of development, and that in itself can work against agility and productivity in yet another dimension. In this specific example, Carlos couldn't use a specific version of a Python library that would have made him more productive, and instead had to work around issues that were already solved elsewhere. This is a good example of how a lack of flexibility leads to poorer productivity, even in the case of simple applications.

Putting DevOps and PaaS Together

It looks like more people in the industry have come to recognize that rather than looking at DevOps and PaaS as two competing paradigms, it might be best to combine the two, as Christopher Knee pointed out in his post:

What if you could get a PaaS that wasn't a black box, enabling developers to deploy apps easily while still giving SysAdmins the ability to provision any services they needed (a la Cloud Foundry)?

In 2012, we'll see many of the DevOps tools, such as Chef and Puppet, integrated into application platforms, making it easier to deploy complex applications onto the cloud. In the same way, we're going to see more application platforms adopting the automation and recipe model from the DevOps world. The latter has the potential to transform the opinionated PaaS offerings as we know them today, with Heroku and GAE leading that trend, into more open PaaS offerings that better fit the way users develop apps today and provide more freedom to choose your own stack, cloud, and application blueprint.

What Makes Cloudify and CloudFoundry PaaS Jailbreakers?

A PaaS jailbreaker is a platform that allows developers to use whatever tools they want to build their cloud applications, while the platform tackles the deployment, scaling, and management of these apps in the cloud data center.

VMware CloudFoundry is one of the more notable references in that category. Quoting Christopher Knee:

CloudFoundry runs anywhere, including on your laptop. CloudFoundry's service container concept is particularly strong, kind of an appliance on steroids.

These ideas were the founding concept behind Cloudify, i.e., putting DevOps and PaaS together in a single framework. As with CloudFoundry, Cloudify enables you to break away from the "black-box PaaS". However, even though CloudFoundry is significantly more open than most other PaaS alternatives, at its core it is still based on a "my way or the highway" approach (aka an opinionated architecture), which forces you to fit into a specific blueprint mandated by the platform. Cloudify, on the other hand, pushes the envelope even further by adopting the concept of recipes that was first introduced by DevOps frameworks such as Chef and Puppet. It introduces more application-driven recipes through a Domain Specific Language that extends the Groovy language.

The Cloudify recipes give you the full power to plug in any application stack on any cloud (including a non-virtualized environment) and manage it in a way similar to how you would manage it in your own data center or on your own machines. You can also call Chef and Puppet from within the recipes. All this, without hacking the framework itself. As with other similar DSLs, the Cloudify DSL was designed to express even the most complex application management tasks, such as recovery from a data center failure, in a single line, avoiding all the verbose scripting and API calls that are often involved when you work at the infrastructure level.
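For a feel of the DSL, here is a rough sketch of a Cloudify-style service recipe. The syntax is approximate and the lifecycle script names are placeholders, but it shows the shape: a named service plus lifecycle hooks that point at plain scripts:

```groovy
// Approximate sketch of a Cloudify-style service recipe (Groovy DSL).
// Script names and properties are illustrative placeholders.
service {
    name "mysql"
    numInstances 1

    lifecycle {
        install "mysql_install.groovy"   // fetch and configure the server
        start   "mysql_start.groovy"     // launch mysqld
        preStop "mysql_stop.groovy"      // graceful shutdown
    }
}
```

Because the lifecycle hooks are ordinary scripts, this is also the natural place to shell out to Chef or Puppet, as mentioned above.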

All this makes Cloudify an even more open alternative that fits a large variety of the current enterprise application stacks, including:

JEE applications

Big Data applications

Multi-tier applications

Native applications (C++, etc.)

.Net, Ruby, Python, PHP applications

Multi-site applications

Low-latency applications (that can't run on VMs)

It also makes Cloudify more open for special customization of the existing stack, like in the case of:

An application that needs a certain version of MySQL (not the one that comes with the framework)

An application that needs to run on Red Hat (not Ubuntu), or, even more interesting, a case where there are multiple applications, each needing a different OS, served at the same time.

Cloudify also provides a more advanced level of control geared for mission-critical apps, including:

Monitoring the application stack and topology

Adding custom application metrics

Adding custom SLAs

It can also work in a wide variety of cloud environments, including Microsoft Azure and non-virtualized environments.

One of the great powers of the recipe is that it is a great collaboration tool. Once you develop a recipe, it is very easy to share it with different groups -- whether internal groups like development, QA, and operations, where the recipe provides a programmatic definition of their environment, or between the product team and the professional services team, where sales and pro-services can easily install and update product versions in a consistent way, as well as reproduce customer scenarios and share them with the support team. Recipes are also a great tool for collaboration across a wider community network, where users can collaborate by sharing common recipes and best practices over the web.

Quick Introduction to Cloudify Recipes

Below is a short snippet showing a simple application recipe for a typical Java-based web application, with JBoss as the web container and MySQL as the database. The application recipe describes the services that comprise the application and their dependencies. The details of how to run MySQL and JBoss are provided in a separate recipe for each of the individual services. A more detailed description of what a service recipe looks like can be seen here.
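A minimal sketch of such an application recipe follows; the application name is a placeholder and the syntax is approximate:

```groovy
// Approximate sketch of a Cloudify application recipe (Groovy DSL).
// The application name is a placeholder; each service points at its
// own service recipe that knows how to install, start, and monitor it.
application {
    name "petstore"              // hypothetical application name

    service {
        name "mysql"             // database tier
    }

    service {
        name "jboss"             // web tier, started only after mysql
        dependsOn = ["mysql"]
    }
}
```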

To get a taste, you can try one of the available recipes, such as Cassandra, MongoDB, Tomcat, JBoss, Solr, etc., simply as a way to try out these products on your own desktop or on any of the supported clouds, without the hassle that is often involved in doing so, and without even a direct relation to Cloudify per se.

November 15, 2011

Three years ago, when we started working on the first generation of our PaaS offering, cloud portability seemed pretty much "mission impossible." At the time, we made a conscious decision to focus only on Amazon for our first-generation PaaS, as it was practically the only cloud in town.

Now, there are many different public and private cloud offerings: platforms like GoGrid, VMware, Citrix/Xen, and Cisco UCS, with recent additions being OpenStack, Cloud.com, and Microsoft Azure. Frameworks like JClouds have been developed to make cloud portability an easier goal to reach.

As a result, cloud portability is not only a possibility, but it's easily done with the right constraints in mind.

Having said that, there are still too many options from the standpoints of API standardization, portable virtual machines, abstraction frameworks, orchestration frameworks, etc. None of them fully addresses the challenge, and therefore a solution has to combine these options into one cohesive unit. Finding the right combination of features is a pretty tricky challenge and involves lots of trial and error, as the chances that you'll pick the wrong ones are still pretty high, as we experienced ourselves.

In this and the next few posts, I want to share some of our experiences developing Cloudify, our unified cloud deployment and management tool. I will start by covering five common misconceptions people have.

Five Common Misconceptions on Cloud Portability

1. Cloud portability = Cloud API portability

This is perhaps the most important observation in this entire discussion. When people think of cloud portability, the immediate thing that comes to mind is cloud API portability - a standard API, or a common abstraction, that maps all the different APIs into one common façade, right? Well... wrong.

What you're really looking for is the ability to run your application on different cloud platforms without any code change. Having a common or standard API is one way to achieve that goal, but not necessarily the most practical one, given the speed at which cloud APIs evolve. This brings me to the first observation: Application Portability != Cloud API portability. Let me explain:

Most applications don't need to interact with the cloud-specific API from the application code itself. Most of the cloud API deals with things that happen outside of the application code, like starting new virtual machines or services, or providing elasticity. In many cases, systems management tools are the primary cloud-specific aspect of a given platform; most clouds provide support for managed middleware like MySQL (RDBMS), Tomcat, memcached, or even map/reduce (through Hadoop, for example). The mechanisms for using these services are standard and common, but the APIs for managing them are not. (There are exceptions, such as SimpleDB and SQS, which do use proprietary and localized APIs. I believe that over time you'll see less and less of this, as there will be enough non-proprietary options to restrict their use.)
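A small sketch shows why the application code itself tends to stay portable. An app that talks to a managed MySQL service speaks the standard MySQL wire protocol; only the endpoint differs between, say, a managed database and a self-hosted one. The hostname and credentials below are placeholders:

```groovy
import groovy.sql.Sql

// The application speaks standard JDBC/MySQL regardless of who manages
// the database; the endpoint and credentials are placeholders.
def db = Sql.newInstance(
    'jdbc:mysql://db.example.com:3306/shop',   // only this differs per cloud
    'appuser', 'secret',
    'com.mysql.jdbc.Driver')

db.eachRow('SELECT 1 AS ok') { row -> println "db reachable: ${row.ok}" }
db.close()
```

What differs per cloud is how that database gets provisioned, scaled, and backed up, and that logic lives outside the application.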

The problem of dealing with application portability between clouds is vastly different from the problem of dealing with API portability. Application portability is attainable; cloud API portability is not.

I’ll spend more time on how we can use this realization to simplify the task of achieving cloud portability in a follow-up post.

2. The main incentive for Cloud Portability is avoiding vendor lock-in

Interoperability and vendor lock-in were ranked the next most significant challenges, both with 25% of responses. Though we saw vendor lock-in fade a bit as a concern for customers two to three years ago, it has risen again as a major issue in cloud computing. We believe this is in part because of the early nature of cloud computing and a desire from users to avoid getting stuck with a cloud vendor, framework or technology.

Many (myself included) view cloud portability as a means to address vendor lock-in, and see lock-in avoidance as the main driver for cloud portability. If we can run our application on any cloud without any code change, we can in fact avoid vendor lock-in.

This brings me to the second realization: cloud portability is more about business agility than it is about vendor lock-in. Let me explain:

It is not very likely that you would switch entirely from one cloud provider to another, at least not frequently. If you are going to change cloud providers, it's going to be more of a one-off event than something that you would practice on a regular basis.

On the other hand, it is much more likely that you would use cloud portability to choose the right cloud for the job. A common example would be running your tests on a public cloud and production on your own private cloud. Another example would be running your demos and trials on the "cheapest" cloud and production with a cloud provider that offers a higher service level. A more advanced scenario would be using cloud portability for cloud bursting.

In other words, cloud portability is about business flexibility. It gives us more choice among the cost, SLA, and security tradeoffs of the different cloud offerings and product lines. Portability between our local data center and the cloud also helps make the transition to the cloud smoother.

3. Cloud portability isn’t for startups

If you're a startup, when the discussion of cloud portability comes up, the immediate reaction is often "hmm... interesting, but not for me." Being part of a startup company myself, I think I can relate to that reaction. As I mentioned before, when I faced the choice of dealing with cloud portability a few years back, it was one of the first items I took off my list of TODOs to meet our time-to-market goal, and I settled on Amazon.

Recently, I came across different cases that show that dealing with cloud portability is often forced on us even when we don't plan for it.

The first case is MixPanel, a web analytics company that switched from Rackspace to Amazon:

When MixPanel started up, cost mattered, as they were paying out of their own pockets. Every dollar counted, and the right choice at the time was Slicehost. As their business grew, MixPanel switched from Slicehost to Linode, and later to Rackspace due to a Y Combinator deal. As their business grew further, they switched to Amazon, which happened to provide more features and greater flexibility.

First was Slicehost back when everything was on a single 256MB instance.. Second was Linode because it was cheaper (money mattered to me at that point). Lastly, we moved over to the Rackspace Cloud because they cut a deal with Ycombinator... Even with all the lock in we have with Rackspace (we have 50+ boxes)..it’s really not about the money but about the features and the product offering, here’s why we’re moving”

In the case of BeaTunes, the trigger for switching off of Amazon was an Elastic Block Store (EBS) crash. During that time, they had to rely on the basic support that Amazon provided. Given that BeaTunes is a small startup, they had almost no leverage to escalate their issue fast enough for the business need, and had to hope that their particular issue would hit enough users in the forum to attract the attention of Amazon's support staff.

They ended up choosing EQ8. Being a smaller host than Amazon made EQ8 a better fit for BeaTunes, a small startup itself. Even a small startup can run mission-critical software, and when there is a failure, it is even more important to get a timely response, as your entire future may rely on it. Unlike big companies, you often don't have the means to survive such failures easily. It also happened that by switching to EQ8, BeaTunes could choose a better hardware configuration that fit their needs, at half the price of the original services!

These are only two examples, representative of a much bigger trend in the industry at large.

The interesting thing in both MixPanel's and BeaTunes' histories is that their choice of cloud changed over time, due to changes in their maturity as a company as well as changes in business requirements involving cost, support level, flexibility, feature set, etc. At each point in time, the right cloud happened to be a different cloud.

Personally, I found the issue around supportability in the case of BeaTunes quite interesting, as quite often we tend to go for the big brand, thinking that it's the "safest bet," when perhaps the right choice would be the provider that fits our size (and stage).

The main point in both examples is that even during a relatively short period of time, the two startups found themselves switching from one cloud to another. In the case of MixPanel, this happened four times during the company's lifetime.

The thought that you can avoid such a move between cloud platforms is likely too optimistic, even if you're a startup. Given that there are more options for dealing with cloud portability today, and that the effort is not as great as it was a few years back, I would encourage every startup that expects rapid growth to re-examine its deployment and plan for cloud portability, rather than wait to be forced to make the switch when least prepared to do so.

4. Cloud portability = Compromising on the least common denominator

By definition, standards tend to lag behind implementations. Standards are therefore a compromise on the least common denominator. In a dynamic environment, such as the one we're experiencing with cloud today, compromising on the least common denominator is a choice that may come at the great cost of "reinventing" things that are already available outside of the standard or common API.

If we think of application portability, we don’t need to compromise on the least common denominator as most of the interaction with the cloud API happens outside of our application code anyway, to handle things like provisioning, setup, installation, scaling, monitoring, etc.

There are orchestration tools, such as Chef and Puppet, that can provide higher levels of abstraction for automating those processes across clouds without relying on a common cloud API. Frameworks like JClouds provide common abstractions for the common set of APIs (such as the compute and storage APIs) while still allowing the application to interact with the underlying cloud-specific APIs, thus minimizing the areas of difference between clouds that require specific handling.
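As a hedged illustration of that pattern, here is roughly what the portable path looks like with the jclouds ComputeService API as I understand it from the 1.x releases; the credentials and group name are placeholders:

```groovy
import org.jclouds.ContextBuilder
import org.jclouds.compute.ComputeServiceContext

// Swap 'aws-ec2' for another provider id such as 'openstack-nova';
// the rest of the code stays the same. Credentials are placeholders.
def context = ContextBuilder.newBuilder('aws-ec2')
        .credentials('my-identity', 'my-credential')
        .buildView(ComputeServiceContext)

def compute = context.computeService

// Portable constraints (e.g., minimum RAM) instead of provider flavors.
def template = compute.templateBuilder()
        .minRam(2048)
        .build()

// jclouds translates this into the provider-specific calls
// (EC2 RunInstances, Nova boot, and so on).
def nodes = compute.createNodesInGroup('webserver', 2, template)
nodes.each { println "${it.id} -> ${it.publicAddresses}" }

context.close()
```

Only where the portable abstraction falls short do you drop down to the provider-specific API, which keeps the cloud-specific surface of the codebase small.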

5. The effort to achieve cloud portability far exceeds the value

Many would argue that cloud portability comes with a cost, and rightly so. Indeed, there is no such thing as a free lunch, and cloud portability is no exception.

However, the cost isn’t as great as many of us think, especially now that there are more tools and frameworks available.

In most cases, the effort to achieve cloud portability is far less than it used to be, making it a more valuable priority, with less investment, than before.

Final notes

The term "cloud portability" is often considered a synonym for "Cloud API portability," which implies a series of misconceptions.

If we break away from that dogma, we find that what we're really looking for in cloud portability is application portability between clouds, which can be a vastly simpler requirement, as we can achieve application portability without settling on a common cloud API.

As in the cases of MixPanel and BeaTunes, choosing the right cloud for the job may vary over time. The right cloud when we create a new startup can be different from the right cloud when we grow, and if we're very successful, we may find that managing our own cloud is the right choice.

If we focus only on what’s needed to ensure application portability between clouds, we may find that cloud portability can be easier than it seems at first glance. If done correctly, it can result in greater flexibility for our businesses:

Choose the right cloud for the job

Reduce vendor lock-in

Enable advanced deployments, such as hybrid cloud and cloud bursting

In the next post I’ll touch on what it takes to turn cloud portability into a practical reality.