Do Amazon’s APIs matter?

For those who have been wondering where I personally stand in the brouhaha over Amazon, Citrix, Eucalyptus, CloudStack, OpenStack, Rackspace, HP, and so on, along with the broader competitive market that includes VMware, Microsoft, and the Four Horsemen of management tools… I should state up-front that I hold the optimistic viewpoint that I want everyone to be as successful as possible — service providers, commercial vendors, open-source projects, and the customers and users that depend upon them.

I feel that the more competent the competition in a market, the more that everyone in the ecosystem is motivated to do better, and the more customers benefit as a result. Customers benefit from better technology, lower costs, more responsive sales, and differentiated approaches to the market. Clearly, competition can hurt companies, but especially with emerging technology markets, competition often results in making the pie bigger for everyone, by expanding the range of customers that can be served — although yes, sometimes weaker competitors will be culled from the herd.

I believe that companies are best served by being the best they can be — you can target a competitor by responding on a tactical basis, and sometimes you want to, but for your optimal long-term success, you should strive to be great yourself. Obsessing over what your competitors are doing can easily distract companies from doing the right thing on a long-term strategic basis.

I’ve been thinking about the implications of Amazon API compatibility, and the degree to which it is or isn’t to Amazon’s advantage to encourage other people to build Amazon-compatible clouds.

I think it comes down to the following: If Amazon believes that they can innovate faster, drive lower costs, and deliver better service than all of their competitors that are using the same APIs (or, for that matter, enterprises who are using those same APIs), then it is to their advantage to encourage as many ways to “on-ramp” onto those APIs as possible, with the expectation that they will switch onto the superior Amazon platform over time.

But I would also argue that all this nattering about the basic semantics of provisioning bare resource elements is largely a waste of time for most people. None of the APIs for provisioning compute and storage (whether EC2/S3/EBS or their counterparts in other clouds) are complicated things at their core. They’re almost always wrapped with an abstraction layer, third-party library, or management tool. However, APIs may matter to people who are building clouds, because they implicitly express the underlying conceptual framework of the system, and the richness of the API semantics constrains what can be expressed and therefore what can be controlled via the API; the constraints of the Amazon APIs force everyone else to express richer concepts in some other way.
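The point that raw provisioning semantics are simple and routinely wrapped can be made concrete with a sketch. Everything below is hypothetical — the class and method names are illustrative, loosely modeled on the pattern that libraries like libcloud use, not any real library’s API:

```python
# Sketch of an abstraction layer over provider-specific provisioning APIs.
# All names here are illustrative, not a real library's interface.

from dataclasses import dataclass


@dataclass
class Server:
    provider: str
    name: str
    size: str


class AmazonStyleDriver:
    """Would translate generic calls into EC2-flavored parameters."""

    def create(self, name: str, size: str) -> Server:
        # A real driver would call something like RunInstances here,
        # mapping `size` onto an instance type.
        return Server(provider="amazon", name=name, size=size)


class OtherCloudDriver:
    """Would translate the same generic calls into another provider's terms."""

    def create(self, name: str, size: str) -> Server:
        # A real driver would map `size` onto this provider's flavor IDs.
        return Server(provider="other", name=name, size=size)


def provision(driver, name: str, size: str) -> Server:
    """Callers program against one interface, regardless of provider."""
    return driver.create(name, size)


if __name__ == "__main__":
    for driver in (AmazonStyleDriver(), OtherCloudDriver()):
        server = provision(driver, name="web-1", size="small")
        print(server.provider, server.name)
```

The interesting consequence, consistent with the argument above, is that anything the underlying API cannot express (richer placement, guarantees, policies) simply cannot surface through a wrapper like this; it has to be expressed some other way.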

But the battle will increasingly not be fought at this very basic level of ‘how do I get raw resources’. I recognize that building a cloud infrastructure platform at scale and with a lot of flexibility is a very difficult problem (although a simple and rigid one is not an especially difficult problem, as you can see from the zillion CMPs out in the market). But it’s not where value is ultimately created for users.

Value for users is ultimately created at the layers above the core infrastructure. Everyone has to get core infrastructure right, but the real question is: How quickly can you build value-added services, and how well does the adaptability of your core infrastructure allow you to serve a broad range of use cases (or serve a narrow range of use cases in a fashion superior to everyone else) and to deliver new capabilities to your users?

Thanks for an insightful opinion. I agree that massive value for customers will be created above the core infrastructure. But will that mean that a battle won’t be fought on the basic level? Will that mean that value won’t be created on the IaaS layer? I would argue that massive value is emerging on the IaaS layer, and that the API matters to application workloads needing deployment freedom.

Professor Clayton Christensen talks about “the law of conservation of attractive profits”. His thesis is that in a value chain, the most value (and hence, the biggest profits) is generated in segments that are difficult to do. This difficulty and value-generation shifts to different portions of the value chain over time. Generally speaking, in the IT industry, first it was hardware. Then it was software. Now hardware is difficult again, and storage startups – to give an example – see great opportunities to create value.

So is infrastructure orchestration difficult to do, and is it valuable to the customer? I’d say yes and yes. Amazon Web Services is 6 years old and still without significant competition. It must be difficult to build such a service. On the on-premise side, the most mature solutions are in their fifth year of development. Other prominent attempts are commonly seen as not production-ready yet despite years of development. Notwithstanding these long times to technology maturation, customers are eager to deploy. AWS is seeing stellar growth, and Gartner’s Tom Bittman predicts that on-premise solutions will grow 10x in 2012. It seems that IaaS is both difficult to do and important to customers.

But even if we may agree that IaaS is a valuable battleground in its own right, your question was actually not about value creation but whether a common API matters.

That’s a very good question, and the jury may still be out on it. Some people will argue that in the situation we have, interoperability is not in the interest of vendors. If the market is lucrative and the product is difficult to build, it traditionally has meant that vendors have been unwilling to standardize across an API or other commonality (case in point: Apple now and Microsoft 30 years ago). They standardize only once the market matures.

But in IaaS (including both public and private cloud), we are already seeing standardization around the AWS API. Why is that? Does it mean that this part of the market already is mature? Or are vendors seeing a major value in API commonality in this early and tumultuous phase of the adoption of IaaS?

In my view, IaaS API compatibility is not a question of the maturity of the market. It is a question of an important duality. Public clouds will take over a significant portion of computing in the world. As they do so, they will drive on-premise environments to follow the same paradigm. Some will argue that for public clouds to reach their full potential, there needs to be a pendant – a counterpart – inside the firewall. Public clouds will need on-premise IaaS environments, and on-premise IaaS environments will need the public cloud. Applications will be free to roam. Otherwise both public and private clouds will fall short of their potential.

I think that the base IaaS layer (the on-demand infrastructure fabric layer, to use the parlance I introduced at the NEA event) is absolutely going to be a layer at which a great deal of value is created, especially if you go beyond the very basic layers of a cloud management platform (CMP). Moreover, it is a problem that has to be solved efficiently in order to ensure that the service provided (whether via an external service provider, or internal IT), is as cost-effective as possible.

In the short term, I think the leading IaaS providers are going to need to have significant control over the solution they build, until the turnkey software solutions are truly complete and mature — many years out, I think. (I compare this to the way Parallels has gradually come to replace custom-written solutions for mass-market shared hosting.) I also believe that open source is going to play a significant part in the development of these solutions, because the leading providers cannot afford to wait for vendor development — or for vendor bugfixing of critical issues.

I believe that a common API is valuable, but the really interesting question is whether that common API is really the direct “raw” API — the Amazon API, the OpenStack API, etc. — or some higher-level API that is likely more abstract and perhaps richer. Of course, that simply shifts where lock-in is created. For instance, our clients who worry about Amazon lock-in normally explain that they’re not actually locked into Amazon — rather, they’re locked into RightScale.

Interesting thoughts and discussion. I do think that the answer to the question “Do Amazon’s APIs matter” largely depends on the context in which it is posed. IaaS is a very nascent industry, and there are two main movements: enterprises with existing VMware experience are looking at the possibility of supplementing their resources with capacity from IaaS providers. On the other hand, there is a significant number of people who have had their Amazon experience and are looking to bring some of their shadow IT projects and steady-state workloads in house. There is a great deal of momentum on both sides, and despite the dominance of VMware on the private side and Amazon on the public side, it is way too early to call a winner. It is a battle of APIs, and both Amazon and VMware are encouraging their adoption.
What is telling is the way they are encouraging the adoption of their respective APIs. Amazon chose Eucalyptus, and not any of the larger CSPs, as their partner, likely because Eucalyptus would bring the Amazon API to the private cloud without being any kind of a threat or distraction on the public cloud side. VMware, at the same time, is working hard on building an ecosystem of public cloud providers that enterprises can see as being API-compatible with their on-premises VMware infrastructure.
With Citrix’s announcement of moving CloudStack to Apache, there has been a lot of debate about the impact of this move on OpenStack. I think the Citrix move is going to have a greater impact on the Amazon vs. VMware API race than on OpenStack. In other words, the future landscape of IaaS, and possibly cloud computing in general, has a lot to do with Amazon’s success in propagating their API.

I appreciate your insights in this post, and also Marten’s in his reply. I agree with many of the points made.

One of the things that I think has been somewhat lost in the general Stack news over the past few weeks is that I firmly believe that large enterprises — those that run complex, heterogeneous (both scale-out and legacy app) compute landscapes — will *first* buy on a cloud stack’s ability to meet required enterprise performance attributes and economics, as opposed to its being part of a particular ecosystem, being under particular governance, and/or having embraced a particular set of APIs.

We see a mix of public and private solutions dictated by application use case requirements and the need to actually bring real cloud (versus virtualization 2.0) solutions to the legacy enterprise apps which represent such substantial workload volumes in these enterprise landscapes — things such as intelligent/dynamic app server sizing, IOPS guarantees within multi-tenant architectures, latency SLAs, VM “optimization”, etc. It is our burden as service providers to deliver the most suitable cloud solution to these varying application use cases, as efficiently as possible, under an intelligent software management layer. Proper scale-out capabilities are of paramount importance for modern applications within these landscapes, and yes, interoperability to freely move workloads to alternate providers will be an absolute value driver — but not at the expense of sacrificing the core feature capabilities of the underlying base layer.

We are finding this to be the focus of our F500 client and prospect base right now.
