Summary

At its AWS re:Invent conference in November 2012, Amazon presented a compelling story about how it intends to permanently alter the enterprise computing landscape generally and infrastructure services specifically. Executives from Amazon were clear that they had no intention of mimicking the business models of traditional enterprise service providers. Rather they intend to disrupt the current order and bring Amazon’s low margin retail mentality to enterprise IT.

As a result, IT organizations (ITOs) will see continued pressure from CEOs and “Shadow IT” to improve agility, simplify IT operations, and cut costs. While prudent CIOs will focus on data governance, privacy, security, and other organizational risks associated with moving to the Public Cloud, the reality is that intensified partnerships with cloud service providers are probably in their future.

Recent Wikibon research shows that successful cloud service providers will not try to take Amazon head on in the infrastructure-as-a-service (IaaS) space. Rather they will put forth a clear value proposition to CIOs that delivers business value through a partner ecosystem. The most appealing to enterprise IT customers will come from services that are either industry-focused and/or best-of-breed and offer significantly better shared-risk models than Amazon.

Research Methodology

Research for this study consists primarily of in-depth interviews with approximately 25 customers and cloud service providers. Our main focus was on companies providing or considering outsourced infrastructure services (e.g. IaaS). We also spoke with some software-as-a-service (SaaS) players to understand their infrastructure requirements. However this group was not a main area of focus. We complemented this primary research effort with background study using publicly available sources. This effort included an analysis of service contracts for both Amazon AWS and other cloud service providers (CSPs).

This research had four main objectives:

Understand the impact Amazon AWS is having and will have on enterprise infrastructure deployments going forward;

Assess Amazon’s strategy and its applicability for CIOs at mid- and large-sized organizations;

Evaluate the risks associated with moving to Amazon’s AWS and assess the viability of alternatives – both internal and external;

Provide a framework for our CIO members to assess the changing nature of cloud service provision throughout the year.

Research Premise

We have evolved the following premise for this research effort:

Amazon’s aggressive entrance as a horizontal player in the enterprise IaaS market will put new demands on organizations to further cut costs and improve agility. While prudent CIOs will focus on data governance, privacy, security, and other organizational risks, enterprise service providers will not take Amazon head on. Rather they will put forth a clear value proposition to CIOs that delivers business value through a partner ecosystem. Success will come from services that are industry-focused and/or best-of-breed.

Figure 1 – Research Premise. Source: Wikibon 2013

Amazon is the new enterprise gorilla, generating nearly $2B in AWS revenue in 2012. We estimate that in 2013 AWS generated $3B in revenue. Subsequent to posting this research in 2012, Amazon has provided access to certain executives and customers. In addition to these data points, we assessed the viability and effectiveness of Amazon’s strategy by observing company statements and reviewing Amazon documentation. We talked to customers and competitive service providers to capture their views.

The implications of our premise for Wikibon CIO members are as follows:

Amazon’s strategy is to compete on the basis of massive scale. Very few, if any, IT organizations and competitors will be able to match Amazon’s size, cost structure, and pace of functional delivery.

The complex nature of enterprise IT infrastructure is a reality that Amazon will try to minimize by picking off the “low hanging fruit.” This will naturally mean a good fit for developers, startups, and smaller businesses, but core enterprise IT apps are best not placed on AWS for the foreseeable future.

Amazon marketing, however, will attack enterprise IT suppliers as “gross margin pigs.” This could have a negative ripple effect on IT organizations much in the same way as Nicholas Carr’s incendiary article and subsequent book “Does IT Matter?” did.

The reality is that typically enterprise IT supports hundreds or even thousands of apps and tens-of-thousands of users, whereas Amazon success stories (e.g. Netflix) typically are with a small number of apps supporting large numbers of users. CIOs must exercise caution and find ways to communicate this dissonance to executive management.

In the next five years, CIOs and their ITOs will spend more time architecting and brokering cloud services and less time setting up and managing infrastructure plumbing. Understanding the right strategic fit and the best “horses for cloud courses” will become more important.

Assessing the Amazon Web Services Value Proposition

Amazon launched its AWS offering in 2006 providing infrastructure services to organizations as Web-based services. This service catalyzed what is commonly known today as cloud computing. AWS brought an alluring and compelling value proposition to the table. In particular, it promised to shift CAPEX to OPEX and provide substantially improved agility and elasticity to customers with a “pay-by-the-drink” model.

In late November 2012, Amazon Web Services Senior Vice President Andy Jassy laid out the AWS value proposition and cited six key values for the service as shown in Figure 2.

Figure 2 – Amazon’s AWS Key Value Points. Source: Wikibon 2013

One of the most famous early success stories and proof points of AWS was that of New York Times programmer Derek Gottfrid, who was tasked with making available the entire NYT archive dating back to 1851. He was able to accomplish this task for a mere $240. While most readers are familiar with this story, it’s worth re-reading because it gave an early glimpse into the power of what today is known as the “Big Data” movement and was one of the first popular examples of the axiom “Big Data gives the Cloud something to do.”
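The commonly reported breakdown of that $240 bill — roughly 100 EC2 instances running for about a day at the 2007-era rate of $0.10 per instance-hour — is easy to sanity-check. The inputs below are the widely cited figures, not audited line items from the NYT invoice:

```python
# Back-of-the-envelope check of the widely cited NYT/AWS figures.
# These inputs are the commonly reported numbers, not billing data.
instances = 100
hours = 24
rate_per_instance_hour = 0.10  # USD; 2007-era EC2 small-instance price

total_cost = instances * hours * rate_per_instance_hour
print(f"Estimated EC2 bill: ${total_cost:.2f}")  # $240.00
```

The arithmetic is trivial, which is exactly the point: a one-off batch job that would have required a capital request and weeks of procurement inside a traditional ITO was instead a rounding-error expense on a credit card.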

In retrospect, what was significant about the NYT story is it underscored that this new thing called cloud computing wasn’t necessarily about moving existing compute and storage infrastructure and applications into AWS. Rather it was more about enabling organizations to do new tasks that weren’t previously possible or practical. It allowed people to envision IT without traditional constraints, and it showed a new path to how developers could bypass corporate IT to get work done for very short money.

In this context it’s worth examining the AWS value props that Jassy laid out at re:Invent:

Shifting CAPEX to OPEX: This is one of the most appealing and defensible values of AWS. We estimate that the economic disaster of 2008 and 2009 accelerated the move toward cloud computing by perhaps as much as 18 months, directly related to this point.

Lower costs: Referencing an IDC study commissioned by Amazon, AWS executives cite 70% lower TCO. We believe users should be cautious assessing these figures, as our research shows that on an apples-to-apples basis, renting from Amazon is often significantly more expensive than owning, especially for companies with over $1B in revenue. TCO comparisons for public versus private clouds must take into account the complexity of applications running and the resilience, governance and control edicts of an organization. For reference please see:
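To make the rent-versus-own comparison concrete, here is a minimal sketch of the structure of such a TCO model. All the dollar figures are illustrative assumptions, not Wikibon, IDC, or Amazon numbers; the point is the shape of the comparison (steady rental spend versus a large up-front purchase amortized over the term), not the specific values:

```python
# Illustrative rent-vs-own comparison over a 36-month term.
# All dollar figures are hypothetical placeholders -- substitute
# your own quotes; these are not Amazon list prices.

def rental_cost(monthly_fee: float, months: int) -> float:
    """Cumulative cost of renting capacity at a flat monthly fee."""
    return monthly_fee * months

def ownership_cost(capex: float, monthly_opex: float, months: int) -> float:
    """Up-front purchase plus ongoing operations (power, admin, support)."""
    return capex + monthly_opex * months

months = 36
rent = rental_cost(monthly_fee=10_000, months=months)            # assumed IaaS bill
own = ownership_cost(capex=150_000, monthly_opex=4_000, months=months)

print(f"Rent over {months} months: ${rent:,.0f}")  # $360,000
print(f"Own over {months} months:  ${own:,.0f}")   # $294,000
```

With these (assumed) inputs, owning is cheaper over the term — consistent with the pattern our research shows for larger organizations with steady, predictable workloads. For bursty or short-lived workloads, the comparison can easily flip in favor of renting.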

Elastic – no guesswork: This value point is highlighted by the NYT case study and is totally legitimate. In most cases, internal IT organizations have failed to replicate this capability for their clients.

Speed and agility: According to Amazon executives, the IDC study suggests a 5X improvement in speed and agility. We infer that to mean speed of application deployment. Wikibon data suggests at least a 5X improvement for small applications, probably more like 10X for AWS. However, for complex applications with significant interdependencies, this ratio will decline substantially and in some cases flip in the negative direction. For customers the reality is, "It depends."

Avoiding non-differentiated heavy lifting (aka infrastructure plumbing): This is a highly defensible benefit and one that is attractive for companies.

Go global in minutes: This is a relatively new capability that Amazon touts. Understanding the caveats requires more study. Regardless, Amazon’s capabilities are impressive in that it currently has infrastructure in 9 geographic “regions” with 25 availability zones (an availability zone is a distinct location within a region designed to provide failure isolation from other availability zones) and 38 points of presence for content distribution (edge locations).

The Killer Strategy of Amazon AWS

Amazon’s AWS strategy is a virtuous circle as shown in Figure 3. The more customers Amazon adds, the more infrastructure it purchases. As it adds infrastructure, Amazon gets better economies, which decreases its costs. As its costs drop, Amazon lowers prices, which attracts more customers.

Amazon competes by bringing a low cost, low margin retail mentality to computing and then introducing new features and functions into its infrastructure at a very rapid clip. This serves to expand its total available market (TAM) and attract more customers. Amazon touts that it has cut prices 23 times since 2006 (about once every quarter). In concept this seems like a natural progression, since the price of compute and storage drops every quarter. The key is that, because of Amazon’s buying power and massive scale, it is able to pass these savings on to customers at a consistent cadence.
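The cumulative effect of that cadence compounds. The sketch below assumes a hypothetical average reduction of 5% per cut (Amazon has not published a single average figure, and actual cuts varied by service) simply to show the arithmetic of 23 compounded reductions:

```python
# Compound effect of repeated price cuts. The 5% average cut is an
# assumption for illustration; Amazon's actual cuts varied by service.
cuts = 23
avg_cut = 0.05  # hypothetical average reduction per cut

remaining = (1 - avg_cut) ** cuts
print(f"Price remaining after {cuts} cuts of {avg_cut:.0%}: {remaining:.1%}")
```

Under this assumption, 23 compounded cuts leave roughly 31% of the original price — the kind of sustained deflation that traditional high-margin suppliers find very difficult to match.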

Key Findings: Amazon’s Enterprise Attack Vectors

Our research suggests that Amazon’s 2012 revenue was around $1.8-$2.0B. The company is expanding its TAM and attacking the traditional enterprise aggressively. We believe its attack strategy focuses on eight vectors as shown in Figure 4.

These points are fairly self-explanatory but warrant some commentary. As of this writing, Amazon has trailing 12-month revenue of $57B with a $120B market cap (larger than HP, Dell and EMC combined). While it compares in size and market value to enterprise tech players, it looks financially most like Dell in that it operates on very low margins. Amazon’s gross margins hover in the mid-20% range, while its profit margins operate in the low single digits. Unlike most enterprise companies, its growth rate is rapid, ranging from 25%-30% year-over-year for the past four quarters.

Amazon is able to point to a huge AWS customer base (hundreds of thousands in more than 90 countries) with some blue chip names. Netflix is the poster child of customers running on AWS, notwithstanding that it competes directly with Amazon’s Instant Video service and has suffered some prominent AWS outages recently. Nonetheless, Amazon can tout firms like Shell, Adobe, IBM, Samsung, Dropbox, "Newsweek", "New York Times", "Washington Post", and many government agencies as clients. While impressive, it is important to note that very few of these larger customers rely extensively on Amazon AWS for infrastructure (as does Netflix); rather they often use it for niche shadow-IT initiatives within organizations. Also, customer churn is reportedly as high as 30% annually.

Nonetheless, Amazon is actively courting enterprise IT customers and building an ecosystem of partners, many of whom compete with Amazon directly or indirectly. Amazon positions itself as a savior of the enterprise that is being gouged by infrastructure players and the logical way to do IT in the future.

Key Findings: How Amazon’s Competitors are Responding

Our findings show that Amazon’s IaaS competitors are not trying to take Amazon head on. Rather they are attempting to replicate the benefits of Amazon’s value proposition while at the same time bringing specialized capabilities to the market. Specifically:

Differentiation is the key to success, and focus is the key to differentiation;

SPs are focusing on a set of customers and/or a market segment that allows them to achieve economies of scale.

Successful SPs are taking an ecosystem approach where the participants each bring value to the table – multiple SPs are collaborating to achieve scale while at the same time preserving differentiation in their respective segments.

Key Findings: Customer Requirements

The customers we spoke with were generally mid-to-large-sized shops. While they often outsource infrastructure to Amazon, this usually was for test-and-dev purposes or for niche initiatives driven by lines-of-business. For more strategic projects, customers indicated they often outsourced to service providers other than Amazon. These customers cited several considerations in choosing a cloud service provider:

Data placement is really important:

Access to cheap communications is vital – either proximate or multiple carriers on site where at least one can offer a good deal to ensure the cost of data transfer is low;

Moving data over wide-area links (e.g. MPLS) is very expensive and time consuming.

Latency Rules! If you’re going to run a database application, you need low latency.

Leading-edge SPs are appealing to customers and differentiating with backhaul (i.e. connecting resources) to provide access to data and minimize the cost of data movement.

Customers are migrating to SPs that consolidate data in one place within an industry sector and can find data analytics opportunities within these specific industries and domains.

Security remains the #1 concern: Customers are drawn to SPs that provide sophisticated (private) networking within their ecosystem that avoids all the issues with public cloud network access.

Hybrid cloud adoption is rising, but hybrid cloud via federated applications is limited. Customers are migrating toward SPs that can help develop hybrid cloud strategies that meet the compliance and governance edicts of their organizations.

Tier 1 apps are moving to the cloud. Customers are attracted to SPs that offer IOPS and capacity as independent solutions, allowing customers to pay for each independent of the other. We found some instances where all-flash arrays are an enabler and create additional value add.
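A simple way to see why independently priced IOPS and capacity matter is a cost model where the two are decoupled. The per-unit rates below are invented for illustration; real SP price sheets vary widely:

```python
# Decoupled capacity/performance pricing. Rates are hypothetical
# placeholders, not any service provider's actual price sheet.

def monthly_storage_cost(capacity_gb: float, provisioned_iops: int,
                         rate_per_gb: float = 0.10,
                         rate_per_iops: float = 0.05) -> float:
    """Capacity and IOPS billed independently, so a cold archive
    (high GB, low IOPS) and a hot database (low GB, high IOPS)
    each pay only for the dimension they actually consume."""
    return capacity_gb * rate_per_gb + provisioned_iops * rate_per_iops

archive = monthly_storage_cost(capacity_gb=50_000, provisioned_iops=100)
hot_db = monthly_storage_cost(capacity_gb=500, provisioned_iops=20_000)
print(f"Archive: ${archive:,.0f}/mo")  # capacity-dominated bill
print(f"Hot DB:  ${hot_db:,.0f}/mo")   # IOPS-dominated bill
```

When the two dimensions are bundled, the archive customer subsidizes performance it never uses and the database customer over-buys capacity to get the IOPS it needs; decoupling them removes both distortions.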

SLAs remain a sticking point. Customers expressed concerns about Amazon specifically and cloud providers generally with respect to the SP’s shared risk tolerance. In other words, customers are less concerned about the size of refunds for downtime and more concerned with the SLA as a proxy for the SP’s confidence in its ability to deliver.
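When evaluating an SLA, it helps to translate the availability percentage into concrete downtime. The sketch below converts availability figures into allowed minutes of downtime per 30-day month; the sample percentages are generic examples, not any specific provider's contractual terms:

```python
# Translate an availability SLA into allowed downtime per 30-day month.
# The sample percentages are generic examples, not any SP's actual terms.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

def allowed_downtime_minutes(availability: float) -> float:
    """Minutes of downtime per month permitted under a given SLA."""
    return (1 - availability) * MINUTES_PER_MONTH

for sla in (0.99, 0.999, 0.9995):
    print(f"{sla:.2%} SLA -> {allowed_downtime_minutes(sla):.1f} min/month of downtime")
```

A 99.95% commitment, for example, still permits roughly 21.6 minutes of outage per month — which is why the refund terms and the SP's willingness to negotiate them matter more to customers than the headline number.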

At re:Invent, Amazon executives claimed they’ve never lost business due to an SLA – we believe otherwise based on our conversations with customers.

The Role of Open Source

There are numerous players in the cloud space, ranging from large service-heavy players (e.g. IBM, HP, Oracle) to dedicated cloud service providers to players relying on the evolution of an ecosystem (e.g. VMware and EMC). As well, companies like Dell are going hard after this space, and even Facebook, with its Open Compute Project, is participating in earnest. One of the more interesting initiatives is OpenStack. Started by a group of developers inside NASA and supported by Rackspace, the project has gained steam with mainstream developers and virtually every vendor on the planet.

OpenStack was in its early days a clear effort to provide an alternative to AWS. Many were skeptical initially that such a large undertaking could be successful, but it is clear the project has momentum and is maturing. OpenStack is the poster child for open source cloud, using commodity hardware and frameworks to provide services for applications at a much reduced cost for customers. The OpenStack community has made enormous strides in two ways: 1) bringing in virtually all major vendors as contributors and supporters, and 2) attracting end customers who are starting to put in real systems and are demonstrating that OpenStack is stable and cost effective.

The bottom line is organizations are excited about OpenStack, especially because it provides protection against lock-in from technology vendors and cloud service providers. While building clouds on open source is not trivial and may require heavy lifting, the power of the community model appears to be here to stay.

CIOs Should Remain Open to the Cloud but Cautious

Amazon AWS is impressive and has changed the way the industry thinks about enterprise infrastructure. It’s catalyzed an entire trend around so-called “Private Cloud” and forced organizations to improve agility and become more efficient. Moreover, it’s created a new class of cloud service providers that are geared up to better serve the enterprise. In particular, we’re seeing the emergence of cloud service providers that are adding value in specific industries, bringing more robust and complete solutions in certain areas (e.g. backup, DR, networking) and engineering their offerings to serve applications beyond test and dev—Amazon’s clear sweet spot.

The decision to outsource infrastructure to the public cloud, while perhaps trivial for a smaller business or a funded startup, is not simple for many mid-sized and large organizations. We recommend that CIOs investigate several areas of caution and consideration prior to making any moves to the cloud generally and Amazon specifically, including:

Amazon is a horizontal player competing on the basis of scale. It's not about high touch.

Amazon’s SLAs have been described as “we’ll do our best – if we don’t please send us an email.” Think of an SLA as a proxy for shared risk and degree of SP flexibility. Will the SP change terms-and-conditions language in an SLA? If not, it’s a red flag.

Amazon's premium SLA pricing is a complex matrix of options that underscores its aversion to high-touch business models. This is a warning to CIOs, who should understand this thoroughly before making strategic commitments.

Can the service provider support a wide variety of enterprise apps beyond test and dev? An SP statement that it can is not an indication. Dig deeper and study use cases carefully.

How transparent is the SP with respect to policies, where data is placed, security data, etc.?

What is the SP's track record with outages, and how has it responded? Amazon has had some high-profile outages. Other SPs may have as well, but they probably weren’t as well publicized.

What kind of access does the SP provide to its professionals? Can the SP be a consultant and trusted advisor?

What other business processes need to be in place to move to a cloud SP? How complex are these? For example, when you write an app on Amazon you have to consider latency management, location management, SLA management; all through Amazon's API. Are you ready for this complexity?

Amazon: The “Canary in the Coalmine”

Amazon is the pioneer and as such takes many arrows. Nonetheless, its aggressive move into enterprise spaces warrants consideration and caution by practitioners to use AWS properly and for the right strategic fit.

What follows are three interesting comments from the Wikibon community practitioners regarding Amazon AWS:

"...maybe you find a storage solution that works for you that's better than the Amazon platform, and you've got stuff in the Amazon Cloud, then you’ve got stuff in some storage platform somewhere else, and it's got to integrate with your own internal services. Right there, you've created a WAN network that is extremely complicated that you may not be able to solve.”

"I think that the idea of the homogeneous environment is great for development. That's why I think Amazon's getting a lot of the traction, or for those non-secure or non-concerning events. You know, think about Netflix, right. A homogeneous environment of multiple clouds makes sense for them because it's, 'Hey, we're gonna send videos out the door.' But the guys running Oracle for financial services, healthcare, or you know, an Oracle for manufacturing or SAP. You know when we start talking about critical workloads for the businesses, I think that argument goes out the door."

"You know I talk to folks doing Eucalyptus, running on the Amazon Cloud and the one question I ask 'em is how many people do you have that are just trying to figure out how you operate within that environment for the applications you're writing. And on average, the number is about 35 percent of their resources. That's a killer."

Action Item: Amazon AWS is moving more aggressively into the enterprise, stepping up marketing of its impressive suite of services. Its message to the corner office is 'why waste time doing plumbing when we can simplify IT.' This will resonate with CEOs, CIOs, and CTOs at mid-to-large-sized organizations. However, they must understand the right strategic fit for Amazon, which today is essentially test and dev apps and corporate skunkworks programs. At the same time, IT executives should forge relationships with service providers that can mimic many AWS benefits within specific verticals or domains while providing vastly improved partnership models around security, governance, risk management and strategy.

Cloud computing is essentially 'renting' hardware and software resources over the Internet as a service. Network virtualization is creating a pool of logical network resources from multiple physical networks.

Cloud services typically use network virtualization to aggregate multiple physical network resources and logically present these as a pool that can be shared by many users. Network virtualization, however, does not necessarily mean networking resources are delivered as part of a cloud service. Network virtualization can be applied within an enterprise (often referred to as a "Private Cloud") and does not necessarily imply delivery over a public network as a rental service.

Quick commentary from the front lines in the field on the below abstract... I am a long time Client Exec at EMC (which is a great company and culture to work in, btw...) so readers can factor that into the equation when I agree with and confirm the below, but now on two separate project use cases with my customers we have worked on the same side of the table to build out transparent, comprehensive TCO models, and on both occasions I have customers agreeing to a 2x price premium with Amazon over a 36 month period. It was educational to peel back the onion and apply detail and context around a flat rate $/GB/month charge... which represents the minority of the cost.

IMO - It's all about T2M and ease of 'provisioning' (read: procurement). If ease of procurement continues to get better on the Enterprise SP side of the house (ATT, RSpace, Savvis, et al.), which is a multi-faceted topic/challenge I'll admit, but if they just ease up on the minimum commitments on pricing tiers, Amazon will be in a fix, because the SPs can be damn good at what they do and, I'll suggest, offer more reliability and less risk to customers.

Good read all in all, and I see in the field most of what's being reported here. Thank you!
