Cloud Computing and Utility Computing for the Enterprise and the Individual. (Formerly "Service Level Automation in the Datacenter")

Thursday, November 06, 2008

A Quick Guide To The "Big Four" Cloud Offerings

We live in interesting times--in fact, historic times. From the highs of seeing the election of a presidential candidate inspire millions to see opportunity where they saw none before, to the lows of experiencing firsthand financial pressures that we had previously only glimpsed when our parents or grandparents told us tales of hardship and conservation.

For me, in the context of this blog, the explosion of the cloud into mainstream information technology has been undeniably exciting and overwhelming. In the last several weeks, we have seen key announcements and rumors revealing the goals and aspirations of current cloud superstars, as well as the well-executed introduction of a new major player. As the dust settles from this frenzy, it becomes clear that near-term cloud mindshare will be dominated by four major players.

(There are, of course, many smaller companies addressing various aspects of cloud computing, and at times competing directly against one or more of these four. However, these are the ones that have the most mindshare right now, and they are backed by some of the most trusted names in web and enterprise computing.)

Below is a comparison of these key players across four core defining aspects of clouds: the availability of an on-premises option, the platform provided, portability, and reliability. As this is a comparison of apples to oranges to grapefruit to perhaps pastrami, it is not meant to be a ranking of the participants, nor a judgment of when to choose one over the other. Instead, what I hope to do here is give a working sysadmin's glimpse into what these four clouds are about, and why each is a unique approach to enterprise cloud computing in its own right.

Amazon does not now provide, nor has it shown any interest in ever providing, an "on-premises" option for its cloud. However, the EUCALYPTUS research project is an example of an open source private cloud being built to Amazon's API specifications for EC2 and S3. Whether it will ever see commercial success (or support from Amazon) remains to be seen.

Amazon pretty much provides servers and storage, with a few integration and data services thrown in for good measure. The servers are standard OSes, with full root/Administrator access, etc. In theory, you should be able to get any code running on an Amazon machine image (AMI) that you could run on a physical server with the same OS, bar certain hardware and security requirements.

As the AMIs are standard OS images, moving applications off of Amazon should be easy. However, moving data or messaging off of S3/SimpleDB and SQS will probably take a little more work. Still relatively simple, but there is no standard packaging of data for migration between cloud providers today.
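In practice, "a little more work" means writing your own export. A minimal sketch of the idea: pull your records out through the provider's API and serialize them to a neutral format such as JSON, which any destination can consume. Here, `fetch_all_items` is a hypothetical stand-in for the real client calls (e.g. via a library like boto) that would read from S3 or SimpleDB:

```python
import json

# Hypothetical stand-in for an S3/SimpleDB client call -- in real use,
# this would fetch live items from Amazon's APIs instead.
def fetch_all_items(domain):
    """Return the items stored under one SimpleDB-style domain."""
    return [
        {"id": "order-1", "status": "shipped"},
        {"id": "order-2", "status": "pending"},
    ]

def export_domain(domain):
    """Serialize one domain to a neutral JSON document for migration."""
    payload = {"domain": domain, "items": fetch_all_items(domain)}
    return json.dumps(payload, indent=2)

if __name__ == "__main__":
    print(export_domain("orders"))
```

The point is less the code than the cost it represents: with no standard packaging, every customer reinvents this export (and the matching import on the other side) themselves.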

Reliability in AWS is primarily provided by replicating AMIs and data across geographically distributed "availability zones". The idea is to isolate outages to a zone, so by cloning services and data across zones, one should always have at least one instance handy should the others go down.

Google provides an open source development kit that allows developers to create App Engine apps and test them on local systems before deploying them into Google's cloud. There is no true replica of App Engine itself that can be used in a private cloud, nor are there plans for one that I know of. To be frank, I'm not sure why you would want one.

Given the unique nature of the Python APIs, the Big Table-based data architecture, and the lack of partners exploring clones of the environment, portability is not an option at this point. Nor, it seems, is Google encouraging it, though the company is always quick to point out that the data itself can be retrieved from Google at will, via APIs. Mapping to a new infrastructure, though, is on the customer's dime.

For high scale-dependent web applications, Google App Engine is the winner hands down. They know how to replicate services, provide redundant architecture under the covers and secure their perimeter. All the customer has to do is deploy their software and trust Google to do their thing.

Microsoft makes a point of defining their cloud platform in terms of a hybrid public/private infrastructure. Their mantra of "Software-plus-Service" is an homage to having parts of your enterprise systems run in house, and other parts running in Azure. In many ways, Microsoft is letting the market decide for them how much of the future is "pure cloud", and how much isn't.

The first platform that Microsoft supports is understandably their own, .NET. If you already use .NET, you've hit the cloud computing jackpot with Azure. If not, you can learn it, or wait for the additional languages/platforms promised in the coming months and years.

Microsoft wants to make portability extremely simple...within its own product line. Like the others, there are ways to get your data via APIs, but there is no simple mechanism to port between Azure and other cloud platforms.

At this point, we can only guess at the reliability of the Microsoft cloud. Will it match the relatively solid record of the current Live properties, or will it run like a Microsoft operating system...

Marc Benioff was adamant at Salesforce's Dreamforce conference this week that SF is going to kill the idea of on-premises software. It is an ambitious goal, one that smart people like Geva Perry think is going to happen anyway. However, I'm not so sure. The long and the short of it is that you can forget any "on-premises" version of Force.com or Sites in the foreseeable future.

Again, while you can get your data programmatically at will, there are no simple mechanisms for doing so, nor is there anywhere else to move Apex code. Portability is not really an option here.

As in the Google case, SF hides so much of the underlying infrastructure that you just have to trust they can handle your application for you. In SF's case, however, they reportedly rely on vertical scaling, so there may be limits to how high they can scale.

About Me

James Urquhart is a widely experienced enterprise software field technologist. James started his career programming a manufacturing job tracking system on the Macintosh (circa 1991), and slowly expanded his experience to include distributed systems architectures, online community and identity systems, and most recently utility computing and cloud computing architectures. He has held positions in pre- and post-sales services, software engineering, product marketing, and program management for the online developer communities of one of the largest developer sites in the world. His admittedly schizophrenic background is driven by a desire to work with technologies that are disruptive, but that simplify computing overall.

James is also an avid blogger. His primary blog, recently renamed "The Wisdom of Clouds" (http://blog.jamesurquhart.com), is focused on utility computing, cloud computing, and their effect on enterprises and individuals.

In addition to his online work, James is the father of two children: a son, Owen, and a daughter, Emery; and the husband of the perfect friend and wife, Mia. James lives in Alameda, CA, and plays rock and bluegrass guitar.