Cloud Computing in Someone Else’s Cloud: The Future

Ever hear of a fabless chip company? This is a company that sells integrated circuits but owns no manufacturing facilities. They just write software, in effect, and send it out to someone else’s fab. Brilliant. Manufacturers in many other industries do the same. After all, manufacturing may not be a company’s distinctive competency, and a much larger company that centralizes manufacturing may achieve better economies of scale.

This is starting to happen big time with web software. IBM just announced they’re going to join Amazon in the cloud computing business with “Blue Cloud”. Companies will be able to buy capacity in someone else’s cloud and resell it as their own. No need to own any hardware or even visit a colo center. Why would you want to own a datacenter if you didn’t have to? Why would you think you can do it as well as Amazon or IBM? Many others, including Yahoo, Google, and Microsoft, will be a part of this future. Sun is already there with Sun Grid.

So far, the formulas are pretty similar. IBM and Amazon are both Linux-based systems built on virtualization software. At some point, if enough hardware capacity is locked up in these rental data centers, they will become an important sales channel for all server hardware manufacturers. Take Dell, for example. They’ve always sold direct. Shouldn’t they consider this kind of business, especially when other hardware companies are going there? What about HP? Look at it as a way for hardware makers to switch from the equivalent of perpetual licensing to the SaaS rental model.

What about Microsoft? Can .NET be as successful if they don’t build a Cloud Computing Service that is .NET based? Seems to me this is a strategic imperative for the OS crowd lest Linux steal the show. Sun is already there with Solaris on Sun Grid. This is the system my old alma mater Callidus Software uses to host their SaaS solution and it works well. IBM is not missing the chance to offer PowerPC as well as x86 servers for Blue Cloud. IBM is also partnering with Google around Cloud Computing, so there may be all sorts of interesting bedfellows before this new paradigm is done rolling out.

A great example that’s being written about by Scoble and others is Mogulus. CEO Max Haot says they don’t own a single server, it’s all being done on Amazon, and yet they’re serving live video channels to 15,000 people with just over $1M in funding. You’ve got to love it! A number of other serverless and near serverless companies commented on Scoble’s post if you want to see more. These big guys are not the only ones in the business. Certainly companies like OpSource and Rackspace count too.

There are many potential advantages, and a few pitfalls. First the advantages: it’s a whole lot easier and cheaper to build out your infrastructure this way. Why have anything to do with touching or owning any real hardware? How does that add value to your business? The real innovators will make it easy to flex your capacity and add more servers on extremely short notice. Take a look at your average graph of web activity:

This is traffic for cnn.com. Notice how spiky it is? Those are some big spikes. If your web service hits one, you must either have a ton of extra servers on tap, or deal with your site getting painfully slow or going down altogether. With a utility computing or grid service such as Amazon EC2, you can provision new servers on 10 minutes’ notice, use them until the load goes away, and then stop paying for them. Payment is in one-hour increments.
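The economics of that hourly billing can be sketched with a toy calculation. All the numbers below (server capacity, the $0.10/hour price, the traffic shape) are hypothetical, but they show why renting by the hour beats owning a fleet sized for the worst spike:

```python
import math

def flex_cost(load_by_hour, capacity_per_server, price_per_server_hour):
    """Rent just enough servers each hour, paying only for hours used."""
    return sum(math.ceil(load / capacity_per_server) * price_per_server_hour
               for load in load_by_hour)

def owned_cost(load_by_hour, capacity_per_server, price_per_server_hour):
    """Size the fleet for the peak hour and pay for it around the clock."""
    peak_servers = max(math.ceil(load / capacity_per_server)
                       for load in load_by_hour)
    return peak_servers * price_per_server_hour * len(load_by_hour)

# A flat day with one big spike: 23 quiet hours, then 10x the traffic.
day = [100] * 23 + [1000]
print(flex_cost(day, 100, 0.10))   # pay for the spike only while it lasts
print(owned_cost(day, 100, 0.10))  # pay for spike-sized capacity all day
```

The spikier the traffic, the wider the gap between the two numbers, which is exactly the cnn.com-style graph above.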

I know a SaaS vendor whose load doubles predictably one week out of every month because of what his app does. He owns twice the servers to handle this peak. He’s growing fast enough at the moment that he doesn’t sweat it much, but at some point, he could really benefit by flexing capacity.
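To put rough numbers on that vendor’s situation (the baseline fleet size is an assumption; only the “doubles one week in four” pattern comes from the story above), the arithmetic looks like this:

```python
# Hypothetical numbers: load doubles predictably one week out of every month.
baseline = 10                 # servers needed in a normal week (assumed)
normal_weeks, peak_weeks = 3, 1

# Owning: the fleet is sized for the peak and idles the rest of the month.
owned_server_weeks = 2 * baseline * (normal_weeks + peak_weeks)

# Flexing: rent the extra servers only during the peak week.
flexed_server_weeks = baseline * normal_weeks + 2 * baseline * peak_weeks

savings = 1 - flexed_server_weeks / owned_server_weeks
print(f"{savings:.1%}")  # 37.5% fewer server-weeks
```

Whatever the exact figures, half his fleet sits idle three weeks out of four, which is the capacity he could flex away.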

Now let’s talk about downsides. First, most software doesn’t just run unchanged on these utility grids. Even if it did, most software isn’t written to dynamically vary its use of servers; adding servers requires some manual rejiggering. Amazon has a particularly difficult pitfall: you have to write your software to deal with a server going down without warning and losing all its data. In fairness, you should have written your software to handle that anyway, because it could happen that your whole machine is toast, but most companies don’t start out writing software that way. There are companies, Elastra is one, that purport to have solutions to these problems. Elastra has a MySQL solution that uses Amazon’s fabulously bulletproof S3 as its file system.
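One common way to survive a server vanishing with its data is to checkpoint progress to durable storage so a replacement instance can pick up where the dead one left off. Here is a minimal sketch of that pattern; a local file stands in for something like S3, and the class and file names are purely illustrative:

```python
import json
import os


class CheckpointedCounter:
    """Toy worker that survives instance loss by persisting its
    progress to durable storage after every unit of work."""

    def __init__(self, store_path):
        self.store_path = store_path
        self.count = self._load()

    def _load(self):
        # A replacement server resumes from the last checkpoint, if any.
        if os.path.exists(self.store_path):
            with open(self.store_path) as f:
                return json.load(f)["count"]
        return 0

    def work(self):
        self.count += 1
        # Checkpoint immediately, so at most one unit of work is lost
        # when the machine disappears without warning.
        with open(self.store_path, "w") as f:
            json.dump({"count": self.count}, f)
```

Constructing a fresh `CheckpointedCounter` against the same store simulates a new server replacing a dead one: it resumes at the persisted count rather than zero. Real systems layer retries, leases, and idempotency on top, but the core idea is the same.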

The second issue isn’t so much a downside really. We can’t blame these services for it at any rate. What I’m talking about is automation. To really take advantage here you need to radically increase your automation levels. I recently saw a demo of some new 3Tera capabilities that I’ll be writing about that help a lot here.

The bottom line? You’re missing out if you’re not exploring utility computing: it can save you a bundle and make life a lot easier. The subtext is that there are also a lot of new technologies, vendors, and partnerships coming down the pipe to help maximize the benefits.

Related Articles

Nick Carr picks up the theme. One of the commenters raises an excellent point. Using an IBM or Amazon gives peace of mind to customers of small startups.

Bob — Excellent analysis. I particularly liked that you brought up the point, which not too many people have, that it is not necessarily trivial to write your application to reliably distribute with dynamic scaling capabilities on the cloud — especially when dealing with stateful, transactional applications.

That’s one of the things that the GigaSpaces platform lets you do. You should take a look at what we’ve done for Amazon EC2 integration. The peak load issue is also part of our rationale for offering our Start-Up Program — free software for companies under $5 million in revenues.