February 05, 2013

Hybrid private-public cloud models will be the reality in most enterprises for the foreseeable future. Developers and business units continuously go "rogue" and use public cloud services, while IT struggles to maintain compliance and control and to manage legacy apps in the traditional data center.

I've spoken many times about this constant push and pull between flexibility and control in the cloud. And it is becoming apparent that we need a better way.

Well, today, my friends at Ravello Systems announced that they have launched in public beta their Cloud Application Hypervisor and that they have received $26 million in funding from Sequoia, Norwest Venture Partners and Bessemer.

I've had the pleasure of working with the Ravello team in preparation for this launch, and I believe they have a much-needed solution -- built on a unique technology -- for many of the biggest problems associated with cloud deployment.

What is a Cloud Application Hypervisor?

The best way to think about Ravello's technology is to use the familiar hypervisor as an analogy. But while a traditional hypervisor holds a single virtual machine, Ravello's CAH holds a complex multi-VM app. This allows you to encapsulate a complete application (load balancers, app servers, web servers, databases, etc.) AND its environment (networking, storage, etc.). The result is complete portability across clouds and between on-premise and public clouds. For example, you could take an existing VMware-based application running in your data center and deploy it on AWS, Rackspace Cloud or HP Cloud as-is.

Development and Testing in the Cloud

So what is this good for? One of the first use cases Ravello Systems is targeting is the need to do development and testing in the cloud, while running the app in production in the on-premise enterprise data center.

The cloud -- with its unlimited resources and ability to spin up machines quickly and then dispose of them -- is ideal for testing and development. But as mentioned earlier, enterprise IT departments still have many issues with running production apps in the cloud. These issues include compliance, security, cost and fear of vendor lock-in and dependence. With Ravello, developers can deploy the application "capsule" in a public cloud, run multiple instances of it for parallel testing, collaborate on development and generally enjoy the flexibility of the public cloud.

When it's time to move the app into production, IT can simply deploy the encapsulated app in the data center.

In the future, Ravello will address additional use cases such as more general cloud portability, cloudbursting and other scenarios.

How It Works

It's important to note that Ravello is delivered as a cloud service itself. You create an account, log in and can then create blueprints of applications (or use pre-existing ones) which can be cloned and shared.

The leadership team at Ravello brings a lot of credibility to the table. Benny Schnaider, Rami Tamir and Navin Thadani were the team that created KVM, the standard Linux hypervisor. In 2008 they sold Qumranet, the company they created to commercialize it, to Red Hat.

October 03, 2012

Zenoss just published the results of an open source cloud survey they did. They polled more than 100,000 of their community members to determine the prevailing sentiment on open source cloud deployments and the perceived advantages and disadvantages of the technology, and to gain insight into future open cloud deployments within IT departments.

Survey respondents included more than 600 IT professionals including system administrators and architects, developers, network engineers and CIOs.

Unsurprisingly, OpenStack appears to dominate adoption plans today but CloudStack and Eucalyptus are on the rise.

Check out the infographic they created from the data below (click to enlarge). If you want the full report from the survey go here.

May 08, 2012

This coming Thursday, May 10, I'll be giving one of the keynote speeches at the Citrix Synergy 2012 conference in San Francisco. My talk is in the morning and comes right after a distinguished speaker: Sameer Dholakia, GM of the Cloud Platforms Group at Citrix.

You can see the description of the two talks (and the one by Citrix CEO Mark Templeton who speaks on Wednesday) on this Featured Speakers page.

The title of my talk is "From the Bottom Up: Patterns of Cloud Adoption". Here's the abstract:

The current pattern of cloud adoption in the enterprise may surprise you. Rather than big, strategic, top-down decisions set by the CIO, cloud computing services – IaaS, PaaS, SaaS – are being adopted primarily through a pattern of bottom-up adoption. Rank-and-file developers, IT administrators and business decision-makers are embracing cloud services and using them as a way to get their jobs done and drive the outcomes expected of them. In this talk, Geva Perry will explore this phenomenon, including its causes and the implications for the enterprise, as well as for vendors.

April 03, 2012

Citrix is making a big announcement today. It has two parts. First, it's moving its CloudStack framework, which was owned by Citrix and distributed at CloudStack.org under a GPL license, to the Apache Foundation. Second, it is aligning CloudStack with Amazon Web Services' architecture and APIs.

This is big news that's sure to ruffle some feathers in the cloud computing space. So the questions I have are:

Why are they doing this?

What are the implications?

Why Is Citrix Moving CloudStack to the Apache Foundation?

The move appears straightforward. Citrix acquired CloudStack through its $200 million acquisition of Cloud.com in July 2011. From what I'm hearing, it has had good success with enterprise and provider adoption of the product, but it was far from being accepted as an industry de facto standard, which is what, I assume, they had hoped for. So it would make sense for them to move it to a respected open source foundation like Apache.

But wait. OpenStack was already vying for that de facto standard open source platform status, and Citrix announced its support for it two months before the Cloud.com acquisition. The company said it would continue to support both platforms after the acquisition, so what happened?

Citrix claims that the OpenStack foundation wasn't run well: it was dominated by Rackspace and had a "pay-to-play" model. The result was that the APIs were poorly designed and the product lacked stability and maturity (a claim I have heard from others). At the same time, while OpenStack is getting support from the likes of Cisco and HP, the community is fragmenting somewhat, with multiple distributions and extensions from startups such as Cloudscaling and Piston Cloud. This has started creating problems for enterprise customers and providers, who are getting confused and having a bad experience with OpenStack.

In the meantime, Citrix is feeling competitive pressure from the company it views as its primary rival for actual customers (as opposed to winning the hearts and minds of the "community"): VMware.

It simply couldn't wait and had to go on the attack with a bold move. This is it -- and it's a pretty good one.

Why Align CloudStack with the AWS APIs?

The final piece in making CloudStack the de facto standard cloud platform is the API. By aligning with the Amazon APIs, CloudStack gains a highly adopted, proven API with a massive ecosystem of integrated tools and services around it. They tell me they will have 100% compatibility by the end of this year.

Game of Clouds

With this move complete, Citrix has a production-proven stable product, which is now an open source platform managed by the widely-respected Apache Foundation, using a popular API with a massive ecosystem around it. Pretty clever, I think.

On the other hand, CloudStack is now fighting on three fronts: its traditional arch-enemy VMware; the OpenStack camp, with the dozens of vendors -- large and small -- behind it; and Eucalyptus. A regular game of thrones. And to paraphrase Game of Thrones: in the Game of Clouds, you either win or you die.

March 02, 2012

A quick follow up on my previous post on cloud computing adoption patterns. I have been guest blogging on Compuware's CloudSleuth blog and have written a post on this topic titled Cloud and Bottom-Up Adoption. In it I reiterate some of the points I wrote about in the last post and which I presented in my CloudConnect keynote, but I added another angle, comparing the first attempt at creating a true public cloud, the Sun Grid, to the first successful attempt: Amazon Web Services.

The main difference between the two? You guessed it.

While the former (Sun) targeted the traditional IT customer, the CIO, Amazon went after developers, and that was the secret to their success.

February 20, 2012

Last week I gave a keynote presentation at the CloudConnect conference in Santa Clara. The title of the presentation was: "Surprise! Your Enterprise is Already Using the Public Cloud."

Regular readers of this blog (or those who work with me) know I go on about this a lot: In the enterprise, cloud computing services (IaaS, PaaS, SaaS) are being adopted bottom-up. In other words, by the rank & file (developers, IT admins, business folk) and not top-down with a big strategic decision by the CIO.

That's what the keynote was about and the title was addressing the CIO, who is the last to know about cloud computing adoption within his or her organization.

If you're interested in this topic you can watch the video of the presentation (you need to scroll down to get to it) on the CloudConnect web site.

November 28, 2011

This is a quick follow up on my previous post, 10 Predictions About Cloud Computing, which received quite a bit of attention in the cloudosphere. Specifically, my second point there, which stated:

Public Rules: Internal clouds will be niche. In the long-run, Internal Clouds (clouds operated in a company's own data centers, aka "private clouds") don't make sense. The economies of scale, specialization (an aspect of economies of scale, really) and outsourcing benefits of public clouds are so overwhelming that it will not make sense for any one company to operate its own data centers. Sure, there need to be in place many security and isolation measures, and feel free to call them "private clouds" -- but they will be owned and operated by a few major public providers.

A couple of weeks ago I had an interesting conversation with Marten Mickos, CEO of Eucalyptus (and of MySQL fame), a private cloud platform provider. I explained that I believe that Internal Private Clouds (i.e., clouds operated on a company's own servers) will become a niche in the long run.

Surprisingly, Marten did not disagree, but he made the following very good point. If we look a few years ahead, let's say 2015, the IaaS market in total will be a roughly $20 billion market according to estimates by IDC and others. If private IaaS is anywhere between 10% to 20% of that, certainly within the bounds of the definition of a "niche", we're still looking at a multi-billion dollar market. Enough to sustain the success of several startups and large vendors.

October 12, 2011

One of the overlooked drivers of cloud adoption is the tightly integrated -- I call it pre-integrated -- ecosystem you get when you choose the right provider.

Because cloud environments are generally homogeneous and consistent within their own boundaries -- and this is true for IaaS, PaaS and SaaS -- and because they are tightly controlled by the provider, the cloud provider is in a position to pre-integrate other systems, components and apps with the infrastructure (or the application, in the case of SaaS).

In traditional IT, integration is one of the most complex, painful and costly processes. A pre-integrated ecosystem lets you make these integrations simply by flipping a switch.

Pre-Integration Examples

My first exposure to the concept of pre-integration was in January of 2006, when Salesforce.com launched the AppExchange. This was more than two years before Apple launched the App Store, mind you. And the idea wasn't fully baked yet, but it was certainly an "a-ha" moment. And it goes to the heart of what's so revolutionary about cloud computing.

Today, for example, you can, with a few clicks, integrate Salesforce.com's CRM app with Google AdWords, Marketo, LinkedIn, VerticalResponse, Zendesk and hundreds of other applications. Again, with traditional on-premise CRM, such integrations would have been an expensive and lengthy proposition.

But a pre-integrated ecosystem doesn't only apply to SaaS. It also works well with PaaS and IaaS clouds.

The first chance I got to implement the concept was with the Heroku founders, James, Adam and Orion, in early 2009, with the Add-ons that can be attached to any app a user develops and runs on the Heroku PaaS. A perfect example of pre-integration at Heroku was New Relic, which is Application Performance Management (APM) as-a-Service. You basically get New Relic APM capabilities with a click of a button. In the on-premise world, deploying an enterprise-grade APM (Wily, for example) takes months of professional services.

In Infrastructure-as-a-Service, there are many examples of pre-integrated ecosystems, particularly around AWS and the OpenStack framework. Many management and monitoring tools have pre-integrated with these two cloud platforms, but there are examples in other software categories as well.

Daisy-Chaining Cloud Services

The notion of pre-integration can be taken even further.

When I hosted the cloud track at QCon 2010, one of the speakers was Thor Muller, CTO & Co-Founder of GetSatisfaction, and in his presentation he introduced me to a phrase I've been using ever since: "daisy-chaining services".

My favorite example, one from SaaS for small business, involves daisy-chaining Bidsketch, Freshbooks, Highrise and RightSignature.

Bidsketch is a SaaS product for creating and sending proposals. It lets an individual or company create a proposal and share it with a prospective client, who can then log in and view the proposal on Bidsketch, make comments and changes, and ultimately approve the proposal. As all of this happens, Bidsketch automatically records the events in the Highrise (CRM) entry for that client ("Proposal sent," "Proposal approved," etc.).

Once the proposal is approved, Bidsketch activates another service: RightSignature, which is used for electronic, online signatures. Both sides can sign the approved proposal, and this fact is updated in Bidsketch: proposal signed. In turn, Bidsketch again updates the Highrise CRM system. Bidsketch can then automatically create an invoice in the Freshbooks invoicing service -- and Freshbooks will then update Highrise that an invoice was sent, payment was received, etc.
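The chain described above is essentially event-driven: each service reacts to an event and triggers the next link. Here is a minimal in-process sketch of the pattern. The service names follow the Bidsketch/Highrise/RightSignature/Freshbooks example, but the event names and handlers are hypothetical illustrations, not the actual APIs of those products -- in practice this wiring happens through webhooks and REST calls between the services.

```python
# A minimal sketch of "daisy-chaining" services via events. All event
# names and handlers below are hypothetical, for illustration only.
from collections import defaultdict

class EventBus:
    """Routes events to subscribed handlers, which may emit further events."""
    def __init__(self):
        self.handlers = defaultdict(list)
        self.log = []  # audit trail of every event that fired

    def subscribe(self, event, handler):
        self.handlers[event].append(handler)

    def emit(self, event, payload):
        self.log.append(event)
        for handler in self.handlers[event]:
            handler(self, payload)

# Each "service" reacts to one event and triggers the next link in the chain.
def highrise_update(bus, payload):
    # The CRM just records the activity; it emits nothing further.
    pass

def rightsignature_sign(bus, payload):
    bus.emit("proposal.signed", payload)

def freshbooks_invoice(bus, payload):
    bus.emit("invoice.sent", payload)

bus = EventBus()
bus.subscribe("proposal.approved", highrise_update)      # update CRM
bus.subscribe("proposal.approved", rightsignature_sign)  # kick off e-signature
bus.subscribe("proposal.signed", highrise_update)        # update CRM again
bus.subscribe("proposal.signed", freshbooks_invoice)     # create the invoice
bus.subscribe("invoice.sent", highrise_update)           # ...and once more

bus.emit("proposal.approved", {"client": "Acme"})
print(bus.log)
```

One approval event cascades through the whole chain -- which is exactly the point: the user flips a switch once, and the pre-integrated services do the rest.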

What Does This Mean for Cloud Customers?

Customers are increasingly becoming aware of the importance of ecosystems of cloud services -- and specifically of the value of pre-integration.

I was recently asked to recommend a CRM system to one of the startups I am on the advisory board of. As much as I dislike the complexity and poor performance of Salesforce.com, I had no choice but to tell them it's the only way for them to go -- for the simple reason that it's the only CRM SaaS offering that is guaranteed to be pre-integrated not only with every app they need today, but also with ones they will need in the future, which may not even exist yet.

Case in point, when Totango -- another company I am an advisor to -- recently launched its customer engagement SaaS offering (an emerging category, see David Skok's post), it immediately started with support for SFDC. And you can safely assume that any other startup that launches a product that could benefit from integration with CRM, will first support Salesforce (or Highrise if it's targeting SMBs).

In summary, the breadth and depth of ecosystems is becoming a critical factor in how customers choose which cloud services to bet their business on.

I'm hoping to write a separate post on what this means to startups and other cloud services providers. Suffice it to say for now, it's something you should be thinking about...

September 22, 2010

In Shopping the Cloud: Performance Benchmarks I listed a number of services and reports that compare cloud provider performance results, but the truth is that in computing (cloud included) you can throw money at almost any performance and scale problem. It doesn't make sense, therefore, to talk about performance alone; you want to compare price/performance.
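To make the idea concrete, here is a minimal sketch of such a comparison. The provider names, benchmark scores and hourly prices are made up for illustration -- substitute real benchmark results and the providers' published rates.

```python
# Price/performance comparison sketch. All numbers are illustrative.
providers = {
    "cloud_a": {"benchmark_score": 820, "price_per_hour": 0.48},
    "cloud_b": {"benchmark_score": 610, "price_per_hour": 0.24},
    "cloud_c": {"benchmark_score": 900, "price_per_hour": 0.68},
}

# Rank by benchmark points per dollar-hour: higher is better.
ranked = sorted(
    providers.items(),
    key=lambda kv: kv[1]["benchmark_score"] / kv[1]["price_per_hour"],
    reverse=True,
)

for name, specs in ranked:
    ratio = specs["benchmark_score"] / specs["price_per_hour"]
    print(f"{name}: {ratio:.0f} benchmark points per $/hour")
```

Note that in this made-up data the raw-performance winner (cloud_c) comes in last on price/performance -- which is why looking at benchmark numbers alone can mislead.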

But here's the rub: it is becoming increasingly difficult to compare the pricing of the various cloud providers.

Problem #1: Cloud providers use non-standard, obfuscated terminology

About a year and a half ago I wrote What Are Amazon EC2 Compute Units? in which I raised the issue of how difficult it is to know what it is you are actually getting for what you are paying in the cloud. Other vendors use their own terminology, such as Heroku's Dynos. I'm not just picking on these two, everyone has their own system.

Problem #2: Cloud providers use wildly varying pricing schemes

In addition, the pricing schemes by the various vendors include different components. Take storage as the simplest example, which clearly illustrates the point. Here's a screenshot from the Rackspace Cloud Files pricing page:

It is fairly straightforward, but it also contains elements that are extremely difficult to project (especially for a new application), such as the bandwidth and request pricing. That's OK -- you have to make some assumptions.
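A back-of-the-envelope model makes those assumptions explicit. The rate values below are purely illustrative placeholders, not Rackspace's (or anyone's) actual prices -- plug in the numbers from your provider's pricing page.

```python
# Back-of-the-envelope monthly object-storage cost model.
# RATES are illustrative placeholders, not any provider's real prices.
RATES = {
    "storage_per_gb":   0.15,  # $ per GB-month stored
    "bandwidth_per_gb": 0.18,  # $ per GB transferred out
    "per_10k_requests": 0.01,  # $ per 10,000 requests
}

def monthly_cost(storage_gb, egress_gb, requests):
    return (storage_gb * RATES["storage_per_gb"]
            + egress_gb * RATES["bandwidth_per_gb"]
            + (requests / 10_000) * RATES["per_10k_requests"])

# The hard part is the assumptions, not the arithmetic: for a new app,
# bandwidth and request volume are guesses. So model a range, not a point.
for label, requests in [("low", 1_000_000), ("high", 20_000_000)]:
    print(label, round(monthly_cost(500, 200, requests), 2))
```

Running a low and a high scenario side by side also shows which assumption your bill is most sensitive to, which tells you what to measure first.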

Problem #3: Not all cloud offerings are created equal

To make things worse, not all cloud storage services are created equal. They have different features, different SLAs, varying levels of API richness, ease of use, compliance and so on.

Problem #4: Cloud computing pricing is fluctuating rapidly

Another big problem with dealing with pricing is that the market is very dynamic and prices change rapidly. Fortunately, most of the movement right now is downwards, due to the increased competitiveness (especially in the IaaS space) and thanks to vendors benefiting from economies of scale and increased efficiency due to innovation.

What to do about it?

So what do you do in such a complex landscape? There seems to be no escape from creating a test application and running it on multiple services to see where the cost comes out. Then again, that may turn out to be a very time-consuming and expensive effort that may not be worth it -- at least not initially. So you should be prepared to move your app across cloud providers if and when the costs become prohibitive (which I am seeing happen to more and more companies).

Hopefully, the cloud benchmark services will also start paying attention to pricing and provide a comparison of price/performance and not just performance.

Today Zenoss released a survey it conducted about cloud computing and virtualization. It has some interesting data and they created a very nice looking Infographic with the key findings.

Here's the data point I found most interesting:

I have followed many cloud surveys and reports that measure cloud traction of the different providers (see for example Guy Rosen's State of the Cloud). It has consistently been the case that Amazon is ranked #1 and Rackspace #2 (which is what prompted my Rackspace: The Avis of Cloud Computing post). The Zenoss survey suggests a different story with Google App Engine and Microsoft Azure coming in at #2 and #3 respectively, pushing Rackspace to #4.

Also, GoGrid's penetration, as well as RightScale's (which is a very different animal than the other players on the list) is very impressive.

Note that the wording of the question in the survey was a bit ambiguous: "What are your cloud computing plans for 2010?". I say ambiguous because the survey was conducted in Q2 2010, so probably close to the middle of the year. But in any case, it has a forward-looking element to it, which gives a little indication of the trends as they are happening.

Anyway, lots of interesting info on both cloud and virtualization. Check out the full survey results (requires registration).