A node between the physical and digital.
The rants and raves of Simon Wardley.
Industry and technology mapper, business strategist, destroyer of undeserved value. "I like ducks, they're fowl but not through choice"

Friday, August 07, 2009

Why open source clouds are essential ...

I've covered this particular topic over the last four years at various conference sessions around the world. However, given some recent discussions, I thought it was worth repeating the story.

"Cloud computing" (today's terminology for an old concept) represents a combination of factors that are accelerating the transition of common IT activities from a product to a service based economy. It's not per se a specific technology but a result of concept, suitability of activities, change in business attitude and available technology (for more information, see my most recent video from OSCON 2009).

The risks associated with this transformation are well known. For example, the risk of doing nothing and the need to remain competitive (see Red Queen Hypothesis part I and part II). This needs to be balanced against standard outsourcing risks (for example: lack of pricing competition & second sourcing options, loss of strategic control, vendor lock-in & suitability of activities for outsourcing) and transitional risks related to this transformation of industry (for example: trust, transparency, governance, security of supply).

These transitional and outsourcing risks create barriers to adoption. However, whilst the transitional risks are transitional by nature (i.e. short lived), the outsourcing risks are not. The outsourcing risks can only be solved through portability, easy switching between providers and the formation of a competitive marketplace, which in turn depends upon the formation of standards in the cloud computing field. If you want to know more about second sourcing, go spend a few hours with anyone who has experience of manufacturing & supply chain management, because this is where the cloud is heading.

Now when it comes to standards in the cloud space, it's important to recognise that there will be different standards at the various layers of the computing stack (application, platform and infrastructure). People often talk about portability between different layers, but each layer is built upon subsystems from the lower layer; you can't just make those magically disappear. You're no more likely to get portability between the Azure platform and EC2 than you are to get portability from a programming language to bare metal (i.e. you need the underlying components).

At each layer of the stack, if you want portability, you're going to need common environments (defined through de facto standards), multiple providers and easy switching between them. For example, portability between one Azure environment and another Azure environment.
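The idea of a common environment with easy switching can be sketched in code. This is a minimal, hypothetical illustration — the provider classes and method names below are my own invention, not any real cloud SDK — showing how a shared interface (the de facto "standard") makes swapping providers a one-line change:

```python
# Hypothetical sketch of portability through a common API.
# ProviderA, ProviderB and the method names are illustrative assumptions,
# not real cloud services or SDKs.

from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """The common environment: every provider implements the same interface."""
    @abstractmethod
    def launch(self, image: str) -> str:
        ...

class ProviderA(CloudProvider):
    def launch(self, image: str) -> str:
        return f"provider-a:{image}"

class ProviderB(CloudProvider):
    def launch(self, image: str) -> str:
        return f"provider-b:{image}"

def deploy(provider: CloudProvider, image: str) -> str:
    # Because both providers honour the same interface, switching between
    # them is trivial -- the basis of second sourcing.
    return provider.launch(image)

print(deploy(ProviderA(), "web-server"))  # runs on the first provider
print(deploy(ProviderB(), "web-server"))  # same workload, second source
```

The point of the sketch is that the user's code depends only on the shared interface, never on a single vendor's implementation — which is exactly what a marketplace of interchangeable providers requires.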

In the above example, Azure would represent the "standard". However, if a marketplace emerges around a proprietary standard then in effect the entire market hands over a significant element of strategic control to the vendor of that standard.

The use of an open standard (i.e. in this case open source code, including APIs and open data formats) is an important mechanism in creating a free marketplace without vendor control. We learnt this lesson from the network wars and the eventual dominance of TCP/IP.

As I've often pointed out, the standard has to be running code for reasons of semantic interoperability. Documented standards (i.e. the principle) are useful, but they are not sufficient in the cloud world because of the complexity involved in describing an environment (such as a platform). Even if you could describe such an environment, it would create significant barriers to implementation.

To achieve the goal of a free market (i.e. free from constraint by one vendor), you have to solve both the issue of semantic interoperability and that of freedom from constraint. This means the standard has to be an expression and not a principle, and the only way to remove the constraint is for the standard to be implemented as an open source reference model (i.e. running code).

This does, however, lead to a licensing question: if you created an open source reference model for use as a standard, how would you license it? It is important to remember that the intention of a standard is to encourage portability (i.e. limit feature differentiation) but not to limit competition (i.e. to allow differentiation on price vs service quality).

Whilst GPLv3 prevents redistribution of code changes without releasing the modifications, it does allow a provider to offer the system as a service with proprietary improvements. GPLv3 therefore encourages competition in the cloud space by allowing providers to operationally "improve" any system and provide it as a service.

In a world where the standard is provided as such an open source reference model (ideally under GPLv3), you'll also need the creation of an assurance industry to give end users confidence that providers still match the standard (despite any competitive modifications for operational improvements). This is how you create a truly competitive marketplace and, by encouraging diversity in operations, overcome the most dangerous risk of all: systemic failure in the cloud.

We have already staked the ground with Ubuntu Enterprise Cloud, and our intention is to continue to push this and create truly competitive markets in the cloud using the only viable mechanism: open source. Of course, this is at the infrastructure layer of the computing stack. Our attention will shortly turn towards the platform.