I suppose it's a function of the company I work for (IBM) and in particular the organization I work within (WebSphere), but much of my focus and interest in the cloud computing space has been on application infrastructure running in the cloud (PaaS). Specifically, I'm keen on offerings that provide users with the ability to quickly provision and access application environments running in a cloud that's either on-premise or hosted elsewhere. It's in these offerings, at least at present, that we see a common and key technological enabler: virtualization.

There are many reasons why virtualization is prominent in PaaS. First of all, the use of virtualization, specifically virtual images, offers the benefits of speed (since no software needs to be installed; the image merely needs to be activated) and consistency (because configurations and entire environments can be abstracted and "freeze dried" into an image). In addition, for the most part it's something that's been used successfully for quite some time, thus bringing a certain level of user trust and familiarity. However, benefits and proven history notwithstanding, the state of the art in virtualization still has room to advance. In fact, I can think of two technical advancements in the area of virtualization that would benefit both consumers and vendors in the PaaS segment:

1) Broadly interoperable virtual disk formats

2) Common communication interfaces for virtualization platforms

The Open Virtualization Format (OVF) specification defines a standards-based way to describe both the packaging and deployment, or activation, of a virtual image. This is helpful because it provides a known structure with which a virtual image platform can interact. What does this mean for a cloud platform that creates application environments from virtual images? Ideally, it means that the cloud platform doesn't have to know what type of image packaging (VMware, Xen, Amazon Machine Image, etc.) is being used in order to unpack and activate the image, and thus requires less image-specific code.
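To make the idea concrete, an OVF package is organized around an XML descriptor that enumerates the package's files, disks, and deployment requirements. The fragment below is a simplified, hand-written sketch for illustration only; the element names follow the DMTF OVF envelope schema, but it is not a complete or validated descriptor, and the file names are made up:

```xml
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <!-- Files that make up the package, such as the virtual disk -->
  <References>
    <File ovf:id="disk1" ovf:href="appserver-disk1.vmdk"/>
  </References>
  <!-- Disk metadata the deployment platform can inspect up front -->
  <DiskSection>
    <Info>Virtual disks used by the appliance</Info>
    <Disk ovf:diskId="vmdisk1" ovf:fileRef="disk1" ovf:capacity="20480"/>
  </DiskSection>
  <!-- The virtual system to be activated from the package -->
  <VirtualSystem ovf:id="appserver">
    <Info>A single application server virtual machine</Info>
  </VirtualSystem>
</Envelope>
```

Because the descriptor, not the platform-specific tooling, carries this metadata, a cloud platform can unpack and reason about the package without hard-coded knowledge of who produced it.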

While the OVF standard is a great first step, I believe we now need to focus efforts towards standards that help to define broadly interoperable virtual disk formats. As of now, each virtualization platform, such as Amazon's EC2, VMware, and IBM's PowerVM, has its own proprietary disk format for the virtual images it supports. This means that when vendors choose to provide their software in a virtualized format, they must make an explicit choice as to the platforms they wish to support, a choice that will always constrain some population of their customer base. In a market like application infrastructure, where much of the focus over the last several years has been on producing platform-independent software, this seems especially silly and can frustrate potential customers. If the industry can come up with a standards-based disk format or other standards-based manner of describing disk contents, the burden on the vendor of having to produce a unique virtual image for each platform would be lifted. Also, consumers would benefit from having a smaller stable of more widely applicable virtual images than they have now.

The second of the advancements is focused more towards PaaS cloud management devices or systems. These management systems make PaaS clouds a real possibility by allowing users to easily harness virtualization platforms in order to create, deploy, and maintain application environments running in a cloud. Think of the WebSphere CloudBurst Appliance, or, in a much simpler form, the Amazon EC2 console and service interface.

To look at this from a user's point of view, I would like to be able to manage a PaaS cloud made up of what is likely a heterogeneous pool of virtualization platforms (VMware, IBM PowerVM, Xen, Microsoft Hyper-V, Solaris Containers, etc.) from a single one of these cloud management devices. While some of these devices support multiple virtualization platforms, I have not heard of one that supports a majority of the most popular platforms. The reason is not that vendors fail to recognize the need for broad platform support. Rather, the nature of what these cloud management devices do means that they communicate directly with the virtualization platforms, and therein lies the problem. Each of these virtualization platforms has its own communication interface, meaning that for each supported platform, a device must have a platform-specific communication layer. This slows down the delivery of platform support and forces vendors to make a choice about what they will support. As with the choice of virtual image packaging discussed above, this choice will always constrain at least some portion of users or potential users.

What we need to solve this problem is a standard that governs communication with virtualization platforms. First we need to identify common tasks (activating virtual images, monitoring systems, etc.) and then we need to define an interface through which components can drive these tasks on a virtualization platform. In this way cloud management devices only need to communicate via standard mechanisms portable to different virtualization platforms, and users get closer to the goal of managing a very heterogeneous pool of said platforms from a single PaaS cloud management system.
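As a thought experiment, such a standard might look like a small set of operations that every virtualization platform agrees to expose. The sketch below is entirely hypothetical; none of these names come from a real specification. It simply illustrates the two steps above: identify common tasks, then define an interface a management device can code against, with a toy in-memory implementation standing in for a real hypervisor adapter:

```python
from abc import ABC, abstractmethod


class VirtualizationPlatform(ABC):
    """Hypothetical uniform facade a cloud management device could target."""

    @abstractmethod
    def activate_image(self, image_id: str) -> str:
        """Deploy a virtual image; return an instance identifier."""

    @abstractmethod
    def instance_status(self, instance_id: str) -> str:
        """Report a normalized status such as 'running' or 'stopped'."""

    @abstractmethod
    def deactivate(self, instance_id: str) -> None:
        """Stop the instance and reclaim its resources."""


class InMemoryPlatform(VirtualizationPlatform):
    """Toy stand-in for a platform-specific adapter, for illustration only."""

    def __init__(self) -> None:
        self._instances: dict[str, str] = {}
        self._next = 0

    def activate_image(self, image_id: str) -> str:
        self._next += 1
        instance_id = f"{image_id}-{self._next}"
        self._instances[instance_id] = "running"
        return instance_id

    def instance_status(self, instance_id: str) -> str:
        return self._instances.get(instance_id, "unknown")

    def deactivate(self, instance_id: str) -> None:
        self._instances[instance_id] = "stopped"
```

The point of the sketch is that a management device written against `VirtualizationPlatform` never touches the platform-specific layer; supporting a new hypervisor becomes a matter of supplying one more adapter rather than reworking the device itself.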

I realize that the introduction of standards in either of these areas would mean new capabilities would need to be fit into existing products, but I think this is more sustainable over the long haul than the alternative. That alternative is an untenable proliferation of virtual images, and the possibility of an endless array of cloud management interfaces. If you have any thoughts on necessary advancements in the virtualization space, I'd love to hear them.

Dustin Amrhein joined IBM as a member of the development team for WebSphere Application Server. While in that position, he worked on the development of Web services infrastructure and Web services programming models. In his current role, Dustin is a technical specialist for cloud, mobile, and data grid technology in IBM's WebSphere portfolio. He blogs at http://dustinamrhein.ulitzer.com. You can follow him on Twitter at http://twitter.com/damrhein.


Most Recent Comments

RobW, 02/04/10 04:06:00 PM EST

Sounds like a great opportunity, and really harks back to the days before Java Enterprise. We've standardized the application and interfaces (Servlets/WS/EJBs/JMS). Now we need to standardize the platform itself. What were the driving factors there? How did we get to a point where this was a reality? Seems to me it was a combination of demand and a single player (Sun) leading the charge and opening their approach to the Java Community Process. Sounds like something one of the players needs to do in order to get this done, and then it's in the consumer's hands to adopt and laud the effort.

At that point they'll compete on efficiency, reliability and robustness like the appservers of today.