
November 21, 2013

The number of open source frameworks available today is growing at an enormous pace, with over one million unique open source projects, as indicated in a recent survey by Black Duck.

The availability of these frameworks has changed the way we build products and structure our IT infrastructure: products are often built by integrating different open source frameworks rather than by developing the entire stack ourselves.

The speed of innovation is also an area that has changed as a result of this open source movement. Where previously innovation was mostly driven by the speed of development of new features, in the open source world innovation is greatly determined by how fast we can integrate and take advantage of new open source frameworks.

Having said that, the process of trying out new open source frameworks can still be fairly tedious, especially if there isn't a well-funded company behind the project. Let me explain why.

Open Source Framework Integration Continuum

The integration of a new open source framework often goes through an evolution cycle that I refer to as the integration continuum.

A typical integration process often starts with initial exploration, where we continuously explore new frameworks, sometimes without any specific need in mind. At this stage, we want to get a feel for the framework, but don't necessarily have the time to take a deep dive into the technology or deal with complex configurations. Once we find a framework that we like and see a potential use for it, we start to look closer and run a full POC.

In this case, we already have a better idea of what the product does and how it can fit into our system, but we want to validate our assumptions. Once we are done with the POC and have found what we were looking for, we start the integration and development stage with our product.

This stage is where we are often interested in getting our hands on the API and the product features, and need a simple environment that will allow us to quickly test the integration of that API with our product. As we get closer to production, we get more interested in the operational aspects and need to deal with building the cluster for high availability, adding more monitoring capabilities and integrating it with our existing operational environment.

Each of those steps often involves downloading the framework, setting up the environment and tuning it to fit the needs of the specific stage (i.e. trial, POC, dev, production). For obvious reasons, the requirements of each stage are fairly different, so we typically end up with a different setup process and a different set of tools at each step, continuously ripping out and replacing the setup from the previous step as we move from one stage to the next.

The Friction Multiplier Effect

The friction described above applies to a single framework. In reality, however, during the exploration and POC stages we evaluate more than one framework in order to choose the one that best fits our needs.

In addition, we often integrate with more than one product or framework at a time, so in reality that overhead is multiplied by the number of frameworks we are evaluating and using. The overhead of the initial exploration and POC stages often goes entirely to waste: we typically choose one framework out of several, which means we spent a lot of time setting up frameworks that we are never going to use.

How can we make the process simpler?

In a nutshell, we combined two main principles: SaaS as a means to abstract away the infrastructure complexity, and open source practices and tools to allow for full customization.

We use a SaaS-based model to provide a self-service experience that allows users to launch their framework of choice through a simple one-click deployment. This is particularly useful in the initial exploration stages, when users want to preview many products and don't have time to spend on setup and installation. The difference between our service and other SaaS offerings, which are typically built per product, is that Cloudify provides single-click deployment of many different frameworks through a single service.

By relying on open source practices and tools, such as GitHub, we allow different degrees of customization at each stage, to the point where users can take the entire framework that we run online into their own environment with full customization.

Launching a New Application Catalogue on HP’s OpenStack-based Public Cloud

This week we are launching a new application catalogue service on HP’s OpenStack-based Public Cloud, with the aim to make the use of open source frameworks on OpenStack simpler than on any other cloud.

The service includes the following main features:

1. Hassle-free preview - A one-click experience for trying out several open source services for 60 minutes.

2. Trial as a Service - Aimed at a simple, one-week POC.

3. Frictionless transition to an unlimited account - Gives you the exact same one-click experience, but in this case the service is launched under your account and managed by you.

4. Built on GitHub - The catalogue service is built on Git, which means that you can use the familiar Git experience to add new services and customize them as you would any other open source project.

5. Fully open source - Having the catalogue (including the UI), the orchestration engine and the OpenStack-based cloud all in open source means that you can clone this entire framework anywhere you like, including into your own private environment. It also means you can leverage the investment you made during the trial and POC stages in later phases, since you use the exact same tools throughout the entire process.

With this tool, users can also launch apps outside of the catalogue service through a unique embed model, similar to the way you would embed a video from YouTube. How is that different from the Amazon, GAE or Azure offerings?

All other clouds limit their services to a particular set of pre-canned frameworks. In addition, the infrastructure and tools that most clouds offer are not open source, which means you will not be able to use the same services in your own environment. It also introduces a higher degree of lock-in.

This flexibility is critical as the industry moves toward more advanced development and production scenarios, and its absence can become a major roadblock. If you outgrow the out-of-the-box offering, you have no way to escape; your only way out is to rebuild the entire environment from scratch using a different set of tools.

The hassle-free preview model, as well as the fact that you can run any of the services outside of the catalogue service, makes the Cloudify Application Catalogue significantly simpler. This is especially true when comparing it with the existing alternatives on other clouds, which often require registration and a credit card just to get started.

Final Words

In today’s world, innovation is key to allowing organizations to keep up with the competition.

The use of open source has enabled everyone to accelerate the speed of innovation. However, the process of exploring and trying open source frameworks is still fairly tedious, with much friction in the transition from initial exploration to full production.

With this new service, we believe that we have changed the game, showing that we can make the experience of deploying new open source services and applications on OpenStack simpler than on any other cloud. By taking an open source approach, we guarantee that you won't hit a roadblock as you advance through the process, and you avoid the risk of a complete rewrite or lock-in. At the same time, we provide a hassle-free, one-click experience through an "as-a-service" offering for deploying any of our open source frameworks of choice. And because we use the same underlying infrastructure and set of tools throughout all the cycles, users can carry their experience and investment from one stage to the next.

With all this we now allow many users and organizations to increase their speed of innovation dramatically by simplifying the process of exploring and integrating new open source frameworks.

The shift in thinking about enterprise cloud consumption also poured cold water on the "DevOps" concept advocated by vendors and pundits with their foot in the IaaS world. When organizations embrace PaaS instead of infrastructure services, we don't need the DevOps marriage and the associated cultural change (believe me, this cultural change is giving sleepless nights to many IT managers, and some consultants are even making money helping organizations realize it). With PaaS, organizations can keep the existing distinction between the Ops and Dev teams without worrying about the cultural change. In fact, with cloud computing, the role of Ops is not going away; it stays in the background, offering an interface that developers can manage themselves.

Krishnan represents one of the common attitudes in the debate between the two main paradigms for developing and managing applications on the cloud:

PaaS -- PaaS takes a developer-centric, application-driven approach. A PaaS platform provides generic application containers to run your code, and deals with all the operational aspects needed to run it, such as deployment, scaling and fail-over.

DevOps -- DevOps takes a more operations-driven approach. With DevOps, you get tools to automate your operational environment through scripts and recipes, and keep full visibility and control over the underlying infrastructure.

The Difference Between PaaS and DevOps

Both PaaS and DevOps aim toward the same goal -- reducing the complexity of managing and deploying applications on the cloud. But they take a fairly different approach to deliver on that promise.

Developers may ask: "if I have a self-service portal for deploying applications (aka PaaS), do I need SysAdmins at all?"

SysAdmins may ask: "isn't PaaS just a monstrous black box that prevents me from provisioning the specific services we need to deploy real-world apps?" ... The typical SysAdmin thinks that they can get to 75% of PaaS functionality with DevOps tools like Chef without giving up any systems architecture flexibility.

Carlos Ble's post, Goodbye Google App Engine (GAE), is a good example of why the initial perception of GAE as a simple platform that provides extreme productivity can be completely wrong.

...developing on GAE introduced such design complexity that working around it pushes us 5 months behind schedule.

Part of the reason Carlos went through that experience, in my opinion, is that in the course of trying to make GAE extremely productive, its creators made the platform too opinionated, to the point where you lose all the potential productivity gains in trying to adopt their model. In addition, with a platform like GAE you have very little freedom to leverage existing frameworks, such as your own database, messaging system, or any other third-party service that can itself be a huge contributor to productivity.

Instead, you're completely dependent on the platform provider's stack and pace of development, and that in itself can work against agility and productivity in yet another dimension. In this specific example, Carlos couldn't use a specific version of a Python library that would have made him more productive, and instead had to work around issues that had already been solved elsewhere. This is a good example of how a lack of flexibility leads to poorer productivity, even in the case of simple applications.

Putting DevOps & PaaS Together

It looks like more people in the industry have come to recognize that rather than looking at DevOps and PaaS as two competing paradigms, it might be best to combine the two, as Christopher Knee pointed out in his post:

What if you could get a PaaS that wasn't a black box, enabling developers to deploy apps easily while still giving SysAdmins the ability to provision any services they needed (a la Cloud Foundry)?

In 2012, we’ll see many of the DevOps tools, such as Chef and Puppet, integrated into application platforms, making it easier to deploy complex applications onto the cloud. In the same way, we’re going to see more Application Platforms adopting the automation and recipe model from the DevOps world into the application platform. The latter have the potential to transform the opinionated PaaS offerings as we know them today, with Heroku and GAE leading that trend, into a more open PaaS offering that better fits the way users develop apps today, and provide more freedom to choose your own stack, cloud, and application blueprint.

What Makes Cloudify and CloudFoundry PaaS Jailbreakers?

A PaaS jailbreaker is a PaaS that allows developers to use whatever tools they want to build their cloud applications, while the platform tackles the deployment, scaling and management of these apps in the cloud data center.

VMware CloudFoundry is one of the more notable references in that category. Quoting Christopher Knee:

CloudFoundry runs anywhere, including on your laptop. CloudFoundry's service container concept is particularly strong, kind of an appliance on steroids.

These ideas were the founding concept behind Cloudify, i.e. putting DevOps and PaaS together in a single framework. As with CloudFoundry, Cloudify enables you to break away from the "black box PaaS". However, even though CloudFoundry is significantly more open than most other PaaS alternatives, at its core it is still based on a "my way or the highway" approach (aka an opinionated architecture), which forces you to fit into a specific blueprint mandated by the platform. Cloudify, on the other hand, pushes the envelope even further by adopting the concept of recipes first introduced by DevOps frameworks such as Chef and Puppet, and introduces more application-driven recipes through a Domain Specific Language (DSL) that extends the Groovy language.

The Cloudify recipes give you the full power to plug in any application stack on any cloud (including a non-virtualized environment) and manage it much as you would in your own data center or on your own machines. You can also call Chef and Puppet from within the recipes. All this, without hacking the framework itself. As with other similar DSLs, the Cloudify DSL was designed to express even the most complex application management tasks, such as recovery from a data center failure, in a single line, avoiding the verbose scripts and API calls that are often involved when you work at the infrastructure level.
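To make this concrete, here is a sketch of what a service recipe in such a Groovy-based DSL might look like. The service name, script file names and values are illustrative assumptions, not taken from an actual catalogue recipe:

```groovy
// Illustrative sketch of a service recipe (names and values are assumptions).
// The lifecycle block maps each phase (install/start/stop) to a script,
// keeping the operational logic explicit and easy to customize.
service {
    name "tomcat"
    type "WEB_SERVER"
    numInstances 1

    lifecycle {
        // Each event points to a script that runs on the target machine
        install "tomcat_install.groovy"
        start "tomcat_start.groovy"
        stop "tomcat_stop.groovy"
    }
}
```

Because the recipe is plain text built on Groovy, it can be versioned in Git and shared between teams like any other source artifact, which is what enables the collaboration model described below.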

All this makes Cloudify an even more open alternative that fits a large variety of the current enterprise application stacks, including:

JEE applications

Big Data applications

Multi-tier applications

Native applications (C++, etc.)

.Net, Ruby, Python, PHP applications

Multi-site applications

Low-latency applications (that can't run on VMs)

It also makes Cloudify more open for special customization of the existing stack, like in the case of:

An application that needs a certain version of MySQL (not the one that comes with the framework)

An application that needs to run on Red Hat (not Ubuntu), or, even more interesting, a case where multiple applications each need a different OS served at the same time.

Cloudify also provides a more advanced level of control geared for mission-critical apps, including:

Monitoring the application stack and topology

Adding custom application metrics

Adding custom SLAs

It can also work on a wide variety of cloud environments, including Microsoft Azure, as well as non-virtualized environments.

One of the great powers of the recipe is that it is a great collaboration tool. Once you develop a recipe, it is very easy to share it with different groups. Internal groups like development, QA and operations can use the recipe as a programmatic definition of their environment. It can also be shared between the product team and the pro-services team, so that sales and pro-services can easily install and update product versions in a consistent way, as well as reproduce customer scenarios and share them with the support team. Recipes are also a great tool for collaboration across a wider community network, where users can share common recipes and best practices over the web.

Quick Introduction to Cloudify Recipes

Below is a short snippet that shows a simple application recipe for a typical Java-based web application, with JBoss as the web container and MySQL as the database. The application recipe describes the services that comprise the application and their dependencies. The details of how to run MySQL and JBoss are provided in a separate recipe for each individual service. A more detailed description of what a service recipe looks like can be seen here.
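A sketch of such an application recipe, with illustrative names (the application and service names are assumptions, not copied from an actual catalogue recipe):

```groovy
// Illustrative application recipe (names are assumptions): the application
// is composed of two service recipes, and dependsOn ensures the MySQL
// service is up before the JBoss web container starts.
application {
    name "travel-app"

    service {
        name "mysql"
    }

    service {
        name "jboss"
        dependsOn = ["mysql"]
    }
}
```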

To get a taste, you can try one of the available recipes, such as Cassandra, MongoDB, Tomcat, JBoss, Solr, etc., simply as an easy way to try out these products on your own desktop or on any of the supported clouds, without the hassle that is often involved in doing so, and even without any direct relation to Cloudify per se.