Information on Dell Cloud Manager tools and general perspectives on cloud computing.

03/26/2014

Enterprise software is often built without dedicating enough time and resources to designing for a better user experience. The result is software that is hard to use and frustrating, and that ultimately reduces productivity. This post is the first in a series, co-written by Brian Taylor and me, that will dive into designing cloud software for a better user experience.

The first experience that a user has with a cloud service is most likely its registration process. Having worked with dozens of cloud providers for Dell Cloud Manager, we have had our share of good and bad experiences. Based on what we learned, we wanted to improve our own registration process, and to that end, we conducted user tests to confirm our theories about what the registration process should be like. This post outlines our findings on what works and provides a glimpse of what customers can expect from our own registration process in the future.

11/05/2013

This is a post by George Reese, Senior Distinguished Engineer and Executive Director, Cloud Computing at Dell.

I’ve been both a vocal critic and supporter of the OpenStack APIs since the early days of the OpenStack project. In fact, the whole “EC2 API versus native API” kerfuffle that’s been making the conference rounds this year goes back at least as far as OpenStack Bexar. This week, at the OpenStack Summit in Hong Kong, I’ll be taking a deep dive into the suite of native OpenStack APIs, discussing their strengths and weaknesses, and even touching on the EC2 compatibility issue.

Any critique of the OpenStack APIs must start with their elegance. The OpenStack APIs are among the most elegant REST (Representational State Transfer) APIs in the cloud computing space. For the most part, they strictly adhere to RESTful principles and are thus very easy to learn and program against. Equally important, these APIs can easily adapt to novel use cases. Conceptually, that’s exactly what you want in a REST API.
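
To make that concrete, here is a minimal sketch of what programming against the Compute (Nova) API looks like. The endpoint URL and token below are placeholders; in a real deployment both come from Keystone.

```python
# Minimal sketch: listing servers via the OpenStack Compute (Nova) v2 API.
# Both values below are placeholders; in a real deployment the endpoint and
# token come from Keystone's service catalog and token API.
import requests

NOVA_ENDPOINT = "https://cloud.example.com:8774/v2/tenant_id"  # placeholder
AUTH_TOKEN = "token-from-keystone"                             # placeholder

resp = requests.get(
    NOVA_ENDPOINT + "/servers",
    headers={"X-Auth-Token": AUTH_TOKEN, "Accept": "application/json"},
)
resp.raise_for_status()

# RESTful design means the response is a predictable resource listing:
# a JSON object with a "servers" array of {id, name, links} entries.
for server in resp.json()["servers"]:
    print(server["id"], server["name"])
```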

If you contrast the elegance of the OpenStack APIs with the ugliness of the EC2 APIs, you may start to wonder what planet the “pro-EC2” people are coming from. The pro-EC2 API crowd, however, is focused on ecosystem. EC2 has a huge existing ecosystem that could easily leverage OpenStack if OpenStack provided fully compatible EC2 APIs. While there are many challenges with making EC2 APIs a de facto standard (see this article I wrote in 2011), the logic behind leveraging the existing AWS ecosystem is sound and compelling. This debate aside, however, Amazon has achieved one critical feature in their APIs that the OpenStack team has yet to achieve with the OpenStack APIs: backwards compatibility.

The single most important feature of any REST API is version negotiation and backwards compatibility. I don’t care how ugly and otherwise unusable your API is. If you handle backwards compatibility right, you are light years ahead of the best designed API that breaks client code with each new release. In the nearly six years I have been writing code against the EC2 and S3 APIs, I have never had Amazon break my existing code with a new release. Code I wrote in 2008 is still working against S3 as 2013 comes to a close.
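
Version negotiation is what makes that possible from the client side. As a rough sketch (the root URL is a placeholder, though advertising supported versions at the API root is how Keystone and Nova actually behave), a client can discover what the server offers and fail loudly when there is no overlap:

```python
# Sketch: client-side version negotiation. The API root advertises the
# versions the server supports; the client picks one it also understands
# and fails loudly, rather than mysteriously, when there is no overlap.
import requests

API_ROOT = "https://cloud.example.com:8774/"   # placeholder endpoint
CLIENT_SPEAKS = {"v2.0", "v2.1"}

versions = requests.get(API_ROOT).json()["versions"]
usable = [v for v in versions if v["id"] in CLIENT_SPEAKS]
if not usable:
    raise RuntimeError("no common API version; server offers: %s"
                       % sorted(v["id"] for v in versions))

# Prefer whatever the server marks CURRENT, falling back to any match.
chosen = next((v for v in usable if v.get("status") == "CURRENT"), usable[0])
print("negotiated", chosen["id"])
```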

OpenStack doesn’t fare so well. In fact, from Bexar to Havana, I don’t think I have had client code survive an OpenStack upgrade during the lifetime of the OpenStack project. Compatibility and interoperability continue to be concerns for people adopting OpenStack, and nowhere are they more visible than in the way OpenStack handles API compatibility. A client cannot rely on two OpenStack deployments of the same version sharing the same API, much less a single deployment across upgrades. As a result, upgrading OpenStack often means breaking all of the tools you have in place in your infrastructure. The OpenStack APIs thus not only lack the ability to leverage the AWS ecosystem, but they also lack the ability to cultivate an OpenStack ecosystem.

Things have been better with the more recent releases. I actually think it is important for the OpenStack team to support both native and EC2 APIs, but I think the focus of the OpenStack API teams should be to “leave no client behind” and ensure interoperability across diverse OpenStack deployments. Only then can OpenStack truly build an ecosystem of third-party tools, both Open Source and proprietary.

10/10/2013

For the most part, when it comes to security, there are not a lot of fundamental differences between running in your own data center and running on a public cloud provider. The differences that do exist, however, are important to understand, both in terms of their implications and so you can deal with them appropriately for your organization. These four differences are even more important if you have to deal with regulatory or legislative compliance regimes. The differences are:

Authentication/Identity Management: Identity management is a huge problem in the cloud space, especially for Infrastructure as a Service (IaaS) providers. Very few provide the ability to leverage an enterprise directory such as LDAP or Active Directory (either natively or via SSO protocols such as SAML or WS-Security). This is less of an issue with Software as a Service (SaaS) providers, who have a much higher tendency to support SSO of some sort. What this translates to, though, is that each cloud you add becomes another IDM end point to manage, each of which increases the chances of not properly de-provisioning a user when they change roles or leave the organization. An ideal response is to deploy a cloud management product that provides this sort of integration for you. Not only do you minimize the number of IDM end points to manage, but users can be automatically provisioned and de-provisioned on the basis of their directory groups. This also has the benefit that users no longer have any native accounts on the cloud provider, so they can’t make changes outside of your control systems. This becomes even more important when tied directly to the next concern.
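
As an illustration of what that provisioning/de-provisioning loop looks like, here is a minimal sketch; directory_members and cloud_api are hypothetical stand-ins for an LDAP/Active Directory query and a provider's account API:

```python
# Illustrative sketch: keep cloud-side accounts in sync with an enterprise
# directory so provisioning and de-provisioning follow group membership.
# directory_members and cloud_api are hypothetical stand-ins for an
# LDAP/Active Directory query and a provider's account API.

def reconcile(directory_members, cloud_api, group="cloud-users"):
    wanted = set(directory_members(group))   # who the directory says belongs
    current = set(cloud_api.list_users())    # who has a cloud account today

    for user in wanted - current:
        cloud_api.create_user(user)          # joined the group: provision
    for user in current - wanted:
        cloud_api.disable_user(user)         # left or changed roles: de-provision
```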

Access control/Authorization: Cloud providers, as a rule, do not offer fine-grained access control to resources within their cloud. In many cases, the degree of coarseness is inconsistent across the various products that they offer, which adds to the confusion of how to properly restrict access across a cloud account. A key example of the coarse-grained access control problem is that with most clouds, once you have the ability to terminate a resource like a VM instance, you have the ability to do that to any VM within that account. This makes it very easy for someone to accidentally or maliciously terminate something they weren’t supposed to. To make matters more confusing, every cloud provider has a different scheme for authorizing users, which makes running across multiple clouds more complex. A well-designed cloud management tool addresses all of these issues by overlaying a consistent level of fine-grained role- and attribute-based access controls that are cloud independent. For organizations using these sorts of tools, operations such as instance termination or firewall changes are usually limited to the user or group that created the instance or rule.
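
A minimal sketch of the kind of overlay check such a tool can apply before a terminate request is ever forwarded to the provider (the user and instance objects and their attributes are invented for this example):

```python
# Illustrative sketch of an overlay authorization check, applied before a
# terminate request is ever forwarded to the provider. The user and instance
# objects and their attributes are invented for this example.

def may_terminate(user, instance):
    if "admin" in user.roles:                    # explicit admin role
        return True
    if instance.created_by == user.name:         # attribute-based: the creator
        return True
    return instance.owning_group in user.groups  # role-based: creator's group

def terminate(user, instance, provider_api):
    if not may_terminate(user, instance):
        raise PermissionError(f"{user.name} may not terminate {instance.id}")
    provider_api.terminate(instance.id)          # only now touch the cloud
```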

Both authentication and authorization/access control are key considerations when dealing with compliance-related data, because you need to be able to demonstrate definitively to the auditors that you know who can make a given set of changes.

Logging/alerting/auditing: This one is huge for compliance. Every compliance regime ever requires that you be able to demonstrate an end-to-end log trail of who did what, when. A dirty secret of the cloud industry is that few, if any, cloud providers, from IaaS to PaaS (Platform as a Service) to SaaS, have the functionality to provide logging, auditing, or alerting on what users have done. Out of the box, this means that unless you are willing to adopt a manual logging process, you cannot use cloud providers for any compliance-related application, whether it falls under PCI, HIPAA, FISMA or anything else. Fortunately, since you’ve been worried about issues #1 and #2 above and you’ve deployed a cloud management product, you also get the logs you need. A high-quality product will not only provide logs of what users did and when they did it, but will also provide automatic alerts when users do certain things. This is fantastic for tracking security-related changes, but can also be extremely helpful when doing incident response.
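
A rough sketch of what that looks like in practice; the action names and the send_alert stand-in are invented for illustration:

```python
# Illustrative sketch: every management action gets an audit record (who,
# what, when), and security-sensitive actions additionally raise an alert.
import json, logging, time
from functools import wraps

audit_log = logging.getLogger("audit")
ALERT_ON = {"terminate_instance", "modify_firewall"}

def send_alert(record):
    print("ALERT:", record)   # stand-in for a real pager/e-mail hook

def audited(action):
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            record = {"who": user, "what": action, "when": time.time()}
            audit_log.info(json.dumps(record))   # who did what, when
            if action in ALERT_ON:
                send_alert(record)               # security-sensitive action
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@audited("modify_firewall")
def open_port(user, firewall_id, port):
    ...  # the actual provider API call would go here
```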

Key management: When it comes to key management, it’s not that it is different compared to running on premises, but rather that it is much more important to have a strong key management process in place, because encryption matters a lot more when dealing with public cloud. Compliance drives this somewhat, because many regimes mandate encryption, and breach disclosure laws usually give organizations a pass on notification if their data was encrypted. Similarly, a lot of organizations that are still uncomfortable with the security of cloud providers like to encrypt everything so they feel more protected. Perhaps the biggest incentive to encrypting data, however, is that in the U.S., service providers don’t need to notify their customers if the customers’ data has been subpoenaed. With traditional outsourcing, you can get notification added as a contractual requirement; with cloud service providers, however, there’s generally only a boilerplate contract that doesn’t include this. As a result, many organizations are encrypting all of their data so that should their provider be subpoenaed, all the authorities get is encrypted data. To be truly effective, though, the encryption keys can’t be stored on the cloud provider, or this becomes an exercise in futility. This way, if an agency wants your data, they have to come to you with a subpoena requesting the keys. A good cloud management product will be able to manage encryption keys and other credentials outside of the clouds being managed, as well as provide for automatic encryption and decryption of relevant resources as appropriate for an application.
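
As a sketch of the client-side approach, using the Python cryptography package (the key path and the upload step are placeholders; in practice the key would live in an HSM or key server under your control):

```python
# Sketch: encrypt data client-side before it ever reaches the provider,
# keeping the key outside the cloud. Here the key lives in a local file;
# in practice it would live in an HSM or key server under your control.
from cryptography.fernet import Fernet

def load_or_create_key(path="/secure/offcloud.key"):
    try:
        with open(path, "rb") as fh:
            return fh.read()
    except FileNotFoundError:
        key = Fernet.generate_key()
        with open(path, "wb") as fh:
            fh.write(key)
        return key

key = load_or_create_key()
ciphertext = Fernet(key).encrypt(b"compliance-scoped customer records")

# upload_to_cloud(ciphertext)  # hypothetical: only ciphertext leaves the site.
# A subpoena served on the provider now yields encrypted bytes; anyone who
# wants the plaintext has to come to you for the keys.
```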

The four differentiators above (authentication, authorization, logging/alerting/auditing, and key management) form the basis of how cloud is different from traditional IT services. While these differences are key to understand, they can easily be addressed with a high-quality cloud management solution that not only makes the cloud more usable but also more secure in the process.

David Mortman is Chief Security Architect at Enstratius (now a part of Dell Software) and has been doing Information Security for well over 15 years. Most recently, he was the Director of Security and Operations at C3. Previously, David was the CISO at Siebel Systems and the Manager of Global Security at Network Associates. David speaks regularly at Blackhat, Defcon and RSA amongst other conferences. Additionally, he blogs at emergentchaos.com, newschoolsecurity.com and securosis.com. David sits on a variety of advisory boards such as Qualys and Virtuosi. He holds a B.S. in Chemistry from the University of Chicago and bakes, cooks and juggles in his spare time.

08/12/2013

This is a guest post by Guy Currier, Dell Boomi Senior Product Marketing Manager for Application and Cloud Integration.

How does a team make the most out of a star player? Usually it’s by giving that player a bigger role: more appearances in more places. And not just playing the game, but getting involved anywhere the player will be seen as valuable. In other words, teams make sure to integrate their stars with as much of their activities as possible.

Well, if your cloud management system is a critical part of your infrastructure operations team, you’re going to want to integrate it with as much of your operations as possible. Given the importance of cloud-service reporting, governance, budget management, log management, and authentication and access management to your organization, you should probably be thinking about how to make them appear in more places, too.

The Dell Boomi integration cloud is a quick, secure, collaborative platform you can use to develop and deploy connections for your various operations functions—cloud and non-cloud. Boomi allows you to use the generic and proprietary protocols and APIs of your infrastructure management tools to share and synchronize information and functions, as well as pass data into and out of back-office applications such as billing, ERP, auditing, or asset management.

What does this let you do? While the “single pane of glass” is a useful concept when it comes to integrated reporting, in reality it’s not what individual team members need. What integration really drives toward is “role-based workspaces”: user interfaces for each need and function (monitoring, governance, finance; system operations, application management), each of which connects with the others as needed.

Then you and your team become more responsive and gain better strategic insights with knowledge and capabilities that are in context, but still focused on the task at hand. And if you think about the connection of IT operations to the back office, you can see how your organization as a whole can become more responsive and insightful. This is what the Dell Boomi integration engine enables.

08/08/2013

In speaking with folks, I often find there is an impression that the concepts of both Cloud and DevOps are at odds with ITIL® (the IT Infrastructure Library, which provides a framework of best practice guidance for IT service management) - and change management in general. This perception is largely due to the fact that many of the controls and authorizations are managed by technology rather than by humans. Every organization has some level of pre-authorization, but when you get into cloud and DevOps, that level skyrockets. That scares a lot of people, especially ones used to dealing with ITIL or ISO 27001 and similar standards that require change management and separation of duties. The important thing to realize is that neither cloud nor DevOps inherently violates either of these principles.

Generally speaking, the idea behind separation of duties is to ensure that someone isn’t making unauthorized and undocumented changes to the system. At first blush, operating models such as Continuous Deployment (CD), where developers can directly push code to production, sound like they violate separation of duties. The reality, however, is that while the developer is the one releasing the code, all of the actual pushing to production is being done by automated software. Before pushing the code, this software performs a variety of unit, functional, and integration tests to check that the code is production quality. The CD software package also does fine-grained logging of and reporting on not only who pushed the code to production, but also what those changes were, enabling post-deployment auditing. This is something that is rarely - if ever - captured effectively by a manual change management process.
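
A minimal sketch of such a deployment gate; the run-tests and push-to-production commands are invented stand-ins for real test and release tooling:

```python
# Illustrative sketch of a continuous-deployment gate: the tooling, not the
# developer, pushes to production, and only after the test suites pass and
# an audit record is written. The run-tests and push-to-production commands
# are invented stand-ins for real test and release tooling.
import getpass, json, subprocess, time

def deploy(build_id):
    for suite in ("unit", "functional", "integration"):
        # Any failing suite aborts the release before production is touched.
        subprocess.run(["run-tests", "--suite", suite, build_id], check=True)

    audit = {"who": getpass.getuser(), "build": build_id,
             "when": time.time(), "action": "deploy"}
    with open("/var/log/deploys.jsonl", "a") as fh:
        fh.write(json.dumps(audit) + "\n")   # fine-grained, automatic audit trail

    subprocess.run(["push-to-production", build_id], check=True)
```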

In many large organizations, such as Facebook, a sophisticated release management process exists. Even though operations isn’t directly involved, there is still a strictly controlled gating process that determines when code is pushed out, to minimize the chances of downtime and ensure that the developer who wrote the code is available to troubleshoot any problems that may occur. And just because a company is doing rapid deployments, it doesn’t mean they are pushing to every production server. Often they will push to a small set of servers and only increase that number once the code is demonstrated to be non-problematic.
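
A rough sketch of that incremental rollout pattern, with deploy_to and healthy as stand-ins for your real deployment and health-check hooks:

```python
# Sketch of an incremental (canary) rollout: push to a small slice of
# servers, check health, and widen only while the code proves itself.
import time

def deploy_to(servers, build_id):
    print("deploying", build_id, "to", servers)   # stand-in for real tooling

def healthy(servers):
    return True   # stand-in for real checks (error rates, latency, alerts)

def rollout(build_id, servers, fractions=(0.01, 0.10, 0.50, 1.0), soak=300):
    done = 0
    for fraction in fractions:
        target = max(1, int(len(servers) * fraction))
        deploy_to(servers[done:target], build_id)
        done = target
        time.sleep(soak)                          # let the new code soak
        if not healthy(servers[:done]):
            raise RuntimeError(f"rollout halted at {done} servers")
```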

IBM did research in 1979 showing that making smaller changes in sequence produces significantly less complex code - and fewer bugs - than making bigger changes all at once. Fewer bugs mean fewer outages and longer time between failures. Combine this with automated systems - which are really good at managing repeated tasks (like deploying software) uniformly - and outages are reduced even further.

Change management serves two purposes. The first is to understand what changes happened to a system, when they happened, and who made the change. This is not only mandated by just about every compliance regime ever, but is absolutely necessary for minimizing the mean time to recover (MTTR) from outages. For these issues, automated systems, like those used by DevOps folks, are far superior to manual processes.

The other reason for change management is to ensure that changes made in one environment don’t impact other environments. What I outlined above is great for managing single applications. But when you get into a larger multi-application space where the host organization is heavily siloed, you are still going to need a manual change management process to help coordinate changes, so that groups don’t step all over each other. Knowing that a major network upgrade is happening on a particular weekend, for example, may mean that you avoid scheduling a major upgrade to your CRM system at the same time.

But what about ITIL? ITIL requires the creation of an entire bureaucracy around change management that I have to follow, doesn’t it? Actually, no. The fact is that ITIL requires that you have a strong, effective and auditable change management process. ITIL started back in the 80s and really took off in the late 90s and early 00s in very large enterprises. At that point, the only option for most organizations was to do everything manually, and with organizations typically being heavily siloed at that time, the natural result was a lot of bureaucracy, time-consuming paperwork, and manual approvals. Nothing in ITIL says that you can’t automate, and nothing in ITIL says you can’t have preapprovals. What you need is documented policies and processes, and you need to be able to demonstrate that your organization is following those policies and procedures consistently and effectively. A well-architected cloud/DevOps based system actually does a better job of enforcing these controls, with the appropriate logging, than any manual process could hope for. Which would you rather have: a system that records what is supposed to happen, or a system that records what actually happened, in a much finer-grained, automated fashion?

DevOps and cloud don't conflict with ITIL; in fact, implemented properly as part of an automated development and release process, they actually support ITIL more effectively - and much more efficiently - than the manual ITIL methods that most organizations use. Cloud computing supports the new, automated model of application and infrastructure management, so be sure to include DevOps in your cloud roadmap for the most effective results.

David Mortman is Chief Security Architect at Enstratius (now a part of Dell Software) and has been doing Information Security for well over 15 years. Most recently, he was the Director of Security and Operations at C3. Previously, David was the CISO at Siebel Systems and the Manager of Global Security at Network Associates. David speaks regularly at Blackhat, Defcon and RSA amongst other conferences. Additionally, he blogs at emergentchaos.com, newschoolsecurity.com and securosis.com. David sits on a variety of advisory boards such as Qualys and Virtuosi. He holds a B.S. in Chemistry from the University of Chicago and bakes, cooks and juggles in his spare time.

05/02/2013

Enstratius today announced support for CloudCentral’s Cloud Platform, adding to the list of more than twenty leading public and private clouds that can be accessed and managed through the Enstratius cloud management solution. The press release can be viewed here.

The addition of CloudCentral is a perfect example of why a cloud abstraction layer is important.

Enstratius uses Dasein Cloud to quickly and easily connect to multiple public and private clouds, hypervisors, storage platforms, and cloud platforms such as CloudCentral.

Dasein Cloud, initially developed by Enstratius co-founder and CTO George Reese, is an open-source, metadata-based abstraction layer that enables developers to write an application once that can talk to any cloud. This allows you to add new clouds into your production environment without needing to release new application code.

Dasein Cloud can be included in any JVM-based application (e.g. Java, Scala, Groovy, Jython, and Clojure) under the terms of the Apache Software License 2.0. The source code is available here, with binaries published to Maven Central.
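
Dasein Cloud's actual interfaces are Java and far richer than anything shown here, but the underlying write-once idea is easy to illustrate. Purely as a sketch (in Python, with all names invented), application code targets one abstract interface while per-cloud adapters handle each provider:

```python
# Illustration only: the abstraction-layer pattern behind a library like
# Dasein Cloud (whose real, much richer interfaces are Java). Application
# code targets one interface; per-cloud adapters translate to each provider.
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    @abstractmethod
    def launch_server(self, image_id: str, size: str) -> str: ...

class OpenStackAdapter(CloudProvider):
    def launch_server(self, image_id, size):
        return f"openstack-server-from-{image_id}"   # would call Nova here

class Ec2Adapter(CloudProvider):
    def launch_server(self, image_id, size):
        return f"ec2-instance-from-{image_id}"       # would call EC2 here

def provision_app(cloud: CloudProvider):
    # Written once. Adding a new cloud (say, CloudCentral) means writing a
    # new adapter, not changing or re-releasing this application code.
    return cloud.launch_server("base-image", "medium")

print(provision_app(OpenStackAdapter()))
print(provision_app(Ec2Adapter()))
```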

04/11/2013

Ever since the advent of Infrastructure-as-a-Service (IaaS) providers, a common complaint has been the difficulty of migrating machine images between providers. This is particularly difficult given the wide range of virtualization formats, hypervisor options and supported custom kernels. For some providers, this even causes issues between their own regions or clouds. As a result, there is often concern about vendor lock-in, as well as the propagation of the idea that building a multi-region/multi-cloud application deployment is impossible.

The reality, however, is that image portability isn’t the answer. It’s not even the question.

What you really need to be asking is: “How portable are my applications and data?” Or, to rephrase in a more actionable way: “Are the operating systems I need to use available from my cloud provider(s) in the appropriate region(s)/cloud(s) that I need to use?”

Idealists will say that if all of your providers are using the same cloud platform, such as OpenStack, then image portability will “just work”. Not so.

Even if all of your CSPs are based on the same platform, there are other compatibility issues:

Is the provider using the same hypervisor?

Are they supporting comparable kernels?

Have they made customizations in either that break functionality?

One approach to dealing with these issues is to abstract away as many of your operating system dependencies as possible, much like IaaS clouds abstract away the hardware.

Start by using a configuration management & automation tool like Opscode Chef or Puppet to template out what your systems look like. When you switch or add another provider, the installations and configurations of all of the standard pieces you use, such as Apache, Tomcat, MySQL, Riak, Cassandra, and all of their dependencies, such as Java, are automatically handled. This can even include things like installing SSL certs and creating any necessary users on the VMs.
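
Chef and Puppet recipes are Ruby-based DSLs, so the following is only a language-neutral illustration (in Python, with example package and command names) of the declarative, idempotent style they encourage: each step describes an end state and is a no-op when the system is already there.

```python
# Illustration of the idempotent, declarative style Chef and Puppet recipes
# use (their real recipes are Ruby DSLs): each step describes an end state,
# so the same template can rebuild a node on any provider.
import os, pwd, subprocess

def ensure_package(name):
    # dpkg -s exits nonzero when the package is absent; install only then.
    if subprocess.run(["dpkg", "-s", name], capture_output=True).returncode:
        subprocess.run(["apt-get", "install", "-y", name], check=True)

def ensure_user(name):
    try:
        pwd.getpwnam(name)                 # already exists: do nothing
    except KeyError:
        subprocess.run(["useradd", "--system", name], check=True)

def ensure_file(path, content):
    if not (os.path.exists(path) and open(path).read() == content):
        with open(path, "w") as fh:
            fh.write(content)

for pkg in ("apache2", "tomcat7", "mysql-server"):   # example packages
    ensure_package(pkg)
ensure_user("appuser")
```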

This approach significantly reduces the amount of data you need to move between cloud providers, as well as easing the transitions with regard to dependencies created by variations in operating systems or CSPs. You can now focus on just moving your applications and their associated configurations and data.

If you want to go more hardcore, you can abstract things even further and use Chef/Puppet to install a Platform-as-a-Service (PaaS) environment to make the application migration even more seamless. Regardless of how far you go down this path, each bit you do to make yourself more independent from your cloud provider will ultimately make it easier to manage multiple cloud providers. Not only will it help you avoid lock-in, but it also protects you from lock-out by other vendors you might want to use today or in the future.

David Mortman is the Enstratius Chief Security Architect and has been doing Information Security for well over 15 years. Most recently, he was the Director of Security and Operations at C3. Previously, David was the CISO at Siebel Systems and the Manager of Global Security at Network Associates. David speaks regularly at Blackhat, Defcon, and RSA amongst other conferences. Additionally, he blogs at emergentchaos.com, newschoolsecurity.com and securosis.com. David sits on a variety of advisory boards such as Qualys and Virtuosi. He holds a B.S. in Chemistry from the University of Chicago and bakes, cooks and juggles in his spare time.

01/16/2013

CSO Online recently published an article titled "7 Deadly Sins of Cloud Computing" and while the content wasn't completely wrong, the tone of the piece and the level of general negativity caused the useful bits – for me at least – to get lost in the noise. Therefore, I've put together a more positive take on how to be successful with cloud computing.

In order to better explain myself, I've organized my ideas into mitzvahs. What's a mitzvah? A mitzvah is a commandment or rule, but may also have the connotation of a good or worthy deed or action. Without further ado, here are five mitzvahs of cloud computing:

1) Understand ownership and responsibilities.

Cloud is yet another form of outsourcing. What makes it interesting is that as you move up the stack from IaaS to PaaS to SaaS (as well as from cloud to cloud within the same type of aaS), what you are responsible for, versus what the provider is responsible for, changes. Understanding where these ownership lines are and where they overlap is essential to a long-term, happy relationship. This applies not only to operations, but also to security. Among the many things to consider are incident response, disaster recovery/business continuity, compliance, and legal issues around subpoenas and notifications. The Cloud Security Alliance has some great documentation on this as well as a certification program.

2) Build for reliability.

Understand that elasticity and on-demand do not equal reliability. Accept that cloud providers often rely on commodity instead of enterprise-class gear. As a result, failure rates of individual components are higher. Architect your applications with this in mind. As appropriate, deploy across multiple zones, regions or cloud providers. Keep in mind two key aspects of doing so: a) the cost implications and b) the performance implications.

3) Make your applications cloud agnostic.

In other words, abstract away the cloud provider as much as possible. Use tools like Chef and Puppet for systems management – that way, all you need to do to switch clouds is obtain a matching OS and move the data. If possible, move up the stack and leverage PaaS offerings that you can run on multiple clouds, such as CloudFoundry. Finally, invest in a cloud management tool for a uniform user experience.

4) Have an identity management strategy.

There's nothing wrong with leveraging the native authentication and authorization functions provided by your CSP. At some point, whether due to usage increases, compliance needs or some other factor, it may become more effective to switch to leveraging your enterprise directory store. This will relate closely to mitzvah #1. Perform this integration via SSO (most often SAML integration) or some sort of directory synchronization. The key aspect here is to plan ahead and have at least a high level plan of what it's going to take to make that transition when it becomes necessary. Regardless of your approach, there are a variety of commercial and open-source solutions to help you.

5) Monitor costs.

Paying by unit of time is awesome. Self-service is awesome. But the two together can easily translate to a much larger bill than anticipated. Remember that costs are also incurred for data storage and transmission. Keep this in mind when planning mitzvah #2 above. When appropriate, leverage auto-scaling to limit resource utilization. Keep an eye on those bills or use a service to monitor them for you, as sketched below. Some CSPs provide this functionality natively, and there are a variety of third-party offerings in this space as well.
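
As a sketch of the kind of monitoring involved, here is a naive budget projection; get_month_to_date_spend and notify are stand-ins for your provider's billing API and your alerting hook:

```python
# Sketch: a scheduled check that projects the month's spend from usage so
# far and alerts before the bill surprises anyone.
import calendar, datetime

BUDGET = 10_000.00   # monthly budget, dollars (example value)

def get_month_to_date_spend():
    return 6_200.00   # stand-in for a call to your provider's billing API

def notify(message):
    print("COST ALERT:", message)   # stand-in for your alerting hook

def check_spend():
    today = datetime.date.today()
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    spent = get_month_to_date_spend()
    projected = spent / today.day * days_in_month   # naive linear projection
    if projected > BUDGET:
        notify(f"projected ${projected:,.0f} exceeds budget ${BUDGET:,.0f}")

check_spend()
```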

By keeping these five mitzvahs top of mind, your overall cloud computing experience will become more pleasant, productive, and successful.

David Mortman is the enStratus Chief Security Architect and has been doing Information Security for well over 15 years. Most recently, he was the Director of Security and Operations at C3. Previously, David was the CISO at Siebel Systems and the Manager of Global Security at Network Associates. David speaks regularly at Blackhat, Defcon and RSA amongst other conferences. Additionally, he blogs at emergentchaos.com, newschoolsecurity.com and securosis.com. David sits on a variety of advisory boards such as Qualys and Virtuosi. He holds a B.S. in Chemistry from the University of Chicago and bakes, cooks and juggles in his spare time.

11/13/2012

I attended Bernard Golden’s session on “The Democratization of IT” at last week’s Cloud Expo. Here’s what he had to say:

Perception is reality

There is a fundamental difference in the way IT perceives itself vs. the way business users perceive IT. IT considers itself to be a hard-working, intelligent group, diligently solving the problems of the business. Business users, however, tend to disagree. They see IT as an organization that throws up road blocks and red tape, making it more difficult, slow, and painful for said business users to get their jobs done.

When it comes to cloud computing, this is no different. AWS, the leading cloud provider, has experienced phenomenal – almost exponential – growth since it launched in 2006. IT organizations typically perceive this as a threat. They do not want to lose control. So rather than embracing the new technology, they think of ways to build their own cloud infrastructure, thinking that it will be more cost effective, more secure, and customized to their organization’s needs. But this is a major project, and can take years to complete. In the meantime, the business users need the flexibility and elasticity offered by the cloud services available now, and have the same reaction to the delay as they did to traditional IT. So what happens? Business users are likely to go directly to AWS or another cloud provider to get what they need.

What do business users need?

The National Institute of Standards and Technology (NIST) Definition of Cloud Computing was released in late 2011. This document is the most comprehensive overview of the characteristics, service models, and deployment models that make up cloud computing today, and can be used as a guideline for ushering IT into the new, democratized world.

NIST’s 5 Characteristics of Cloud Computing:

On-demand self-service

Broad network access

Resource pooling

Rapid elasticity

Measured service

The IT revolution has arrived

People no longer have to use what IT provides – if IT doesn’t deliver what the business needs, users will bypass IT altogether. In order to survive as a valuable part of the business model, IT must become a service provider.

What does it mean to be a service provider in a democratized world?

IT is in the business of infrastructure management. It doesn’t matter whether it’s an internal or external data center, the cloud, a single server – IT needs to provide consistent management to run applications for the business units. Bernard used going to a bank as an example of this practice: not that long ago, people had to stand in line and wait for a teller to be available during normal banking hours. Now, people can go to an ATM at their own convenience, or even deposit checks using just their smartphone. People aren’t going to wait if there is a way they can get what they need quickly and easily.

IT must take action to solve the issues inherent in the new business model, and address the needs of the business users (as laid out by the five characteristics above).

5 key actions for IT as a service provider

Enable agility

Provide tools

Be a best practices resource

Implement company-wide cloud governance

Provide transparent economics

Each of these actions relates to a corresponding NIST characteristic, and when properly executed, will gain IT the respect and trust from business users that it deserves.

About the Author

Danalynne Wheeler has 19 years of experience in the software marketing and events industries. Danalynne joined enStratus in January 2012 as Marketing Manager. Prior to that, she spent 10 years as Director of Marketing at Sybase, an SAP company. You can follow Danalynne on Twitter at @dwheeler11.

08/15/2012

For the last few weeks, Rackspace has been quietly testing a tool that makes it easy to set up a basic install of OpenStack for testing or even for small-scale production implementation on your own site, on your own hardware.

The installer downloads as a 2GB ISO image you can burn to a disk. Boot your servers from this disk, and the installer asks some simple questions, mostly involving your network configuration and the role you want the server to play: OpenStack controller, compute node, or an all-in-one setup.

Upon installation, you get a full OpenStack console, with an admin user and one preconfigured standard user on its own tenant ID. All services are registered into Keystone and ready for auth and discovery via API. You get Glance, Horizon, and Nova Management on the control node, and Nova Compute on the hypervisors. You'll need VT-capable nodes, as Rackspace's installers depend on KVM.
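
As an illustration of that auth-and-discovery flow against the Keystone v2.0 API that was current at the time (only the controller address and credentials below are placeholders):

```python
# Sketch: authenticate against Keystone v2.0 (current at the time) and list
# the services registered in the returned catalog. Only the controller
# address and credentials are placeholders.
import requests

KEYSTONE = "http://192.168.1.10:5000/v2.0"   # placeholder controller address

resp = requests.post(
    KEYSTONE + "/tokens",
    json={"auth": {"tenantName": "demo",
                   "passwordCredentials": {"username": "demo",
                                           "password": "secret"}}},
)
resp.raise_for_status()
access = resp.json()["access"]
token = access["token"]["id"]   # use as X-Auth-Token on later calls

# Everything the installer registered shows up in the service catalog,
# ready for discovery: Nova, Glance, Keystone itself, and so on.
for service in access["serviceCatalog"]:
    print(service["type"], "->", service["endpoints"][0]["publicURL"])
```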

I spun up an all-in-one node as a VM on my laptop, with 4GB of RAM, and another node for my enStratus install. Installing enStratus with our new Chef-solo based installers is very easy and straightforward. Connecting it to OpenStack was a simple process of adding the new OpenStack API endpoint to the list of available clouds in our enStratus install, and giving the OpenStack credentials to enStratus.

I launched my first VMs in my new OpenStack install from enStratus within a few minutes of getting the API connected. Honestly, this was one of the most straightforward cloud installs I've worked with in quite some time. Everything's configured and connected for you. The install Just Works, which was a nice surprise.

There are some limitations, however. There's a relatively fixed network model, so you'll likely need to adjust your external systems to this OpenStack install, not vice-versa. Though an implementation that includes Swift is coming soon, today you'll have to rely on some other common object storage if your applications need it. Finally, there's no network install - you'll need to put your install disk into each node you want to install. For small installations, this is not a big problem. If you're managing dozens of compute nodes, it can get a bit unwieldy. Because the install is Chef-based, however, it shouldn't take a lot of work to make it more distributed.

Rackspace plans on adding Swift support, and has been taking feedback from testers. I suspect many of the limitations above will be addressed in the future. The installer is definitely worth a look if you're planning an OpenStack implementation.