Monthly Archives: August 2015

This week at VMworld 2015 in San Francisco, VMware and Rackspace announced a new interoperable OpenStack cloud architecture. OK, that’s a big description for something that is actually very simple in concept, and highly valuable to customers who want to quickly get up and running with production-grade OpenStack clouds without a lot of the complexity associated with OpenStack implementations.

The basic premise of this architecture is that the end game for organizations is, and should be, delivering OpenStack Infrastructure as a Service to their users. If you write your applications and infrastructure automation against the standard OpenStack APIs, that automation should work with any OpenStack cloud, regardless of the underlying technologies. This promise of OpenStack has enormous value for customers: if your application and infrastructure automation are portable, businesses can move across clouds, leverage regional OpenStack clouds to expand into new geographies, and swap out vendors as their business requirements change.

With the interoperable OpenStack cloud architecture, customers can start with either VMware Integrated OpenStack or Rackspace Private Cloud powered by OpenStack. Customers can build all their infrastructure automation using standard OpenStack APIs on top of these platforms. Our two companies will work to ensure this automation works with the respective OpenStack clouds. This is in line with the direction the OpenStack Foundation is taking to enable multi-cloud environments via identity federation.

We are excited to announce the interoperable OpenStack cloud architecture and look forward to engaging with customers. Here’s what the architecture looks like:

For customers that want to run heterogeneous infrastructure underneath the OpenStack API layer, we believe the best option is a multi-vendor approach to their OpenStack deployment. In this way, their environments are optimized for the underlying infrastructure, which improves operations and simplifies the deployment and management of their OpenStack clouds. We are collaborating with Rackspace on exactly this type of multi-vendor architecture for OpenStack: one that removes lock-in at the infrastructure layer.

It is that magical time of the year again, when VMware is honored to host more than 23,000 attendees at VMworld 2015 in San Francisco. The event is an annual destination for organizations looking to learn more about technology, innovation and how they can be more awesome in their job.

We are excited to announce VMware Integrated OpenStack 2.0 just six months after we released version 1.0 for general availability. It is expected to be available for download before the end of Q3 2015. Here’s what’s new in this release:

Kilo-based: VMware Integrated OpenStack 2.0 will be based on the OpenStack Kilo release, making it current with upstream OpenStack code.

Seamless OpenStack Upgrade: VMware Integrated OpenStack 2.0 will introduce an industry-first seamless upgrade capability between OpenStack releases. Customers will be able to upgrade from v1.0 (Icehouse) to v2.0 (Kilo), and even roll back if anything goes wrong, in a more operationally efficient manner.

Additional Language Support: VMware Integrated OpenStack 2.0 will now be available in six more languages: German, French, Traditional Chinese, Simplified Chinese, Japanese and Korean.

LBaaS: Load Balancing as a Service will be supported through VMware NSX.

Ceilometer Support: VMware Integrated OpenStack 2.0 will now support Ceilometer, with MongoDB as the backend database.
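For context on what this pairing involves, in upstream OpenStack pointing Ceilometer at MongoDB is a one-line setting in ceilometer.conf. The connection string below is a placeholder for illustration; VMware Integrated OpenStack manages this configuration for you:

```ini
[database]
# Placeholder hostname and credentials -- VIO configures this itself.
connection = mongodb://ceilometer:secret@mongo-host:27017/ceilometer
```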

App-Level Auto Scaling Using Heat: Auto scaling will enable users to set up metrics that scale application components up or down. This will enable development teams to address unpredictable changes in demand for their app services. Ceilometer will provide the alarms and triggers, Heat will orchestrate the creation (or deletion) of scale-out components, and LBaaS will provide load balancing for those components.
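The division of labor described above can be sketched in a Heat Orchestration Template. This is a minimal, hypothetical sketch, not a production template: the image, flavor, thresholds, and resource names are placeholders, and a real template would also wire in an LBaaS pool member for each scaled-out server.

```yaml
heat_template_version: 2014-10-16

resources:
  # Heat manages a group of identical servers between min_size and max_size.
  asg:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 4
      resource:
        type: OS::Nova::Server
        properties:
          image: my-app-image   # placeholder image
          flavor: m1.small      # placeholder flavor

  # The policy Heat executes when an alarm fires: add one server.
  scaleout_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: asg }
      scaling_adjustment: 1
      cooldown: 60

  # Ceilometer raises the alarm when average CPU exceeds the threshold.
  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 80
      comparison_operator: gt
      alarm_actions:
        - { get_attr: [scaleout_policy, alarm_url] }
```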

Backup and Restore: VMware Integrated OpenStack 2.0 will include the ability to back up and restore OpenStack services and configuration data.

Advanced vSphere Integration: VMware Integrated OpenStack 2.0 will expose vSphere Windows guest customization. VMware admins will be able to specify attributes such as the ability to generate new SIDs, assign admin passwords for the VM, manage computer names, and more. There will also be added support for more granular placement of VMs by leveraging vSphere features such as affinity and anti-affinity settings.

In our last installment, we discussed the simplicity of the VMware Integrated OpenStack deployment process. Today, we will discuss how VMware Integrated OpenStack users can provision virtual machines. First, we need to get familiar with some OpenStack terminology:

Instance – a running virtual machine in your environment. The OpenStack Nova service provides users with the ability to manage hypervisors and deploy virtual machines.

Image – similar in concept to a VM template. The OpenStack Glance service maintains a collection of images from which users will deploy their instances.

Volume – this is an additional virtual disk (VMDK) that is attached to a running instance. Volumes can be added to instances ad hoc via the OpenStack Cinder service.

Network – the VMware vSphere port group that your instance will be attached to. Your port groups are automatically created by the OpenStack Neutron service.

OpenStack emphasizes the capability for users to manage their infrastructure programmatically through REST APIs, and this is exhibited in the multiple ways that a user can deploy an instance. The Horizon GUI provides the capability to launch instances with a point-and-click interface. The Nova CLI provides users with simple commands to deploy their instances, and these commands can be combined in shell scripts.

For users who want even more control and flexibility over instance deployment, the REST APIs can be leveraged. The important thing to note is that regardless of the interface the user selects, the REST API is utilized behind the scenes. For example, if I use the nova boot CLI command, it translates my simple inputs into an HTTP request that the Nova service will understand.

If you would like to see the API calls generated by your CLI commands, you can use the “--debug” option with the CLI tools (e.g., nova --debug boot…). An example HTTP request generated by the nova boot CLI command is included below:
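As a hedged reconstruction (not a capture from a live cloud), the request nova boot emits looks roughly like the Python sketch below. The endpoint, tenant ID, token, and UUIDs are all placeholders, and the exact fields vary by API version:

```python
import json
import urllib.request

# Hypothetical values -- in a real cloud these come from Keystone and from
# `nova image-list`, `nova flavor-list`, and `neutron net-list`.
endpoint = "http://openstack.example.com:8774/v2/TENANT_ID"
token = "PLACEHOLDER_TOKEN"

# Body equivalent to:
#   nova boot apitest --image <uuid> --flavor <uuid> --nic net-id=<uuid>
body = {
    "server": {
        "name": "apitest",
        "imageRef": "11111111-2222-3333-4444-555555555555",
        "flavorRef": "66666666-7777-8888-9999-000000000000",
        "networks": [{"uuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"}],
    }
}

# Build (but do not send) the POST request the Nova service would receive.
request = urllib.request.Request(
    url=endpoint + "/servers",
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json", "X-Auth-Token": token},
    method="POST",
)
print(request.get_method(), request.full_url)
```

Calling urllib.request.urlopen(request) would actually submit it; the point here is only the shape of what the CLI generates for you.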

My instance name (“apitest”) may seem too generic, and it’s possible that another user may use the same name. Not to worry: instance names do not need to be unique, because OpenStack identifies all resources, including instances, by unique IDs (UUIDs). In the sample code above, my source image, flavor, and network are all referenced by their UUIDs. Well, what about vCenter? In vCenter, my virtual machine’s name includes its OpenStack identifier:

How vCenter Displays an OpenStack Instance
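Because the OpenStack UUID is embedded in the vCenter display name, it can be pulled back out programmatically. The sketch below assumes a “name (uuid)” display format, which is an assumption for illustration; check how your own vCenter renders instance names:

```python
import re
import uuid

# Assumed display-name format: "<instance name> (<OpenStack UUID>)".
# This format is an assumption for illustration, not a documented contract.
def instance_uuid(vcenter_vm_name: str) -> uuid.UUID:
    """Extract the OpenStack instance UUID from a vCenter VM display name."""
    match = re.search(
        r"\(([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})\)",
        vcenter_vm_name,
    )
    if match is None:
        raise ValueError(f"no OpenStack UUID in {vcenter_vm_name!r}")
    return uuid.UUID(match.group(1))

print(instance_uuid("apitest (3f2c9bea-8a9e-4c38-b16c-6a1d0a3f9e21)"))
```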

As we saw in the code above, the user specifies the source image, flavor, network, and security group during instance deployment. In the background, the user’s credentials and the interactions between the various OpenStack components are authenticated by the OpenStack Identity service (Keystone). The following graphic provides an illustration of these interactions:

OpenStack Component Interaction

Check out the following video to see instance deployment in action with the Horizon GUI and the Nova CLI:

At VMworld US 2015, there are many sessions for attendees to learn more about what VMware is doing with OpenStack.

Don’t miss out on hearing about best practices for running OpenStack on the vSphere platform including lessons learned from deployments. All the OpenStack-related sessions are included at the end of this post.

Today’s entry is the start of a blog series that will cover many aspects of VMware Integrated OpenStack.

OpenStack deployments usually have at least one physical server or virtual machine designated as the “build server”. This build server deploys and configures the various components that make up the control plane, including the Nova services that manage the hypervisor components, the Neutron networking services, and so on.

VMware Integrated OpenStack also provides a build server, referred to as the OpenStack Management Server (OMS). The OMS is packaged in an OVA that also contains an Ubuntu VM template. During OpenStack deployments, the OMS clones the VM template to build the OpenStack control plane (e.g., controllers, the database cluster, etc.). The following image illustrates the components that get deployed on the management cluster of your OpenStack deployment.

VMware Integrated OpenStack Control Plane

The OpenStack deployment process happens in two phases:

The VMware Integrated OpenStack vApp deployment

The OpenStack control plane deployment

Both phases of the deployment happen within the VMware vSphere Web Client so that IT administrators can use a familiar interface to deploy and manage their OpenStack installation. The following videos demonstrate the complete VMware Integrated OpenStack deployment process.