An interesting problem to solve in OpenStack is the management of OpenStack’s services. Whether at provisioning time or update time, the OpenStack services may try to listen on the same ports and require modifications to common configuration files.

Because of this, the services can conflict with one another if deployed on the same system. For example, the network service may attempt to listen on the same port as the identity service, or the compute service may edit a file that the network service expects to have different values. How do you deal with this problem, particularly when each OpenStack project tends to operate independently? It doesn’t seem likely that it would be easy to drive consensus between the various projects on which ports to listen on or which configuration files to modify, particularly with the speed at which OpenStack is moving.

For example, let’s suppose that one wants to deploy a network service. Assuming they are using a build based (sometimes referred to as package based) deployment method, they might perform something similar to the following.
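As a purely hypothetical illustration (the package name, configuration file, and port below are invented for the sake of the example and don’t correspond to any real OpenStack packaging), the steps might look something like this:

# install the network service packages on a host already running the identity service
# yum install -y openstack-network-service
# the package (or the operator following its documentation) edits a configuration
# file that the identity service also relies on
# sed -i 's/^bind_port.*/bind_port = 35357/' /etc/openstack/common.conf
# start the new service, which now tries to bind a port the identity service already holds
# systemctl start openstack-network-service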

The result is a non-working network service and the potential for a non-working identity service if it is ever restarted. This problem also exists in image based deployment; it simply surfaces earlier in the workflow, during the image generation phase. After all, the images that are being deployed need to be generated in the first place. The fundamental problem is that understanding which services are deployed on a particular host and resolving the dependencies or making the necessary changes is not something the package or image generation tools understand.

One possible solution is to place each service on its own unique piece of hardware. This solves the problem of conflicts between the services’ configurations, but it is not optimal: the overhead of an individual OpenStack service would not justify its own physical system until a particular scale is reached. Even then, the need to locate some services close to compute nodes would also make it impractical to give each service its own dedicated piece of hardware.

Another possible solution is to build the logic and understanding of the OpenStack services and their configuration into the tools themselves. While this may sound like a small task, it is not. The number of possible combinations of services that could be placed on a single host does not lend itself to easily creating, let alone maintaining, this logic.

OpenStack Architecture

Yet another possible solution is to utilize virtual machines. This solves the hardware problem and provides isolation, but it has some disadvantages. Virtual machines are heavyweight. Whether it is building new virtual machine images because of a simple update or installing the configuration infrastructure necessary to update virtual machines in place, plus the overhead of start/stop operations and less rich interfaces for metadata, virtual machines are not ideal.

It may be possible to use Linux containers to solve this problem. Linux containers offer lightweight virtualization that provides, among other things, process and network isolation. The isolation provided by containers means that tools such as build based or image based deployment tools don’t need to maintain the logic of how the services on hosts could be deployed or updated without affecting one another.
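To make the isolation concrete, here is a rough sketch of what that looks like with Docker (the image names are placeholders, not real OpenStack packaging):

# each container gets its own network namespace, so both services can bind
# the same port internally without conflicting; the host-side mappings differ
# docker run -d -p 5001:5000 identity-service-image
# docker run -d -p 5002:5000 network-service-image

Because each container also has its own filesystem, one service editing a configuration file no longer risks clobbering a file another service depends on.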

I hope to provide more information soon on how projects like systemd might provide a mechanism for solving dependencies between OpenStack services running in containers, maybe even using Docker, and on how ostree might lend a hand with some of the troubles of package management.

This page captures my effort to learn about docker images by building a docker image for ovirt-engine from scratch using Fedora 19. At this point I get stuck after launching the image with ovirt installed in it. I’ll be troubleshooting and seeing how I can best package ovirt-engine, whether as a single image or broken into multiple pieces. Who knows, maybe I’ll even try to make it communicate over etcd?

I was able to create a new base image, publish it to a private docker registry, then create a Dockerfile to create a layered image for ovirt-engine, the open source virtualization management platform. I used Marek Goldmann’s great blog as a reference and leveraged the work of Matt Miller too.

Install appliance-tools. Appliance-tools is one tool that can be used to create a virtual machine image that we will then package up into a docker image.

# yum install -y appliance-tools libguestfs-tools

You may also want to unmount /tmp if you are running in a VM and have limited space in /tmp.

# systemctl mask tmp.mount; reboot

Build a Base Image

In order to build a base image you need to create a virtual machine image, then pack it up into an archive, and import it into docker.
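Roughly, that flow looks like the following. This is only a sketch: it assumes appliance-creator (from appliance-tools), virt-tar-out (from libguestfs-tools), and a kickstart file named fedora-base.ks, and the exact flags and output paths may differ on your system.

# build a raw disk image from the kickstart
# appliance-creator --config fedora-base.ks --name fedora-base --format raw --outdir /tmp/fedora-base
# pack the root filesystem of the resulting image into a tar archive
# virt-tar-out -a /tmp/fedora-base/fedora-base-sda.raw / /tmp/fedora-base.tar
# import the archive into docker as a new base image
# cat /tmp/fedora-base.tar | docker import - fedora-base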

You can use your favorite kickstart file for your base docker image. You would want to make the kickstart install the smallest possible footprint so your base image stays small. The following example kickstart is a good starting point.
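A minimal sketch along these lines (treat it as a starting point and adjust the repo URL, partitioning, and package set for your environment):

lang en_US.UTF-8
keyboard us
timezone --utc Etc/UTC
selinux --permissive
# throwaway root password for the image
rootpw changeme
network --bootproto=dhcp
bootloader --timeout=1
zerombr
clearpart --all
part / --size 2048 --fstype ext4
repo --name=fedora --mirrorlist=https://mirrors.fedoraproject.org/mirrorlist?repo=fedora-19&arch=x86_64

%packages --excludedocs --nobase
bash
yum
%end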

If you have issues creating a container you can continue on by pulling an existing image, like Matt’s fedora image, from the Docker index.

# docker pull mattdm/fedora

Publish the New Image to a Docker Registry

Docker provides a registry, a place to store your docker images (a web server that supports multiple storage back-ends and has hooks for authentication sources). The company behind docker provides an index, which is the docker-registry combined with a web front end and a collaborative environment.
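Standing up the private registry itself is outside the scope of this post, but as a rough sketch, a local registry listening on port 5000 can be run as a container from the registry image on the index (the image name here is an assumption and may differ depending on your docker version):

# docker run -d -p 5000:5000 registry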

Now that we have a docker image we can upload it to our private registry. First you’ll need to list the images and then tag the one you want to publish.

# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
&lt;none&gt; latest e4a4f6d69590 29 hours ago 131.2 MB

# Run engine-setup after the container boots
# ENTRYPOINT /usr/bin/engine-setup --config=/root/answerfile

The FROM line indicates what base image should be used.
The RUN lines will be executed and committed on the image.
The ENTRYPOINT line specifies what should be executed when the image is launched. At this point I’ll leave the ENTRYPOINT commented out. We’ll just launch a shell and then try to execute the engine-setup command before we use an answerfile to install it automatically in a future image.
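Putting those pieces together, a Dockerfile along these lines might look roughly like the sketch below. The FROM line assumes the fedora-base name used in the import sketch above (use whatever name you gave your imported base image, or mattdm/fedora if you pulled that instead), and the RUN line assumes ovirt-engine is installable from the repositories configured in the base image; treat it as an approximation rather than the exact file.

FROM fedora-base
# install the engine and its (many) dependencies
RUN yum install -y ovirt-engine
# run engine-setup when the container starts; left commented out for now as described above
# ENTRYPOINT /usr/bin/engine-setup --config=/root/answerfile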

Now we will build our image.

# docker build .

Now we have a new image.

# docker images

We can tag this new image.

# docker tag 234ad73r7df localhost.localdomain:5000/ovirt-fedora-small

And we can push it to our registry as a new image.

# docker push localhost.localdomain:5000/ovirt-fedora-small

On another fedora 19 system with docker installed (or on the same one), you can pull the docker image down and run it.

# docker pull youripaddress:5000/ovirt-fedora-small
....
# docker run -i -t localhost.localdomain:5000/jlabocki/fedora-ovirt-small /bin/bash

You can run `docker help run` to understand the options that we just gave to run the image. You can also inspect the images and running containers to get lots of interesting information about them (from outside the container, not from within it). `docker ps` will list the running containers (`docker ps -a` includes stopped ones) while `docker images` will list the images you have.
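From the shell inside the container, the next step (as described above) is to run engine-setup interactively and answer its prompts; the transcript below shows some of what it asks:

# engine-setup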

Host fully qualified DNS name of this server [502fbe26fc3c]:
[WARNING] Host name 502fbe26fc3c has no domain suffix
[WARNING] Failed to resolve 502fbe26fc3c using DNS, it can be resolved only locally

--== DATABASE CONFIGURATION ==--

Where is the database located? (Local, Remote) [Local]:
Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.
Would you like Setup to automatically configure postgresql, or prefer to perform that manually? (Automatic, Manual) [Automatic]:

Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications.
Do you wish to set the application as the default page of the web server? (Yes, No) [Yes]:
Setup can configure apache to use SSL using a certificate issued from the internal CA.
Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]:

Conclusion

At this point the engine-setup command is not able to complete successfully because of a dbus error when trying to initialize postgresql-server. I’ll continue to work on this to see if I can make progress in packaging ovirt-engine into a docker image.

This assumes a RHEL 6.4 @base installation of Red Hat Enterprise Linux OpenStack Platform (RHELOSP) and registration to a satellite which has access to both the RHELOSP channels and RHEL Server Optional. Much of the Ceilometer installation procedure came from this Fedora QA Test case, but I made a few changes and added a few more details.
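The installation itself isn’t reproduced here; at a high level it amounts to installing the Ceilometer services and their MongoDB backend on the controller and the compute agent on each compute node, roughly as sketched below (the package names assume the standard RHELOSP packaging, so follow the referenced test case for the authoritative steps):

# on the controller
# yum install -y mongodb-server openstack-ceilometer-api openstack-ceilometer-central openstack-ceilometer-collector python-ceilometerclient
# on each compute node
# yum install -y openstack-ceilometer-compute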

At this point you can verify ceilometer is working correctly by authenticating as a user that has instances running (such as admin).

# . ~/keystonerc_admin

Then list the samples for the cpu meter. I pipe this to wc to count lines and just check that the value changes every few minutes, depending on the interval specified in /etc/ceilometer/pipeline.yaml (600 seconds by default).

# ceilometer sample-list -m cpu | wc -l
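If the count never changes, it is worth double checking what interval the pipeline is actually configured with; a quick way to eyeball it (assuming the default file location) is:

# grep interval /etc/ceilometer/pipeline.yaml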

Add the provider to CloudForms Management Engine and you will begin seeing capacity and utilization data for your instances populate within a few minutes.

CloudForms 3 has arrived! There are plenty of new features, including deeper integration with Amazon Web Services EC2 and enhanced service catalog definitions. Along with those, one major new capability is support for OpenStack as a cloud provider. This is a big step forward in bringing the same cloud management capabilities users have come to expect from CloudForms across VMware vSphere, AWS EC2, and Red Hat Enterprise Virtualization to OpenStack. Before diving directly into the capabilities CloudForms provides for OpenStack providers, it’s important to know that Red Hat is working on enabling OpenStack for enterprises in a number of ways. Here are three key areas:

Enabling Red Hat Enterprise Linux to be the most stable, secure, and best performing platform for OpenStack powered clouds.
This is being accomplished in Red Hat Enterprise Linux OpenStack Platform, which provides a stable, reliable, and secure base along with the hardware and application support needed to run in demanding OpenStack environments.

Enabling instrumentation and APIs within OpenStack.
This occurs upstream within the OpenStack project itself. Red Hat works with the community on projects such as TripleO, an installation and operations tool for OpenStack. Red Hat has also led by initiating Tuskar, a stateful API and UI for managing the deployment of OpenStack, which is now part of the TripleO project.

Supporting OpenStack within CloudForms, Red Hat’s Hybrid Cloud Management Platform.
Most IT organizations are already virtualizing and building cloud-like capabilities on top of datacenter virtualization (self-service, chargeback, etc.). These organizations recognize that building a private cloud using OpenStack will provide new advantages such as reducing costs, increasing scale, and fundamentally changing the way developers and operations teams work together. However, IT organizations don’t want to build yet another silo. They’d like to solve the fundamental problem of IT complexity while simultaneously building their next generation IT architecture. CloudForms allows organizations to operationally manage their existing platforms alongside their next generation IT architectures, including OpenStack.

With the OpenStack management background out of the way let’s look at some highlights of what CloudForms 3 brings to OpenStack management in more detail.

Manage New and Existing OpenStack Clouds

CloudForms 3 allows users to manage new and existing OpenStack environments. As I mentioned in an earlier post, infrastructure providers such as VMware and Red Hat Enterprise Virtualization (RHEV) have been separated from cloud providers such as Amazon Web Services and OpenStack within the user interface. Within the Cloud Providers screen it’s possible to add a new cloud provider.

After providing the credentials of an OpenStack keystone user, CloudForms 3 will discover the Availability Zones, Flavors, Security Groups, Instances, and Images associated with the OpenStack user.

Each of these discovered properties of the OpenStack provider can be inspected further. With instances in particular, the CloudForms user can begin viewing in-depth information about the instances running on top of OpenStack.

Users can dive into capacity and utilization data for their OpenStack instances.

Since CloudForms is also pulling events from the OpenStack message bus it is possible to correlate performance information on instances with events that are taking place.

All of this performance and utilization data is also available for reporting purposes in the CloudForms reporting engine.

Chargeback for Workloads on OpenStack

CloudForms 3 adds OpenStack to a growing list of providers for which chargeback reports can be centrally managed. Using the rate table and tagging functions that already exist in CloudForms, users can create rate tables and assign them to their OpenStack environments.

The tagging system continues to provide a flexible and dynamic approach to chargeback which is becoming even more critical as IT organizations build more dynamic platforms with higher rates of change. Chargeback reports can be limited to only show instances or can be combined with virtual machine chargeback.

Provision workloads via self-service catalogs to OpenStack clouds

Finally, CloudForms 3 provides access to instances on OpenStack providers via self-service in its service catalog. While self-service of images is a native feature of Horizon within Red Hat Enterprise Linux OpenStack Platform, the inclusion of self-service via CloudForms helps organizations looking to implement enterprise-class self-service that ties into their existing environments. CloudForms’ self-service capabilities are integrated with its automation engine, which brings capabilities such as the ability to:

Combine multiple instances, or combine instances with virtual machines and other atomic services, into a single service catalog bundle for ordering

Integrate with existing IT Operations Management solutions, such as CMDBs, CMS, monitoring, or eventing tools

Enforce quotas, workflow, and approval

Provide best fit placement of instances on particular OpenStack providers

CloudForms 3 is a big step forward for enterprises looking to manage their OpenStack private clouds through a cloud management platform that also supports their existing investments in datacenter virtualization and public clouds. If you are attending OpenStack Summit I hope you can join Oleg Barenboim, Senior Director of Software Engineering for CloudForms, and myself as we present on how CloudForms Unifies the management of OpenStack, Datacenter Virtualization, and Public Clouds.