
OpenStack Heat and Ansible – Automation Born in the Cloud

Overview

In this article we will look at how Ansible can be leveraged within OpenStack to provide enhanced capabilities around software deployment. Before we get into the details, let's understand the challenge. There are typically two layers of automation: provisioning and deployment. Provisioning is all about the underlying infrastructure a particular application might require. Deployment is about installing and configuring the application after the infrastructure exists. OpenStack Heat is the obvious choice for automating provisioning. Heat integrates with other OpenStack services and provides the brains that bring an OpenStack-powered cloud to life. While Heat is great for provisioning infrastructure, software deployment is not one of its strengths, and trying to orchestrate complex software deployments with it can be rather clunky. That is where Ansible comes into play, and as you will see in this article, the two fit together perfectly.

Ansible has two components: Ansible core and Ansible Tower. Ansible core provides the Ansible runtime and allows execution of playbooks (YAML definitions of what is being orchestrated). What is missing in Ansible core is the management layer that enhances team collaboration, extensibility, scalability and visibility. Beyond management, Ansible Tower provides the ability to drive Ansible dynamically through APIs. This is a key requirement for OpenStack and dynamic infrastructure.

Through callbacks we can trigger Ansible playbook runs from within OpenStack Heat. Ansible Tower dynamically discovers instances running on OpenStack as Heat provisions them and is then able to run playbooks against the newly provisioned instances. The result is an end-to-end automation process that deploys an entire application, including its infrastructure stack. Roles can, and ideally should, be separated between infrastructure provisioning and software deployment: Heat templates control provisioning and are often created by OpenStack administrators, while Ansible playbooks control software deployment and are managed by DevOps teams. In this article we will see how all of that fits together. We will not only deploy Ansible Tower on OpenStack, but also walk through a deployment of an all-in-one WordPress application. In this scenario OpenStack Heat is used to deploy a CentOS image with a private and floating IP. Ansible Tower is then triggered directly from Heat using an API callback, the instance is discovered within Ansible Tower and the appropriate playbook for deploying the WordPress application is executed.

OpenStack Installation and Configuration

Installing OpenStack is not covered in this article; however, to stand up an OpenStack lab environment based on Liberty, follow this guide. If you are using your own environment, ensure you follow the configuration steps in the above guide after OpenStack is deployed, or pass parameters into the Heat templates that are representative of your environment.

Note: if your CentOS image is named differently, you need to update the Heat templates below.

Create Flavor for Ansible Tower

# nova flavor-create m2.small 50 4096 20 4

Create Flavor for WordPress Application

# nova flavor-create m2.tiny 51 1024 10 1

Note: if your flavors are named differently, you need to update the Heat templates below.

Setup Ansible Tower on OpenStack

As mentioned, Ansible Tower provides management, reporting and, most importantly, API callbacks. It makes Ansible core even more powerful. In this case Tower is used primarily for the API callback and dynamic inventory. This allows us to make an API call from Heat upon completion of infrastructure provisioning that 1) dynamically updates the Ansible inventory with the newly created instance IPs and 2) runs a playbook on the newly created instance over SSH using the private key from OpenStack.

There are two options for deploying Ansible Tower in OpenStack: using the Heat template I have provided, or deploying an instance and manually configuring Tower. Both options are documented in this article. Here we are of course using CentOS; however, RHEL will work as well, assuming you have subscriptions.

Configure Ansible Tower

Add a license for Tower (settings->license). If you don't have one you can get an eval here.

Add Credentials for OpenStack environment (settings->credentials).

Ansible Tower needs to be able to query the OpenStack tenant over the API to find out what instances exist, their IPs, etc.
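Behind the credential form, Tower's OpenStack inventory source needs the standard OpenStack authentication details. As a sketch, these are the same fields you would put in an os-client-config style clouds.yaml; every value below is a placeholder for your environment, not something from this lab:

```yaml
# Placeholder auth details Tower needs to sync inventory from a tenant.
clouds:
  osp8:
    auth:
      auth_url: http://controller:5000/v2.0   # Keystone endpoint
      username: admin
      password: secret
      project_name: tenant1                   # tenant whose instances Tower discovers
```

In Tower itself you enter these values in the credential dialog rather than in a file; the sketch is only meant to show which pieces of information are required.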

Optional: Add credentials for the OpenStack key and OS user, in this case centos.

OpenStack uses SSH keys to access instances as a specific user. In this case we are using the CentOS cloud image, which has a built-in user account named centos. When deploying an instance we need to choose a key. In the OpenStack lab configuration we created this key. Your environment will of course have a different key if you didn't follow that guide.

Ansible Tower implements a hierarchy to decide which remote user should run tasks within a playbook on a given target instance. A default remote user can be specified in ansible.cfg. This is, however, overridden by any credentials stored within Tower, and credentials are in turn overridden by what is in the playbook.
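As a sketch, the playbook-level setting that sits at the top of that hierarchy looks like the following; the host group and role names are examples, not taken from the lab:

```yaml
# remote_user here overrides both ansible.cfg and Tower machine credentials.
- hosts: wordpress
  remote_user: centos   # built-in user of the CentOS cloud image
  become: true          # escalate to root for package installation
  roles:
    - wordpress
```

If you omit remote_user from the playbook, the Tower machine credential decides the user, which is usually the cleaner option since it keeps playbooks portable across images.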

Create inventory for OpenStack (inventories).

Inventories are basically a collection of host groups. In OpenStack, grouping is done at the tenant level, so a host group is a group of hosts belonging to a tenant.

Add an inventory group for OSP8 and ensure you enable overwrite and update on launch (inventories->OpenStack).

These parameters ensure that the inventory is updated prior to playbook execution. Again, this is important because in OpenStack you can't statically configure instance IPs, and as such dynamic discovery is required prior to running playbooks.

Create a new project (Projects).

In this case we create a project called Examples where the playbooks are stored in Git. The following git URL contains the WordPress playbook in addition to other examples: https://github.com/ktenzer/ansible-examples.

Note: I have not tested the other examples, but you would likely need to replace the remote_user with centos and allow the user to become root.

Create job template (Job Templates).

Job templates bring everything together. You specify which credentials to use, which inventory to run against and of course choose a playbook. In this case we choose the already created inventory (OpenStack), credentials (OSP8) and project (Examples). From the Examples project we select the wordpress-ngix_rhel7/site.yml playbook.

Note: Copy the callback URL and the host config key for authorizing the callback; these are required later.

Deploy WordPress Application using Heat and Ansible

Now it is time to see everything work together and watch the magic happen. We will create a Heat template to deploy an all-in-one WordPress application. Using curl, we will make a callback to Ansible Tower in order to deploy the WordPress application once the infrastructure is provisioned. Notice the wonderful simplicity? Just a one-liner from Heat can deploy anything from the simplest to the most complex application imaginable.
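A minimal sketch of that one-liner embedded in an instance's user_data might look like the following. The job template ID, host config key and Tower hostname are placeholders you copy from your own Tower job template, and the image, flavor and key names are assumed to match the lab setup above:

```yaml
resources:
  wordpress_server:
    type: OS::Nova::Server
    properties:
      image: centos7
      flavor: m2.tiny
      key_name: admin
      user_data_format: RAW
      user_data: |
        #!/bin/bash
        # Provisioning callback: ask Tower to run its job template
        # against this host once cloud-init executes this script.
        curl -k -s --data "host_config_key=HOST_CONFIG_KEY" \
          https://tower.example.com/api/v1/job_templates/1/callback/
```

Because the callback is authorized by the host config key and the caller's discovered inventory entry, the template never needs to embed Tower user credentials.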

Adding Heat WaitCondition

At this point we have separated infrastructure from application blueprints and yet still have the capability to perform end-to-end deployment through Heat. One thing that is missing, however, is a way to notify Heat whether Ansible Tower completed with a success or a failure. Heat provides a resource type called WaitCondition for this exact purpose. The WaitCondition resource causes the Heat stack to wait until it is notified or times out. A status of success or failure can also be sent back in JSON format.

'{"status": "SUCCESS|FAILURE"}'

The WaitCondition resource type generates an endpoint URL and an authorization token as output. Below, the original Heat template has been modified to add the WaitCondition resource type and send the required parameters to Ansible Tower.
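A sketch of the relevant resources is shown below; the timeout value is an example, and resource names follow the wait_handle naming used later in this article:

```yaml
resources:
  wait_handle:
    type: OS::Heat::WaitConditionHandle

  wait_condition:
    type: OS::Heat::WaitCondition
    properties:
      handle: { get_resource: wait_handle }
      timeout: 600   # seconds Heat waits for a signal before failing the stack

outputs:
  wait_endpoint:
    value: { get_attr: [ wait_handle, endpoint ] }
  wait_token:
    value: { get_attr: [ wait_handle, token ] }
```

The endpoint and token are what we hand to Ansible Tower as extra variables, so the playbook knows where to send its success or failure signal.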

In order to take action upon failures within the playbook, I chose to use blocks. These are basically like try/catch statements. This is critical so that if any task within the playbook fails, Heat is notified immediately and shows the stack as failed. In the block statement we define what is supposed to happen normally; in the rescue statement we define what should happen in case of failure.

Looking at the MariaDB role, we see how the rescue block is used to send Heat a message if anything fails. This is implemented in all roles where tasks are executed.
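A hedged sketch of such a role task file follows. The variable names wait_endpoint and wait_token are assumptions matching the extra variables passed from Heat, and the MariaDB tasks are simplified stand-ins for the real role:

```yaml
- block:
    # Normal path: install and start MariaDB.
    - yum: name=mariadb-server state=present
    - service: name=mariadb state=started enabled=yes
  rescue:
    # On any task failure, signal FAILURE to the Heat WaitCondition
    # endpoint so the stack immediately shows as failed.
    - command: >
        curl -s -X POST
        -H 'X-Auth-Token: {{ wait_token }}'
        -H 'Content-Type: application/json'
        -d '{"status": "FAILURE"}'
        {{ wait_endpoint }}
    - fail:
        msg: "MariaDB role failed; Heat stack marked as failed"
```

The final fail task ensures the play itself also stops with an error, so both Tower and Heat report the failure consistently.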

Heat will not complete after the instance is launched but will rather wait for input from Ansible Tower.

Once the playbook starts, the Heat endpoint and authorization token show up as extra variables.

After the playbook completes, Heat is notified. In this case things completed successfully.

Summary

In this article we have discussed how OpenStack Heat and Ansible provide a powerful combination for cloud orchestration. We have also discussed some of the advantages Ansible Tower provides, allowing not only central API integration through callbacks but also much-needed extensibility, security, management, visibility and role separation. Both OpenStack and Ansible were born in the cloud. Together they provide end-to-end cloud automation and orchestration for traditional as well as cloud-native applications. I am really interested in your feedback and thoughts on this topic, so please share. Hopefully you found this article useful.

Happy orchestrating everything with Ansible Tower in the OpenStack cloud!

16 thoughts on “OpenStack Heat and Ansible – Automation Born in the Cloud”

In my setup, I am able to see that instances are getting added to the Ansible inventory, but Ansible is trying to connect to the instance using the private IP instead of the floating IP. In the gathered facts the floating IP is also detected by Ansible, but the "ansible_ssh_host" parameter is set to the private IP. If I manually change this parameter to the floating IP and disable the dynamic inventory parameters on the inventory group, then the playbook runs fine, but this is defeating the purpose of dynamic inventory.

Any inputs on this issue ?

I have installed Ansible Tower on the Packstack machine itself and have not reached the point of Heat integration yet.

Tower needs to be installed on the tenant network; using floating IPs won't work. The issue is that Tower can connect using the floating IP, but then it checks local host information and of course doesn't see the floating IP.

You have two options:
1) install tower in tenant
2) install tower in another tenant and allow the two tenants to talk to one another using security rules

Did you set up Ansible Tower and configure the playbooks that I use in my example? If so, did you configure machine credentials in Ansible for a user that can log in to OpenStack instances with the appropriate key from OpenStack?

Hi Keith,
thanks for the post. I actually did almost the same in my project, though I have a more complex system. I added another server property in the heat template:
properties:
  metadata:
    groups: server_group1
Dynamic inventory creates this group in Tower when refreshed. The playbook then installs software according to the group.
I have a challenge though: I'm using scaling groups with multiple servers to be deployed. How can wait_condition work in the case of a multiple-server deployment? Every server will trigger a separate run of this playbook, won't it?
Any ideas?

The wait_condition works very well with multiple servers in a stack, but it requires some more logic. In Ansible you would have a single playbook for your Heat stack broken into roles, probably one per VM; the Ansible roles map 1:1 to Heat server types. When a VM calls the provisioning callback, it passes its role as a parameter and only that role is executed.

For the wait_condition you need to understand how long the overall stack should take to provision. If I remember right, the wait_condition timeout is on the Heat stack, not on the VMs that are part of the stack.

Let me know if this clears things up. Also, if you do this, please share; I would love to incorporate a more complex use case into this blog.

#———————————————————————–
# The parameter “backend” should be identical to the name of the group
# as coded in the heat template creating the VM. (beserver)
#———————————————————————–
– hosts: backend
become: true
environment:
http_proxy:
tasks:
– yum: name=tomcat state=present

Less important, but another small thing I did: as I am running Tower from another tenant, there was a small change in the inventory script.
I'm assigning the public IP instead of the private one:

if 'public_v4' not in meta['server_vars']:
    # skip this host if it doesn't have a network address
    continue
server_vars = meta['server_vars']
hostvars[server.name]['ansible_ssh_host'] = server_vars['public_v4']

The issue is that every time a server is newly created by Heat, it issues a callback to Tower; Tower then runs an inventory refresh and the playbook again, meaning multiple playbook runs by Tower. This is how it looks. It is correct, isn't it?
If yes, then the call to the Heat wait endpoint from the playbook will happen multiple times, as many times as there are servers. Am I wrong?

As for your question regarding the wait condition: I thought this option was for the Heat stack itself, but I never tried with multiple instances in the same Heat stack. If you are saying it is per instance, not per stack, then try to have the following only under one instance, the first one:
wait_token: { get_attr: [ wait_handle, token ] }

Maybe this will allow only one call to wait.

In case this doesn't work and it creates a wait condition per instance, you need to do a bit more work in Ansible to identify these and handle them. Thinking about it a bit more, this actually makes sense and provides a lot of granularity.

Let me know if you figure something out, and I will give it a try when I get some free time.

Yeah, it will be multiple. Imagine an instance is part of an autoscaling group and the initial number of instances is more than one; it should create a wait per instance.
I'll try to figure something out.
Thanks!

Hi Keith.
Thanks for the post, really nice work!
I am having problems running the Ansible Tower job. The inventory is refreshed, but the job does not run.
When I run it manually from Ansible Tower it works fine.

I tried to run the callback manually; it is not working and I am getting this result:

Two possibilities: 1) you didn't set up the OpenStack inventory in Tower, or didn't set the inventory to update automatically when the playbook runs; 2) you are using the public IP to communicate with Tower. There is a way to get this working, but I didn't document it. In my setup I installed Tower in the tenant network and used the private IP. My idea was a Tower per tenant, since the subscription for Tower is node based and not based on how many instances of Tower you have.

Awesome, and thanks for linking the solution. I was on the road at the time, but this is similar to what I was thinking the issue was. BTW, I also did a presentation on this at the OpenStack Summit in Barcelona, which may be of interest to share with others.