Provision and Configure OpenStack Instances in One Ansible Run

In order to configure servers with tools like Ansible, they need to be up and running. These servers could be hardware systems in your data center, or, more likely, virtual machines running in any number of infrastructure-as-a-service (IaaS) providers, including a private OpenStack cloud (which is what I will be using here).

The goal in this blog post is not just to provision the instances, but to provision and configure them in one Ansible playbook run. This process is complicated by the fact that you don't know the IP address of an instance until it's created, which typically means provisioning and configuration happen in at least two runs, perhaps even with different tools.

For simplicity's sake, all this playbook is going to do is create all of the virtual machines in OpenStack, and then ping them via a role called common.
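As a sketch of the overall shape, the playbook boots each instance in one play and configures the result in a second play. This is an illustration, not my exact playbook: `nova_compute` was the OpenStack provisioning module of the era, but the `image_id` variable and the structure of the registered result are assumptions you'd adjust for your cloud.

```yaml
---
# Play 1: boot the instances defined under [openstack_instances].
- hosts: openstack_instances
  connection: local
  gather_facts: false
  tasks:
    - name: launch the instance
      nova_compute:
        name: "{{ inventory_hostname }}"
        flavor_id: "{{ flavor_id }}"
        image_id: "{{ image_id }}"
        state: present
      register: nova

    # The exact path to the address in the registered result varies by
    # module version and cloud; "private" is the network name here.
    - name: add the new IP to the group named in the hosts file
      add_host:
        name: "{{ nova.info.addresses.private[0].addr }}"
        groups: "{{ group }}"

# Play 2: configure the freshly created instances via the common role.
- hosts: load_balancers:application_servers:database_servers
  roles:
    - common
```

The `add_host` task is what ties the two plays together: it injects the just-discovered IP into the in-memory inventory so the second play can target it in the same run.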

## nova.py and the hosts file

Because of the custom inventory script I'm using, called _nova.py_ (not the same as the one that ships with Ansible), I have a somewhat unusual Ansible hosts file, though it does follow the typical Ansible hosts file format.

Below we can see there is a group called openstack_instances with four instances listed, each with flavor_id and group variables set.

```
curtis$ cat hosts
[openstack_instances]
lb flavor_id=1 group=load_balancers
app flavor_id=2 group=application_servers
db flavor_id=2 group=database_servers
app2 flavor_id=2 group=application_servers
```
The _nova.py_ script reads each line of the hosts file and sets the flavor_id and group for each instance. When run directly, _nova.py_ prints the inventory as JSON in Ansible's dynamic-inventory format.
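The real _nova.py_ isn't reproduced here, but as a minimal sketch, an inventory script in this style parses the hosts file and emits the dynamic-inventory JSON structure Ansible expects (`_meta.hostvars` plus group membership). Everything beyond the hosts-file format shown above is an assumption:

```python
import json

def build_inventory(lines):
    """Parse lines like 'lb flavor_id=1 group=load_balancers' into
    Ansible dynamic-inventory JSON with per-host variables."""
    inventory = {"openstack_instances": {"hosts": []}, "_meta": {"hostvars": {}}}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("["):  # skip blanks and the group header
            continue
        name, *pairs = line.split()
        hostvars = dict(pair.split("=", 1) for pair in pairs)
        inventory["openstack_instances"]["hosts"].append(name)
        inventory["_meta"]["hostvars"][name] = hostvars
        # Also place the host in the group named by its group= variable.
        inventory.setdefault(hostvars["group"], {"hosts": []})["hosts"].append(name)
    return inventory

example = ["[openstack_instances]", "lb flavor_id=1 group=load_balancers"]
print(json.dumps(build_inventory(example), indent=2))
```

A real script would read the hosts file from disk and honor Ansible's `--list`/`--host` calling convention, but the JSON shape is the important part.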

Each inventory entry has its _flavor_id_ meta variable set, and each host is also placed into its specific group.

## The openstack_instances file
I've included an example openstack_instances file. Copy that to _group_vars/openstack_instances_ and fill it out with your OpenStack credentials.
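As a rough illustration of what goes in that file (the exact variable names depend on what _nova.py_ and the playbook read; these are the usual OpenStack credential fields, not my literal file):

```yaml
# group_vars/openstack_instances -- illustrative credential variables.
os_auth_url: http://your-openstack-endpoint:5000/v2.0/
os_username: your-username
os_password: your-password
os_tenant_name: your-tenant
image_id: some-image-uuid
```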

## ssh_config
I should also mention that I have a gateway server set up in my OpenStack tenant, and my ssh configuration routes connections to that tenant's private network through it. So while my OpenStack instances only have private IP addresses, they can still be reached remotely via the ssh gateway server. Another option would be to run Ansible from inside the tenant.
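A gateway setup like this can be expressed in `~/.ssh/config` with a `ProxyCommand`; the hostnames and the private network range below are placeholders for whatever your tenant uses:

```
# Route connections to the tenant's private network through the gateway.
Host gateway
    HostName gateway.example.com
    User ubuntu

Host 10.0.0.*
    User ubuntu
    ProxyCommand ssh -W %h:%p gateway
```

With that in place, Ansible can ssh to the private addresses directly and OpenSSH transparently hops through the gateway.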

## Conclusion
At this point we have an Ansible playbook that can provision OpenStack instances, find out their IP addresses, and then configure them, all in one playbook run. Of course to do this we have to use a custom inventory script, but I don't mind that. Python is a great language for this kind of glue, and since Ansible itself is written in Python, extending it this way is straightforward.
## Issues
- My custom nova.py script isn't very smart.
- Depending on your OpenStack provider you may need to change the name of the network in the _set_fact_ task, which in my playbook is set to "private." Different clouds often have different default network names.
- Sometimes a particular instance won't come up. Just run the playbook again; as long as your OpenStack provider is working, everything should complete eventually. I found it could take about a minute for an instance's sshd to become available, so Ansible would hit connection problems on the first run. Right now there is a 30-second pause in the playbook to account for that in a non-intelligent way.
- Another problem is that OpenStack allows many instances with the same name, so there could be ten "app2" servers. _nova_compute_ uses the name rather than the OpenStack UUID to decide whether an instance already exists. Something to look into, because that could go sideways quickly.
- Maybe there is a better way to do this? If so, let me know. :)
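For the network-name and slow-sshd issues above, the relevant tasks might look something like this. This is a sketch: the exact structure of the registered `nova` result depends on the module version, and the `wait_for` task is my suggested replacement for the fixed pause, not what the playbook currently does:

```yaml
# Pull the IP from the network named "private"; change the key if your
# cloud uses a different default network name.
- name: grab the instance's IP address
  set_fact:
    instance_ip: "{{ nova.info.addresses.private[0].addr }}"

# Smarter than a fixed 30-second pause: poll until sshd actually answers.
- name: wait for sshd to become available
  wait_for:
    host: "{{ instance_ip }}"
    port: 22
    delay: 5
    timeout: 300
```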