In this post we are going to explore a fully automated way of provisioning LXC containers on a set of servers, using OpenStack.

OpenStack is a cloud operating system that allows for the provisioning of virtual machines, LXC containers, load balancers, databases, and storage and network resources in a centralized, yet modular and extensible way. It’s ideal for managing a set of compute resources (servers) and selecting the best candidate target to provision services on, based on criteria such as CPU load, memory utilization, and VM/container density, to name just a few.

In this blog we are going to deploy the following OpenStack components and services:

· Deploy the Keystone identity service that will provide a central directory of users and services and a simple way to authenticate using tokens.

· Install the Nova compute controller, which will manage a pool of servers and provision LXC containers on them.

· Configure the Glance image repository, which will store the LXC images.

· Provision the Neutron networking service that will manage DHCP, DNS and the network bridging on the compute hosts.

· Finally, provision an LXC container using the libvirt OpenStack driver.

Deploying OpenStack with LXC support on Ubuntu

An OpenStack deployment may consist of multiple components that interact with each other through exposed APIs or a message bus such as RabbitMQ.

We are going to deploy a minimum set of those components (Keystone, Glance, Nova, and Neutron), which will be sufficient to provision LXC containers and still take advantage of the scheduler logic and scalable networking that OpenStack provides.

For this tutorial we are going to use Ubuntu Xenial and, as of the time of this writing, the latest OpenStack release, Newton.

Preparing the host

To simplify things, we are going to use a single server to host all services. In production environments, it’s common to separate each service onto its own set of servers for scalability and high availability. By following the steps in this post, you can easily deploy on multiple hosts by replacing the IP addresses and hostnames as needed.

If using multiple servers, make sure the time is synchronized on all hosts, using a service such as ntpd.

Let's begin by ensuring we have the latest packages and installing the repository that contains the Newton OpenStack release:
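
A minimal sketch, assuming the standard Ubuntu Cloud Archive naming for the Newton release:

    # Upgrade existing packages, then enable the Newton Ubuntu Cloud Archive
    sudo apt update && sudo apt dist-upgrade -y
    sudo apt install -y software-properties-common
    sudo add-apt-repository cloud-archive:newton
    sudo apt update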

Installing and configuring Identity service

The Keystone identity service provides a centralized point for managing authentication and authorization for the rest of the OpenStack components. Keystone also keeps a catalog of services and the endpoints they provide, which users can locate by querying it.

To deploy Keystone, first create a database and grant permissions to the keystone user:
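
A minimal sketch, assuming a MariaDB/MySQL server is already installed on the host; keystone_pass is a placeholder password to replace with your own:

    mysql -u root -p
    # Inside the MySQL shell:
    CREATE DATABASE keystone;
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone_pass';
    GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone_pass';
    EXIT;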

Keystone uses tokens to authenticate and authorize users and services. There are different token formats available, such as UUID, PKI, and Fernet. For this example deployment we are going to use Fernet tokens, which, unlike the other types, do not need to be persisted in a back end. To initialize the Fernet key repositories, run:
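
For example, once the keystone package is installed and configured, the key repositories can be initialized with the keystone-manage tool:

    # Create the Fernet key and credential repositories, owned by the keystone user
    sudo keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    sudo keystone-manage credential_setup --keystone-user keystone --keystone-group keystone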

Time to create our first project in Keystone. Projects represent a unit of ownership: all resources are owned by a project. The “service” project we are going to create next will be used by all the services we deploy in this post.
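
A sketch using the openstack client, assuming the admin credentials are already exported in the shell environment:

    # Create the project that the OpenStack services will own resources under
    openstack project create --domain default --description "Service Project" service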

With the admin credentials loaded, let’s request an authentication token that we can use later with the other OpenStack services:
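
For example:

    openstack token issue

The command prints the token ID along with its expiration time and the associated project and user.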

Installing and configuring Image service

The Image service, codenamed Glance, provides an API for users to discover, register, and obtain images for virtual machines, or images that can be used as the root filesystem for LXC containers. Glance supports multiple storage back ends, but for simplicity we are going to use the file store, which keeps the LXC images directly on the filesystem.

To deploy Glance, first create a database and a user, like we did for Keystone:
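
As with Keystone, a minimal sketch assuming a local MariaDB/MySQL server; glance_pass is a placeholder password:

    mysql -u root -p
    # Inside the MySQL shell:
    CREATE DATABASE glance;
    GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance_pass';
    GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance_pass';
    EXIT;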

Installing and configuring Compute service

The OpenStack Compute service, codenamed Nova, manages a pool of compute resources (servers) and the virtual machines or containers running on those resources. It provides a scheduler service that takes requests for new VMs or containers from the queue and decides which compute host to create and start them on.

For more information on the various Nova services, refer to http://docs.openstack.org/developer/nova/. Nova consists of the following services, which we are going to install next:

· The nova-api service accepts and responds to user requests through a RESTful API. We use it for creating, starting, and stopping instances, among other operations.

· The nova-conductor service sits between the nova database we created earlier and the nova-compute service, which runs on the compute nodes and creates the VMs and containers. We are going to install that service later in this post.

· The nova-consoleauth service authorizes tokens for users that want to use various consoles to connect to the VMs or containers.

· The nova-novncproxy service provides a proxy for accessing running instances through a VNC connection.

· The nova-scheduler service, as mentioned earlier, decides on which compute host to provision a VM or LXC container.
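
As a sketch, on Ubuntu the controller-side services above map to packages of the same name (assuming the Newton Cloud Archive is enabled); nova-compute itself goes on the compute host later:

    sudo apt install -y nova-api nova-conductor nova-consoleauth nova-novncproxy nova-scheduler

When nova-compute is installed, selecting LXC comes down to the libvirt virtualization type in /etc/nova/nova.conf on the compute host, for example:

    [DEFAULT]
    compute_driver = libvirt.LibvirtDriver

    [libvirt]
    virt_type = lxc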

With all the Nova services configured and running, it’s time to move on to the networking part of the deployment.

Installing and configuring Networking service

The networking component of OpenStack, codenamed Neutron, manages networks, IP addresses, software bridging, and routing. In previous posts we had to create the Linux bridge, add ports to it, configure DHCP to assign IPs to the containers, and so on. Neutron exposes all of this functionality through a convenient API and libraries that we can use.

We need to define which network extensions we are going to support and the type of network. All of this information will be used when creating the LXC container and its configuration file, as we’ll see later:
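
As an illustrative sketch, a flat network with the Linux bridge mechanism driver could be described in the ML2 plugin configuration; the file path and values below are assumptions for a minimal single-host setup:

    # /etc/neutron/plugins/ml2/ml2_conf.ini
    [ml2]
    type_drivers = flat
    tenant_network_types =
    mechanism_drivers = linuxbridge
    extension_drivers = port_security

    [ml2_type_flat]
    flat_networks = *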

In order to SSH into the LXC containers, we can have the SSH keys managed and installed during instance provisioning, rather than baking them into the actual image. To generate an SSH key pair and add it to OpenStack, run:
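
A minimal sketch; the key file and the keypair name lxckey are arbitrary placeholders:

    # Generate a key pair without a passphrase
    ssh-keygen -q -N "" -f ~/.ssh/lxckey

    # Register the public key with OpenStack
    openstack keypair create --public-key ~/.ssh/lxckey.pub lxckey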

By default, once a new LXC container is provisioned, iptables will disallow access to it. Let’s create two security group rules that will allow ICMP and SSH, so we can test connectivity and connect to the instance:
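
One way to do this with the openstack client is to add the two rules to the default security group (the group name is an assumption):

    # Allow ICMP (ping) and SSH to instances in the default security group
    openstack security group rule create --proto icmp default
    openstack security group rule create --proto tcp --dst-port 22 default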