The specific branch used in this document is here: https://github.com/CiscoSystems/folsom-manifests/tree/simple-multi-node

== Overview ==

In the Cisco OpenStack distribution, a build server outside of the OpenStack cluster is used to manage and automate the OpenStack software deployment. This build server primarily functions as a [http://puppetlabs.com/puppet/puppet-open-source/ Puppet] server for software deployment and configuration management onto the OpenStack cluster, as well as a [http://cobbler.github.com/ Cobbler] installation server to manage the PXE boot used for rapid bootstrapping of the OpenStack cluster.

Once the build server is installed and configured, it is used as an out-of-band automation and management workstation to bring up, control, and reconfigure (if later needed) the nodes of the OpenStack cluster. It also functions as a monitoring server to collect statistics about the health and performance of the OpenStack cluster, as well as to monitor the availability of the machines and services which comprise the OpenStack cluster.

== Building the environment ==

=== Assumptions ===

Although other configurations are supported, the following instructions target an environment with a build node, a controller node, and at least one compute node. Additional compute nodes may optionally be added, and swift nodes may also be added if desired.

When naming your nodes, make sure that:

*all compute nodes contain "compute" in their host name
*all control nodes contain "control" in their host name
*all swift nodes contain "swift" in their host name
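The naming convention above can be sanity-checked with a small shell helper. This is an illustrative sketch only; the `classify_node` function is hypothetical and not part of the distribution:

```shell
# Hypothetical helper: infer a node's OpenStack role from its host name,
# following the naming convention described above.
classify_node() {
  case "$1" in
    *compute*) echo compute ;;
    *control*) echo control ;;
    *swift*)   echo swift ;;
    *)         echo unknown ;;
  esac
}

classify_node compute01   # prints: compute
```

A host name that matches none of the patterns will not be recognized by the manifests, so checking names before defining nodes in cobbler-node.pp can save a rebuild.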


Also, these instructions primarily target deployment of OpenStack onto UCS servers (either blades or rack-mount form factors). Several steps in the automation leverage the UCS manager to execute system tasks. Deployment on non-UCS gear may well work, but may require additional configuration or additional manual steps to manage systems.

=== Creating a build server ===

To deploy Cisco OpenStack, first configure a build server. This server has relatively modest minimum requirements: 2 GB of RAM, 20 GB of storage, Internet connectivity, and a network interface on the same network as the eventual management interfaces of the OpenStack cluster machines. This machine can be physical or virtual; eventually a pre-built VM of this server will be provided, but this is not yet available.


Install Ubuntu 12.04 LTS onto this build server. A minimal install with openssh-server is sufficient. Configure the network interface on the OpenStack cluster management segment with a static IP. Also, when partitioning the storage, choose a partitioning scheme which provides at least 15 GB free space under /var, as installation packages and ISO images used to deploy OpenStack will eventually be cached there.
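One way to verify the partitioning afterward is to query the free space under /var. The `free_gb` helper below is a hypothetical convenience for illustration, not part of the distribution:

```shell
# Print whole gigabytes available on the filesystem holding the given path.
free_gb() {
  df -Pk "$1" | awk 'NR==2 { print int($4 / 1024 / 1024) }'
}

# Warn if /var does not have the recommended 15 GB free.
if [ "$(free_gb /var)" -ge 15 ]; then
  echo "/var has enough free space"
else
  echo "WARNING: /var has less than 15 GB free"
fi
```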

When the installation finishes, log in to the build server.

''Optional: If you have your build server set up behind a non-transparent web proxy, you should export your proxy configuration:''

<pre>export http_proxy=http://proxy.esl.cisco.com:80
export https_proxy=https://proxy.esl.cisco.com:80</pre>

''Replace proxy.esl.cisco.com:80 with whatever is appropriate for your environment.''
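The exported variables only affect the current shell. If apt should also use the proxy for the package installations below, a persistent configuration can be dropped into apt's configuration directory; this is a sketch under assumptions (the file name 95proxy is arbitrary, and the proxy URL is a placeholder for your environment):

```shell
# Persist the proxy for apt (file name and proxy URL are assumptions).
cat > /etc/apt/apt.conf.d/95proxy <<'EOF'
Acquire::http::Proxy "http://proxy.esl.cisco.com:80";
Acquire::https::Proxy "https://proxy.esl.cisco.com:80";
EOF
```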

You should now install any pending security updates:

<pre>apt-get update && apt-get dist-upgrade -y</pre>

''Note: The system may need to be restarted after applying the updates.''

Next, install a few additional required packages and their dependencies:
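The exact package list is not given in this revision. As an assumption based on the steps that follow (puppet to apply the manifests, git to fetch them), the minimum would be along these lines:

```shell
# Package list is an assumption inferred from the steps below,
# not an official list from the distribution.
apt-get install -y puppet git
```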

Copy the puppet modules from ~/cisco-folsom-modules/modules/ to /etc/puppet/modules/:

<pre>cp -r ~/cisco-folsom-modules/modules/ /etc/puppet/</pre>

Also, get the Cisco Edition example manifests. Under the [https://github.com/CiscoSystems/folsom-manifests/branches folsom-manifests GitHub repository] you will find different branches, so select the one that matches your topology plans most closely. In the following examples the simple-multi-node branch will be used, which is likely the most common topology.
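The branch can be fetched with git. This is a sketch, not an official command from the distribution; the clone target directory ~/cisco-folsom-manifests is an assumption chosen to match the copy command below:

```shell
# Fetch the simple-multi-node branch of the example manifests
# (target directory is an assumption).
git clone -b simple-multi-node \
  https://github.com/CiscoSystems/folsom-manifests.git \
  ~/cisco-folsom-manifests
```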

Copy the puppet manifests from ~/cisco-folsom-manifests/manifests/ to /etc/puppet/manifests/:

<pre>cp ~/cisco-folsom-manifests/manifests/* /etc/puppet/manifests</pre>

''Optional: If your set up is in a private network and your build node will act as a proxy server and NAT gateway for your OpenStack cluster, you need to add the corresponding NAT and forwarding configuration.''
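The NAT and forwarding configuration is environment-specific. A minimal sketch, assuming eth0 faces the outside network and eth1 faces the private cluster management segment (the interface names are assumptions):

```shell
# Enable IPv4 forwarding and masquerade cluster traffic out eth0.
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
```

These rules are not persistent across reboots; a tool such as iptables-persistent, or distribution-specific configuration, is needed to make them permanent.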

=== Customizing the build server ===

In the /etc/puppet/manifests directory you will find these three files:

<pre>site.pp
cobbler-node.pp
clean_node.sh</pre>

At a high level, cobbler-node.pp defines the hardware properties of the individual servers being deployed in the OpenStack cluster. site.pp defines the various parameters that must be set to configure the OpenStack cluster, and also provides the configuration settings for the build server. clean_node.sh is a shell script provided as a convenience to end users; it wraps several cobbler and puppet commands for ease of use when building and rebuilding the nodes of the OpenStack cluster.

IMPORTANT! You must edit these files. They are fairly well documented internally, but please comment with any questions. You can also read through these documents for more details: [https://github.com/CiscoSystems/folsom-manifests/blob/simple-multi-node/Cobbler-Node.md Cobbler Node] and [https://github.com/CiscoSystems/folsom-manifests/blob/simple-multi-node/Site.md Site]

Then, use the ‘puppet apply’ command to activate the manifests:

<pre>puppet apply -v /etc/puppet/manifests/site.pp</pre>

When the puppet apply command runs, the puppet client on the build server will follow the instructions in the site.pp and cobbler-node.pp manifests and will configure several programs on the build server:

*ntpd -- a time synchronization server used on all OpenStack cluster nodes to ensure time throughout the cluster is correct
*tftpd-hpa -- a TFTP server used as part of the PXE&nbsp;boot process when OpenStack nodes boot up
*dnsmasq -- a DNS and DHCP server used as part of the PXE&nbsp;boot process when OpenStack nodes boot up
*cobbler -- an installation and boot management daemon which manages the installation and booting of OpenStack nodes
*apt-cacher-ng -- a caching proxy for package installations, used to speed up package installation on the OpenStack nodes
*nagios -- an infrastructure monitoring application, used to monitor the servers and processes of the OpenStack cluster
*collectd -- a statistics collection application, used to gather performance and other metrics from the components of the OpenStack cluster
*graphite and carbon -- a real-time graphing system for parsing and displaying metrics and statistics about OpenStack

The initial puppet configuration of the build server will take several minutes to complete as it downloads, installs, and configures all the software needed for these applications.

Once the puppet apply is completed, a reboot is recommended to ensure that all installed software is started in the correct sequence.

After the build server is configured and rebooted, the systems listed in cobbler-node.pp should be defined in cobbler on the build server:

<pre># cobbler system list
control
compute01
compute02
#</pre>

And now, you should be able to use cobbler to build your controller:

<pre>/etc/puppet/manifests/clean_node.sh {node_name} example.com</pre>

''Replace node_name with the name of your controller, and example.com with your cluster's domain.''

clean_node.sh is a script which does several things:


*configures Cobbler to PXE&nbsp;boot the specified node with appropriate PXE&nbsp;options to do an automated install of Ubuntu
*uses Cobbler to power-cycle the node
*removes any existing client registrations for the node from Puppet, so Puppet will treat it as a new install
*removes any existing key entries for the node from the SSH known hosts database


When the script runs, you may see errors from the Puppet and SSH clean up steps if the machine did not already exist in Puppet or SSH. This is expected, and not a cause for alarm.
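For illustration, the kinds of commands clean_node.sh wraps can be sketched as a dry-run function that only echoes what would be executed; the exact commands in the real script may differ:

```shell
# Dry-run sketch of the clean_node.sh steps (echoes, does not execute).
# The command set is an assumption based on the four steps listed above.
clean_node_dry_run() {
  node="$1" domain="$2"
  echo "cobbler system edit --name=$node --netboot-enabled=true"
  echo "cobbler system reboot --name=$node"
  echo "puppet cert clean $node.$domain"
  echo "ssh-keygen -R $node.$domain"
}

# Prints the four commands for the 'control' node.
clean_node_dry_run control example.com
```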


You can watch the progress on the console of your controller node as cobbler completes the automated install of Ubuntu. Once the installation finishes, the controller node will reboot and then will run puppet after it boots up. Puppet will pull and apply the controller node configuration defined in the puppet manifests on the build server.


This step will take several minutes, as puppet downloads, installs, and configures the various OpenStack components and support applications needed on the control node. /var/log/syslog on the controller node will display the progress of the puppet configuration run.


''Note that it may take more than one puppet run for the controller node to be set up completely. Observe the log files to verify that the controller configuration has converged completely to the configuration defined in puppet.''


Once the puppet configuration of the controller has completed, follow the same steps to build each of the other nodes in the cluster, using clean_node.sh to initiate each install. As with the controller, the other nodes will take several minutes for puppet configuration to complete, and may require multiple runs of puppet before they are fully converged to their defined configuration state.

As a short cut, if you want to build ''all'' of the nodes defined in your cobbler-node.pp file, you can run:

<pre>for n in `cobbler system list`; do clean_node.sh $n example.com ; done</pre>

''Note: replace example.com with your node's proper domain name.''

== Testing OpenStack ==

Once the nodes are built, and once puppet runs have completed on all nodes (watch /var/log/syslog on the cobbler node), you should be able to log into the OpenStack Horizon interface.

You will still need to log into the console of the control node to load in an image: user: localadmin, password: ubuntu. If you su to root, there is an openrc auth file in root's home directory, and you can run the test script /tmp/nova_test.sh.

Revision as of 19:41, 8 November 2012