
The OpenStack High Availability Modules

Introduction

The OpenStack High Availability (HA) Puppet Modules are a flexible Puppet implementation capable of configuring OpenStack and additional services to provide high-availability mechanisms. A 'Puppet Module' is a collection of related content that can be used to model the configuration of a discrete service.

Dependencies:

Puppet:

The modules are driven from a Puppet Master (the Build Node). Note that the Swift configuration requires a Puppet Master with storeconfigs enabled.

Operating System Platforms:

These modules have been fully tested on Ubuntu 12.04 LTS (Precise).

Networking:

Each of the servers running OpenStack services should have a minimum of 2 networks, and preferably 3. The networks can be physically or virtually (VLAN) separated. In addition to the 2 OpenStack networks, it is recommended to have an ILO/CIMC network to fully leverage the remote management capabilities of the Cobbler Module. Additionally, the puppet networking class models the OpenStack network configurations.

The following provides a brief explanation of the OpenStack Module networking requirements.

OpenStack Management Network

This network is used to perform management functions against the node; Puppet Master <-> Agent communication is an example.

Storage Volumes:

Every Compute Node is configured to host the nova-volume service to provide persistent storage to instances through iSCSI. The volume-group name is 'nova-volumes' and should not be changed.
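The 'nova-volumes' volume group must exist before the nova-volume service starts. If your deployment does not already create it, a minimal sketch, assuming /dev/sdb is a spare disk on the Compute Node (the device name is an assumption):

pvcreate /dev/sdb
vgcreate nova-volumes /dev/sdb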

Node Types:

The OpenStack HA solution consists of 6 Nodes Types:

Build Node

Minimum Quantity: 1

Runs Puppet Master, Cobbler, and other management services.

Provides capabilities for managing OpenStack environments at scale.

Load Balancer Node

Minimum Quantity: 2

Runs HAProxy and Keepalived.

Provides monitoring and fail-over for API endpoints and between load-balancer nodes.

Controller Node

Minimum Quantity: 3

Runs MySQL Galera, Keystone, Glance, Nova, Horizon, and RabbitMQ.

Provides control plane functionality for managing the OpenStack Nova environment.

Compute Node

Minimum Quantity: 1 (recommend having 2 or more to demonstrate nova-scheduler across multiple nodes)

Runs the following Nova services: api, compute, network, and volume.

Provides necessary infrastructure services to Nova Instances.

Swift Proxy Node

Minimum Quantity: 2

Runs swift-proxy, memcached, and keystone-client.

Authenticates users against Keystone and acts as a translation layer between clients and storage.

Swift Storage Node

Minimum Quantity: 3

Runs Swift account/container/object services. XFS is used as the filesystem.

Controls storage of the account databases, container databases, and the stored objects.

Installation

Installation Order

The OpenStack Nodes are required to be deployed in a very specific order. For the time being, you need to perform multiple puppet runs for most Nodes to deploy properly. The following is the order in which the nodes should be deployed. Preface commands with sudo if you are not the root user:

HAProxy Nodes: Make sure the haproxy/keepalived services are running and the config files look good before proceeding to the next node type. It is also very important to test connectivity to the Virtual IP addresses (telnet <vip_addr> <port>); if the VIPs are not working, the build-out of the OpenStack Nodes will fail.

Swift Storage Nodes: If you are rebuilding Swift Storage Nodes, the hard disks should be zeroed out. Use the clean-disk script from the Cisco repo (https://github.com/CiscoSystems/cisco-openstack-docs/blob/master/examples/clean_disk.sh) or use the following command on each storage node before starting the rebuild:
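Only the loop header comes from the original; the dd body below is an assumed completion, so adjust the drive letters and the amount zeroed to your hardware:

for i in b c d e f    # add/subtract drive letters as needed
do
  # assumed body: overwrite the start of each data disk with zeros
  dd if=/dev/zero of=/dev/sd$i bs=1M count=1000
done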

Swift Proxy Node 1: Make sure the ring is functional before adding the 2nd Proxy.

Swift Proxy Node 2: Make sure the ring is functional before proceeding.

Controller Nodes 1-3: You must ensure that the HAProxy Virtual IP address for the Controller cluster is working or your puppet run will fail. Deploy the Controllers one at a time, starting with Controller 1.

Compute Nodes: Start off with just 1 or 2 nodes before deploying a large number.

Finally, test to make sure the environment is functional.

Overview of Key Modules

The 'puppet-openstack' module was written for users interested in deploying and managing a production-grade, highly-available OpenStack deployment. It provides a simple and flexible means of deploying OpenStack, and is based on best practices shaped by companies that contributed to the design of these modules. In the OpenStack Cisco Edition, the modules are installed to /usr/share/puppet/modules, the default module path.

The 'puppet-cobbler' module provides several key tasks, such as bare-metal OS provisioning and ILO management of servers. Note: The encrypted password (password_crypted) is ubuntu. For more information on the parameters, check out the inline documentation in the manifest:

module_path/cobbler/manifests/init.pp

cobbler::ubuntu

This class manages the Ubuntu ISO used to PXE boot servers.

Usage Example:

This class will load the Ubuntu Precise x86_64 server ISO into Cobbler:

cobbler::ubuntu { "precise":}

For more information on the parameters, review the inline documentation within the manifest:

module_path/cobbler/manifests/ubuntu.pp

cobbler::node

This manifest installs a node into the cobbler system. Run puppet apply -v /etc/puppet/manifests/site.pp after adding nodes to cobbler::node. You can use the 'cobbler system list' command to verify that nodes have been properly added to Cobbler.

Shared Variables: I will not address every shared variable, as many of them are self-explanatory or include inline documentation. One shared variable that deserves additional explanation is [$multi_host], which must be set to true; it establishes the Nova multi-host networking model.

This class manages the /etc/network/interfaces file for OpenStack HA Nodes.

Usage Example:

This class will configure networking parameters for OpenStack Nodes. This particular example configures networking for a Controller Node that uses VLAN 221 for the Nova Instance (flat) network and collapses the public/management networks:
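The example itself is not shown here; as a rough sketch, the managed /etc/network/interfaces for such a Controller might look like the following, where every interface name and address is a placeholder:

auto eth0
iface eth0 inet static
    address 192.168.220.41      # collapsed public/management address (hypothetical)
    netmask 255.255.255.0
    gateway 192.168.220.1

auto vlan221                    # Nova Instance (flat) network on VLAN 221
iface vlan221 inet manual
    vlan-raw-device eth0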

This class adds a service user account to the database that allows HAProxy to monitor the database cluster health. Note: If you change the user account information here, you also need to change the mysql-check user definition in the haproxy::config manifest.

Usage Example:

class {'galera::haproxy': }

If needed, you can define account credentials for the service account. For more information on the parameters, check out the inline documentation in the manifest:

The OpenStack compute class is used to manage Nova Compute Nodes. A typical OpenStack HA installation would consist of at least 3 Controller Nodes and a large number of Compute Nodes (based on the amount of resources being virtualized).

The openstack::compute class deploys the following Nova services:

- nova-compute (libvirt backend)
- nova-network (multi_host must be enabled)
- nova-api (multi_host must be enabled)
- nova-volume
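For illustration only, a Compute Node declaration might look like the sketch below; the parameter names are assumptions, so consult the module's inline documentation for the actual interface:

class { 'openstack::compute':
  internal_address => $ipaddress_eth0,  # management address of this node (assumed parameter)
  libvirt_type     => 'kvm',
  multi_host       => true,             # required by this HA model
}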

Inside the site.pp file, Puppet resources declared within node blocks are applied to those specified nodes. Resources specified at top-scope are applied to all nodes.
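For example (node and class names are placeholders):

# top-scope: applied to every node
Exec { path => '/usr/bin:/bin:/usr/sbin:/sbin' }

node 'control01' {
  # applied only to control01
  class { 'openstack::controller': }
}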

Deploying HAProxy Load-Balancers

The servers that act as your load-balancers should be managed by Cobbler.
Make sure your cobbler-node manifest is properly configured and you have added node definitions for your two load-balancer nodes. Here is an example:
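A sketch following the cobbler::node pattern used in the Swift section below; the MAC, addresses, and every parameter other than mac, profile, and power_password are assumptions:

cobbler::node { "haproxy01":
  mac            => "00:11:22:33:44:55",   # PXE NIC (hypothetical)
  profile        => "precise-x86_64-auto",
  power_user     => "admin",               # ILO/CIMC credentials (assumed parameters)
  power_password => "password",
}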

Lastly, use Cobbler to power-on the HAProxy Nodes and begin the deployment process:

sudo cobbler system poweron --name=<name of HAProxy Node1 from cobbler system list command>
sudo cobbler system poweron --name=<name of HAProxy Node2 from cobbler system list command>

Deploying Swift

The servers that act as your Swift Proxies and Storage Nodes should be managed by Cobbler. Make sure your cobbler-node manifest is properly configured and you have added node definitions for your Swift Nodes. Here is an example:
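Only the opening and closing parameters of each definition are shown; the parameters in between are elided:

cobbler::node { "swiftproxy01":
  mac     => "A4:4C:11:13:44:93",
  profile => "precise-x86_64-auto",
  ...
}

cobbler::node { "swift01":
  mac            => "A4:4C:11:13:BA:17",
  profile        => "precise-x86_64-auto",
  ...
  power_password => "password",
}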

Next, edit the node definitions and network settings in /etc/puppet/manifests/swift-nodes.pp. Replace existing node definitions with the hostname/certname of your Swift Storage and Proxy Nodes. The site.pp file should include the 'import swift-nodes' statement.
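The example configuration (https://github.com/danehans/puppet-openstack/blob/essex-ha/examples/swift-nodes.pp) creates five storage devices on every node. Make sure to increase/decrease the following swift-nodes.pp definitions based on the number of hard disks in your Storage Nodes (the definitions between the first and last entries are elided here):

swift::storage::disk
...
@@ring_account_device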

Note: Do not define the 2nd Swift Proxy until the Storage Nodes and first proxy are deployed and the ring is established. Also, add additional Storage Node definitions as needed. You must use at least 3 Storage Nodes to create a Swift ring.

Run puppet apply on the site.pp file of the Puppet Master to add the Swift Storage Nodes to Cobbler:

puppet apply /etc/puppet/manifests/site.pp -v

Verify that the nodes have been added to Cobbler

cobbler system list

The Swift Nodes must be configured in the following order:

First the storage nodes need to be configured. This creates the storage services (object, container, account) and exports all of the storage endpoints for the ring builder (Proxy Node) into storeconfigs. Note: It is expected that the account, object, and container replication service will fail to start during the initial puppet run. You are ready to move to the first Proxy Node when you have reached this state in your puppet runs for the Storage Nodes.

Next, Swift Proxy 1 should be deployed. The ringbuilder (included in the Proxy Node) collects the storage endpoints and creates the ring database. It also creates an rsync server which is used to host the ring database; exported resources are then used to rsync the ring database from the Swift Proxy.
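You can check the state of the ring from the proxy by running the ring-builder against each builder file:

swift-ring-builder /etc/swift/account.builder
swift-ring-builder /etc/swift/container.builder
swift-ring-builder /etc/swift/object.builder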

If you like, you can verify the contents of the Swift storeconfigs in the Puppet Master database:

mysql
use puppet;
select * from resources;

Next, the storage nodes should be run again so that they can rsync the ring databases.

Next, use Cobbler to power-on the Storage Nodes and begin the deployment process. Repeat this step for Proxy Node 1 after the Storage Nodes have been deployed. Repeat for Proxy Node 2 when the ring is established between the 3 Storage Nodes and Proxy Node 1.
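The power-on command follows the same pattern as in the HAProxy section, and a manual agent run triggers an additional puppet pass where one is needed:

sudo cobbler system poweron --name=<name of Swift Node from cobbler system list command>
puppet agent -t -d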

Note: After the Swift environment has been deployed properly, you will see the following errors on Storage Nodes of subsequent puppet runs:

err: Could not retrieve catalog from remote server: Error 400 on SERVER:
Exported resource Swift::Ringsync[account] cannot override local resource on node <node_name>
warning: Not using cache on failed catalog
err: Could not retrieve catalog; skipping run

Comment out the Swift::Ringsync<<||>> definition under the Storage Nodes, or under the class that gets imported into your Storage Node definitions:

#Swift::Ringsync<<||>>

Deploying an OpenStack Nova HA Environment

The servers that act as your Nova Controllers and Compute Nodes should be managed by Cobbler. Make sure your cobbler-node manifest is properly configured and you have added node definitions for your Controller and Compute Nodes. Here is an example:
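The definitions follow the same cobbler::node pattern shown in the Swift section; the sketch below uses the control01 hostname from this section, with the MAC as a placeholder:

cobbler::node { "control01":
  mac     => "00:11:22:33:44:66",   # hypothetical
  profile => "precise-x86_64-auto",
}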

Next, edit the node definitions and network settings in /etc/puppet/manifests/site.pp. Replace control01, control02, control03, compute01 with the hostname/certname of your Controller/Compute Nodes. Note: Since Controller Nodes need to be deployed in order of 1-3, we suggest you edit the site.pp node name definitions one-by-one and perform puppet runs. The same applies to your Compute Node(s). Otherwise, keep the nodes powered off until it is their turn to be deployed.

Repeat this step for the 2 other Controller Nodes after the first Controller Node has been deployed. Repeat for Compute Nodes, when the Controllers have been successfully deployed.

Keep in mind that the deployment MUST be performed in a very specific order (outlined above). You can either make all the necessary changes to your site manifests and keep particular nodes powered-off, or you can change site.pp node name definition one-by-one and perform puppet runs.

Verifying an OpenStack deployment

Once you have installed OpenStack using Puppet (and assuming you experience no errors), the next step is to verify the installation:

openstack::auth_file

The openstack::auth_file class creates the file:

/root/openrc

which stores environment variables that can be used for authentication of the OpenStack command line utilities.
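As a sketch of what the file typically contains (the exact variable set and values are assumptions):

export OS_TENANT_NAME=openstack
export OS_USERNAME=admin
export OS_PASSWORD=<admin_password>
export OS_AUTH_URL=http://<controller_vip>:5000/v2.0/

Source it before using the command line utilities, e.g. source /root/openrc && nova list.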

Administration

The OpenStack Cisco Edition includes several tools to assist with administration. The clean_node.sh script is a tool that removes the configurations associated with a node and starts the rebuilding process. The script is located at <module_path>/os-docs/examples/clean_node.sh

Usage Example:

/etc/puppet/modules/os-docs/examples/clean_node.sh control01.corp.com

Note: The script will use sdu.lab if you do not specify the FQDN. You can change the default domain name by editing the clean_node.sh script.

Additional administration guides are available for providing details on managing and operating an OpenStack environment.

Participating

Need a feature? Found a bug? Let us know!

We are extremely interested in growing a community of OpenStack experts and users around these modules so they can serve as an example of consolidated best practices of production-quality OpenStack deployments.

The best way to get help with this set of modules is through email:

openstack-support@cisco.com

Issues should be reported here:

openstack-support@cisco.com

The process for contributing code is as follows:

fork the projects on GitHub

submit pull requests to the projects containing code contributions

Future features:

Efforts are underway to implement the following additional features:

Support OpenStack Folsom release

These modules are currently intended to be classified and data-fied in a site.pp. Starting in version 3.0, it is possible to populate class parameters explicitly using puppet data bindings (which use hiera as the back-end). The decision not to use hiera was primarily based on the fact that it requires explicit function calls in 2.7.x.

Integrate with PuppetDB to allow service auto-discovery, simplifying the configuration of service associations.