
Revision as of 18:56, 11 March 2014

Overview

The OpenStack Havana Release All-In-One (AIO) deployment builds on the Cisco OpenStack Installer (COI) instructions. The Cisco OpenStack Installer provides support for a variety of deployment scenarios, including:

All-in-One

All-in-One plus additional Compute nodes

2 Node

Full HA

Compressed HA

This document will cover the deployment of two networking scenarios based on the All-in-One scenario:

Model 1

This section describes the process for deploying OpenStack with the Cisco OpenStack Installer in an All-In-One node configuration with Per-Tenant Routers with Private Networks.

Assumptions

The Cisco OpenStack Installer requires that you have two physically or logically (VLAN) separated IP networks. One network is used to provide connectivity for OpenStack API endpoints, Open vSwitch (OVS) GRE endpoints (especially important if multiple compute nodes are added to the AIO deployment), and OpenStack/UCS management. The second network is used by OVS as the physical bridge interface and by Neutron as the public network.

The AIO node is built on Ubuntu 12.04 LTS, which can be installed via manual ISO/DVD or PXE setup and can be deployed on physical bare-metal hardware (e.g. Cisco UCS) or as a virtual machine (e.g. VMware ESXi).

You have followed the installation steps in the Cisco OpenStack Installer (COI) instructions. Note: A recap of the AIO-specific instructions is provided below.

You are using hostnames for the various OpenStack roles that match those in the /root/puppet_openstack_builder/data/role_mappings.yaml file. If you are not using the default hostnames then you must add your custom hostname and role to the /root/puppet_openstack_builder/data/role_mappings.yaml before running the installation script.

Building the All-in-One OpenStack Node

The deployment of the AIO node in Model 1 will begin after a fresh install of Ubuntu 12.04 LTS and with the network configuration based on the example shown in Figure 1.

Note: Before running the installation script for COI it is important to make any modifications to the baseline AIO configuration if you have non-standard interface definitions, hostnames (the defaults can be viewed in the /root/puppet_openstack_builder/data/role_mappings.yaml file), proxies, etc. Details on setting some of these custom values can be found in the Cisco OpenStack Installer (COI) instructions.

Here are three examples that include a way to set custom interface definitions and custom hostnames for the AIO Model 1 setup:

If you are using an interface other than 'eth0' on your node for SSH/Management access then export the default_interface value to the correct interface definition. In the example below, eth1 is used:

export default_interface=eth1 # This is the interface you logged into via ssh

If you are using an interface other than 'eth1' on your node for external instance (public) access then export the external_interface value. In the example below, eth2 is used:

export external_interface=eth2

If you are using a hostname other than "all-in-one" for the AIO node then you must update the /root/puppet_openstack_builder/data/role_mappings.yaml file to include your hostname and its role. For example if your hostname is "all-in-one-test1" then the role_mappings.yaml file should have an entry that looks like this:

all-in-one-test1: all_in_one

Export 'cisco' as the vendor:

export vendor=cisco

Export the AIO scenario:

export scenario=all_in_one

Change directory to where the install script is located and start the installation (this will take a while, depending on your Internet connection):
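A typical invocation looks like the following; the script location shown is an assumption based on the default COI checkout path, so adjust it if your layout differs:

```shell
# Assumed default COI location of the install script; adjust to match
# your checkout if it lives elsewhere.
cd /root/puppet_openstack_builder/install-scripts

# Run the installer and keep a log of the Puppet run for troubleshooting.
./install.sh 2>&1 | tee install.log
```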

After the install script and Puppet run have completed, you should be back at the prompt with a "Finished catalog run" message. You can verify that all of the OpenStack Nova services were installed and are running correctly by checking the Nova service list:
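A quick check might look like this (sourcing the credentials file that COI installs, per the instructions later in this document):

```shell
# Load the admin credentials installed by COI.
source /root/openrc

# Each Nova service should report as enabled and up; a service stuck
# in a down state indicates a problem with the Puppet run.
nova service-list
```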

Neutron Networking

This section will walk through building a Per-Tenant Router with Private Networks Neutron setup. You can opt to perform all of the steps below in the OpenStack Dashboard or via the CLI. The CLI steps are shown below. Also, please consult the Figure 1 diagram so that you can easily understand the network layout used by Neutron in our example.

Before running OpenStack client commands, you need to source the installed openrc file located in the /root/ directory:

source openrc

Create a public network to be used for instances (VMs) to gain external (public) connectivity:

neutron net-create Public_Network --router:external=True

Create a subnet that is associated with the previously created public network. Note: If you have existing hosts on the same subnet that you are about to use for the public subnet, then you must use an allocation pool that starts in a range that will not conflict with other network nodes. For example, if you have HSRP/VRRP/GLBP upstream and those routers are using addresses in the public subnet range (e.g. 192.168.81.1, 192.168.81.2, 192.168.81.3), then your allocation range must start in a non-overlapping range.
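As a sketch, assuming the 192.168.81.0/24 public range mentioned above (the subnet name, CIDR, and pool boundaries are assumptions; substitute your own values):

```shell
# Create the public subnet with an allocation pool that starts above the
# addresses already in use by the upstream HSRP/VRRP/GLBP routers.
# DHCP is disabled because this is an externally routed network.
neutron subnet-create Public_Network 192.168.81.0/24 \
  --name Public_Subnet \
  --allocation-pool start=192.168.81.10,end=192.168.81.250 \
  --disable-dhcp
```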

Boot an Instance

1. Boot an Instance (Cirros image example shown below). Run the "neutron net-list" command to get a list of networks. Use the ID for the Private_Net10 network from the net-list output in the --nic net-id= field:
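The boot sequence might look like the following; the image and flavor names are assumptions, and the net-id placeholder must be replaced with the actual UUID from your net-list output:

```shell
# Find the UUID of the Private_Net10 tenant network.
neutron net-list

# Boot a Cirros instance on that network. Replace <Private_Net10-ID>
# with the ID from the net-list output above; "cirros" and "m1.tiny"
# are example image/flavor names.
nova boot --image cirros --flavor m1.tiny \
  --nic net-id=<Private_Net10-ID> test-vm1
```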

Verify that your instance has spawned successfully. Note: The first time an instance is launched on the system it can take a bit longer to boot than subsequent launches of instances:

nova show test-vm1

2. Verify connectivity to the instance from the AIO node. Since namespaces are being used in this model, you will need to run the commands from the context of the qrouter using the "ip netns exec qrouter" syntax. List the qrouter to get its router-id, connect to the qrouter and get a list of its addresses, ping the instance from the qrouter and then SSH into the instance from the qrouter: