ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.

All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)

The Cisco Validated Design program consists of systems and solutions that are designed, tested, and documented to facilitate and improve customer deployments. These designs incorporate a wide range of technologies and products into a portfolio of solutions that have been developed to address the business needs of our customers.

The reference architecture described in this document is a realistic use case for deploying Red Hat Enterprise Linux OpenStack Platform 7 on Cisco UCS blade and rack servers. The document covers step-by-step instructions for setting up the UCS hardware, installing Red Hat Enterprise Linux OpenStack Platform Director, issues and workarounds encountered during installation, integration of the Cisco plugins with OpenStack, what needs to be done to leverage High Availability from both hardware and software, a Live Migration use case, performance and scalability tests performed on the configuration, lessons learned, best practices that evolved while validating the solution, and a few troubleshooting steps.

Cisco UCS Integrated Infrastructure for Red Hat Enterprise Linux OpenStack Platform is an all-in-one solution for deploying an OpenStack-based private cloud using Cisco infrastructure and the Red Hat Enterprise Linux OpenStack Platform. The solution is validated and supported by Cisco and Red Hat to increase the speed of infrastructure deployment and reduce the risk of scaling from proof-of-concept to full enterprise production.

Automation, virtualization, cost, and ease of deployment are the key criteria for meeting growing IT challenges. Virtualization is a critical strategic deployment model for reducing the Total Cost of Ownership (TCO) and achieving better utilization of platform components such as hardware, software, network, and storage. The platform should be flexible, reliable, and cost effective for enterprise applications.

The audience for this document includes, but is not limited to, sales engineers, field consultants, professional services, IT managers, partner engineers, IT architects, and customers who want to take advantage of an infrastructure that is built to deliver IT efficiency and enable IT innovation. The reader of this document is expected to have the necessary training and background to install and configure Red Hat Enterprise Linux, Cisco Unified Computing System (UCS) and Cisco Nexus Switches as well as a high level understanding of OpenStack components. External references are provided where applicable and it is recommended that the reader be familiar with these documents.

Readers are also expected to be familiar with the infrastructure, network and security policies of the customer installation.

This document describes the step-by-step installation of Red Hat Enterprise Linux OpenStack Platform 7 and Red Hat Ceph Storage 1.3 on the Cisco UCS platform. It also discusses the day-to-day operational challenges of running OpenStack and the steps to mitigate them, High Availability use cases, Live Migration, common troubleshooting aspects of OpenStack, and operational best practices.

This solution is focused on Red Hat Enterprise Linux OpenStack Platform 7 (based on the upstream OpenStack Kilo release) and Red Hat Ceph Storage 1.3 on Cisco Unified Computing System. The advantages of Cisco UCS and Red Hat Enterprise Linux OpenStack Platform combine to deliver an OpenStack Infrastructure as a Service (IaaS) deployment that is quick and easy to set up. The solution can scale up for greater performance and capacity or scale out for environments that require consistent, multiple deployments. It provides:

The converged infrastructure of compute, networking, and storage components from Cisco UCS is a validated enterprise-class IT platform. It enables rapid deployment of business-critical applications, reduces costs, minimizes risks, increases flexibility and business agility, and scales up for future growth.

Red Hat Enterprise Linux OpenStack Platform 7 on Cisco UCS helps IT organizations accelerate cloud deployments while retaining control and choice over their environments with open and interoperable cloud solutions. It also offers a redundant architecture from the compute, network, and storage perspectives. The solution comprises the following key components:

·Cisco Unified Computing System (UCS)

—Cisco UCS 6200 Series Fabric Interconnects

—Cisco VIC 1340

—Cisco VIC 1227

—Cisco 2204XP IO Module or Cisco UCS Fabric Extenders

—Cisco B200 M4 Servers

—Cisco C240 M4 Servers

·Cisco Nexus 9300 Series Switches

·Cisco Nexus 1000v for KVM

·Cisco Nexus Plugin for Nexus Switches

·Cisco UCS Manager Plugin for Cisco UCS

·Red Hat Enterprise Linux 7.x

·Red Hat Enterprise Linux OpenStack Platform Director

·Red Hat Enterprise Linux OpenStack Platform 7

·Red Hat Ceph Storage 1.3

The scope is limited to the infrastructure pieces of the solution. It does not address the vast area of the OpenStack components and multiple configuration choices available in OpenStack.

This architecture, based on Red Hat Enterprise Linux OpenStack Platform built on Cisco UCS hardware, is an integrated foundation to create, deploy, and scale an OpenStack cloud based on the Kilo OpenStack community release. The Kilo version introduces Red Hat Enterprise Linux OpenStack Platform Director (RHEL-OSP Director), a new deployment tool chain that combines functionality from the upstream TripleO and Ironic projects with components from previous installers.

The reference architecture use case provides a comprehensive, end-to-end example of deploying an RHEL-OSP 7 cloud on bare metal using OpenStack Director and services through Heat templates.

The first section in this Cisco Validated Design covers setting up the Cisco hardware: the blade and rack servers, chassis, and Fabric Interconnects, along with peripherals such as the Nexus 9000 switches. The second section explains the step-by-step instructions for installing the cloud through RHEL-OSP Director. The final section covers the functional and High Availability tests on the configuration, performance and Live Migration tests, and the best practices that evolved while validating the solution.

The configuration on which most of the tests were conducted comprised 3 controller nodes, 6 compute nodes, 3 storage nodes, and a pair of UCS Fabric Interconnects and Nexus switches. In another configuration the system had 20 compute nodes, 12 Ceph nodes, and 3 controllers distributed across 3 UCS chassis, where a few install and scalability tests were performed. The architecture is scalable both horizontally and vertically within the chassis:

·More Compute Nodes and Chassis can be added as desired.

·More Ceph Nodes for storage can be added. The Ceph nodes can be UCS C240M4L or C240M4S.

·If more bandwidth is needed, Cisco IO Modules can be 2208XP as opposed to 2204XP used in the configuration.

The solution components and diagrams are implemented per the Design Guide; a basic overview is provided below.

The Cisco Unified Computing System is an integrated, scalable, multi-chassis platform in which all resources participate in a unified management domain. The Cisco Unified Computing System accelerates the delivery of new services simply, reliably, and securely through end-to-end provisioning and migration support for both virtualized and non-virtualized systems. Cisco UCS Manager, using single-connect technology, manages servers and chassis and performs auto-discovery to detect, inventory, manage, and provision system components that are added or changed.

The Red Hat Enterprise Linux OpenStack Platform IaaS cloud on Cisco UCS servers is implemented as a collection of interacting services that control compute, storage, and networking resources.

OpenStack Networking handles creation and management of a virtual networking infrastructure in the OpenStack cloud. Infrastructure elements include networks, subnets, and routers. Because OpenStack Networking is software-defined, it can react in real-time to changing network needs, such as creation and assignment of new IP addresses.
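As a brief, hedged illustration of these constructs (the names and addresses below are placeholders and not values from this validation), a tenant network, subnet, and router can be created with the Kilo-era Neutron CLI:

neutron net-create demo-net
neutron subnet-create demo-net 192.168.10.0/24 --name demo-subnet --dns-nameserver 8.8.8.8
neutron router-create demo-router
neutron router-interface-add demo-router demo-subnet
neutron router-gateway-set demo-router <external-network-name>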

Compute serves as the core of the OpenStack cloud by providing virtual machines on demand. Compute supports the libvirt driver, which uses KVM as the hypervisor. The hypervisor creates virtual machines and enables live migration from node to node.
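For example, once the Overcloud is operational, a running instance can be live migrated to another compute node with the Nova CLI; the instance and host names below are placeholders:

nova live-migration <instance-name> <target-compute-host>
nova show <instance-name> | grep OS-EXT-SRV-ATTR:host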

OpenStack also provides storage services to meet the storage requirements for the above mentioned virtual machines.

Keystone provides user authentication to all OpenStack services.

The solution also includes OpenStack Networking ML2 Core components.

The Cisco Nexus 1000V OpenStack solution is an enterprise-grade virtual networking solution that brings security, policy control, and visibility together with Layer 2/Layer 3 switching at the hypervisor layer. For application visibility, the Cisco Nexus 1000V provides insight into live and historical VM migrations and advanced automated troubleshooting capabilities to identify problems in seconds.

The Cisco Nexus driver for OpenStack Neutron allows customers to easily build their infrastructure-as-a-service (IaaS) networks using the industry’s leading networking platform, delivering performance, scalability, and stability with the familiar manageability and control you expect from Cisco® technology.

Bill of Materials

This section contains the Bill of Materials used in the configuration.

The quantity of each component is shown in parentheses after the model.

OpenStack Platform Director Node: Cisco UCS B200M4 blade (1)
CPU – 2 x E5-2630 V3
Memory – 8 x 16GB 2133 MHz DIMM – total of 128G
Local Disks – 2 x 300 GB SAS disks for Boot
Network Card – 1x1340 VIC
Raid Controller – Cisco MRAID 12G SAS Controller

Controller Nodes: Cisco UCS B200M4 blades (3)
CPU – 2 x E5-2630 V3
Memory – 8 x 16GB 2133 MHz DIMM – total of 128G
Local Disks – 2 x 300 GB SAS disks for Boot
Network Card – 1x1340 VIC
Raid Controller – Cisco MRAID 12G SAS Controller

Compute Nodes: Cisco UCS B200M4 blades (6)
CPU – 2 x E5-2660 V3
Memory – 16 x 16GB 2133 MHz DIMM – total of 256G
Local Disks – 2 x 300 GB SAS disks for Boot
Network Card – 1x1340 VIC
Raid Controller – Cisco MRAID 12G SAS Controller

Storage Nodes: Cisco UCS C240M4L rack servers (3)
CPU – 2 x E5-2630 V3
Memory – 8 x 16GB 2133 MHz DIMM – total of 128G
Internal HDD – None
Ceph OSDs – 8 x 6TB SAS Disks
Ceph Journals – 2 x 400GB SSDs
OS Boot – 2 x 1TB SAS Disks
Network Cards – 1 x VIC 1227
Raid Controller – Cisco MRAID 12G SAS Controller

Chassis: Cisco UCS 5108 Chassis (2)

IO Modules: IOM 2204XP (4)

Fabric Interconnects: Cisco UCS 6248UP Fabric Interconnects (2)

Switches: Cisco Nexus 9372PX Switches (2)

Deployment and a few performance tests have been evaluated on another configuration with similar hardware and software specifications as listed above but with 20 Compute nodes and 12 Ceph storage nodes.

Server pools are utilized to divide the OpenStack server roles for ease of deployment and scalability. These pools also decide the placement of server roles within the infrastructure. The following pools were created:

·OpenStack Controller Server pool

·OpenStack Compute Server pool

·OpenStack Ceph Server pool

The Undercloud node is a single server and is not associated with any pool. It uses a standalone template that is used to create a service profile clone.

The compute server pool allows quick provisioning of additional hosts by adding the new servers to the compute server pool. The newly provisioned compute hosts can be added into an existing OpenStack environment through introspection and Overcloud deploy, covered later in this document.

The controllers and computes are distributed across the chassis. This gives High Availability to the stack, even though a failure of an entire chassis is unlikely. There is only one Installer node in the system, and it can be placed in any one of the chassis as above. In larger deployments with 3 or more chassis, it is recommended to place one controller in each chassis.

In larger deployments where the chassis are fully loaded with blades, a better approach while creating server pools could be to manually distribute the tenant and storage traffic across the Fabrics.

Compute pools are created as listed below:

·OpenStack Compute Server pool A

·OpenStack Compute Server pool B

Compute Server pool A can be used for the blades on the left side of the chassis, pinned to Fabric A, while Compute Server pool B can be used for the blades on the right side of the chassis. This is achieved with pool A tenant vNICs pinned to Fabric A and pool B tenant vNICs pinned to Fabric B.

Service profiles will be created from the service profile templates. However, once successfully created, they will be unbound from the templates. The vNIC to be used for tenant traffic needs to be identified as eth1. This takes care of a current limitation in the Cisco UCSM Kilo plugin for OpenStack; the limitation is being addressed while this document is being written and will be taken care of in future releases.

A Floating or Provider network is not strictly necessary. It has been included in the configuration because of the limited number of external IPs available. Virtual machines can instead be configured to have direct access through the external network.

A separate network layout without any floating IPs was also verified in another POD. This is for customers who do not have the external IP limitations encountered in this configuration. However, most of the tests were performed with floating IPs only. The network topology in this design is very similar to what is shown above; the virtual machines can be accessed directly from the external network. The diagram below depicts how the network was configured in this POD without a floating network. With this layout you do not need a floating vNIC interface in the Controller service profile, nor do you need the floating IP block in controller.yaml or the floating parameter in your overcloud deploy command. Refer to Appendix B for details.

Each family of vNICs is placed on the same Fabric Interconnect to avoid an extra hop to the upstream Nexus switches.

The following categories of vNICs are used in the setup:

·Provisioning (PXE) vNICs are pinned to Fabric A

·Tenant vNICs are pinned to Fabric A

·Internal API vNICs are pinned to Fabric B

·External Interfaces vNICs are pinned to Fabric A

·Storage Public Interfaces are pinned to Fabric A

·Storage Management Interfaces are pinned to Fabric B

Only one Compute server pool is created in the setup. However, we may create multiple pools if desired as mentioned above.

While configuring vNICs in the templates, with the failover option enabled on the Fabrics, the vNIC order has to be specified manually as shown below.

The order of vNICs has to be pinned as above for consistent PCI device naming. The above is an example for a controller blade; the same has to be done for all the other servers, the Compute and Storage nodes. This order should match the Overcloud heat template NIC1, NIC2, NIC3, and NIC4 mappings.

Red Hat Enterprise Linux OpenStack Platform Director is a new tool chain introduced with Kilo that automates the creation of the Undercloud and Overcloud nodes as above. It performs the following (a minimal sketch of the initial Undercloud bootstrap follows this list):

·Install Operating System on Undercloud Node

·Install Undercloud Node

·Perform Hardware Introspection

·Prepare Heat templates and Install Overcloud

·Implement post Overcloud configuration steps

·Create Tenants, Networks and Instances for Cloud
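A minimal, hedged sketch of bootstrapping the Undercloud on RHEL-OSP 7 is shown below; the package and file names reflect the RHEL-OSP 7 tooling, and the undercloud.conf values must be adapted to your environment before running the install:

sudo yum install -y python-rdomanager-oscplugin
cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
# edit undercloud.conf: local_ip, network_gateway, dhcp_start/dhcp_end, masquerade_network, and so on
openstack undercloud install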

The Undercloud node is the deployment environment, while the Overcloud nodes are the nodes that actually render cloud services to the tenants.

The Undercloud is the TripleO (OpenStack-on-OpenStack) control plane. It uses native OpenStack APIs and services to deploy, configure, and manage the production OpenStack deployment. The Undercloud defines the Overcloud with Heat templates and then deploys it through the Ironic bare metal provisioning service. OpenStack Director includes predefined Heat templates for the basic server roles that comprise the Overcloud. Customizable templates allow Director to deploy, redeploy, and scale complex Overclouds in a repeatable fashion.

Ironic gathers information about bare metal servers through a discovery mechanism known as introspection. Ironic pairs servers with bootable images and installs them through PXE and remote power management.
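As a hedged example, introspection on RHEL-OSP 7 is typically driven from the Undercloud with commands along the following lines; the instackenv.json file describing the bare metal nodes (power management credentials and MAC addresses) is assumed to exist:

source ~/stackrc
openstack baremetal import --json ~/instackenv.json
openstack baremetal configure boot
openstack baremetal introspection bulk start
ironic node-list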

Red Hat Linux OpenStack Director deploys all servers with the same generic image by injecting Puppet modules into the image to tailor it for specific server roles. It then applies host-specific customizations through Puppet including network and storage configurations. While the Undercloud is primarily used to deploy OpenStack, the Overcloud is a functional cloud available to run virtual machines and workloads.

The following subsections detail the roles that comprise the Overcloud.

This role provides endpoints for REST-based API queries to the majority of the OpenStack services. These include Compute, Image, Identity, Block Storage, Network, and Data Processing. The controller nodes also provide the supporting facilities for the APIs: database, load balancing, messaging, and distributed memory objects. They also provide external access to virtual machines. The controller can run as a standalone server or as a High Availability (HA) cluster; the current configuration was deployed with HA.

This role provides the processing, memory, storage, and networking resources to run virtual machine instances. It runs the KVM hypervisor by default. New instances are spawned across compute nodes in a round-robin fashion based on resource availability.

Ceph is a distributed block, object, and file storage system. This role deploys Object Storage Daemon (OSD) nodes for Ceph clusters. It also installs the Ceph Monitor service on the controllers. The instance distribution is influenced by the currently set filters. The default filters can be altered if needed; for more information, please refer to the OpenStack documentation.

OpenStack requires multiple network functions. While it is possible to collapse all network functions onto a single network interface, isolating communication streams in their own physical or virtual networks provides better performance and scalability. Each OpenStack service is bound to an IP on a particular network. In a cluster a service virtual IP is shared among all of the HA controllers.

The control plane installs the Overcloud through this network. All nodes must have a physical interface attached to the provisioning network. This network carries DHCP/PXE and TFTP traffic, and it must be provided on a dedicated interface or as the native VLAN to the boot interface. The provisioning interface can also act as a default gateway for the Overcloud; the compute and storage nodes use this provisioning gateway interface on the Undercloud node.

The External network is used for hosting the Horizon dashboard and the Public APIs, as well as hosting the floating IPs that are assigned to VMs. The Neutron L3 routers which perform NAT are attached to this interface. The range of IPs that are assigned to floating IPs should not include the IPs used for hosts and VIPs on this network.

This network is used for connections to the API servers, as well as RPC messages using RabbitMQ and connections to the database. The Glance Registry API uses this network, as does the Cinder API. This network is typically only reachable from inside the OpenStack Overcloud environment, so API calls from outside the cloud will use the Public APIs.

Virtual machines communicate over the tenant network. It supports three modes of operation: VXLAN, GRE, and VLAN. VXLAN and GRE tenant traffic is delivered through software tunnels on a single VLAN. Individual VLANs correspond to tenant networks in the case where VLAN tenant networks are used.

This network carries storage communication including Ceph, Cinder, and Swift traffic. The virtual machine instances communicate with the storage servers through this network. Data-intensive OpenStack deployments should isolate storage traffic on a dedicated high-bandwidth interface, such as a 10 Gbps interface. The Glance API, Swift proxy, and Ceph Public interface services are all delivered through this network.

Storage management communication can generate large amounts of network traffic. This network is shared between the front and back end storage nodes. Storage controllers use this network to access data storage nodes. This network is also used for storage clustering and replication traffic.

The previous section discussed server roles. Each server role requires access to specific types of network traffic. The network isolation feature allows Red Hat Enterprise Linux OpenStack Platform Director to segment network traffic by particular network types. When using network isolation, each server role must have access to its required network traffic types.

By default, Red Hat Enterprise Linux OpenStack Platform Director collapses all network traffic to the provisioning interface. This configuration is suitable for evaluation, proof of concept, and development environments. It is not recommended for production environments where scaling and performance are primary concerns.

The VXLAN mechanism driver encapsulates each Layer 2 Ethernet frame sent by the VMs in a Layer 3 UDP packet. The UDP packet includes an 8-byte VXLAN header, within which a 24-bit value is used as the VXLAN Segment ID. The VXLAN Segment ID designates the individual VXLAN overlay network on which the communicating VMs are situated. This provides segmentation for each tenant network.

The GRE mechanism driver encapsulates each Layer 2 Ethernet frame sent by the VMs in a special IP packet using the GRE protocol (IP protocol 47). The GRE header contains a 32-bit key which is used to identify a flow or virtual network in a tunnel. This provides segmentation for each tenant network.

Cisco Nexus Plugin is bundled in OpenStack Platform 7 kilo release. While it can support both VLAN and VXLAN configurations, only VLAN mode is validated as part of this design. VXLAN will be considered in future releases when the current VIC 1340 Cisco interface card will be certified on VXLAN and Red Hat operating system.

Two components drive HA for all core and non-core OpenStack services: the cluster manager and the proxy server.

The cluster manager is responsible for the startup and recovery of inter-related services across a set of physical machines. It tracks the cluster's internal state across multiple machines. State changes trigger appropriate responses from the cluster manager to ensure service availability and data integrity.

This section describes the steps to configure networking for the Overcloud. The network setup used in the configuration is shown in Figure 5 earlier.

The configuration is done using Heat templates on the Undercloud prior to deploying the Overcloud. These steps need to be followed after the Undercloud install. In order to use network isolation, we have to define the Overcloud networks. Each will have an IP subnet, a range of IP addresses to use on the subnet, and a VLAN ID. These parameters are defined in the network environment file. In addition to the global settings, there is a template for each of the node roles, such as controller, compute, and Ceph, that determines the NIC configuration for each role. These have to be customized to match the actual hardware configuration.

Heat communicates with the Neutron API running on the Undercloud node to create isolated networks and to assign Neutron ports on these networks. Neutron assigns a static IP address to each port, and Heat uses these static IPs to configure networking on the Overcloud nodes. A utility called os-net-config runs on each node at provisioning time to configure host-level networking.

In the HA model, clients do not directly connect to service endpoints. Connection requests are routed to service endpoints by a proxy server.

The cluster manager provides state awareness of other machines to coordinate service startup and recovery, a shared quorum to determine the majority set of surviving cluster nodes after a failure, data integrity through fencing, and automated recovery of failed instances.

Proxy servers help in load balancing connections across service endpoints. Nodes can be added or removed without interrupting service.

Red Hat Enterprise Linux OpenStack Platform Director uses HAProxy and Pacemaker to manage HA services and load balance connection requests. With the exception of RabbitMQ and Galera, HAProxy distributes connection requests to active nodes in a round-robin fashion. Galera and RabbitMQ use persistent options to ensure requests go only to active and/or synchronized nodes. Pacemaker checks service health at one-second intervals. Timeout settings vary by service.

The combination of Pacemaker and HAProxy:

·Detects and recovers machine and application failures

·Starts and stops OpenStack services in the correct order

·Responds to cluster failures with appropriate actions including resource failover and machine restart and fencing

RabbitMQ, memcached, and MongoDB are not load balanced through HAProxy. These services have their own failover and HA mechanisms.

The Cisco Nexus driver for OpenStack Neutron allows customers to easily build their Infrastructure-as-a-Service (IaaS) networks using the industry’s leading networking platform, delivering performance, scalability, and stability with the familiar manageability and control you expect from Cisco® technology. ML2 Nexus drivers dynamically provision OpenStack managed VLAN’s on Nexus switches. They configure the trunk ports with the dynamically created VLAN’s solving the logical port count issue on Nexus switches. They provide better manageability of the network infrastructure.

ML2 UCSM drivers dynamically provision OpenStack-managed VLANs on the Fabric Interconnects. They configure VLANs on the Controller and Compute node vNICs. The Cisco UCS Manager Plugin talks to the Cisco UCS Manager application running on the Fabric Interconnect and is part of an ecosystem for Cisco UCS servers that consists of Fabric Interconnects and IO modules. The ML2 Cisco UCS Manager driver does not support configuration of Cisco UCS servers whose service profiles are attached to service profile templates. This prevents the same VLAN configuration from being pushed to all the service profiles based on that template. The plugin can be used after the service profile has been unbound from the template.

The Cisco Nexus 1000V for OpenStack offers rich features, including but not limited to the following:

·Layer2/Layer3 Switching

·East-West Security

·Policy Framework

·Application Visibility

All the monitoring, management, and functionality features offered on the Nexus 1000V are consistent with the physical Nexus infrastructure. This enables customers to reuse their existing tool chains to manage the new virtual networking infrastructure as well. Along with this, customers can have the peace of mind that the feature functionality they enjoyed in the physical network will be the same in the virtual network.

To configure the Global policies, log into UCS Manager GUI, and complete the following steps:

1.Under Equipment > Global Policies:

a.Set the Chassis/FEX Discovery Policy to match the number of uplink ports that are cabled between the chassis or fabric extenders and to the fabric interconnects.

b.Set the Power policy based on the input power supply to the UCS chassis. In general, for a UCS chassis with 5 or more blades, a minimum of 3 power supplies in an N+1 configuration is recommended. With 4 power supplies, 2 on each PDU, the recommended power policy is Grid.

c.Set the Global Power Allocation Policy to Policy Driven Chassis Group Cap.

a.Select the ports (Port 1 to 8) that are connected to the left side of each UCS chassis FEX 2204, right-click them and select Configure as Server Port.

b.Select the ports (Port 9 to 11) that are connected to the 10G MLOM (VIC1227) port1 of each UCS C240 M4, right-click them, and select Configure as Server Port.

c.Click Save Changes to save the configuration.

d.Repeat steps 1 and 2 on Fabric Interconnect B and save the configuration.

After this the blades and rack servers will be discovered as shown below:

Navigate to each blade and rack server to make sure that the disks are in the Unconfigured Good state; otherwise, convert them from JBOD to Unconfigured Good as below. The diagram below shows how to convert a disk to the Unconfigured Good state.

2.Specify the VLAN name as PXE-Network for Provisioning and specify the VLAN ID as 110 and click OK.

3.Specify the VLAN name as Storage-Public for accessing Ceph Storage Public Network and specify the VLAN ID as 120 and click OK.

4.Specify the VLAN name as Tenant-Internal-Network and specify the VLAN ID as 130 and click OK.

5.Specify the VLAN name as Storage-Mgmt-Network for Managing Ceph Storage Cluster and specify the VLAN ID as 130 and click OK.

6.Specify the VLAN name as External-Network and specify the VLAN ID as 215 and click OK.

7.Specify the VLAN name as Tenant-Floating-Network for accessing Tenant instances externally and specify the VLAN ID as 160 and click OK.

This network is optional. In this solution, we used a 24-bit netmask for the External network, which limited it to about 250 IPs for tenant VMs. Due to this limitation, we used a 20-bit netmask for the Tenant Floating Network.

The screenshot below shows the output of VLANs for all the OpenStack Networks created above.

A maintenance policy determines a pre-defined action to take when there is a disruptive change made to the service profile associated with a server. When creating a maintenance policy you have to select a reboot policy which defines when the server can reboot once the changes are applied.

To configure the Maintenance policy from the Cisco UCS Manager, complete the following steps:

Cisco UCS uses the priority set in the power control policy, along with the blade type and configuration, to calculate the initial power allocation for each blade within a chassis. During normal operation, the active blades within a chassis can borrow power from idle blades within the same chassis. If all blades are active and reach the power cap, service profiles with higher priority power control policies take precedence over service profiles with lower priority power control policies.

To configure the Power Control policy from the UCS Manager, complete the following steps:

No Cap keeps the server running at full capacity regardless of the power requirements of the other servers in its power group. Setting the priority to no-cap prevents Cisco UCS from leveraging unused power from that particular blade server. The server is allocated the maximum amount of power that the blade can reach.

To allow flexibility in defining the number of storage disks, roles and usage of these disks, and other storage parameters, you can create and use storage profiles. LUNs configured in a storage profile can be used as boot LUNs or data LUNs, and can be dedicated to a specific server. You can also specify a local LUN as a boot device. However, LUN resizing is not supported.

To configure Storage profiles from the Cisco UCS Manager, complete the following steps:

a.Specify the Storage profile name as C240-Ceph for the Ceph Storage Servers. Click “+”.

b.Specify the LUN name and size in GB. For the Disk group policy creation, select Disk Group Configuration for Ceph nodes as Ceph-OS-Boot similar to “BootDisk-OS” disk group policy as above.

c.After successful creation of Disk Group Policy, choose Disk Group Configuration as Ceph-OS-Boot and click OK.

d.Click OK to complete the Storage Profile creation for the Ceph Nodes.

For the Cisco UCS C240 M4 servers, the LUNs for the Ceph OSD disks (6TB SAS) and Ceph journal disks (400GB SSDs) are still created through the Ceph storage profile. Due to Cisco UCS Manager limitations, however, these OSD and journal LUNs have to be created after the Cisco UCS C240 M4 server has been successfully associated with the Ceph storage service profile.

d.Create the VNIC interface for PXE or Provisioning network as PXE-NIC and click the check box Use VNIC template.

e.Under vNIC template, choose the PXE-NIC template previously created from the drop-down list and choose Linux for the Adapter Policy.

f.Create the VNIC interface for Tenant Internal Network as eth1 and then under vNIC template, choose the “Tenant-Internal” template we created before from the drop-down list and choose Adapter Policy as “Linux”.

g.Create the VNIC interface for Internal API network as Internal-API and click the check box for Use VNIC template.

h.Under vNIC template, choose the Internal-API-NIC template previously created from the drop-down list and choose Linux for the Adapter Policy.

i.Create the VNIC interface for Storage Public Network as Storage-Pub and click the check box for Use VNIC template.

j.Under vNIC template, choose the Storage-Pub-NIC template previously created from the drop-down list and choose Linux for the Adapter Policy.

k.Create the VNIC interface for Storage Mgmt Cluster Network as Storage-Mgmt and click the check box for Use VNIC template.

l.Under vNIC template, choose the Storage-Mgmt-NIC template previously created from the drop-down list and choose Linux for the Adapter Policy.

m.Create the VNIC interface for the Floating Network as Tenant-Floating and click the check box for Use VNIC template.

n.Under the vNIC template, choose the Tenant-Floating template previously created from the drop-down list and choose Linux for the Adapter Policy.

o.Create the VNIC interface for the External Network as External-NIC and click the check box for Use VNIC template.

p.Under the vNIC template, choose the External-NIC template previously created from the drop-down list and choose Linux for the Adapter Policy.

5.Create the VNIC interface for PXE or Provisioning network as PXE-NIC and click the check box for Use VNIC template.

6.Under the vNIC template, choose the PXE-NIC template previously created, from the drop-down list and choose Linux for the Adapter Policy.

7.Create the VNIC interface for Tenant Internal Network as eth1 and then under vNIC template, choose the “Tenant-Internal” template we created before from the drop-down list and choose Adapter Policy as “Linux”.

Due to the Cisco UCS Manager Plugin limitations, we have created eth1 as the vNIC for the Tenant Internal Network.

8.Create the VNIC interface for the Internal API network as Internal-API and click the check box for Use VNIC template.

9.Under the vNIC template, choose the Internal-API template previously created, from the drop-down list and choose Linux for the Adapter Policy.

10.Create the VNIC interface for Storage Public Network as Storage-Pub and click the check box for Use VNIC template.

11.Under the vNIC template, choose the Storage-Pub-NIC template previously created, from the drop-down list and choose Linux for the Adapter Policy.

To create the Service Profile templates for the Ceph Storage nodes, complete the following steps:

1.Specify the Service profile template name for the Ceph storage node as OSP-Ceph-Storage-SP-Template. Choose the UUID pools previously created, from the drop-down list and click Next.

2.Create vNICs for PXE, Storage-Pub, and Storage-Mgmt following steps similar to the controller as mentioned here.

3.Click Next, choose "Server_Ack" under Maintenance Policy, choose "OSP-CephStorage-Server-Pools" under Pool Assignment, and then select "No-power-cap" under Power Control Policy. Click Finish to complete the service profile template creation for the Ceph nodes.

16.Under Server Boot Order, choose the boot policy as "Create a Specific Boot Policy" from the drop-down list and click Next. Make sure you select "Local CD/DVD" as the first boot order and "Local LUN" as the second boot order, and click Next.

Verify the Port Channel Status on the Fabrics

Prior to starting the operating system installation on the Undercloud node, complete the following pre-validation checks:

1.If you are planning to use Jumbo frames for the storage network, make sure to enter the following information in the templates as shown in the screenshot below.

2.When the service profiles are created from the template, unbind them from the templates in case they have been created as updating templates. This is to accommodate the UCS Manager Plugin: keeping the compute hosts' service profiles bound to the template does not allow the plugin to individually configure each compute host with tenant-based VLANs. Hence, the service profile for each compute host needs to be unbound from the template. Please check the current limitations outlined on the UCSM Kilo plugin web page.

3.The naming convention for the tenant interfaces is also vNIC eth1. This is required by the Cisco UCS Manager Plugin, per the link provided above.

4.The VLAN ID is already included in the OpenStack configuration. Do not tag a native VLAN on your external interface in the Overcloud service profiles.

5.The provisioning interfaces should be Native for both Undercloud and Overcloud setups.

6.While planning your networks, make sure all the networks defined are not overlapping with any of your data-center networks.

It is highly recommended to install the operating system with versionlock as outlined in the steps below. Versionlock restricts yum to installing or upgrading a package only to the fixed version specified with the versionlock plugin of yum.

The steps outlined in this document, including a few of the configurations, are bound to the installed packages. Installing the same set of packages as in this Cisco Validated Design ensures accuracy of the solution with minimal deviations, and helps keep the installation steps from drifting as OpenStack packages move forward. Installing RHEL-OSP 7 on Cisco blade and rack servers without version lock should still work, but note that configuration and install-step changes may be needed that are not covered in this document.

Any updates to the Undercloud stack through yum install may conflict with the version-locked packages. You may have to relax the lock files for such updates when required. It is strongly recommended to complete the install with version lock first, followed by the Overcloud install, before attempting any such updates.
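A hedged illustration of working with the versionlock plugin is shown below; the package name is a placeholder:

yum versionlock list                   # show packages currently locked
yum versionlock add <package-name>     # lock a package at its installed version
yum versionlock delete <package-name>  # relax the lock for a single package
yum versionlock clear                  # remove all locks (use with caution)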

Download the versionlock and kickstart files from Cisco Systems.

To install the Operating System on the Undercloud Node, complete the following steps:

2.Sanity check the files and update details such as subscription management and the web server used to host these files for a network install. The web server should be accessible from the Director node that is being kickstarted now.

Ignore the software selection since this will come from the kick start file.

11.Select manual partitioning and remove any unwanted partitions. This LUN is carved out from two disks on the Undercloud node (RAID 10 mirror set up through local disk config policy or through storage profile in Cisco UCS). Preferably increase the root partition to 100GB.

13.Add eth2 for the public network. This is the interface through which the Undercloud pulls the necessary files from the Red Hat website during install. The eth1 interface is not mandatory; however, it was added on the test bed to log in to the Fabric Interconnects and/or Nexus switches. Leave the PXE interface NIC unconfigured; it will be configured later through the Undercloud install.

14.Enter the root password and optionally create the stack user and reboot the server when prompted.

15.Run Post Install checks before proceeding:

a.Run subscription-manager status to check the registration status. Make sure that the pool with OpenStack entitlements is attached.

b.Make sure that version lock list package is installed as part of kickstart.

[root@osp7-director heat]# rpm -qa | grep yum-plugin-versionlock

yum-plugin-versionlock-1.1.31-34.el7.noarch

If Red Hat registration fails for some reason in the kickstart file, versionlock might not be installed and you may end up pulling the latest bits from the Red Hat website.

c.Check for the existence of versionlock.conf and versionlock.list in /etc/yum/pluginconf.d/. The yum versionlock list command should reveal the contents of /etc/yum/pluginconf.d/versionlock.list.

d.Run ifconfig to check the health of the configured interfaces. The pxe should not have been configured at this stage.

e.Check name resolution and external connectivity. This is needed for yum updates and registration.

It is recommended to use your organization's DNS server. Name server 8.8.8.8 is used here for reference purposes only.

6.In case you have not registered the Undercloud node as part of versionlock earlier, register the system with the Red Hat Network, get the appropriate pool ID for the OpenStack entitlements, and attach the pool.
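A hedged sketch of that registration flow is shown below; the credentials and pool ID are placeholders to be replaced with the ones carrying your OpenStack entitlements:

subscription-manager register --username <rhn-user> --password <rhn-password>
subscription-manager list --available      # locate the pool with OpenStack entitlements
subscription-manager attach --pool=<openstack-pool-id>
subscription-manager status                # verify the entitlement is attached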

b.In the download page, select Servers - Unified Computing under Products. On the right menu select your class of servers, for example Cisco UCS B-Series Blade Server Software, and then select Unified Computing System (UCS) Drivers on the following page.

c.Select your firmware version under All Releases, for example 2.2(5c), and download the ISO image of UCS-related drivers for your matching firmware, for example ucs-bxxx-drivers.2.2.5c.iso.

b.Log into the Undercloud dashboard from one of the IP’s above and do a sanity check. Log in as user admin. The default password can be obtained from /home/stack/stackrc (run sudo hiera admin_password).

The dnsmasq.conf dhcp_range should match the range in the undercloud.conf file. This will help you spot any errors that might have crept in while running the Undercloud install earlier. The default PXE timeout is 60 minutes in Kilo; this means that if you have more servers to be introspected and introspection takes longer than 60 minutes, it is bound to fail.

c.Download the Deployment Ramdisk, Overcloud Image, and Discovery Ramdisk for Red Hat Enterprise Linux OpenStack Platform Director 7.2. The solution is validated on 7.2; customizations and interoperability of these files with the Cisco plugins were done with 7.2. If higher versions are posted on this web page, contact Red Hat to get the 7.2 images.

The images can be downloaded directly from the Director host, since a GUI install was done on the Director node, either by launching a browser or by doing a wget of the download links above.

d.Download the files into the /home/stack/images directory and extract the tar files (see the example below).
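A minimal, hedged example of extracting all of the downloaded archives (the exact archive names vary by release):

cd /home/stack/images
for f in *.tar; do tar -xf "$f"; done
ls -l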

Run the following as root user. Navigate to your download directory and issue the following as root:

cd /home/stack/images

export LIBGUESTFS_BACKEND=direct

a.Update fencing packages.

Before proceeding with the customization of the Overcloud image, there are some fixes that are not part of the osp7 y2 distribution. Refer to bug 1298430.

Download the fencing packages from http://people.redhat.com/cfeist/cisco_ucs/ to the ~/images/ directory. These packages are being integrated into the mainstream, and we will update the document when they are available in the Red Hat repository.

Extract the fencing files from these rpm’s as shown below:

rpm2cpio <name of the fence agents common rpm file> | cpio -idmv

rpm2cpio <name of the fence agents cisco ucs rpm file> | cpio -idmv

This should create a local usr/share directory. The following two files need to be copied:

cp ./usr/share/fence/fencing.py /home/stack/images

cp ./usr/sbin/fence_cisco_ucs /home/stack/images

These two files will be used to update the overcloud image.

As the root user:

cd /home/stack/images

chmod +x ./fencing.py

chmod +x ./fence_cisco_ucs

chown root:root fenc*

virt-copy-in -a overcloud-full.qcow2 ./fencing.py /usr/share/fence/

virt-copy-in -a overcloud-full.qcow2 ./fence_cisco_ucs /sbin/

Optionally, you can use virt-copy-out to validate that the files have been uploaded properly by extracting them to, say, a /tmp location.
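A hedged example of such a validation, with paths mirroring the virt-copy-in commands above:

virt-copy-out -a overcloud-full.qcow2 /usr/share/fence/fencing.py /tmp/
virt-copy-out -a overcloud-full.qcow2 /sbin/fence_cisco_ucs /tmp/
ls -l /tmp/fencing.py /tmp/fence_cisco_ucs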

While updating the image, a root password is not required; however, setting one is useful for logging in through the KVM console to debug issues in case of Overcloud installation failures.
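If you choose to set one, a hedged example using virt-customize is shown below; the password value is a placeholder:

export LIBGUESTFS_BACKEND=direct
virt-customize -a overcloud-full.qcow2 --root-password password:<your-root-password>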

The enic.ko was extracted earlier on the Director node after installing the enic rpm. This helps ensure that both the Director and the Overcloud images will have the same enic driver.

The N1000V modules will be injected and installed in the Overcloud image. As part of deployment, the VSM module will be installed on the Controller nodes, while VEM will be installed on all Controller and Compute nodes, due to the update to the Overcloud image.

The grub configuration has been modified to use interface names like eth0, eth1, and so on.

The fence_cisco_ucs package has been modified to take care of the HA bug 1298430.

10.Upload the images to openstack. As stack user run the following:

su - stack

source stackrc

cd ~/images

openstack overcloud image upload

openstack image list

11.Initialize the boot LUNs. There is no need to initialize the SSD and OSD LUNs on the Ceph nodes, as this is taken care of by the wipe_disk.yaml file included in the templates (included through the network-environment.yaml file).

12.Before running Introspection and Overcloud installation, it is recommended to initialize the boot LUNs. This is required in case you are repeating or using old disks.

13.Boot the server in UCS, press CTRL-R, then F2 and re-initialize the boot LUNs as shown below and shutdown the servers.

Before delving into the Overcloud installation, it is necessary to understand and change the templates for your configuration. Red Hat Enterprise Linux OpenStack Platform Director provides a lot of flexibility in configuring the Overcloud. At the same time, understanding the parameters and providing the right inputs to Heat through these templates is paramount.

Before attempting the Overcloud install, it is necessary to understand and setup the Overcloud heat templates. For complete details of the templates, please refer to the Red Hat online documentation on OpenStack.

The Overcloud is installed through the command line interface with the following command. A top-down walkthrough of the YAML and configuration files is provided here.

The files are sensitive to whitespaces and tabs.

Refer to Appendix A for run.sh, the command used to deploy the Overcloud.
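For orientation only, a hedged sketch of what such a deploy command typically looks like on RHEL-OSP 7 follows; the exact environment files, scale counts, flavors, and parameters must come from run.sh in Appendix A and your own templates:

source ~/stackrc
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /home/stack/templates/network-environment.yaml \
  -e /home/stack/templates/cisco-plugins.yaml \
  --control-scale 3 --compute-scale 6 --ceph-storage-scale 3 \
  --neutron-network-type vlan \
  --neutron-network-vlan-ranges physnet-tenant:250:749 \
  --ntp-server <ntp-server-ip>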

The Heat templates have to be customized depending on the network layout and NIC interface configuration of the setup. The templates are standard Heat templates in YAML format and are included in Appendix A. A set of configuration files with floating IPs is included in Appendix A, while the set in Appendix B contains the files without the floating IP configuration. Appendix A is the superset; Appendix B only has the files that differ. Hence, if in your configuration you have the luxury of external IPs for VM external access and you do not wish to use floating IPs, you may pick up all the files from Appendix A and overlay them with the Appendix B configurations. Again, use them for reference purposes only and make updates as needed.

The network configuration templates included with the Director fall into two categories and are located in /usr/share/openstack-tripleo-heat-templates/network/config.

In the Cisco UCS configuration a hybrid model was adopted. This was done for simplicity and also to have a separate VLAN dedicated on each interface for every network. While this gives fine-grained control of policies such as QoS if needed, such policies were not adopted, again for simplicity. NIC2, or eth1, was used as the tenant interface.

Some of the above files may have to be created. These files are referenced in the Overcloud deploy command either directly or through another file. ceph.yaml has to be modified directly in /usr/share/openstack-tripleo-heat-templates.

network-environment.yaml

The first section is for the resource_registry. The parameter_defaults section has to be customized; a hedged sketch follows the list below. The following are a few important points to be noted in the network-environment.yaml file:

1.Enter the Network Cidr values in the parameter section.

2.The Internal API network and the UCS management network are on the same network. Make sure that the InternalApiAllocationPools do not overlap with the UCS IP pools. In the configuration they span the subnet from 10.22.100.50 to 250.

3.For consistency a similar approach followed for Storage and Storage Management Allocation pools.

4.The Tenant Allocation pool and Network is created for /12 subnet, just to allocate more addresses.

5.The Control Plane Default Route is the gateway router for the provisioning network, which is the Undercloud IP. This matches the network_gateway and masquerade_network in your undercloud.conf file.

6.EC2Metadata IP is the Undercloud IP again.

7.NeutronExternalNetworkBridge should be set to "''", an empty string, to allow multiple external networks or VLANs.

8.No bonding is used in the configuration. This will be addressed in future releases.
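A hedged, abbreviated sketch of the parameter_defaults section is shown below; the parameter names follow the standard TripleO network isolation templates and the values are illustrative rather than the exact ones used in this validation (refer to Appendix A for the complete file):

parameter_defaults:
  ControlPlaneSubnetCidr: '24'
  ControlPlaneDefaultRoute: 10.22.110.26        # provisioning gateway / Undercloud IP
  EC2MetadataIp: 10.22.110.26                   # Undercloud IP
  InternalApiNetCidr: 10.22.100.0/24
  InternalApiAllocationPools: [{'start': '10.22.100.50', 'end': '10.22.100.250'}]
  NeutronExternalNetworkBridge: "''"            # empty string allows multiple external networks/VLANs
  DnsServers: [8.8.8.8, 8.8.4.4]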

controller.yaml

The parameter section here overrides the values mentioned in the network-environment file. The get_param function retrieves the defined parameters. The following are important points to be considered for the controller.yaml file:

1.The PXE interface NIC1 should have dhcp set to false in order to configure static IPs, with the next hop going to the Undercloud node.

2.The external bridge is configured to the External Interface Default Route on the External Network VlanID.

3.An MTU value of 9000 is added as needed; both storage networks are configured with MTU 9000 (see the sketch after this list).
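As a hedged illustration, a storage interface entry in the os-net-config network_config section of controller.yaml typically looks like the fragment below; the NIC number is an assumption and must match your vNIC ordering:

- type: interface
  name: nic4
  use_dhcp: false
  mtu: 9000
  addresses:
    - ip_netmask: {get_param: StorageIpSubnet}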

compute.yaml

The same rules for the Controller apply:

1.The PXE interface NIC1 is configured with dhcp set to false. There are no external IPs available for the Compute and Storage nodes; hence NATing is done through the Undercloud node. For this purpose, the Control Plane Default Route is the network gateway defined in the undercloud.conf file, which is also the Undercloud local_ip.

2.Only the Storage Public network is defined along with Tenant networks on Compute nodes.

ceph-storage.yaml

1.Same as Compute.yaml mentioned above.

2.Only Storage Public and Storage Cluster are defined in this file.

ceph.yaml

Configuring ceph.yaml is tricky and needs to be done carefully, because we are configuring the partitions even before installing the operating system. The configuration also changes depending on whether you are using the C240 M4 LFF or the C240 M4 SFF model.

An overview of the current limitations of the Red Hat Director and Cisco UCS, and the workarounds, is provided for reference.

The way disk ordering is done is inconsistent. However, for Ceph to work we need a consistent way of disk ordering. Post boot, the disk labels can be set up by-uuid or by-partuuid; in RHEL-OSP Director, however, this has to be done beforehand. Bug 1253959 is being tracked for this issue and is expected to be fixed in later versions.

It is also a challenge to use JBODs in Ceph the conventional way, and using RAID-0 LUNs in place of JBODs is equally challenging. The LUN IDs have to be consistent every time a server reboots, and the order deployed by UCS is unpredictable. Hence the following workarounds evolved in the configuration to meet these requirements. The internal SSD drives in both the C240 LFF and SFF models will not be used, as they are not visible to the RAID controller in the current version of UCSM and will pose challenges to RHEL-OSP Director (they are visible to the BIOS, LUNs cannot be carved out because the RAID controller does not see them, and they appear as JBODs to the kernel, thus breaking the LUN and JBOD IDs).

Figure 13 Cisco UCS C240 M4 – Large Form Factor with 12 Slots

Figure 14 Cisco UCS C240 M4 – Small Form Factor with 24 Slots

Cisco UCS Side Fixes to Mitigate the Issue

As mentioned earlier, storage profiles will be used from UCS side on these servers:

1.Make sure that you do not have local disk configuration policy in UCS for these servers.

2.Create the storage profile and disk group policies as below under the template. There will be one disk group policy for each slot: one RAID-10 policy for the OS LUN and one RAID-0 policy each for the remaining slots.

3.Navigate to Create Storage Profile -> Create Local LUN -> Create Disk Group Policy (Manual) -> Create Local Disk Configuration. This helps in binding the disk slot to each LUN created.

4.First create the boot LUN from the first 2 slots and then apply it. This will assign LUN-0 to the boot LUN.

5.Create the second and third LUNs from the SSD slots (as in the C240 M4 LFF). This will create the RAID-0 LUNs, LUN-1 and LUN-2, on the SSD disks.

6.The rest of the LUNs can be created and applied in any order.

7.With the above procedure, we are assured that LUN-0 is for the operating system, LUN-1 and LUN-2 are for the SSDs, and the rest are for the HDDs. This in turn maps to /dev/sda for the boot LUN, /dev/sdb for SSD1, /dev/sdc for SSD2, and the remaining devices for the HDDs.

Do not apply all the LUNs at the same time in the storage profile. First apply the boot LUN, which should become LUN-0, followed by the SSD LUNs and then the rest of the HDD LUNs. Failure to comply with the above will cause LUN assignment in random order, and Heat will deploy on whatever first boot LUN is presented to it.

Follow a similar procedure for the C240 SFF servers too. A minimum of 4 SSD journals is recommended for the C240 M4 SFF: the first two SSD LUNs with 5 partitions and the remaining two with 4 partitions each.

OpenStack Side Fixes to Mitigate the Issue

Successfully deploying Ceph on these disks with Red Hat Enterprise Linux OpenStack Platform Director requires GPT labels to be pre-created. This can be achieved by including the wipe_disk.yaml file, which creates these labels with the sgdisk utility. Please refer to Appendix A for details about wipe_disk.yaml.
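A minimal, hedged sketch of the kind of first-boot commands wipe_disk.yaml runs is shown below; the device list is illustrative and the actual file in Appendix A is authoritative:

for disk in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
  sgdisk -Z $disk    # zap any existing MBR/GPT structures
  sgdisk -og $disk   # write a fresh, empty GPT label
done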

In the current version there is only one ceph.yaml file for all the servers, so this mapping has to be uniform across the storage servers.

While the contents of ceph.yaml in Appendix A are self-explanatory, the following shows how the mappings between the SSDs and HDDs need to be done:

ceph::profile::params::osds:

'/dev/sdd':

journal: '/dev/sdb1'

'/dev/sde':

journal: '/dev/sdb2'

'/dev/sdf':

journal: '/dev/sdb3'

'/dev/sdg':

journal: '/dev/sdb4'

'/dev/sdh':

journal: '/dev/sdc1'

'/dev/sdi':

journal: '/dev/sdc2'

'/dev/sdj':

journal: '/dev/sdc3'

'/dev/sdk':

journal: '/dev/sdc4'

The above is an example for a C240 M4 LFF server. Based on the LUN IDs created above, /dev/sdb and /dev/sdc are the journal devices. The four entries for each of these journals direct RHEL-OSP to create 4 partitions on each SSD disk. The entries on the left are the HDD disks.

A similar approach can be followed for SFF servers.

The ceph.yaml file was copied to /usr/share/openstack-tripleo-heat-templates/puppet/hieradata/.

cisco-plugins.yaml

The parameters section of cisco-plugins.yaml specifies the plugin parameters described below; a consolidated sketch follows these descriptions.

n1kv

N1000vVSMIP: The Virtual Supervisor Module IP. This should be an address on the Internal API network, outside of the assigned DHCP range, to prevent conflicting IPs.

N1000vPacemakerControl: True in HA configuration

N1000vVSMPassword: 'Password' – The password for N1KV

N1000vVSMHostMgmtIntf: br-mgmt

N1000vVSMVersion: Leave specified as an empty string ('').

N1000vVEMHostMgmtIntf:vlan100, the Internal API VLAN

N1000vUplinkProfile: '{eth1: system-uplink,}'. This should be the interface connected to the tenant network. The current version does not support bridges; refer to the limitations of the UCS Manager plugin at http://docwiki.cisco.com/wiki/OpenStack/UCS_Mechanism_Driver_for_ML2_Plugin_Kilo. As both plugins are in the setup, this has to remain eth1. This will be revisited in our next release cycle.

Cisco UCS Manager

NetworkUCSMIp: UCS Manager IP

NetworkUCSMHostList: The mapping between the tenant MAC address derived from UCS and the service profile name, comma separated. This list has to be built for all the compute and controller nodes.
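As an illustration only (the MAC addresses and service profile names below are placeholders, not values from the validated setup), the list takes the form described above:

NetworkUCSMHostList: '00:25:b5:00:00:01:Openstack_Controller_Node1,00:25:b5:00:00:02:Openstack_Compute_Node1'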

Nexus

This lists the details of both Nexus switches, their IPs and passwords.

Servers: The list should specify the interface MAC of each controller and compute node and the port-channel numbers created on the Nexus switch.

NetworkNexusManagedPhysicalNetwork: physnet-tenant, the parameter you pass in the Overcloud deploy command

NetworkNexusVlanNamePrefix: 'q-'. This is the prefix of the VLANs that will be created on the switches

NetworkNexusVxlanGlobalConfig: false. VXLAN is not used and is not validated as part of this CVD

NeutronServicePlugins: Leave the default string as is. A typo here may still create the Overcloud successfully but will fail to create VMs later.

NeutronTypeDrivers: vlan. The only driver validated in this CVD.

NeutronL3HA: 'false'. The current n1kv version does not support L3 HA. This will be revisited in the next revision.

NeutronNetworkVLANRanges: 'physnet-tenant:250:749' The range you are passing to Overcloud deploy.

The controllerExtraConfig parameters are tunables. These worker settings reside in the /etc/neutron/neutron.conf file. Only the parameters mentioned in Appendix A are validated.

wipe_disks.yaml is configured as part of firstboot to create GPT labels on the Storage node disks.

Customizing post-deployment configuration is done through OS::TripleO::NodeExtraConfigPost. These can be applied as additional configurations. The current nameserver_ntp.yaml file used in the configuration achieves this by using Heat SoftwareConfig types.
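A minimal sketch of such a post-deployment template is shown below. It assumes the standard TripleO Kilo-era resource types and is illustrative only; it is not the validated nameserver_ntp.yaml from Appendix A. The template would be mapped to OS::TripleO::NodeExtraConfigPost in the resource_registry of the environment file.

heat_template_version: 2014-10-16
parameters:
  servers:
    type: json
resources:
  ExtraPostConfig:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        # example post-deploy action executed on every overcloud node
        echo "post-deploy configuration applied" >> /root/extraconfig.log
  ExtraPostDeployments:
    type: OS::Heat::SoftwareDeployments
    properties:
      servers: {get_param: servers}
      config: {get_resource: ExtraPostConfig}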

Appendix B includes the yaml files used in the second pod, without floating IPs. Only the files that were different, like network-environment.yaml, are included.

a.Download python-ipaddress from the web and install it (rpm -ivh <rpmname>); the one used on this configuration was python-ipaddress-1.0.7-4.el7.noarch.rpm.

b.Validate the yaml files as shown below:

cd /home/stack/templates as stack user

python network-environment-validator.py -n network-environment.yaml

DEBUG:__main__:

parameter_defaults:

ControlPlaneDefaultRoute: 10.22.110.26

ControlPlaneSubnetCidr: '24'

DnsServers: [8.8.8.8, 8.8.4.4]

…………………

………………..

----------SUMMARY----------

SUCCESSFUL Validation with 0 error(s)

[stack@osp7-director templates]$

If you receive any errors, stop here and fix the issue(s).

2.Run ironic node-list to check that all the servers are available, powered off and not in maintenance.

If a server is not in the state listed above, investigate why; you may use the following ironic commands to change the state if it is not in the desired one.

After sourcing the stackrc file:

ironic node-set-power-state <uuid> off

ironic node-set-provision-state <uuid> provide

ironic node-set-maintenance <uuid> false

In larger deployments, the default value for the maximum resources per stack may not be sufficient.

3.Increase the maximum resources allowed per top-level stack (integer value). The default in /etc/heat/heat.conf is:

#max_resources_per_stack = 1000

Update the value to a higher number in /etc/heat/heat.conf. In a pod with 35 nodes, we had to increase this value to 10000 and restart the heat engine. However, in a pod with 3 controllers, 6 computes and 3 Ceph nodes this was not necessary.
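An assumed excerpt of /etc/heat/heat.conf after the change (the section header is shown for context only; 10000 is the value used in the 35-node pod):

[DEFAULT]
max_resources_per_stack = 10000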

systemctl restart openstack-heat-engine.service

4.Update Ceph timeout values. This issue was noticed on the configuration in particular for larger deployments. This is per updates in bug 1250654.

Overcloud deployment may fail for several reasons: human error, such as passing incorrect parameters or erroneous yaml configuration files, timeouts, or bugs. It is beyond the scope of this document to cover all of the possible failures. However, a few scenarios that were encountered on this configuration are explained in the Troubleshooting section of this document.

The current RHEL-OSP Director supports only pg_num=128, the default number of placement groups. Bug 1283721 discusses this limitation. This default value may have to be updated depending on the number of OSDs in the cluster.

As per the above formula, for 24 OSDs this is 2400/3 or 800 PGs for the cluster. Since this has to be rounded to a power of 2, we create 1024 PGs in the cluster. However, RHEL-OSP creates 4 pools by default, which means 256 PGs for each pool.

If you are using the C240M4S, the PGs have to be calculated for 54 OSDs.
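A quick sketch of that calculation, assuming the (OSDs x 100) / replica-count guideline referenced above, a replica count of 3 and the 4 default pools (adjust the variables for your cluster):

# total PGs ~= (OSDs * 100) / replicas, rounded up to a power of two,
# then divided across the default pools to get the per-pool pg_num
osds=54; replicas=3; pools=4
python -c "import math; total=$osds*100.0/$replicas; pg=2**int(math.ceil(math.log(total,2))); print(pg); print(int(pg/$pools))"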

[root@overcloud-cephstorage-2 ~]# ceph osd lspools

4 rbd,5 images,6 volumes,7 vms,

The pools will be updated with 256 PGs each. Set the placement groups as shown below:

for i in rbd images volumes vms; do

ceph osd pool set $i pg_num 256;

sleep 20

ceph osd pool set $i pgp_num 256;

sleep 10

done

5.Query the pools and tree.
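For example (reference commands only; the pool names follow the default RHEL-OSP pools listed above):

ceph osd pool get volumes pg_num
ceph osd pool get volumes pgp_num
ceph osd tree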

6.Sporadic issues on N1000V

The following race condition is observed occasionally during installs. Please verify that you are not hitting this issue before moving ahead: Neutron will fail to start, and pcs status may report errors if this condition exists. Verify the following and correct as indicated if you encounter this issue.

First, check VSM for any duplicated default profiles by running the following command. If there are only two profiles and they have different descriptions, nothing needs to be done. If, as shown below (duplicate descriptions highlighted for clarity), there are three or more profiles and the descriptions repeat, then one of the duplicates needs to be removed.

After confirming this is the issue, select one of the duplicate network profiles to be deleted. For this example, we will use the first duplicate “default-vxlan-np” profile with uuid 25c5f0b9-9ae7-45b7-b4a6-2e3a418a52af. Replace this value as needed for your deployment in the commands below.

Connect to MySQL, use the ovs_neutron schema, and verify that the three profiles are also seen in MySQL:

MariaDB [ovs_neutron]> select * from cisco_ml2_n1kv_network_profiles;

The above two queries return 3 rows.

There should be only one row for each of the vlan and vxlan entries in the table, and the id from cisco_ml2_n1kv_network_profiles should match the id from the above command showing the nsm network segment pools.

This issue is a regression from RHEL-OSP 7.1; please refer to bug 1297975. Only the first bridge makes its entry in network_vlan_ranges in /etc/neutron/plugin.ini. This needs to match what is referenced in the overcloud deploy command:

--neutron-network-vlan-ranges physnet-tenant:250:749,floating:160:160

Update /etc/neutron/plugin.ini on all three controller nodes with the correct string.

Before proceeding with pacemaker configuration, it is necessary to understand the relationship between the service profile names in UCS with the node names dynamically created by OpenStack as part of Overcloud deployment.

a.Either login through the Console or extract the information from /etc/neutron/plugin.ini.

plugin.ini will be updated by the Cisco plugins with this information. Open the /etc/neutron/plugin.ini file and go to the end of the file. Extract the controller entries.

overcloud-controller-1.localdomain:Openstack_Controller_Node1,

overcloud-controller-0.localdomain:Openstack_Controller_Node2,

overcloud-controller-2.localdomain:Openstack_Controller_Node3

This shows that controller-0 is mapped to Service Profile Openstack_Controller_Node2, and so on.
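A quick way to pull these mappings out of the plugin configuration is a simple grep; this is a sketch only, and the exact key name and host-name pattern in plugin.ini may differ in your deployment:

grep -o 'overcloud-[a-z]*-[0-9]*\.localdomain:[^, ]*' /etc/neutron/plugin.ini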

b.Create a shell script as below with the following information and execute it

This shows that the second VSM is in ha-standby mode. For any operations, make sure that it is in ha-standby mode and not *powered* mode. Please wait a few seconds to make sure that the secondary is in ha-standby mode. Show module should also show all the controller and compute nodes.

·Navigating the dashboard across the admin, project, users tab to spot any issues

·Creating Tenants, Networks, Routers and Instances.

·Create Multiple Tenants, multiple networks and instances within different networks for the same tenant and with additional volumes with the following criteria:

—Successful creation of Instances through CLI and validated through dashboard

—Login to VM from the console.

—Login to VM’s through Floating IP’s.

—Checking inter-instance communication for VMs within the same network and for VMs in a different network of the same tenant, with password-less authentication.

—Rebooting VMs and checking for VM evacuation

—Checking for the VLANs created both in UCSM and on the Nexus switches. The VLANs should be available globally and also on both port-channels created on each switch:

Login to Nexus switch

conf term

show vlan | grep q-

show running-config interface port-channel 17-18

The basic flows of creating and deleting instances through the command line and the Horizon dashboard were tested. Creating multiple tenants and VLAN provisioning across the Nexus switches and Cisco UCS Manager were verified while adding and deleting instances.

The Rally benchmarking tool was used to run scale tests against the solution. This tool shows how OpenStack performs, notably under simulated load at scale. After a test is completed, Rally generates an HTML report based on the captured data. For more details about this tool please refer to https://wiki.openstack.org/wiki/Rally

The main purpose of running the tool was to generate a cloud-like workload, not to capture benchmark data. Hence only a limited amount of tuning was attempted on the OpenStack side. None of the default kernel parameters like ulimits or pid_max, nor libvirtd or nova parameters like osapi_max_limit, nor the neutron API workers were modified, and no attempt was made on the Ceph side to extract the best performance from the configuration. It has to be noted that these may have to be tweaked to get the best results. This was an attempt to use a tool to create VMs simultaneously, not a real benchmarking exercise.

Configuration and Tuning Details

A test bed with 3 controllers, 19 computes and 3 Ceph storage nodes was built to test this setup. The hardware and software specifications are the same as mentioned earlier in this document. The following parameters were changed for Rally testing.

The following section describes the different test case scenarios selected to test the RHEL-OSP 7 environment. In this test, the following benchmarking scenarios were used to simulate a multi-tenant workload at scale for VM and volume provisioning. Each benchmarking scenario performs a small set of atomic operations, testing a simple use case.

Rally Configuration Summary

1.Provision 1,000 instances from 1,000 bootable Ceph volumes.

2.Use Cirros-0.3.4-x86_64-disk.img.

3.Create 200 tenants with 2 users per tenant.

4.Create and authenticate these VMs.

5.Each tenant will have 1 Neutron network, a total of 200 Neutron networks. Cisco UCS and Nexus plugins provision these neutron networks mapped to VLAN with segmentation id in Cisco UCS manager and Nexus 9000 switches.

6.Tenant quotas for Cinder, Nova, and Neutron are set to unlimited in this test to avoid any quota-related failures while booting the VMs.

7.A concurrency of 3 has been used in Rally's task configuration. The Rally script creates a constant load by running the given scenario a fixed number of times, possibly in parallel, thereby simulating concurrent requests from different users and tenants.

8.Provisioned 2,000 instances from bootable volume based on the similar scenarios mentioned above.

Hardware

The following hardware has been used to run the Rally tests.

·Number of compute nodes (Cisco UCS B200 M4 Servers): 19

·The following hardware resources were available in the test bed to run the rally test described above with or without over-commitment ratios:

·Number of Ceph Nodes: 3 ( 43 TB of usable space )

·Number of Controllers: 3

The vCPU default over-commitment ratio is 1:16; however, it is recommended to use 1:4. It can be modified through the cpu_allocation_ratio variable in /etc/nova/nova.conf. With a cpu_allocation_ratio of 4, 2912 (728 * 4) instances can be created.
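An assumed excerpt of /etc/nova/nova.conf reflecting the recommended ratio, followed by a nova-compute restart on each compute node (values are a sketch, not the tested configuration):

[DEFAULT]
cpu_allocation_ratio = 4.0

systemctl restart openstack-nova-compute.service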

Rally runs different types of scenarios based on the information provided in json format. Although Rally offers several different combinations of scenarios to choose from, here we focus on testing how the system scales in a multi-tenant environment where each tenant environment has a given set of strategies.

Below is the Rally task configuration used in validation. It takes different parameters for customization to run different sets of scenarios; however, defaults are also set. The .json file below runs the NovaServers.boot_server_from_volume scenario.

In this reference environment, benchmarking contexts have been set up. Contexts in Rally allow staging different types of environments in which a benchmarking scenario is launched. In this test, items such as the number of tenants, number of users per tenant, number of neutron networks per tenant, and user quotas were specified.

Sample json file used for testing is provided in Appendix A.

The following parameters used in the json file are provided for reference:

·flavor: The size of the guest instance, e.g. m1.tiny

·image: The name of the image file used for guest instances

·volume_size: The size of the bootable volume, e.g. 10 GB

·quotas: The quota requirement of each tenant. For unlimited resources, a value of -1 for cores, ram, volumes, networks, ports, and so on has been given.
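Once the task file is in place, the run itself is driven from the rally CLI; a typical invocation (the file and report names are illustrative) looks like:

rally task start boot-server-from-volume.json
rally task report --out rally-report.html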

It is recommended to start with a smaller set of values for the number of tenants, VMs, times, and concurrency in order to diagnose any errors. Rally generates an HTML report after the task is completed, as shown below. Load duration shows the time taken to run the specified scenario, while Full duration shows the total time taken by the overall benchmark task. Iteration shows how many times a specified scenario has been executed.

The figure above shows the time taken to provision each VM. The X-axis plots the number of VMs and the Y-axis shows the total time taken to power on each VM, which includes both the time taken to provision the boot volume with the Cirros image and the nova boot time of the VM.

Data Analysis

Based on the total RAM available on the 19 compute nodes with 1000 VMs of 500 MB each, there is no memory swapping. Based on the available hardware resources in the reference test bed and considering resource depletion, theoretically 4 GB can be assigned to each VM (1000 * 4 = 4000 GB) without any memory over-commitment.

Very small CPU over-commitment was observed based on 728 physical cores available for 1000 instances. As mentioned earlier, the configuration had enough vCPU resources with the recommended cpu_allocation_ratio of 4 to provision 1,000 instances.

Better performance and results have been observed by reducing the number of tenants and neutron network along with concurrency level.

Different results have been observed with different image sizes: it takes longer to provision the bootable volume if the image size is larger.

Limited variation has been observed in the result sets even when a similar scenario is run many times, because of resource depletion.

Connection errors were observed because of the HAProxy timeout, HAProxy being the top layer of the stack with respect to all incoming client connections. The HAProxy timeout value was increased from 30s to 180s; the default timeout value is not sufficient to handle the incoming Rally client connection requests.
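For reference only, a hedged sketch of what such a change looks like in /etc/haproxy/haproxy.cfg on the controllers (the exact sections and listeners in the deployed file may differ, and the service is managed by pacemaker in this deployment, so restart it accordingly):

defaults
    timeout client 180s
    timeout server 180s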

Load Profile

From the above it is evident that the load profile has been consistent throughout the test.

Workload Distribution

The graph above shows the distribution in seconds versus the number of instances provisioned.

Atomic Actions

The graph shows the atomicity of a single Rally benchmarking operation. In this test, there were two major operations: cinder volume creation and nova boot of guest instances. It was observed that volume creation took a little longer in the beginning and after that remained consistent at an average of 7 seconds. Nova boot took on average 9.7 seconds for a single instance. Nova boot time was also consistent; however, a few spikes were noticed, especially after 700 iterations. This could vary depending on other internal OpenStack API calls within the controller.

Out of the total time it took to boot from volume and provision the instance, 43% of the time was taken by the creation of the volume, while the remaining 57% was taken to bring the VM to its active state.

Cinder Create Volume

On average, it took 7 seconds to provision a single volume. You may observe a skewed average, as the first 500 were provisioned in less than 5 seconds.

Data Analysis

It took on average 15 seconds to provision a single VM in this test. At about iteration 1441, we encountered one failure: this particular instance failed to provision and the OpenStack API call eventually timed out. This is clearly evident from the graph as well; it took more than 120 seconds for this instance, and in most cases OpenStack API requests time out after 120 seconds.

Load Profile

Similar to the 1000 VM tests, the volumes were consistently provisioned at an average of 6.3 seconds.

The Nexus ML2 plugin provisioned the tenant VLANs and allowed them on the trunk as Rally provisioned the neutron network for each tenant.

Similarly, in UCSM, the tenant’s VLANs are also provisioned in the LAN cloud. Furthermore, tenant VLANs are added to the vNIC that carries tenant data traffic to the respective compute host where the instance is provisioned.

While the tests were not targeted as a benchmarking exercise, we can draw a few conclusions that could help us to plan the infrastructure.

Over-commitment and concurrency play an important role in sporadic failures. It also depends on how much burst of workload is expected in a real production environment; test and tune concurrency accordingly.

Minimal tuning was done on either nova or Ceph while running these tests. This is an iterative exercise and could have provided more insight for extracting better performance values.

Failures like timeouts will skew the result set. It is recommended to pay attention to the median and 90th percentile figures to understand the system behavior.

The Ceph benchmarking tool was used in the configuration to test the scalability of the storage nodes. The purpose of this testing, done as part of this CVD, was not benchmarking but to provide steps on how to test the storage nodes in OpenStack and to provide some comparison data between the Cisco UCS C240 M4 large and small form factor servers. This is to help choose the right configuration based on the workload expected in the cloud. Each of them has its own hardware characteristics, and the performance data captured here should help in making an informed decision on the storage servers. However, the data presented below should not be considered the optimal storage scalability values.

The Ceph benchmarking tool (CBT) can be downloaded from https://github.com/ceph/cbt. It is an open source Python script to test the performance of Ceph clusters. It tests both object and RADOS block device scaling. Only RADOS Block Device (rbd) tests were done. While there are three different categories of block device testing that can be done, the most conservative, librbdfio, was used in the test bed. librbdfio tests the block storage performance of RBD volumes, without a KVM/QEMU configuration, through the librbd libraries. This gives the closest approximation of KVM/QEMU performance. Please refer to the link above to configure the tool for testing.

The results obtained depend on several factors. The important ones that were included in the test bed are mentioned below.

The default value of rbd_cache is true. This was purposely set to false to suit some of the RDBMS workloads.

The read-ahead configured on the disks was the default.

[root@overcloud-cephstorage-2 ~]# hdparm -a /dev/sde

/dev/sde:

read ahead = 8192 (on)

The write cache policy on the LUNs was write through which is the default.

The IO depth set in the CBT configuration was 8.

Each VM by default can drive about 1 GB/s of IO throughput. There was no QEMU throttling or QoS policy implemented on the setup; a few VMs each running at full capacity can saturate the storage.

The tests were done to measure the IOPS and bandwidth of the storage cluster as a whole, which in turn is shared by all the VMs running in the cloud. The values represent how far the storage can scale, not how many VMs can saturate it.

·Minimal CPU or memory overhead was observed during the tests. This indicates that the system will have sufficient headroom during failures for recovery operations. The core/spindle ratio was higher as well on these boxes.

·As mentioned earlier, the tests were conducted in a controlled environment. Increasing the number of VMs substantially without controlling the IO on each might give poorer results. A separate layer of client-side IO throttling has to be in place if this is the case.

·CBT only checks the storage performance of the Ceph cluster. The number of VM's configured is controlled through the yaml file.

Live migration refers to the process of moving a running virtual machine between different physical machines without disconnecting the client or application. Memory, storage, and network connectivity of the virtual machine are transferred from the source host to the destination host.

Live migration is crucial from operational perspective to provide continuous delivery of services running on the infrastructure. This allows for movement of the running virtual machine from one compute node to another one.

The most common use case for live migration is host maintenance - necessary for software patching or firmware/hardware/kernel upgrades. Second case is imminent host failure, like cooling issues, storage or networking problems.

Live migration helps in optimal resource placement across an entire datacenter. It allows reducing costs by stacking more virtual machines on a group of hosts to save power. What’s more it is possible to lower network latency by placing combined instances close to each other. Live migration can also be used to increase resiliency and performance by separating noisy neighbors.

The Tunneling option provides secure migration. In this model, the hypervisor creates a point-to-point tunnel and sends encrypted (AES) data. This option also uses CPU for encrypting and decrypting the transferred data. Without it, the data is transmitted in raw format. Tunneling is important from the perspective of security: encryption of all data-in-transit ensures that the data cannot be captured.

The tunneling option is configured by adding the following value to the live_migration_flag:
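As a sketch only (the exact line used in the validated setup is part of the original configuration), enabling tunneling amounts to including the standard libvirt flag VIR_MIGRATE_TUNNELLED in the nova live_migration_flag option, for example:

live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED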

Busy enterprise workloads hosted on large VMs tend to dirty memory faster than the transfer rate achieved through live guest migration. If the migration does not converge, it is possible to use the auto-converge feature (KVM + QEMU). This feature auto-detects the lack of convergence and triggers a throttle-down of the memory writes on the VM. This flag speeds up the Live Migration process.

All Cassandra VMs are created on the same compute node (they fit in about 90 percent of its RAM and 70 percent of its CPU) in an availability zone named 'workers'. Similarly, the client VMs are created on another compute node in an availability zone named 'clients'.

To enable the use of availability zones, an additional config file (ycsb_flags.yml) needs to be placed under `<perfkit_dir>/perfkitbenchmarker/configs/` with the content below:

cassandra_ycsb:
  vm_groups:
    workers:
      vm_spec:
        OpenStack:
          machine_type: 'm1.xlarge'
          zone: 'workers'
    clients:
      vm_spec:
        OpenStack:
          machine_type: 'm1.medium'
          zone: 'clients'

The oversubscription level for CPU was equal to 2.8 (virtual core count to physical core count ratio) and for the memory it was 0.93 (virtual RAM count to physical RAM count ratio).

Cassandra was chosen for the testing methodology as it is interesting from a cloud perspective and is easily scalable using built-in mechanisms. What's more, Cassandra is more RAM intensive compared to other, more classical databases (RDBMS), which best shows the impact of the Live Migration process.

During the benchmark all VMs are migrated one by one to other compute nodes, the migration time of each VM is recorded, and various performance metrics from the Cassandra cluster are collected. At the same time, metrics (RAM, CPU and network traffic) from the compute node containing the Cassandra servers are gathered.

To start each migration, use the command below. Make sure to invoke the migration during YCSB benchmarking READ/UPDATE operations on Cassandra. The destination compute node should not be the same one used for the YCSB clients (availability zone 'clients').
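The exact command from the test run is not reproduced here; a typical invocation (the instance name and target host below are placeholders) is:

nova live-migration cassandra-node-01 overcloud-compute-5.localdomain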

The average VM migration time was measured for all combinations of these flags during a Cassandra test based on PerfKitBenchmarker and the YCSB test suite. This test suite ran in a configuration of 14 Cassandra nodes and 10 clients. All Cassandra nodes were located on the same compute node (they fit in about 90 percent of its RAM and 70 percent of its CPU) and were then migrated one by one to other available computes.

Based on the gathered results, the best migration time is achieved with tunneling disabled and the auto-converge option enabled. This should be used only in internal or test environments, because disabling tunneling is a disadvantage from a security perspective and enterprise setups should avoid disabling it.

To configure this option, set the line below in live_migration_flag and restart the nova-compute service:

Unencrypted live migration traffic is not a flaw in itself, but allowing that traffic to traverse a compromised network could be. Separating the live migration traffic from the API traffic would resolve that issue; unfortunately, such a configuration is not possible in the current version of OpenStack (Kilo).

Current experience shows that, from the perspective of both speed and security, the best configuration of the migration flags is to use tunneling with auto-converge. To set this up, edit `nova.conf` on each compute node with the value below in live_migration_flag and restart the nova-compute service:
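A hedged sketch of the resulting nova.conf line and the service restart (the flag names are the standard libvirt migration flags; the validated value appears in the original configuration):

live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED,VIR_MIGRATE_AUTO_CONVERGE

systemctl restart openstack-nova-compute.service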

1.Rack the new C240M4 server(s). There is a single ceph.yaml in the current OpenStack version, so populate the hard disks in these storage servers in the same order as they exist in the other servers.

2.Attach Console and discover the storage server(s) in UCS. Factory reset to defaults if needed and make them UCS managed.

3.Refer to this section for creating service profiles from Storage template. Create a new service profile from the template. Unbind the template and remove the storage policy that was attached to it earlier and associate the service profile to the server.

4.Upgrade firmware if needed.

Check the installed firmware on the new node and make sure that it is upgraded to the same version as other storage servers.

5.Create a new Storage Profile for Disks.

Before creating the storage profile, log into the Equipment tab and make sure that all the new storage servers have their disks in place and that the disks are physically in the same slots as the other storage servers.

Since we used the earlier storage profile with other servers, we cannot reuse it right away: the LUNs have to be added to the server in the same order as was done earlier. If you are discovering more than one storage server at this stage, a single new profile created as below will serve the purpose. While creating this new storage profile, we can reuse the existing disk group configuration policies created earlier.

6.Go to the service profile of the new server and to the storage tab to create a new storage profile as shown below.

7.Attach this storage profile to the service profile. This will create the first boot LUN, LUN-0, on the server. Go back to the Equipment tab and Inventory/Storage to check that this first LUN has been added. This will be the boot LUN LUN-0 that is visible to the server BIOS. If multiple servers are being added in this step, attach the new storage profile created above to all of their service profiles; this in turn will create LUN-0 on all the nodes.

A subsequent update to this storage profile will be propagated across all these new service profiles.

8.Go to Storage tab in UCSM and update the storage profile.

9.Create and attach the SSD LUNs, which will be LUN-1 and LUN-2. Wait a few minutes to make sure that all the new servers get these LUNs in the same order: boot as LUN-0, the ceph-ssd1 disk as LUN-1 and the ceph-ssd2 disk as LUN-2.

This will be consistent with other servers and we can expect sda for boot lun and sdb and sdc for SSD LUNs being used with the journals.

10.Add all the HDD LUNs.

The steps above do not represent the actual boot order. You may have to observe the actual boot order from KVM console to verify.

If the boot disks are being repurposed and are not new, re-initialize the boot LUN through the BIOS: boot the server, press CTRL-R, then F2, and re-initialize the VD for the boot LUNs.

1.Get the hardware inventory needed for introspection.

2.Go to the Equipment tab > Inventory > CIMC and get the IPMI address.

3.Under the same Inventory tab go to NIC subtab and get the pxe mac address of the server. The same inventory should have the CPU and memory details.

4.Specify the NIC order in the service profile. This should be the same as the other storage servers with provisioning interface as the first one.

5.Check the boot policy of the server. Validate that this is same as other storage servers. It should be LAN PXE first followed by local LUN.

Insert the new Cisco UCS B200 M4 blade into an empty slot in the chassis with similar configuration of local disks.

1.Refer to this section above for creating service profiles from Storage template. Create a new service profile from the template. Unbind the template and remove the storage policy that was attached to it earlier and associate the service profile to the server.

2.Upgrade firmware if needed.

3.Check the installed firmware on the new node and make sure that it is upgraded to the same version as other compute nodes.

To perform the deployment and health checks, complete the following steps:

1.Login to each controller node and check for the existence of the new compute node in /etc/neutron/plugin.ini. If it is not there, add it in each Nexus switch section and also in the UCSM host list in the plugin.ini file. Make sure to make the changes across all the controller nodes.

Both the hardware and software stacks are injected with faults to trigger a failure of a running process on a node or an unavailability of hardware for a short or extended period of time. With the fault in place, the functional validations are done as mentioned above. The purpose is to achieve business continuity without interruption to the clients. However, performance degradation is inevitable and has been documented wherever it was captured as part of the tests.

A few of the identified services running on these nodes were restarted or killed, and/or the nodes were rebooted.

For example:

Master/Slave Set: redis-master [redis]

Masters: [ overcloud-controller-2 ]

Slaves: [ overcloud-controller-0 overcloud-controller-1 ]

Per the above, the redis master is overcloud-controller-2. This node was rebooted, and the behavior was observed while the node was rebooting, along with any impact on the N-S or E-W traffic of the VMs. The only issue observed was that for about 2-3 minutes a few of the VMs were not pingable because of bug 1281603; this was not related to the services above.

The Ceph node monitors and services were also restarted to test for any interruption to volume creation and booting of the VMs, but no issues were observed.

FI Reboot Tests

Cisco UCS Fabric Interconnects work in a pair with built-in HA. While both of them serve traffic during normal operation, a surviving member can still keep the system up and running. Depending on the overprovisioning used in the deployment, a degradation in performance may be expected.

An effort was made to reboot the Fabrics one after the other and run the functional tests mentioned earlier.

Reboot Fabric A

·Check the status of the UCS Fabric Cluster before reboot

UCSO-6248-FAB-A# show cluster state

Cluster Id: 0x1992ea1a116111e5-0x8ace002a6a3bbba1

A: UP, PRIMARY

B: UP, SUBORDINATE

HA READY <-- The system should be in HA READY state before invoking any of the HA tests on the Fabrics.

Grep for any errors or stopped actions from PCS and fix the issues before starting the tests.

·Reboot Fabric A ( primary )

Log into UCS Fabric Command Line Interface and reboot the Fabric

UCSO-6248-FAB-A# connect local-mgmt

Cisco Nexus Operating System (NX-OS) Software

UCSO-6248-FAB-A(local-mgmt)# reboot

Before rebooting, please take a configuration backup.

Do you still want to reboot? (yes/no):yes

nohup: ignoring input and appending output to `nohup.out'

Broadcast message from root (Fri Nov 6 20:59:45 2015):

All shells being terminated due to system /sbin/reboot

Connection to 10.22.100.6 closed.

Health Checks and Observations

The following is a list of health checks and observations:

·Check for VIP and Fabric A pings. Both should go down immediately; the VIP recovers after a couple of minutes.

·Check the PCS cluster status on one of the controller nodes. The system could be slow in the beginning but should respond as follows:

PCSD Status:

overcloud-controller-0: Online

overcloud-controller-1: Online

overcloud-controller-2: Online


·Create Virtual Machines

Perform a quick health check by creating VMs, checking the status of existing instances and the l3 forwarding enabled in N1KV earlier. Run the sanity checks on the Nexus switches too for any impact on the port-channels because Fabric A is down.

Fabric A might take around 15 minutes to come back online.

Reboot Fabric B

·Connect to Fabric B now and check the cluster status. The system should show HA READY before rebooting Fabric B.

·Reboot Fabric B by connecting to local-mgmt, similar to Fabric A.

·Perform health checks similar to the ones done for Fabric A.

·The test went fine without any issues on the configuration. Please refer to bug 1267780 for the issues encountered and the fix rolled out in this document.

IO Module failures seldom happen in UCS infrastructure, and in most cases these are human mistakes. The failure tests were included just to validate business continuity. Any L3 east-west traffic will get routed through the upstream switches in case of IOM failures.

Multiple tenants with multiple networks and virtual machines were created. VMs belonging to the same tenant but on different networks and on different chassis were identified. One of the IO Modules was pulled out from the chassis and the L3 traffic was validated.

Health Checks before Fault Injection

Ping from tenant320_120_inst8 to tenant320_170_inst19 10.2.170.11 and 10.22.160.49

Fault Injection and Health Checks

Nexus switches are deployed in pairs and provide the upstream connectivity of the virtual machines to the outside of the fabric. The Cisco Nexus plugin creates VLANs on these switches both globally and on the port-channel. The Nexus plugin replays these VLANs, rebuilding the VLAN information on the rebooted switch once it comes back up. In order to test the HA of these switches, multiple networks and instances were created and one of the switches was rebooted. The connectivity of the VMs through the floating network was checked, and the time it took for the plugins to replay the VLANs was noted.

Controllers, which host most of the OpenStack services, are key to the health of the cloud. There are three types of controller failures that could happen:

Server reboot; pulling the blade out of the chassis while the system is up and running and putting it back; and pulling the blade from the chassis and replacing it, simulating a total failure of the controller node.

Server Reboot Tests

Run a health check beforehand to make sure that the system is healthy.

·Run nova list after sourcing stackrc as the stack user on the Undercloud node to verify that all the controllers are in a healthy state as below.

·Do not reboot the second controller unless the prior one comes up first. Check the pacemaker status, the health of quorum (corosync), and the health of the n1kv primary and standby VSMs.

·A minimum of two controllers is needed for healthy operation.

·While the first node is booting up, it takes time for the pcs status command to complete.

PCS will report one server is offline.

PCSD Status:

overcloud-controller-0: Online

overcloud-controller-1: Offline

overcloud-controller-2: Online

Daemon Status:

corosync: active/enabled

pacemaker: active/enabled

pcsd: active/enabled

Corosync will report that it gets only 2 votes out of 3, as below, while the server is rebooting. This is normal.

[root@overcloud-controller-2 ~]# corosync-quorumtool

Quorum information

------------------

Date: Thu Jan 28 12:09:16 2016

Quorum provider: corosync_votequorum

Nodes: 2

Node ID: 3

Ring ID: 56

Quorate: Yes

Votequorum information

----------------------

Expected votes: 3

Highest expected: 3

Total votes: 2

Quorum: 2

Flags: Quorate

Membership information

----------------------

Nodeid Votes Name

1 1 overcloud-controller-0

3 1 overcloud-controller-2 (local)

·Refer to bug 1281603. It might take just over 3 minutes for the other controllers to start rescheduling the routers. Within this time frame, slower keystone authentication and VM creation were observed. However, the system recovers fine after this. The issue requiring l3_ha=false instead of the default of true is being addressed in the next release of n1kv.

·When the node comes up, the routers remain on the other 2 controllers and do not fall back. This can be verified with ip netns as well.

·If the controller node does not come up, check through the KVM console to spot any issues, and hold off rebooting the second node until the first is operating healthily.

Blade Pull Tests

One of the controller node blades was pulled out while the system was up and running. Validation tests like VM creation were done prior to the pull and again to check the status while the blade was out of the chassis. This simulates a complete blade failure. After around 60 minutes the blade was re-inserted into the chassis.

Health Checks and Observations

The same behavior as observed during the reboot tests was noticed during the blade pull tests. However, unlike a reboot which completes in 5-10 minutes, this test ran for an extended period of 60 minutes to check the status of the cluster.

·Cisco UCS marks the blade as ‘removed’ and prompts to resolve the slot issue.

·Nova declares the state of the server as NOSTATE as shown below:

·Ironic gives up as it cannot bring the server back online and enables Maintenance mode to True for this node.

·Compare the Instance UUID from ironic node-list and ID from nova list

·Ceph storage will report that 1 out of 3 monitors is down. All 3 controllers run one monitor each; however, all the OSDs are up and running.

All of the above behaviors (UCS asking to resolve the slot, ironic putting the blade into maintenance mode, nova setting the status to NOSTATE, and Ceph reporting one of the monitors as down) are expected.

·After inserting the blade back into the same slot of the chassis, manual intervention was needed to correct the above:

—Insert the blade back into the slot and resolve the slot issue in UCS.

—Wait for a minute and check back for these columns with ironic node-list.

—nova reset-state --active a23af643-51c8-4f59-881c-77a9d5e1557f

—Wait for about 5-10 minutes for nova to act upon this and re-query the status with nova list. It should turn it back as active like other controller nodes.

—Login to the controller node, check pcs status and resolve any processes that were not brought up by running 'pcs resource cleanup'.

—If the monitor is still down and/or taking a long time, issue /etc/init.d/ceph restart mon on the controller node(s).

Blade Replacement

Unlike the above two types of failures, in this test the blade is completely removed and a new one is added. There were a few issues encountered while rebuilding the failed controller blade and adding it as a replacement. The fix for bug 1298430 provides business continuity, but there is still a need to fix the failed blade. While this issue is being investigated, an interim solution was developed to circumvent the above limitation. This is included in the Hardware Failures section, which addresses the different types of hardware failures that can happen on a controller blade and how to mitigate them, considering the dependency of the controller blade on IPMI and MAC addresses.

·Identify the floating IPs for these VMs from nova list --all-tenants, capture the data needed to login without a password, and run the ifconfig script. The script ssh's to all the VMs, runs ifconfig and returns serially.

You may have to hard reboot the instances with nova reboot --hard $i after capturing the instance IDs from nova list --all-tenants.
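A minimal loop for this, as a sketch only (it assumes admin credentials are sourced and that every row containing a UUID in the nova list output is an instance):

for i in $(nova list --all-tenants | awk '/[0-9a-f]{8}-[0-9a-f]{4}/ {print $2}'); do
  nova reboot --hard $i
done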

Query the instances as shown below:

[root@overcloud-compute-0 nova]# virsh list --all

Id Name State

----------------------------------------------------

21 instance-000000f6 running

22 instance-0000000f running

23 instance-00000021 running

Blade Pull Tests

One of the compute blades was pulled out while the system was up and running. This was also an extended test of about 60 minutes, after which the blade was re-inserted into the chassis.

Observations:

·Results were similar to reboot tests above.

·UCS Manager prompted to resolve the slot as the blade was pulled out from the chassis. This was acknowledged and the blade was re-inserted.

·The guest VMs came up when the resume_guests property was set to true at the host level.

·Similar to the controller blade pull tests, nova set the state to 'NOSTATE' and ironic put the blade into maintenance mode.

·Similar steps, like setting maintenance mode to off through ironic and issuing a nova reset-state, were applied to the blade after it showed 'ok' status in UCS Manager.

Blade Replacement

The compute blade was pulled from the chassis completely and the server was decommissioned in UCS to simulate a complete failure of a compute blade. Then an attempt was made to remove it from OpenStack and add a new blade to the cloud. The service profile was reused in this method. The following is the task list and the observations made during the compute blade replacement test.

Blade replacement is a two-phase process: first remove the faulty blade from the system and then add a new one.

6.The workarounds to delete the blade in its current state are as follows:

Update the error status to available status in ironic node-list

edit /etc/ironic/ironic.conf

Update the enabled drivers temporarily as below

#enabled_drivers=pxe_ipmitool,pxe_ssh,pxe_drac

enabled_drivers=fake

Restart openstack-ironic-conductor

sudo service openstack-ironic-conductor restart

ironic node-update NODE_UUID replace driver=fake

The node in ironic node-list should be with provision-state=active and maintenance=false

If not

ironic node-set-provision-state NODE_UUID provide

ironic node-set-provision-state NODE_UUID active

ironic node-set-provision-state NODE_UUID deleted

The power status should be on, provision state as available and maintenance as false before moving ahead.

Run nova service-list and identify the service id’s

Delete the service id’s associated with this node as

nova service-delete $id

Delete the node from nova

nova delete NODE_UUID

Delete the node from ironic

ironic node-delete NODE_UUID

Revert back the “fake” driver from ironic.conf

Edit /etc/ironic/ironic.conf:

enabled_drivers=pxe_ipmitool,pxe_ssh,pxe_drac

#enabled_drivers=fake

Restart ironic-conductor to pick up the drivers again.

service openstack-ironic-conductor restart

The deleted node should no longer exist in ironic node-list or nova list.

Node Addition

When the compute blade has been completely removed from OpenStack, a new blade can be added. The procedure for adding a new compute blade is the same as addressed earlier in scaling up the compute pod.

Ceph, the software stack deployed by Red Hat OpenStack Director, has high availability built in. By default, the system replicates the placement groups and keeps 3 copies distributed across the hosts.

The parameter osd_pool_default_size = 3 in ceph.conf enables this behavior by default when installed.

If we extract the crushmap from the existing cluster as below, it reveals what types of buckets are in it and what mode of replication is used by default in the cluster.

ceph osd getcrushmap -o /tmp/crushmap.bin

crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt

rule replicated_ruleset {

ruleset 0

type replicated <-- Defaults to replication mode

min_size 1

max_size 10

step take default

step chooseleaf firstn 0 type host <-- Default distribution of PG copies

step emit

}

Whenever a Ceph node goes down, the system will start rebuilding from the copies of the replicas. While this is an expected feature of Ceph, it causes some CPU and memory overhead too. This is one of the reasons to have a minimum of 3 nodes for Ceph and to leave a good amount of free space within the storage cluster; this helps Ceph move the blocks around in case of failures like this. The more nodes, the better, as the rebuild activity is distributed across the cluster. Though there are parameters like osd_max_backfills to control this activity and its impact on CPU, it is not feasible to cover all of these recovery parameters in this document.
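As an illustration of such throttling only (the values are examples, not validated settings), the recovery impact can be reduced at runtime with:

ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'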

What needs to be noted is that recovery kicks in as part of the tests below. The Ceph cluster status may show warnings while the tests are being conducted as it moves the placement groups, and this may cause performance issues on the storage cluster. Hence, checking the health of the nodes while adding or rebuilding a node is important.

Reboot Test

1.Check the status of the cluster:

2.Reboot one of the Ceph storage nodes:

The following was observed during the reboot while running ceph -w:

mon.0 [INF] osd.13 marked itself down

mon.0 [INF] osd.15 marked itself down

mon.0 [INF] osd.10 marked itself down

mon.0 [INF] osd.8 marked itself down

mon.0 [INF] osd.6 marked itself down

mon.0 [INF] osd.4 marked itself down

mon.0 [INF] osd.2 marked itself down

mon.0 [INF] osd.0 marked itself down

mon.0 [INF] osdmap e176: 24 osds: 16 up, 24 in

ceph osd tree reports

3.Make sure the VM’s connectivity through floating IP from an external host is successful.

Host is tenant310-110-inst2 and Network is inet 10.2.110.6 netmask 255.255.255.0

Host is tenant310-160-inst3 and Network is inet 10.2.160.5 netmask 255.255.255.0

Host is tenant310-160-inst4 and Network is inet 10.2.160.6 netmask 255.255.255.0

Host is tenant311-111-inst1 and Network is inet 10.2.111.5 netmask 255.255.255.0

Host is tenant311-111-inst2 and Network is inet 10.2.111.6 netmask 255.255.255.0

Host is tenant311-161-inst3 and Network is inet 10.2.161.5 netmask 255.255.255.0

Host is tenant311-161-inst4 and Network is inet 10.2.161.6 netmask 255.255.255.0

Host is tenant312-112-inst1 and Network is inet 10.2.112.6 netmask 255.255.255.0

The node comes back after a few minutes, while the cluster shows warnings during the reboot period.

The status of the cluster was observed to be fine a few minutes after the reboot. The warning message continues until the recovery activity is complete.

System Power Off

The behavior on system power off is very similar to what was observed in the controller and compute blade pull tests.

The system took around 6 minutes to come back to OK status. The time the system takes to recover depends on the number of active placement groups and copies the system was attempting to move around.

A more detailed description of the symptoms observed during power off is provided in the Node Replacement section below.

Node Replacement

One of the storage servers was powered off completely (by pulling the power cord) and the server was decommissioned in UCS to simulate a complete failure of the storage server. Then an attempt was made to remove this node from OpenStack and add a new one to the cloud. The following is the task list and the observations made during the storage node replacement test.

Node replacement is a two-phase process: first remove the server from the system and then add a new one.

To delete a node, complete the following steps:

1.Power off the node by pulling the power cord from a running cluster.

2.Check the health of placement groups:

[root@overcloud-cephstorage-0 ~]# ceph pg dump_stuck stale

ok

[root@overcloud-cephstorage-0 ~]# ceph pg dump_stuck inactive

ok

[root@overcloud-cephstorage-0 ~]# ceph pg dump_stuck unclean

ok

3.Run a Ceph PG dump to validate that the OSDs do not have any copies.

This makes sure that there is nothing left in osd.24 to osd.31, the OSDs that are part of the node that was deleted. Ceph moved all the copies from this node to the other nodes.

Checking that no placement groups are attached to the OSDs, using ceph pg dump or ceph osd stat, ensures data integrity. The above command confirms that all the data has been moved out of the OSDs. It is not recommended to delete a node with any placement groups still residing on its OSDs; please wait until the recovery activity is complete. Do not let the Ceph cluster reach its full ratio when removing nodes or OSDs: removing OSDs could cause the cluster to reach the full ratio and could cause data integrity issues.

The node in ironic node-list should be with provision-state=active and maintenance=false

If not

ironic node-set-provision-state NODE_UUID provide

ironic node-set-provision-state NODE_UUID active

ironic node-set-provision-state NODE_UUID deleted

The power status should be on, provision state as available and maintenance as false

3.Delete the node from nova and ironic.

Run nova service-list and identify the service id’s

Delete the service id’s associated with this node as

nova service-delete $id

nova delete NODE_UUID

ironic node-delete NODE_UUID

Revert back the “fake” driver from ironic.conf.

Edit /etc/ironic/ironic.conf:

enabled_drivers=pxe_ipmitool,pxe_ssh,pxe_drac

#enabled_drivers=fake

Restart ironic-conductor to pick up the drivers again.

service openstack-ironic-conductor restart

Storage node deletion differs from compute node deletion here. In both cases we have so far deleted the nodes from UCS and OpenStack. However, the Ceph entries still remain and have to be cleaned up.

Clean Up Ceph after Node Deletion

To clean up Ceph after a node deletion, complete the following steps:

1.Check the details from ceph health and osd tree:

[root@overcloud-cephstorage-0 ~]# ceph osd stat

osdmap e132: 32 osds: 24 up, 24 in

[root@overcloud-cephstorage-0 ~]#

2.Remove the OSDs from Ceph. Change the OSD IDs to match your setup, based on the output of osd tree above.
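The removal follows the standard Ceph procedure; a sketch for a single OSD is shown below (repeat for each OSD ID belonging to the deleted node, for example osd.24 through osd.31 in this setup):

ceph osd out osd.24
ceph osd crush remove osd.24
ceph auth del osd.24
ceph osd rm osd.24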

Node Addition

When the storage node has been completely removed from OpenStack and the Ceph entries have been cleaned up, a new server can be added. The procedure for adding a new storage node is the same as addressed earlier in scaling up the storage pod.

RHEL-OSP7 supports only one Undercloud node as of the date this document was first published. Also, in the test bed, the compute and storage nodes are NATed through the Undercloud node. Though this does not pose any challenges during Overcloud operation, any future heat stack or overcloud deployments could be impacted.

The following backup and recovery method has been documented on the Red Hat web site for reference. This procedure has not been validated in this CVD. It is strongly recommended to test the procedure in a test environment and document the process to restore the Undercloud node from backup. Subsequently, take a backup of the Undercloud node and store the backup for easy retrieval later in case of failures.

Hardware failures of blades are infrequent and happen very rarely. Cisco stands behind its customers to support them in such conditions. There is also a Return Material Authorization (RMA) process in place. Depending on the type of failure, either the parts or the entire blade may be replaced. This section covers, at a high level, the types of failures that could happen on Cisco UCS blades running OpenStack and how to get the system up and running with little or no business interruption.

Before delving into the details, note that this section was validated specifically for controller blades. The replacement of compute and storage blades is covered earlier in the High Availability section.

Any such failure on a blade either leads to degraded performance while the system continues to operate (like DIMM or disk failures), or the blade could fail completely. In case of complete failures, OpenStack Nova and Ironic may also take the blade offline and the errors need to be fixed.

A compute node failure will impact only the VM’s running on the compute node and these can be evacuated to another node.

Ceph storage nodes are configured with replication factor of 3, and the system continues to operate though the recovery operation may cause slight degraded performance of the storage cluster.

In case of total failure of a controller blade, the fencing packages will fence the failed node. You may need the fix for bug 1303698 for this; it is not included in the RHEL-OSP 7.2 release, and instructions on how to get the patches and customize the Overcloud image are provided earlier in this document. With the fix in place, the system continues to operate in a degraded mode (a performance impact while navigating the dashboard and while creating new virtual machines).

The controller ethernet interfaces and MAC addresses are available on the local disk of the failed blade; hence hard disk failure cases are also covered above. Apart from this, the provisioning interface MAC address is also stored on the Undercloud node.

The local hard disk has all the configuration information and should remain available. Hence, it is strongly recommended to have a pair of local disks in a RAID-10 configuration to protect against disk failures.

If all of the above are restored after a hardware failure, the system can be made operational again; this is what is addressed in this section.

As mentioned earlier, there can be several types of failures, including CPU or memory, and the system may perform in a degraded fashion. Not all of these are covered in this document, but the ones which have hooks into OpenStack are covered here.

If there is a need to replace the blade, the IPMI address, MAC addresses and local disks have to be restored. It is assumed that there is no double failure here.

IPMI Address

The IPMI addresses are allocated from the KVM pool. When a blade fails, the system will hold the address until the blade has been decommissioned. Once it is decommissioned, the system releases the IP back to the KVM pool, and this freed IP can be allocated to the new blade. The figure below shows how to change the IPMI address in UCS as an example.

NIC’s and MAC Addresses

Service profiles are like the SIM card of a phone: they store all of the hardware identity. Once the service profile is disassociated from the failed node and attached to the new node, all of the policies, like the boot policy and network interfaces along with their MAC addresses, are available to the new blade.

Local Disks

The two hard disks can be taken out of the failed blade and inserted into the new blade. Make sure that the new blade is identical and upgraded to the same firmware version as the failed blade. The local disks have the controller binaries and the cluster configuration information. Associating the service profile will bring up all the hardware profiles on the new blade. The system will then be in sync from both the hardware and software side and should be up and running.

When the blade is removed from the chassis, pcs status shows the resources as Unclean until the operation interval, which is configured as one minute. Hence, wait for 1-2 minutes, make sure that pcs status only shows Stopped (not Unclean or unmanaged) and then move forward.

Associate the existing service profile that was disassociated earlier from the failed blade to the new or replacement blade.

Make sure that the configuration is in progress and monitor the status in the FSM tab of the server.

On rare occasions the existing storage policy may not let you associate the new blade. As the service profile is already unbound from the template, you may remove the storage profile from the service profile, reboot the server and then attach the storage profile back to the service profile.

·Can we use IO Modules 2208 instead of IOM 2204 as shown in the topology diagram?

Yes, both IOM 2204 and IOM 2208 are supported. For more details, refer to the design guide here.

·When should we use the C240M4S and when the C240M4L for storage servers?

This boils down to a design question and depends on the requirements. The C240M4 SFF (small form factor) offers more spindles and hence higher IOPS with reasonable bandwidth capacity. The C240M4 LFF (large form factor) has a higher storage capacity but may not match the SFF on total IOPS per node. Validation has been done, and the performance metrics provided in this document should help you choose the right hardware.

·Can I use Cisco UCS Sub-Orgs?

The current release of the UCSM plugin does not support sub-organizations. We are working on this and will provide an update in the next release, whenever this functionality becomes available.

·Can I use different hardware like Cisco M3 blades and different VIC adapters in the solution?

Cisco hardware newer than the versions listed in the BOM is supported. While older versions may still work, they have not been validated.

·How many chassis or blades and servers can I scale horizontally?

The validation was done with 3 fully loaded chassis of blades and 12 x C240M4L storage nodes for Ceph. From a hardware point of view, the limit is likely the number of available ports, and you may have to move to 96-port Fabric Interconnects or Nexus switches.

However, as the number of blades increases, the controller and Neutron activity increases as well. Validation was done only with 3 controllers.

·Why aren't the internal SSD drives for Ceph storage nodes included in the BOM?

Current versions of Ironic cannot reliably identify the internal drives because their ordering varies from system to system. You may notice that only disks attached to the RAID controller are used, which makes it possible to enforce a specific disk ordering. The internal drives are not connected to this RAID controller and would therefore be unordered.

·My network topology differs from what is described in this document. What changes do I need to make to the configuration?

The network topology verified in this configuration is included in Appendix A. There were a limited number of IPs available, and the floating network was used. It is not necessary to use the same settings; however, you may have to change the yaml files accordingly, and additional tweaks may be necessary. Please refer to the Red Hat documentation on how to accommodate these changes in the template files.

·Updating yaml files is error prone. A single stray whitespace or tab can cause issues. What should I do?

It is recommended to validate the files in an online parser and then run network-validator before deploying the Overcloud. Please see the Overcloud Install section for details.

·Do you have specific recommendations for Live Migration?

Live migration was attempted in both tunneled and converged modes. While the former is more secure because of encryption, the latter performs better. Please refer to the Live Migration section for the changes needed in nova.conf to accommodate each mode.

·Why have version lock directives been included in this document?

OpenStack is continuously updated, and changes to binaries and to configuration go hand in hand. The purpose of providing the lock file is to pin the binaries as close as possible to the validated design. This ensures consistency, with minimal deviation from the validated design and from configuration files such as the yaml files. You can always install a higher version than the one mentioned, but the specifics needed in the configuration files may vary, and/or some of the validations done for this document may have to be repeated to avoid regressions.
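As an illustration only, and assuming the lock is applied through the yum versionlock plugin (the actual lock file referenced in this document takes precedence), the current locks can be inspected and extended as follows:

# install the plugin if it is not already present
sudo yum install -y yum-plugin-versionlock
# list the packages currently pinned by the lock file
sudo yum versionlock list
# pin an additional package at its installed version (package name is illustrative)
sudo yum versionlock add openstack-nova-compute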

A few troubleshooting areas that may help while installing RHEL-OSP 7 on Cisco UCS servers are presented here. Troubleshooting OpenStack is an exhaustive topic, and this section is limited to what helped in debugging this install; links are provided where needed. This is only an effort to help readers narrow down issues. It is assumed that the reader has followed the pre- and post-install steps mentioned earlier.

·The provisioning interface should be enabled as native across all the blades and rack servers for successful introspection and Overcloud deploy.

·If you are attempting a repeat of the full deployment, it is recommended to re-initialize the boot LUNs. The wipe_disk.yaml works on the storage partitions after the OS is installed, but not on the boot LUNs.

·The native flag for the external network should not be enabled on the Overcloud nodes, as observed on the test bed.

·Specify the PCI order for network interfaces. This ensures that they are enumerated in the same way as specified in the templates.

·If you are using updating service profile templates, make sure that the service profiles are unbound from the template for successful operation of the UCS Manager plugin.

·Before applying service profiles, make sure that all the disks are in 'Unconfigured Good' status. The storage profile attached to these service profiles will then be applied successfully and bring the boot LUN into an operable state.

The Undercloud install was observed to be straightforward, with very few issues; most of them were human mistakes, such as typos in the configuration file.

·Make sure that the server is registered with the Red Hat Content Delivery Network so that packages can be downloaded. If the server is behind a proxy, update the /etc/rhsm/rhsm.conf file with the appropriate proxy server values.

·Double-check the entries in the Undercloud configuration file. Provide enough room in discovery_iprange and in the dhcp start/end range, keeping future expansion or upscaling of the servers in mind; an illustrative snippet is shown after this list. Most of these parameters are explained in the sample file provided in /usr/share.

·Leave the value of undercloud_debug=true at its default to check for failures. The log file install-undercloud.log is created in /home/stack/.instack as part of the Undercloud install and is handy for browsing through issues encountered during the install.

·A repeat of the Undercloud install should preferably be done in a clean environment, after reinstalling the base operating system.
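For reference, an illustrative fragment of undercloud.conf showing the discovery and DHCP parameters called out above; the addresses are placeholders and not the values used in this validation:

[DEFAULT]
# provisioning network addressing - size the ranges with future expansion in mind
network_cidr = 192.0.2.0/24
dhcp_start = 192.0.2.5
dhcp_end = 192.0.2.24
discovery_iprange = 192.0.2.100,192.0.2.120
# leave debug enabled so failures are captured in install-undercloud.log
undercloud_debug = true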

Failures while introspecting the nodes can have many causes. Make sure that you have verified all of the post-Undercloud and pre-introspection steps mentioned earlier in this document.

·Correct IPMI and MAC address values, and the ability to power the nodes on and off with ipmitool as mentioned earlier in this document, should isolate most issues. Check with ironic node-list and ironic node-show to ensure that the registered values are correct (see the example after this list).

·The boot LUNs configured in UCS through the storage profile should be in an available state before starting introspection. The size of the LUN specified in the instack.json file should be equal to or less than the size of the LUN seen in UCS.
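A minimal sketch of the IPMI and ironic checks mentioned above, using the node UUID from ironic node-list and the IPMI credentials supplied during node registration (placeholder values shown):

# verify out-of-band power control before starting introspection
ipmitool -I lanplus -H <ipmi address> -U <ipmi user> -P <ipmi password> power status
# confirm the values ironic has registered for the same node
ironic node-show <node uuid> | grep -i -E 'ipmi|power_state|provision_state'
# list the MAC addresses registered against the node
ironic node-port-list <node uuid>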

The best way to debug introspection failures is to open the KVM console on the server and check for issues. The screenshot below shows a warning caused by an error in the instack configuration file.

A successful introspection is shown below:

·If the system drops you to a shell prompt, the dump in /run/initramfs/sosreport.txt provides some insight as well.

·dnsmasq is the DHCP process that PXE uses for discovery. Within the configured provisioning subnet there should be only one DHCP server, namely this dnsmasq process running on the Undercloud node. Any overlap will cause discovery failures.

Use the method below only to clean up a full introspection run. It was tested with the current release, and there could be changes in future releases from Red Hat and/or the OpenStack community. Exercise caution before attempting it.

for i in $(ironic node-list | awk ' /power/ { print $2 } ' ); do
    ironic node-set-power-state $i off
    ironic node-delete $i
done
sleep 30
sudo rm /var/lib/ironic-discoverd/discoverd.sqlite    # must be deleted as root
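After the cleanup, the nodes can be registered and introspected again; a minimal sketch, assuming the same node definition file (instack.json) used earlier in this document:

openstack baremetal import --json instack.json
openstack baremetal configure boot
openstack baremetal introspection bulk start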

Debugging Overcloud failures can sometimes be a daunting task. The issues could be as simple as passing incorrect parameters to the Overcloud deploy command, while others could be bugs. Here is an attempt to narrow down the problems; it is difficult to cover all failure scenarios, so only the ones encountered on this configuration are mentioned. Start debugging from here and then move on to the Red Hat and OpenStack documentation.

·Check the pre-defined flavors and verify that they match correctly. Incorrect flavors and/or an incorrect number of nodes passed to the Overcloud deploy command may produce an insufficient-number-of-nodes error. Run instack-ironic-deployment --show-profile to confirm.

·Make sure that an NTP server is configured and check the drift with ntpdate -d <ntp server>; it should preferably be less than 20 ms for the Ceph monitors.

·Run the Overcloud deploy command in debug mode to capture errors.

·The 300-minute timeout provided in the Appendix for the Overcloud deployment command should suffice, but for large deployments it may have to be increased.

·Run journalctl as above, and check dmesg and /var/log/messages to reveal any failures related to partitioning and/or networking.

·Per the ceph.yaml included in this document, and because of bug 1297251, the Ceph journal partitions are pre-created with the wipe_disk.yaml file. Validate this with the /root/wipe_disk.txt file and by running cat /proc/partitions. Only the journal partitions are pre-created; the OSD partitions are created by RHEL-OSP Director.

·Checking the partitions in /proc/partitions and the existence of /var/log/ceph/*, /var/lib/ceph* and /etc/ceph/keyring, among other files, reveals at what stage the deployment failed.

·The monitors must be set up before the Ceph OSDs are created. The existence of /etc/ceph/* on the controller nodes, followed by the same on the storage nodes, reveals whether the monitor setup was successful.

·Run ceph -s to check the cluster health and observe how many OSDs exist in total, how many are up, and so on.

·Run ceph osd tree to reveal issues with any individual OSDs.

·If you detect clock skew issues on the monitors, check the ntp daemon, sync up the time on the monitors running on the controller nodes, and restart the monitors with /etc/init.d/ceph restart mon. A combined example of these checks follows this list.
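For convenience, the Ceph checks above can be run as a short session on any controller node using the standard Ceph CLI:

ceph -s              # overall health, monitor quorum and OSD up/in counts
ceph osd tree        # state of each individual OSD in the CRUSH hierarchy
ceph health detail   # expands any warning, including clock skew on monitors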

The following sequence may be followed to debug heat stack create/update issues.

1.heat stack-list

2.heat resource-list overcloud | grep -vi complete

3.heat resource-list -n5 overcloud | grep -vi complete

4.heat resource-show overcloud Controller

5.heat deployment-show <deployment id obtained above>

6.In the current version it has been observed that both introspection and Overcloud deploy happen in batches of ten. While the first 10 nodes are being deployed, the remaining nodes could be in a spawning or wait-call-back state. If the status has not changed for a while, it may be best to log in to the KVM console. Make sure to map the Overcloud host name to the UCS service profile name for this. After a successful installation, /etc/neutron/plugin.ini is populated by the UCSM plugin, and the mappings between UCS service profile names and OpenStack host names can be extracted from it. However, in case of heat stack failures the UCSM plugin might not have completed its configuration, and you may have to map them manually in order to log in to the correct nodes.

From the IPMI or MAC address you can find out which node it is and log in to the corresponding KVM console of the Cisco UCS server.
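A minimal sketch of that lookup from the Undercloud node, using the node UUID from ironic node-list; the plugin.ini grep applies on a controller node and only once the UCSM plugin has written its mappings:

# IPMI address and registered MAC addresses for a given node
ironic node-show <node uuid> | grep -i ipmi_address
ironic node-port-list <node uuid>
# after a successful deployment, on a controller node
sudo grep -i -A 3 ucsm /etc/neutron/plugin.ini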

The following is a list of bugs that were encountered while working on this CVD. They have either been fixed and rolled out as errata updates by Red Hat, or workarounds have been developed and documented in this document. This list is provided for reference purposes only.

This paper is a joint contribution from Cisco Systems Inc., Red Hat Inc., and Intel Corporation. The solution was built by combining the technologies, expertise, contributions to the OpenStack community, and field experience of these companies, and it is intended to give end users a rich experience in both the installation and the day-to-day operational aspects of OpenStack.

Template files are sensitive to whitespace and tabs. Please copy them as-is into a text file, rename it to .yaml, and then validate it in an online yaml parser such as http://yaml-online-parser.appspot.com/
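As an offline alternative to the online parser, a quick syntax check can also be run on the Undercloud node, assuming the python yaml module is available there (it typically is); the file name below is illustrative:

python -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1]))' network-environment.yaml && echo 'yaml OK'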