Director Installation and Usage

An end-to-end scenario on using Red Hat OpenStack Platform director to create an OpenStack cloud

OpenStackDocumentationTeam

This guide contains information on how to install Red Hat OpenStack Platform 14 in an enterprise environment using the Red Hat OpenStack Platform director. This includes installing the director, planning your environment, and creating an OpenStack environment with the director.

Chapter 1. Introduction

The Red Hat OpenStack Platform director is a toolset for installing and managing a complete OpenStack environment. Director is based primarily on the OpenStack project TripleO, which is an abbreviation of "OpenStack-On-OpenStack". This project consists of OpenStack components that you can use to install a fully operational OpenStack environment. This includes OpenStack components that provision and control bare metal systems to use as OpenStack nodes. This provides a simple method for installing a complete Red Hat OpenStack Platform environment that is both lean and robust.

The Red Hat OpenStack Platform director uses two main concepts: an undercloud and an overcloud. The undercloud installs and configures the overcloud. The next few sections outline the concept of each.

1.1. Undercloud

The undercloud is the main management node that contains the OpenStack Platform director toolset. It is a single-system OpenStack installation that includes components for provisioning and managing the OpenStack nodes that form your OpenStack environment (the overcloud). The components that form the undercloud have multiple functions:

Environment Planning

The undercloud includes planning functions for users to create and assign certain node roles. The undercloud includes a default set of nodes: Compute, Controller, and various storage roles. You can also design custom roles. Additionally, you can select which OpenStack Platform services to include on each node role, which provides a method to model new node types or isolate certain components on their own host.

Bare Metal System Control

The undercloud uses the out-of-band management interface, usually Intelligent Platform Management Interface (IPMI), of each node for power management control and a PXE-based service to discover hardware attributes and install OpenStack on each node. You can use this feature to provision bare metal systems as OpenStack nodes. See Appendix B, Power Management Drivers for a full list of power management drivers.

Orchestration

The undercloud contains a set of YAML templates that represent a set of plans for your environment. The undercloud imports these plans and follows their instructions to create the resulting OpenStack environment. The plans also include hooks that you can use to incorporate your own customizations at certain points in the environment creation process.

Undercloud Components

The undercloud uses OpenStack components as its base tool set. Each component operates within a separate container on the undercloud.

1.2. Overcloud

The overcloud is the resulting Red Hat OpenStack Platform environment that the undercloud creates. The overcloud consists of multiple nodes with different roles that you define based on the OpenStack Platform environment that you want to create. The undercloud includes a default set of overcloud node roles:

Controller

Controller nodes provide administration, networking, and high availability for the OpenStack environment. A recommended OpenStack environment contains three Controller nodes together in a high availability cluster.

A default Controller node contains the following components:

OpenStack Dashboard (horizon)

OpenStack Identity (keystone)

OpenStack Compute (nova) API

OpenStack Networking (neutron)

OpenStack Image Service (glance)

OpenStack Block Storage (cinder)

OpenStack Object Storage (swift)

OpenStack Orchestration (heat)

OpenStack Telemetry Metrics (gnocchi)

OpenStack Telemetry Alarming (aodh)

OpenStack Telemetry Event Storage (panko)

OpenStack Clustering (sahara)

OpenStack Shared File Systems (manila)

OpenStack Bare Metal (ironic)

MariaDB

Open vSwitch

Pacemaker and Galera for high availability services.

Compute

Compute nodes provide computing resources for the OpenStack environment. You can add more Compute nodes to scale out your environment over time. A default Compute node contains the following components:

OpenStack Compute (nova)

KVM/QEMU

OpenStack Telemetry (ceilometer) agent

Open vSwitch

Storage

Storage nodes provide storage for the OpenStack environment. The following list contains information about the various types of storage nodes in Red Hat OpenStack Platform:

Ceph Storage nodes - Used to form storage clusters. Each node contains a Ceph Object Storage Daemon (OSD). Additionally, the director installs Ceph Monitor onto the Controller nodes in situations where you deploy Ceph Storage nodes as part of your environment.

Block storage (cinder) - Used as external block storage for highly available Controller nodes. This node contains the following components:

1.3. High Availability

The Red Hat OpenStack Platform director uses a Controller node cluster to provide highly available services to your OpenStack Platform environment. For each service, the director installs the same components on all Controller nodes and manages the Controller nodes together as a single service. This type of cluster configuration provides a fallback in the event of operational failures on a single Controller node. This provides OpenStack users with a certain degree of continuous operation.

The OpenStack Platform director uses some key pieces of software to manage components on the Controller node:

Pacemaker - Pacemaker is a cluster resource manager. Pacemaker manages and monitors the availability of OpenStack components across all nodes in the cluster.

In version 13 and later, you can use the director to deploy High Availability for Compute Instances (Instance HA). With Instance HA, you can automate evacuating instances from a Compute node when that node fails.

1.4. Containerization

Each OpenStack Platform service on the undercloud and overcloud runs inside an individual Linux container on its respective node. This containerization provides a method to isolate services, maintain the environment, and upgrade OpenStack Platform. Red Hat supports several methods of obtaining container images for your overcloud.

1.5. Ceph Storage

It is common for large organizations using OpenStack to serve thousands of clients or more. Each OpenStack client is likely to have their own unique needs when consuming block storage resources. Deploying glance (images), cinder (volumes) and/or nova (Compute) on a single node can become impossible to manage in large deployments with thousands of clients. Scaling OpenStack externally resolves this challenge.

However, there is also a practical requirement to virtualize the storage layer with a solution like Red Hat Ceph Storage so that you can scale the Red Hat OpenStack Platform storage layer from tens of terabytes to petabytes (or even exabytes) of storage. Red Hat Ceph Storage provides this storage virtualization layer with high availability and high performance while running on commodity hardware. While virtualization might seem like it comes with a performance penalty, Ceph stripes block device images as objects across the cluster, meaning that large Ceph Block Device images have better performance than a standalone disk. Ceph Block devices also support caching, copy-on-write cloning, and copy-on-read cloning for enhanced performance.

Part I. Director Installation and Configuration

Chapter 2. Planning your undercloud

2.1. Containerized undercloud

The undercloud is the node that controls the configuration, installation, and management of your final OpenStack Platform environment, which is called the overcloud. The undercloud itself uses OpenStack Platform components in the form of containers to create a toolset called OpenStack Platform director. This means the undercloud pulls a set of container images from a registry source, generates configuration for the containers, and runs each OpenStack Platform service as a container. As a result, the undercloud provides a containerized set of services you can use as a toolset for creating and managing your overcloud.

Because both the undercloud and overcloud use containers, both use the same architecture to pull, configure, and run containers. This architecture is based on the OpenStack Orchestration service (heat) for provisioning nodes and uses Ansible for configuring services and containers. It is useful to have some familiarity with Heat and Ansible to help you troubleshoot issues you might encounter.

2.2. Preparing your undercloud networking

The undercloud requires access to two main networks:

The Provisioning or Control Plane network, which is the network the director uses to provision your nodes and access them over SSH when executing Ansible configuration. This network also enables SSH access from the undercloud to overcloud nodes. The undercloud contains DHCP services for introspection and provisioning other nodes on this network, which means no other DHCP services should exist on this network. The director configures the interface for this network.

The External network that enables access to OpenStack Platform repositories, container image sources, and other servers such as DNS servers or NTP servers. Use this network for standard access to the undercloud from your workstation. You must manually configure an interface on the undercloud to access the external network.

The undercloud requires a minimum of 2 x 1 Gbps Network Interface Cards: one for the Provisioning or Control Plane network and one for the External network. However, it is recommended to use a 10 Gbps interface for Provisioning network traffic, especially if provisioning a large number of nodes in your overcloud environment.

Note the following:

Ensure the Provisioning / Control Plane NIC is not the same NIC you use to access the director machine from your workstation. The director installation creates a bridge using the Provisioning NIC, which drops any remote connections. Use the External NIC for remote connections to the director system.

The Provisioning network requires an IP range that fits your environment size. Use the following guidelines to determine the total number of IP addresses to include in this range:

Include at least one temporary IP address for each node connected to the Provisioning network during introspection.

Include at least one permanent IP address for each node connected to the Provisioning network during deployment.

Include an extra IP address for the virtual IP of the overcloud high availability cluster on the Provisioning network.

Include additional IP addresses within this range for scaling the environment.

2.3. Determining environment scale

Prior to installing the undercloud, it is recommended to determine the scale of your environment. Include the following factors when planning your environment:

How many nodes in your overcloud? The undercloud manages each node within an overcloud. Provisioning overcloud nodes consumes resources on the undercloud. You must provide your undercloud with enough resources to adequately provision and control overcloud nodes.

How many simultaneous operations do you want the undercloud to perform? Most OpenStack services on the undercloud use a set of workers. Each worker performs an operation specific to that service. Multiple workers provide simultaneous operations. The default number of workers on the undercloud is determined by halving the undercloud’s total CPU thread count [1]. For example, if your undercloud has a CPU with 16 threads, then the director services spawn 8 workers by default. The director also uses a set of minimum and maximum caps by default:

Service                          Minimum    Maximum
OpenStack Orchestration (heat)   4          24
All other services               2          12

The undercloud has the following minimum CPU and memory requirements:

An 8-thread 64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions. This provides 4 workers for each undercloud service.

A minimum of 24 GB of RAM.

The ceph-ansible playbook consumes 1 GB resident set size (RSS) per 10 hosts deployed by the undercloud. If the deployed overcloud will use an existing Ceph cluster, or if it will deploy a new Ceph cluster, then provision undercloud RAM accordingly.

To use a larger number of workers, increase your undercloud’s vCPUs and memory using the following recommendations:

Minimum: Use 1.5 GB of memory per thread. For example, a machine with 48 threads should have 72 GB of RAM. This provides the minimum coverage for 24 Heat workers and 12 workers for other services.

Recommended: Use 3 GB of memory per thread. For example, a machine with 48 threads should have 144 GB of RAM. This provides the recommended coverage for 24 Heat workers and 12 workers for other services.

[1]
In this instance, thread count refers to the number of CPU cores multiplied by the hyper-threading value.

2.4. Undercloud disk sizing

The recommended minimum undercloud disk size is 100 GB of available disk space on the root disk.

Edit the /etc/hosts file to include an entry for the system’s hostname. The IP address in /etc/hosts must match the address that you plan to use for your undercloud public API. For example, if the system is named manager.example.com and uses 10.0.0.1 for its IP address, then /etc/hosts requires an entry like:

10.0.0.1 manager.example.com manager

Register your system either with the Red Hat Content Delivery Network or with a Red Hat Satellite. For example, run the following command to register the system to the Content Delivery Network. Enter your Customer Portal user name and password when prompted:
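A registration command of the following form is a sketch of this step; subscription-manager prompts for your Customer Portal credentials, and any additional options depend on your environment:

[stack@director ~]$ sudo subscription-manager register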

Install the command line tools for director installation and configuration:

[stack@director ~]$ sudo yum install -y python-tripleoclient

3.2. Installing ceph-ansible

The following procedure installs the ceph-ansible package if you plan to create an overcloud with Ceph Storage nodes. If you do not plan to create Ceph Storage nodes in your overcloud, you do not need this package.
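As a minimal sketch of this step, assuming the appropriate Ceph Storage Tools repository is already enabled on the undercloud, the installation follows the same yum pattern used for the other director packages:

[stack@director ~]$ sudo yum install -y ceph-ansible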

3.3. Preparing container images

The undercloud configuration requires initial registry configuration to determine where to obtain images and how to store them. Complete the following steps to generate and customize an environment file for preparing your container images.
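The file is typically generated with the prepare default command from the director client tools. The following form is a sketch that uses the two options described below; adjust it to suit your environment:

[stack@director ~]$ openstack tripleo container image prepare default \
  --local-push-destination \
  --output-env-file containers-prepare-parameter.yaml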

--local-push-destination sets the registry on the undercloud as the location for container images. This means the director pulls the necessary images from the Red Hat Container Catalog and pushes them to the registry on the undercloud. The director uses this registry as the container image source. To pull directly from the Red Hat Container Catalog, omit this option.

--output-env-file is an environment file name. The contents of this file include the parameters for preparing your container images. In this case, the name of the file is containers-prepare-parameter.yaml.

Note

You can also use the same containers-prepare-parameter.yaml file to define a container image source for both the undercloud and the overcloud.

Edit the containers-prepare-parameter.yaml and make the modifications to suit your requirements.

3.4. Container image preparation parameters

The default file for preparing your containers (containers-prepare-parameter.yaml) contains the ContainerImagePrepare Heat parameter. This parameter defines a list of strategies for preparing a set of images:
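For reference, a single-strategy entry might resemble the following sketch; the registry namespace and image prefix shown are assumptions, so substitute values that match your container image source:

parameter_defaults:
  ContainerImagePrepare:
  - push_destination: true
    set:
      namespace: registry.access.redhat.com/rhosp14
      name_prefix: openstack-
      name_suffix: ''
      tag: latest
    tag_from_label: '{version}-{release}'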

Each strategy accepts a set of sub-parameters that define which images to use and what to do with them. The following table contains information about the sub-parameters you can use with each ContainerImagePrepare strategy:

Parameter

Description

excludes

List of image name substrings to exclude from a strategy.

includes

List of image name substrings to include in a strategy. At least one image name must match an existing image. All excludes are ignored if includes is specified.

modify_append_tag

String to append to the tag for the destination image. For example, if you pull an image with the tag 14.0-89 and set the modify_append_tag to -hotfix, the director tags the final image as 14.0-89-hotfix.

modify_only_with_labels

A dictionary of image labels that filter the images to modify. If an image matches the labels defined, the director includes the image in the modification process.

modify_role

String of Ansible role names to run during upload but before pushing the image to the destination registry.

modify_vars

Dictionary of variables to pass to modify_role.

push_destination

The namespace of the registry to push images during the upload process. When you specify a namespace for this parameter, all image parameters use this namespace too. If set to true, the push_destination is set to the undercloud registry namespace. It is not recommended to set this parameter to false in production environments.

pull_source

The source registry from where to pull the original container images.

set

A dictionary of key: value definitions that define where to obtain the initial images.

tag_from_label

Defines the label pattern to tag the resulting images. Usually set to {version}-{release}.

The set parameter accepts a set of key: value definitions. The following table contains information about the keys:

Key

Description

ceph_image

The name of the Ceph Storage container image.

ceph_namespace

The namespace of the Ceph Storage container image.

ceph_tag

The tag of the Ceph Storage container image.

name_prefix

A prefix for each OpenStack service image.

name_suffix

A suffix for each OpenStack service image.

namespace

The namespace for each OpenStack service image.

neutron_driver

The driver to use to determine which OpenStack Networking (neutron) container to use. Use a null value to set to the standard neutron-server container. Set to ovn to use OVN-based containers. Set to odl to use OpenDaylight-based containers.

tag

The tag that the director uses to identify the images to pull from the source registry. You usually keep this key set to latest.

Note

The set section might contain several parameters that begin with openshift_. These parameters are for various scenarios involving OpenShift-on-OpenStack.

3.5. Layering image preparation entries

The value of the ContainerImagePrepare parameter is a YAML list. This means you can specify multiple entries. The following example demonstrates two entries where the director uses the latest version of all images except for the nova-api image, which uses the version tagged with 14.0-44:
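A sketch of such a configuration follows. The registry namespace and image prefix are assumptions for illustration; substitute values that match your registry source:

parameter_defaults:
  ContainerImagePrepare:
  - tag_from_label: '{version}-{release}'
    push_destination: true
    excludes:
    - nova-api
    set:
      namespace: registry.access.redhat.com/rhosp14
      name_prefix: openstack-
      name_suffix: ''
      tag: latest
  - push_destination: true
    includes:
    - nova-api
    set:
      namespace: registry.access.redhat.com/rhosp14
      name_prefix: openstack-
      name_suffix: ''
      tag: 14.0-44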

The includes and excludes entries control image filtering for each entry. The images that match the includes strategy take precedence over excludes matches. The image name must include the includes or excludes value to be considered a match.

3.6. Modifying images during preparation

It is possible to modify images during image preparation, then immediately deploy with modified images. Scenarios for modifying images include:

As part of a continuous integration pipeline where images are modified with the changes being tested before deployment.

As part of a development workflow where local changes need to be deployed for testing and development.

When changes need to be deployed but are not available through an image build pipeline. For example, adding proprietary add-ons or emergency fixes.

To modify an image during preparation, invoke an Ansible role on each image that you want to modify. The role takes a source image, makes the requested changes, and tags the result. The prepare command can push the image to the destination registry and set the Heat parameters to refer to the modified image.

The Ansible role tripleo-modify-image conforms with the required role interface and provides the behavior necessary for the modify use cases. Modification is controlled using modify-specific keys in the ContainerImagePrepare parameter:

modify_role specifies the Ansible role to invoke for each image to modify.

modify_append_tag appends a string to the end of the source image tag. This makes it obvious that the resulting image has been modified. Use this parameter to skip modification if the push_destination registry already contains the modified image. It is recommended to change modify_append_tag whenever you modify the image.

modify_vars is a dictionary of Ansible variables to pass to the role.

To select a use-case that the tripleo-modify-image role handles, set the tasks_from variable to the required file in that role.

While developing and testing the ContainerImagePrepare entries that modify images, it is recommended to run the image prepare command without any additional options to confirm the image is modified as expected:
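For example, a verification run might invoke the prepare command directly against your environment file. The following is a sketch; the file path is an assumption based on the file name used earlier in this chapter:

[stack@director ~]$ sudo openstack tripleo container image prepare \
  -e ~/containers-prepare-parameter.yaml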

3.8. Installing additional RPM files to container images

You can install a directory of RPM files in your container images. This is useful for installing hotfixes, local package builds, or any package not available through a package repository. For example, the following ContainerImagePrepare entry installs some hotfix packages only on the nova-compute image:
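A sketch of such an entry follows. The tasks file name (rpm_install.yml), the rpms_path location, and the registry namespace are assumptions for illustration; confirm them against the tripleo-modify-image role for your release:

parameter_defaults:
  ContainerImagePrepare:
  - push_destination: true
    includes:
    - nova-compute
    modify_role: tripleo-modify-image
    modify_append_tag: '-hotfix'
    modify_vars:
      tasks_from: rpm_install.yml
      rpms_path: /home/stack/nova-hotfix-pkgs
    set:
      namespace: registry.access.redhat.com/rhosp14
      tag: latest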

3.9. Modifying container images with a custom Dockerfile

For maximum flexibility, you can specify a directory containing a Dockerfile to make the required changes. When you invoke the tripleo-modify-image role, the role generates a Dockerfile.modified file that changes the FROM directive and adds extra LABEL directives. The following example runs the custom Dockerfile on the nova-compute image:
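A sketch of such an entry follows. The tasks file name (modify_image.yml) and the modify_dir_path location are assumptions for illustration; the directory is expected to contain the Dockerfile with your changes:

parameter_defaults:
  ContainerImagePrepare:
  - push_destination: true
    includes:
    - nova-compute
    modify_role: tripleo-modify-image
    modify_append_tag: '-hotfix'
    modify_vars:
      tasks_from: modify_image.yml
      modify_dir_path: /home/stack/nova-custom
    set:
      namespace: registry.access.redhat.com/rhosp14
      tag: latest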

3.10. Preparing a Satellite server for container images

Red Hat Satellite 6 offers registry synchronization capabilities. This provides a method to pull multiple images into a Satellite server and manage them as part of an application life cycle. The Satellite server also acts as a registry for other container-enabled systems to use. For more information about managing container images, see "Managing Container Images" in the Red Hat Satellite 6 Content Management Guide.

The examples in this procedure use the hammer command line tool for Red Hat Satellite 6 and an example organization called ACME. Substitute this organization for your own Satellite 6 organization.

Depending on your configuration, hammer might ask for your Satellite server username and password. You can configure hammer to log in automatically using a configuration file. For more information, see the "Authentication" section in the Hammer CLI Guide.

If your Satellite 6 server uses content views, create a new content view version to incorporate the images and promote it along environments in your application life cycle. This largely depends on how you structure your application lifecycle. For example, if you have an environment called production in your lifecycle and you want the container images available in that environment, create a content view that includes the container images and promote that content view to the production environment. For more information, see "Managing Container Images with Content Views".

--output-env-file is an environment file name. The contents of this file will include the parameters for preparing your container images for the undercloud. In this case, the name of the file is containers-prepare-parameter.yaml.

Edit the containers-prepare-parameter.yaml file and modify the following parameters:

namespace - The URL and port of the registry on the Satellite server. The default registry port on Red Hat Satellite is 5000.

name_prefix - The prefix is based on a Satellite 6 convention. This differs depending on whether you use content views:

If you use content views, the structure is [org]-[environment]-[content view]-[product]-. For example: acme-production-myosp14-osp14_containers-.

If you do not use content views, the structure is [org]-[product]-. For example: acme-osp14_containers-.

ceph_namespace, ceph_image, ceph_tag - If using Ceph Storage, include the additional parameters to define the Ceph Storage container image location. Note that ceph_image now includes a Satellite-specific prefix. This prefix is the same value as the name_prefix option.

Use this environment file when creating both your undercloud and overcloud.

Chapter 4. Installing director

4.1. Configuring the director

The director installation process requires certain settings in the undercloud.conf configuration file, which the director reads from the stack user’s home directory. This procedure demonstrates how to use the default template as a foundation for your configuration.
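For example, assuming the sample file shipped with python-tripleoclient (the same sample referenced later in this chapter), copy it into the stack user's home directory before editing:

[stack@director ~]$ cp /usr/share/python-tripleoclient/undercloud.conf.sample ~/undercloud.conf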

Edit the undercloud.conf file. This file contains settings to configure your undercloud. If you omit or comment out a parameter, the undercloud installation uses the default value.

4.2. Director configuration parameters

The following list contains information about parameters for configuring the undercloud.conf file. Keep all parameters within their relevant sections to avoid errors.

Defaults

The following parameters are defined in the [DEFAULT] section of the undercloud.conf file:

additional_architectures

A list of additional (kernel) architectures that an overcloud supports. Currently the overcloud supports ppc64le architecture.

Note

When enabling support for ppc64le, you must also set ipxe_enabled to False.

certificate_generation_ca

The certmonger nickname of the CA that signs the requested certificate. Use this option only if you have set the generate_service_certificate parameter. If you select the local CA, certmonger extracts the local CA certificate to /etc/pki/ca-trust/source/anchors/cm-local-ca.pem and adds the certificate to the trust chain.

clean_nodes

Defines whether to wipe the hard drive between deployments and after introspection.

cleanup

Cleanup temporary files. Set this to False to leave the temporary files used during deployment in place after the command is run. This is useful for debugging the generated files or if errors occur.

container_images_file

Heat environment file with container image information. This can either be:

Parameters for all required container images

Or the ContainerImagePrepare parameter to drive the required image preparation. Usually the file containing this parameter is named containers-prepare-parameter.yaml.

custom_env_files

Additional environment files to add to the undercloud installation.

deployment_user

The user installing the undercloud. Leave this parameter unset to use the current default user (stack).

discovery_default_driver

Sets the default driver for automatically enrolled nodes. Requires enable_node_discovery to be enabled, and you must include the driver in the enabled_hardware_types list.

docker_insecure_registries

A list of insecure registries for docker to use. Use this parameter if you want to pull images from another source, such as a private container registry. In most cases, docker has the certificates to pull container images from either the Red Hat Container Catalog or from your Satellite server if the undercloud is registered to Satellite.

Defines the core services to enable for director. Leave these parameters set to true.

enable_ui

Defines whether to install the director web UI. Use this parameter to perform overcloud planning and deployments through a graphical web interface. Note that the UI is only available with SSL/TLS enabled using either the undercloud_service_certificate or generate_service_certificate.

enable_node_discovery

Automatically enroll any unknown node that PXE-boots the introspection ramdisk. New nodes use the fake_pxe driver as a default but you can set discovery_default_driver to override. You can also use introspection rules to specify driver information for newly enrolled nodes.

enable_novajoin

Defines whether to install the novajoin metadata service in the Undercloud.

enable_routed_networks

Defines whether to enable support for routed control plane networks.

enable_swift_encryption

Defines whether to enable Swift encryption at-rest.

enable_telemetry

Defines whether to install OpenStack Telemetry services (gnocchi, aodh, panko) in the undercloud. Set enable_telemetry parameter to true if you want to install and configure telemetry services automatically. The default value is false, which disables telemetry on the undercloud. This parameter is required if using other products that consume metrics data, such as Red Hat CloudForms.

enabled_hardware_types

A list of hardware types to enable for the undercloud.

generate_service_certificate

Defines whether to generate an SSL/TLS certificate during the undercloud installation, which is used for the undercloud_service_certificate parameter. The undercloud installation saves the resulting certificate to /etc/pki/tls/certs/undercloud-[undercloud_public_vip].pem. The CA defined in the certificate_generation_ca parameter signs this certificate.

heat_container_image

URL for the heat container image to use. Leave unset.

heat_native

Use native heat templates. Leave as true.

hieradata_override

Path to hieradata override file that configures Puppet hieradata on the director, providing custom configuration to services beyond the undercloud.conf parameters. If set, the undercloud installation copies this file to the /etc/puppet/hieradata directory and sets it as the first file in the hierarchy. See Section 4.5, “Configuring hieradata on the undercloud” for details on using this feature.

inspection_extras

Defines whether to enable extra hardware collection during the inspection process. This parameter requires python-hardware or python-hardware-detect package on the introspection image.

inspection_interface

The bridge the director uses for node introspection. This is a custom bridge that the director configuration creates. The LOCAL_INTERFACE attaches to this bridge. Leave this as the default br-ctlplane.

inspection_runbench

Runs a set of benchmarks during node introspection. Set this parameter to true to enable the benchmarks. This option is necessary if you intend to perform benchmark analysis when inspecting the hardware of registered nodes.

ipa_otp

Defines the one-time password to register the undercloud node to an IPA server. This is required when enable_novajoin is enabled.

ipxe_enabled

Defines whether to use iPXE or standard PXE. The default is true, which enables iPXE. Set to false to use standard PXE.

local_interface

The chosen interface for the director’s Provisioning NIC. This is also the device the director uses for DHCP and PXE boot services. Change this value to your chosen device. To see which device is connected, use the ip addr command. For example, this is the result of an ip addr command:

In this example, the External NIC uses eth0 and the Provisioning NIC uses eth1, which is currently not configured. In this case, set the local_interface to eth1. The configuration script attaches this interface to a custom bridge defined with the inspection_interface parameter.

local_ip

The IP address defined for the director’s Provisioning NIC. This is also the IP address that the director uses for DHCP and PXE boot services. Leave this value as the default 192.168.24.1/24 unless you use a different subnet for the Provisioning network, for example, if it conflicts with an existing IP address or subnet in your environment.

local_mtu

MTU to use for the local_interface.

local_subnet

The local subnet to use for PXE boot and DHCP interfaces. The local_ip address should reside in this subnet. The default is ctlplane-subnet.

net_config_override

Path to network configuration override template. If you set this parameter, the undercloud uses a JSON format template to configure the networking with os-net-config. The undercloud ignores the network parameters set in undercloud.conf. See /usr/share/python-tripleoclient/undercloud.conf.sample for an example.

When configuring the overcloud, the CloudDomain parameter must be set to a matching value. Set this parameter in an environment file when you configure your overcloud.

roles_file

The roles file to override for undercloud installation. It is highly recommended to leave unset so that the director installation uses the default roles file.

scheduler_max_attempts

Maximum number of times the scheduler attempts to deploy an instance. This value must be greater than or equal to the number of bare metal nodes that you expect to deploy at once, to work around a potential race condition when scheduling.

service_principal

The Kerberos principal for the service using the certificate. Use this parameter only if your CA requires a Kerberos principal, such as in FreeIPA.

subnets

List of routed network subnets for provisioning and introspection. See Subnets for more information. The default value includes only the ctlplane-subnet subnet.

templates

Heat templates file to override.

undercloud_admin_host

The IP address defined for the director Admin API when using SSL/TLS. This is an IP address for administration endpoint access over SSL/TLS. The director configuration attaches the director’s IP address to its software bridge as a routed IP address, which uses the /32 netmask.

undercloud_debug

Sets the log level of undercloud services to DEBUG. Set this value to true to enable.

undercloud_enable_selinux

Enable or disable SELinux during the deployment. It is highly recommended to leave this value set to true unless you are debugging an issue.

undercloud_hostname

Defines the fully qualified host name for the undercloud. If set, the undercloud installation configures all system host name settings. If left unset, the undercloud uses the current host name, but the user must configure all system host name settings appropriately.

undercloud_log_file

The path to a log file to store the undercloud install/upgrade logs. By default, the log file is install-undercloud.log within the home directory. For example, /home/stack/install-undercloud.log.

undercloud_nameservers

A list of DNS nameservers to use for the undercloud hostname resolution.

undercloud_ntp_servers

A list of network time protocol servers to help synchronize the undercloud date and time.

undercloud_public_host

The IP address defined for the director Public API when using SSL/TLS. This is an IP address for accessing the director endpoints externally over SSL/TLS. The director configuration attaches this IP address to the director software bridge as a routed IP address, which uses the /32 netmask.

undercloud_service_certificate

The location and filename of the certificate for OpenStack SSL/TLS communication. Ideally, you obtain this certificate from a trusted certificate authority. Otherwise, generate your own self-signed certificate.

undercloud_update_packages

Defines whether to update packages during the undercloud installation.

Subnets

Each provisioning subnet is a named section in the undercloud.conf file. For example, to create a subnet called ctlplane-subnet, use the following sample in your undercloud.conf file:
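The following is a sketch that uses only the subnet parameters described in this section, with the default Provisioning network addressing; the DHCP range values are example assumptions:

[ctlplane-subnet]
cidr = 192.168.24.0/24
dhcp_start = 192.168.24.5
dhcp_end = 192.168.24.24
gateway = 192.168.24.1
masquerade = true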

You can specify as many provisioning networks as necessary to suit your environment.

gateway

The gateway for the overcloud instances. This is the undercloud host, which forwards traffic to the External network. Leave this as the default 192.168.24.1 unless you use a different IP address for the director or want to use an external gateway directly.

cidr

The network that the director uses to manage overcloud instances. This is the Provisioning network, which the undercloud neutron service manages. Leave this as the default 192.168.24.0/24 unless you use a different subnet for the Provisioning network.

masquerade

Defines whether to masquerade the network defined in the cidr for external access. This provides the Provisioning network with a degree of network address translation (NAT) so that the Provisioning network has external access through the director.

dhcp_start; dhcp_end

The start and end of the DHCP allocation range for overcloud nodes. Ensure this range contains enough IP addresses to allocate your nodes.

Modify the values of these parameters to suit your configuration. When complete, save the file.

4.3. Configuring the undercloud with environment files

You configure the main parameters for the undercloud through the undercloud.conf file. You can also configure Heat parameters specific to the undercloud installation. You accomplish this with an environment file containing your Heat parameters.

Procedure

Create an environment file at /home/stack/templates/custom-undercloud-params.yaml.

Edit this file and include your Heat parameters. The following example shows how to enable debugging for certain OpenStack Platform services:

parameter_defaults:
  Debug: True

Save this file when you have finished.

Edit your undercloud.conf file and scroll to the custom_env_files parameter. Edit the parameter to point to your environment file:
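For example, assuming the environment file created at the start of this procedure:

custom_env_files = /home/stack/templates/custom-undercloud-params.yaml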

Set the hieradata_override parameter to the path of the hieradata file in your undercloud.conf:

hieradata_override = /home/stack/hieradata.yaml

4.6. Installing the director

Complete the following procedure to install the director and perform some basic post-installation tasks.

Procedure

Run the following command to install the director on the undercloud:

[stack@director ~]$ openstack undercloud install

This launches the director’s configuration script. The director installs additional packages and configures its services according to the settings in the undercloud.conf file. This script takes several minutes to complete.

The script generates two files when complete:

undercloud-passwords.conf - A list of all passwords for the director’s services.

stackrc - A set of initialization variables to help you access the director’s command line tools.

The script also starts all OpenStack Platform service containers automatically. Check the enabled containers using the following command:

[stack@director ~]$ sudo docker ps

The script adds the stack user to the docker group to give the stack user access to container management commands. Refresh the stack user’s permissions with the following command:

[stack@director ~]$ exec su -l stack

The command prompts you to log in again. Enter the stack user’s password.

To initialize the stack user to use the command line tools, run the following command:

[stack@director ~]$ source ~/stackrc

The prompt now indicates that OpenStack commands authenticate and execute against the undercloud:

(undercloud) [stack@director ~]$

The director installation is complete. You can now use the director’s command line tools.

4.7. Obtaining images for overcloud nodes

The director requires several disk images for provisioning overcloud nodes. This includes:

An introspection kernel and ramdisk - Used for bare metal system introspection over PXE boot.

A deployment kernel and ramdisk - Used for system provisioning and deployment.

An overcloud kernel, ramdisk, and full image - A base overcloud system that is written to the node’s hard disk.

The following procedure shows how to obtain and install these images.

4.7.1. Single CPU architecture overclouds

These images and procedures are necessary for deployment of the overcloud with the default CPU architecture, x86-64.

Procedure

Source the stackrc file to enable the director’s command line tools:

[stack@director ~]$ source ~/stackrc

Install the rhosp-director-images and rhosp-director-images-ipa packages:
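A sketch of the installation command, following the same yum-based pattern used earlier in this guide:

(undercloud) [stack@director ~]$ sudo yum install rhosp-director-images rhosp-director-images-ipa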

The default overcloud-full.qcow2 image is a flat partition image. However, you can also import and use whole disk images. See Appendix C, Whole Disk Images for more information.

4.8. Setting a nameserver for the control plane

If you intend for the overcloud to resolve external hostnames, such as cdn.redhat.com, it is recommended to set a nameserver on the overcloud nodes. For a standard overcloud without network isolation, the nameserver is defined using the undercloud’s control plane subnet. Complete the following procedure to define nameservers for the environment.
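For example, a sketch of adding a DNS server to the control plane subnet; the nameserver address is a placeholder, and the subnet name assumes the default ctlplane-subnet described earlier in this guide:

(undercloud) [stack@director ~]$ openstack subnet set --dns-nameserver <nameserver-ip> ctlplane-subnet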

If you aim to isolate service traffic onto separate networks, the overcloud nodes use the DnsServers parameter in your network environment files.

4.9. Updating the undercloud configuration

In the future, you might have to change the undercloud configuration to suit new requirements. To make changes to your undercloud configuration after installation, edit the relevant configuration files and re-run the openstack undercloud install command.

Procedure

Modify the undercloud configuration files. For example, edit the undercloud.conf file and add the idrac hardware type to the list of enabled hardware types:

enabled_hardware_types = ipmi,redfish,idrac

Run the openstack undercloud install command to refresh your undercloud with the new changes:

[stack@director ~]$ openstack undercloud install

Wait until the command runs to completion.

Initialize the stack user to use the command line tools:

[stack@director ~]$ source ~/stackrc

The prompt now indicates OpenStack commands authenticate and execute against the undercloud:

(undercloud) [stack@director ~]$

Verify the director has applied the new configuration. For this example, check the list of enabled hardware types:
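For example, the enabled drivers can be listed with the Bare Metal service; the following command is a sketch of that check:

(undercloud) [stack@director ~]$ openstack baremetal driver list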

4.10. Next Steps

This completes the director configuration and installation. The next chapter explores basic overcloud configuration, including registering nodes, inspecting them, and then tagging them into various node roles.

Part II. Basic Overcloud Deployment

Chapter 5. Planning your overcloud

The following section contains some guidelines for planning various aspects of your Red Hat OpenStack Platform environment. This includes defining node roles, planning your network topology, and storage.

5.1. Node roles

The director includes multiple default node types for building your overcloud. These node types are:

Controller

Provides administration, networking, and high availability for the environment. The recommended configuration is three Controller nodes in a high availability cluster. Environments with one Controller node can only be used for testing purposes, not for production. Environments with two Controller nodes or more than three Controller nodes are not supported.

Compute

A physical server that acts as a hypervisor and contains the processing capabilities required for running virtual machines in the environment. A basic Red Hat OpenStack Platform environment requires at least one Compute node.

Ceph Storage

A host that provides Red Hat Ceph Storage. Additional Ceph Storage hosts scale into a cluster. This deployment role is optional.

Swift Storage

A host that provides external object storage using the OpenStack Object Storage (swift) service. This deployment role is optional.

The following table contains some examples of different overclouds and defines the node types for each scenario.

Table 5.1. Node Deployment Roles for Scenarios

Scenario                                            Controller   Compute   Ceph Storage   Swift Storage   Total
Small overcloud                                     3            1         -              -               4
Medium overcloud                                    3            3         -              -               6
Medium overcloud with additional Object storage     3            3         -              3               9
Medium overcloud with Ceph Storage cluster          3            3         3              -               9

In addition, consider whether to split individual services into custom roles. For more information about the composable roles architecture, see "Composable Services and Custom Roles" in the Advanced Overcloud Customization guide.

5.2. Overcloud networks

It is important to plan your environment’s networking topology and subnets so that roles and services can map to and communicate with each other correctly. Red Hat OpenStack Platform uses the OpenStack Networking (neutron) service, which operates autonomously and manages software-based networks, static and floating IP addresses, and DHCP.

By default, the director configures nodes to use the Provisioning / Control Plane network for connectivity. However, it is possible to isolate network traffic into a series of composable networks, which you can customize and to which you can assign services.

In a typical Red Hat OpenStack Platform installation, the number of network types often exceeds the number of physical network links. In order to connect all the networks to the proper hosts, the overcloud uses VLAN tagging to deliver more than one network per interface. Most of the networks are isolated subnets but some networks require a Layer 3 gateway to provide routing for Internet access or infrastructure network connectivity. If using VLANs to isolate your network traffic types, use a switch that supports 802.1Q standards to provide tagged VLANs.

Note

It is recommended that you deploy a project network (tunneled with GRE or VXLAN) even if you intend to use a neutron VLAN mode (with tunneling disabled) at deployment time. This requires minor customization at deployment time and leaves the option available to use tunnel networks as utility networks or virtualization networks in the future. You still create Tenant networks using VLANs, but you can also create VXLAN tunnels for special-use networks without consuming tenant VLANs. It is possible to add VXLAN capability to a deployment with a Tenant VLAN, but it is not possible to add a Tenant VLAN to an existing overcloud without causing disruption.

The director also includes a set of templates to configure NICs with isolated composable networks. The following configurations are the default configurations:

Single NIC configuration - One NIC for the Provisioning network on the native VLAN and tagged VLANs that use subnets for the different overcloud network types.

Bonded NIC configuration - One NIC for the Provisioning network on the native VLAN and the two NICs in a bond for tagged VLANs for the different overcloud network types.

Multiple NIC configuration - Each NIC uses a subnet for a different overcloud network type.

You can also create your own templates to map a specific NIC configuration.

The following details are also important when considering your network configuration:

During the overcloud creation, you refer to NICs using a single name across all overcloud machines. Ideally, you should use the same NIC on each overcloud node for each respective network to avoid confusion. For example, use the primary NIC for the Provisioning network and the secondary NIC for the OpenStack services.

Set all overcloud systems to PXE boot off the Provisioning NIC, and disable PXE boot on the External NIC and any other NICs on the system. Also ensure that the Provisioning NIC has PXE boot at the top of the boot order, ahead of hard disks and CD/DVD drives.

All overcloud bare metal systems require a supported power management interface, such as an Intelligent Platform Management Interface (IPMI). This allows the director to control the power management of each node.

Make a note of the following details for each overcloud system: the MAC address of the Provisioning NIC, the IP address of the IPMI NIC, IPMI username, and IPMI password. This information will be useful later when setting up the overcloud nodes.

If an instance needs to be accessible from the external internet, you can allocate a floating IP address from a public network and associate it with an instance. The instance retains its private IP, and network traffic uses NAT to traverse through to the floating IP address. Note that a floating IP address can only be assigned to a single instance at a time, not to multiple private IP addresses. However, the floating IP address is reserved for use by a single tenant, which means the tenant can associate or disassociate the address with a particular instance as required. This configuration exposes your infrastructure to the external internet, so ensure that you follow suitable security practices.

To mitigate the risk of network loops in Open vSwitch, only a single interface or a single bond may be a member of a given bridge. If you require multiple bonds or interfaces, you can configure multiple bridges.

Red Hat recommends using DNS host name resolution so that your overcloud nodes can connect to external services, such as the Red Hat Content Delivery Network and network time servers.

5.3. Overcloud storage

The director includes different storage options for the overcloud environment:

Ceph Storage Nodes

The director creates a set of scalable storage nodes using Red Hat Ceph Storage. The overcloud uses these nodes for the following storage types:

Images - Glance manages images for VMs. Images are immutable. OpenStack treats images as binary blobs and downloads them accordingly. You can use glance to store images in a Ceph Block Device.

Volumes - Cinder volumes are block devices. OpenStack uses volumes to boot VMs, or to attach volumes to running VMs. OpenStack manages volumes using cinder services. You can use cinder to boot a VM using a copy-on-write clone of an image.

File Systems - Manila shares are backed by file systems. OpenStack users manage shares using manila services. You can use manila to manage shares backed by a CephFS file system with data on the Ceph Storage Nodes.

Guest Disks - Guest disks are guest operating system disks. By default, when you boot a virtual machine with nova, the virtual machine disk appears as a file on the filesystem of the hypervisor (usually under /var/lib/nova/instances/<uuid>/). Every virtual machine inside Ceph can be booted without using Cinder. As a result, you can perform maintenance operations easily with the live-migration process. Additionally, if your hypervisor dies it is also convenient to trigger nova evacuate and run the virtual machine elsewhere.

Important

For information about supported image formats, see the Image Service chapter in the Instances and Images Guide.

Swift Storage Nodes

The director creates an external object storage node. This is useful in situations where you need to scale or replace Controller nodes in your overcloud environment but need to retain object storage outside of a high availability cluster.

5.4. Overcloud security

Your OpenStack Platform implementation is only as secure as its environment. Follow good security principles in your networking environment to ensure that network access is properly controlled:

Use network segmentation to mitigate network movement and isolate sensitive data. A flat network is much less secure.

Restrict services access and ports to a minimum.

Enforce proper firewall rules and password usage.

Ensure that SELinux is enabled.

For details about securing your system, see the following Red Hat guides:

5.5. Overcloud high availability

To deploy a highly-available overcloud, the director configures multiple Controller, Compute and Storage nodes to work together as a single cluster. In case of node failure, an automated fencing and re-spawning process is triggered based on the type of node that failed. For information about overcloud high availability architecture and services, see Understanding Red Hat OpenStack Platform High Availability.

You can also configure high availability for Compute instances with the director (Instance HA). This high availability mechanism automates evacuation and re-spawning of instances on Compute nodes in case of node failure. The requirements for Instance HA are the same as the general overcloud requirements, but you must perform a few additional steps to prepare your environment for the deployment. For information about how Instance HA works and installation instructions, see the High Availability for Compute Instances guide.

5.6. Controller node requirements

Controller nodes host the core services in a Red Hat OpenStack Platform environment, such as the Horizon dashboard, the back-end database server, Keystone authentication, and High Availability services.

Processor

64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.

Memory

The minimum amount of memory is 32 GB. However, the amount of recommended memory depends on the number of vCPUs (which is based on CPU cores multiplied by hyper-threading value). Use the following calculations to determine your RAM requirements:

Controller RAM minimum calculation:

Use 1.5 GB of memory per vCPU. For example, a machine with 48 vCPUs should have 72 GB of RAM.

Controller RAM recommended calculation:

Use 3 GB of memory per vCPU. For example, a machine with 48 vCPUs should have 144 GB of RAM.

A minimum of 40 GB of available disk storage is required if the Object Storage service (swift) is not running on the Controller nodes. However, the Telemetry (gnocchi) and Object Storage services are both installed on the Controller nodes by default, with both configured to use the root disk. These defaults are suitable for deploying small overclouds built on commodity hardware, which are typical of proof-of-concept and test environments. These defaults also allow the deployment of overclouds with minimal planning, but offer little in terms of workload capacity and performance.

In an enterprise environment, however, this could cause a significant bottleneck, as Telemetry accesses storage constantly. This results in heavy disk I/O usage, which severely impacts the performance of all other Controller services. In this type of environment, you must plan your overcloud and configure it accordingly.

5.7. Compute node requirements

Compute nodes are responsible for running virtual machine instances after they are launched. Compute nodes must support hardware virtualization. Compute nodes must also have enough memory and disk space to support the requirements of the virtual machine instances they host.

Processor

64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions, and with the AMD-V or Intel VT hardware virtualization extensions enabled. It is recommended that this processor has a minimum of 4 cores.

IBM POWER 8 processor.

Memory

A minimum of 6 GB of RAM. Add additional RAM to this requirement based on the amount of memory that you intend to make available to virtual machine instances.

Disk Space

A minimum of 40 GB of available disk space.

Network Interface Cards

A minimum of one 1 Gbps Network Interface Card, although it is recommended to use at least two NICs in a production environment. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic.

Power Management

Each Compute node requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality, on the server’s motherboard.

5.8. Ceph Storage node requirements

Ceph uses Placement Groups to facilitate dynamic and efficient object tracking at scale. In the case of OSD failure or cluster re-balancing, Ceph can move or replicate a placement group and its contents, which means a Ceph cluster can re-balance and recover efficiently. The default Placement Group count that the director creates is not always optimal, so it is important to calculate the correct Placement Group count according to your requirements. You can use the Placement Group calculator to calculate the correct count: Placement Groups (PGs) per Pool Calculator.
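As a rough worked example using the commonly cited guideline of approximately 100 placement groups per OSD: a cluster with 9 OSDs and a replica count of 3 gives (9 x 100) / 3 = 300, which rounds up to the next power of two, 512 placement groups. Treat this only as a starting point and confirm the value with the calculator referenced above.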

Processor

64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.

Memory

Red Hat typically recommends a baseline of 16 GB of RAM per OSD host, with an additional 2 GB of RAM per OSD daemon.

Disk Layout

Sizing depends on your storage needs. The recommended Red Hat Ceph Storage node configuration requires at least three disks in a layout similar to the following example:

/dev/sda - The root disk. The director copies the main Overcloud image to the disk. This should be at minimum 40 GB of available disk space.

/dev/sdb - The journal disk. This disk divides into partitions for Ceph OSD journals. For example, /dev/sdb1, /dev/sdb2, /dev/sdb3, and onward. The journal disk is usually a solid state drive (SSD) to aid with system performance.

/dev/sdc and onward - The OSD disks. Use as many disks as necessary for your storage requirements.

Note

Red Hat OpenStack Platform director uses ceph-ansible, which does not support installing the OSD on the root disk of Ceph Storage nodes. This means that a supported Ceph Storage node requires at least two disks.

Network Interface Cards

A minimum of one 1 Gbps Network Interface Card, although it is recommended to use at least two NICs in a production environment. Use additional network interface cards for bonded interfaces or to delegate tagged VLAN traffic. It is recommended to use a 10 Gbps interface for storage nodes, especially if creating an OpenStack Platform environment that serves a high volume of traffic.

Power Management

Each Ceph Storage node requires a supported power management interface, such as Intelligent Platform Management Interface (IPMI) functionality, on the server’s motherboard.

5.9. Object Storage node requirements

Object Storage nodes provide an object storage layer for the overcloud. The Object Storage proxy is installed on Controller nodes. The storage layer requires bare metal nodes with multiple disks per node.

Processor

64-bit x86 processor with support for the Intel 64 or AMD64 CPU extensions.

Memory

Memory requirements depend on the amount of storage space. Ideally, use a minimum of 1 GB of memory per 1 TB of hard disk space. For optimal performance, it is recommended to use 2 GB per 1 TB of hard disk space, especially for workloads with files smaller than 100 GB.

Disk Space

Storage requirements depend on the capacity needed for the workload. It is recommended to use SSD drives to store the account and container data. The capacity ratio of account and container data to objects is approximately 1 percent. For example, for every 100 TB of hard drive capacity, provide 1 TB of SSD capacity for account and container data.

However, this depends on the type of stored data. If storing mostly small objects, provide more SSD space. For large objects (videos, backups), use less SSD space.

Disk Layout

The recommended node configuration requires a disk layout similar to the following example:

/dev/sda - The root disk. The director copies the main overcloud image to the disk.

/dev/sdb - Used for account data.

/dev/sdc - Used for container data.

/dev/sdd and onward - The object server disks. Use as many disks as necessary for your storage requirements.

Provides tools for nodes to communicate with the Ceph Storage cluster. This repository should be enabled for all nodes when deploying an overcloud with a Ceph Storage cluster.

Red Hat OpenStack 14 Director Deployment Tools for RHEL 7 (RPMs)

rhel-7-server-openstack-14-deployment-tools-rpms

(For Ceph Storage Nodes) Provides a set of deployment tools that are compatible with the current version of Red Hat OpenStack Platform director. Installed on Ceph nodes without an active Red Hat OpenStack Platform subscription.

Table 5.4. NFV repositories

Name

Repository

Description of Requirement

Enterprise Linux for Real Time for NFV (RHEL 7 Server) (RPMs)

rhel-7-server-nfv-rpms

Repository for Real Time KVM (RT-KVM) for NFV. Contains packages to enable the real time kernel. This repository should be enabled for all Compute nodes targeted for RT-KVM. NOTE: You need a separate subscription to a Red Hat OpenStack Platform for Real Time SKU before you can access this repository.

IBM POWER repositories

Enable the following repositories to use OpenStack Platform on POWER PC architecture. Use these repositories in place of the equivalents in the Core repositories.

Name

Repository

Description of Requirement

Red Hat Enterprise Linux for IBM Power, little endian

rhel-7-for-power-le-rpms

Base operating system repository for ppc64le systems.

Red Hat OpenStack Platform 14 for RHEL 7 (RPMs)

rhel-7-server-openstack-14-for-power-le-rpms

Core Red Hat OpenStack Platform repository for ppc64le systems.

Chapter 6. Configuring a basic overcloud with CLI tools

This chapter contains basic configuration procedures to deploy an OpenStack Platform environment using the CLI tools. An overcloud with a basic configuration contains no custom features. However, you can add advanced configuration options to this basic overcloud and customize it to your specifications using the instructions in the Advanced Overcloud Customization guide.

6.1. Registering Nodes for the Overcloud

The director requires a node definition template, which you create manually. This template uses a JSON or YAML format, and contains the hardware and power management details for your nodes.

Procedure

Create a template that lists your nodes. Use the following JSON and YAML template examples to understand how to structure your node definition template:
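For example, a minimal node definition in JSON might look similar to the following sketch. The node name, IPMI credentials, MAC address, and hardware values are placeholders that you replace with the details for your own nodes, and the file name (for example, /home/stack/nodes.json) is arbitrary:

{
    "nodes": [
        {
            "name": "node01",
            "pm_type": "ipmi",
            "pm_user": "admin",
            "pm_password": "p@55w0rd!",
            "pm_addr": "192.168.24.205",
            "mac": ["aa:aa:aa:aa:aa:aa"],
            "cpu": "4",
            "memory": "6144",
            "disk": "40",
            "arch": "x86_64"
        }
    ]
}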

pm_type

The power management driver to use. This example uses the IPMI driver (ipmi).

Note

IPMI is the preferred supported power management driver. For more supported power management types and their options, see Appendix B, Power Management Drivers. If these power management drivers do not work as expected, use IPMI for your power management.

pm_user; pm_password

The IPMI username and password.

pm_addr

The IP address of the IPMI device.

pm_port (Optional)

The port to access the specific IPMI device.

mac

(Optional) A list of MAC addresses for the network interfaces on the node. Use only the MAC address for the Provisioning NIC of each system.

cpu

(Optional) The number of CPUs on the node.

memory

(Optional) The amount of memory in MB.

disk

(Optional) The size of the hard disk in GB.

arch

(Optional) The system architecture.

Important

When building a multi-architecture cloud, the arch key is mandatory to distinguish nodes using x86_64 and ppc64le architectures.

After creating the template, run the following command to verify the formatting and syntax:
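As a sketch of the standard director workflow, assuming the template is saved as /home/stack/nodes.json, the following commands validate the template and then import it to register the nodes with the director:

(undercloud) $ source ~/stackrc
(undercloud) $ openstack overcloud node import --validate-only ~/nodes.json
(undercloud) $ openstack overcloud node import ~/nodes.json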

Wait for the node registration and configuration to complete. Once complete, confirm that the director has successfully registered the nodes:

(undercloud) $ openstack baremetal node list

6.2. Inspecting the hardware of nodes

The director can run an introspection process on each node. This process boots an introspection agent over PXE on each node. The introspection agent collects hardware data from the node and sends it back to the director. The director then stores this introspection data in the OpenStack Object Storage (swift) service running on the director. The director uses hardware information for various purposes such as profile tagging, benchmarking, and manual root disk assignment.

Procedure

Run the following command to inspect the hardware attributes of each node:
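For example, the following command is one common form of bulk introspection, assuming you want to introspect all nodes in the manageable state and set them to available afterwards:

(undercloud) $ openstack overcloud node introspect --all-manageable --provide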

Ensure this process runs to completion. This process usually takes 15 minutes for bare metal nodes.

After the introspection completes, all nodes change to an available state.

6.3. Tagging nodes into profiles

After registering and inspecting the hardware of each node, tag the nodes into specific profiles. These profile tags match your nodes to flavors, which assigns the flavors to deployment roles. The following example shows the relationships across roles, flavors, profiles, and nodes for Controller nodes:

Type

Description

Role

The Controller role defines how the director configures Controller nodes.

Flavor

The control flavor defines the hardware profile for nodes to use as controllers. You assign this flavor to the Controller role so the director can decide which nodes to use.

Profile

The control profile is a tag you apply to the control flavor. This defines the nodes that belong to the flavor.

Node

You also apply the control profile tag to individual nodes, which groups them to the control flavor and, as a result, the director configures them using the Controller role.

Default profile flavors compute, control, swift-storage, ceph-storage, and block-storage are created during undercloud installation and are usable without modification in most environments.

Procedure

To tag a node into a specific profile, add a profile option to the properties/capabilities parameter for each node. For example, to tag your nodes to use Controller and Compute profiles respectively, use the following commands:
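The following sketch shows the general form of these commands; the node UUID placeholders are values that you obtain from openstack baremetal node list:

(undercloud) $ openstack baremetal node set --property capabilities='profile:control,boot_option:local' <controller_node_uuid>
(undercloud) $ openstack baremetal node set --property capabilities='profile:compute,boot_option:local' <compute_node_uuid>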

6.5. Defining the root disk

Director must identify the root disk during provisioning in the case of nodes with multiple disks. For example, most Ceph Storage nodes use multiple disks. By default, the director writes the overcloud image to the root disk during the provisioning process.

There are several properties that you can define to help the director identify the root disk:
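As a sketch, these properties are Bare Metal root device hints such as the disk serial number, WWN, vendor, or size, which you can find in the introspection data for the node. For example, to match the root disk by its serial number (the serial number and node UUID shown here are placeholders):

(undercloud) $ openstack baremetal node set --property root_device='{"serial": "<disk_serial_number>"}' <node_uuid>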

Ensure that you configure the BIOS of each node to include booting from the root disk that you choose. The recommended boot order is network boot first, then root disk boot.

The director identifies the specific disk to use as the root disk. When you run the openstack overcloud deploy command, the director provisions and writes the Overcloud image to the root disk.

6.6. Using the overcloud-minimal image

By default, the director writes the QCOW2 overcloud-full image to the root disk during the provisioning process. The overcloud-full image uses a valid Red Hat subscription. However, you can also use the overcloud-minimal image if you do not require any other OpenStack services on your node and you do not want to use one of your Red Hat OpenStack Platform subscription entitlements. Use the overcloud-minimal image option to avoid reaching the limit of your paid Red Hat subscriptions.

To configure director to use the overcloud-minimal image, create an environment file that contains the following image definition:

parameter_defaults:
  <roleName>Image: overcloud-minimal

Replace <roleName> with the name of the role, append Image to the name of the role, then pass the environment file to the deploy command.

For example, to use the overcloud-minimal image for Ceph storage nodes, include the following example environment file snippet in the openstack overcloud deploy command:

parameter_defaults:
  CephStorageImage: overcloud-minimal

Note

The overcloud-minimal image supports only standard Linux bridges and not OVS because OVS is an OpenStack service that requires an OpenStack subscription entitlement.

6.7. Creating architecture specific roles

When building a multi-architecture cloud, you must add any architecture specific roles to the roles_data.yaml file. The following example includes the ComputePPC64LE role along with the default roles:
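One way to produce such a file (a sketch, assuming the default roles directory shipped with the core Heat template collection) is to generate it with the default roles plus ComputePPC64LE:

(undercloud) $ openstack overcloud roles generate \
    --roles-path /usr/share/openstack-tripleo-heat-templates/roles \
    -o ~/templates/roles_data.yaml \
    Controller Compute ComputePPC64LE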

6.8. Environment files

The undercloud includes a set of Heat templates that form the plan for your overcloud creation. You can customize aspects of the overcloud using environment files, which are YAML-formatted files that override parameters and resources in the core Heat template collection. You can include as many environment files as necessary. However, the order of the environment files is important as the parameters and resources defined in subsequent environment files take precedence. Use the following list as an example of the environment file order:

The number of nodes and the flavors for each role. It is vital to include this information for overcloud creation.

The location of the container images for containerized OpenStack services.

Any network isolation files, starting with the initialization file (environments/network-isolation.yaml) from the heat template collection, then your custom NIC configuration file, and finally any additional network configurations. See the following chapters in the Advanced Overcloud Customization guide for more information:

A basic overcloud uses local LVM storage for block storage, which is not a supported configuration. It is recommended to use an external storage solution, such as Red Hat Ceph Storage, for block storage.

The next few sections contain information about creating some environment files necessary for your overcloud.

6.9. Creating an environment file that defines node counts and flavors

By default, the director deploys an overcloud with 1 Controller node and 1 Compute node using the baremetal flavor. However, this is only suitable for a proof-of-concept deployment. You can override the default configuration by specifying different node counts and flavors. For a small-scale production environment, consider at least 3 Controller nodes and 3 Compute nodes, and assign specific flavors to ensure that the nodes have the appropriate resource specifications. Complete the following steps to create an environment file named node-info.yaml that stores the node counts and flavor assignments.

Procedure

Create a node-info.yaml file in the /home/stack/templates/ directory:

(undercloud) $ touch /home/stack/templates/node-info.yaml

Edit the file to include the node counts and flavors that you need. This example contains 3 Controller nodes and 3 Compute nodes:
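A sketch of such a file; the parameter names follow the Overcloud<Role>Flavor and <Role>Count patterns, and the flavor names assume the default profile flavors created during undercloud installation:

parameter_defaults:
  OvercloudControllerFlavor: control
  OvercloudComputeFlavor: compute
  ControllerCount: 3
  ComputeCount: 3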

6.10. Creating an environment file for undercloud CA trust

If your undercloud uses TLS and the Certificate Authority (CA) is not publicly trusted, you can use the CA for SSL endpoint encryption that the undercloud operates. To ensure that the undercloud endpoints are accessible to the rest of your deployment, configure your overcloud nodes to trust the undercloud CA.

Note

For this approach to work, your overcloud nodes must have a network route to the undercloud’s public endpoint. It is likely that deployments that rely on spine-leaf networking will need to apply this configuration.

There are two types of custom certificates you can use in the undercloud:

User-provided certificates - This definition applies when you have provided your own certificate. This could be from your own CA, or it might be self-signed. This is passed using the undercloud_service_certificate option. In this case, you must either trust the self-signed certificate, or the CA (depending on your deployment).

Auto-generated certificates - This definition applies when you use certmonger to generate the certificate using its own local CA. This is enabled using the generate_service_certificate option in the undercloud.conf file. In this case, the director generates a CA certificate at /etc/pki/ca-trust/source/anchors/cm-local-ca.pem and the director configures the undercloud’s HAProxy instance to use a server certificate. Add the CA certificate to the inject-trust-anchor-hiera.yaml file to present the certificate to OpenStack Platform.

This example uses a self-signed certificate located in /home/stack/ca.crt.pem. If you use auto-generated certificates, use /etc/pki/ca-trust/source/anchors/cm-local-ca.pem instead.

Procedure

Open the certificate file and copy only the certificate portion. Do not include the key:

$ vi /home/stack/ca.crt.pem

The certificate portion you need will look similar to this shortened example:
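The copied portion is the block between the BEGIN and END markers, for example (shortened, with placeholder content):

-----BEGIN CERTIFICATE-----
MIIDlTCCAn2gAwIBAgIJAOnPtx2hHEhrMA0GCSqGSIb3DQEBCwUAMGExCzAJBgNV
...
-----END CERTIFICATE-----

A sketch of an inject-trust-anchor-hiera.yaml environment file that presents this certificate to the overcloud through the CAMap parameter; the map key undercloud-ca is an arbitrary label:

parameter_defaults:
  CAMap:
    undercloud-ca:
      content: |
        -----BEGIN CERTIFICATE-----
        MIIDlTCCAn2gAwIBAgIJAOnPtx2hHEhrMA0GCSqGSIb3DQEBCwUAMGExCzAJBgNV
        ...
        -----END CERTIFICATE-----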

6.11. Deployment command

The final stage in creating your OpenStack environment is to run the openstack overcloud deploy command to create the overcloud. Before running this command, you should familiarize yourself with key options and how to include custom environment files.

Warning

Do not run openstack overcloud deploy as a background process. The overcloud creation might hang mid-deployment if run as a background process.

6.12. Deployment command options

The following table lists the additional parameters for the openstack overcloud deploy command.

Table 6.1. Deployment command options

Parameter

Description

--templates [TEMPLATES]

The directory containing the Heat templates to deploy. If blank, the command uses the default template location at /usr/share/openstack-tripleo-heat-templates/

--stack STACK

The name of the stack to create or update

-t [TIMEOUT], --timeout [TIMEOUT]

Deployment timeout in minutes

--libvirt-type [LIBVIRT_TYPE]

Virtualization type to use for hypervisors

--ntp-server [NTP_SERVER]

Network Time Protocol (NTP) server to use to synchronize time. You can also specify multiple NTP servers in a comma-separated list, for example: --ntp-server 0.centos.pool.ntp.org,1.centos.pool.ntp.org. For a high availability cluster deployment, it is essential that your controllers consistently refer to the same time source. Note that a typical environment might already have a designated NTP time source with established practices.

-e, --environment-file [ENVIRONMENT FILE]

Extra environment files to pass to the overcloud deployment. You can specify this option more than once. Note that the order of environment files passed to the openstack overcloud deploy command is important. For example, parameters from each sequential environment file override the same parameters from earlier environment files.

--environment-directory

The directory containing environment files to include in deployment. The deploy command processes these environment files in numerical, then alphabetical order.

--validation-errors-nonfatal

The overcloud creation process performs a set of pre-deployment checks. With this option, the deployment continues if non-fatal errors occur during the pre-deployment checks. Use this option with caution, because any errors can still cause your deployment to fail.

--validation-warnings-fatal

The overcloud creation process performs a set of pre-deployment checks. This option exits if any non-critical warnings occur from the pre-deployment checks.

--dry-run

Performs a validation check on the overcloud, but does not actually create the overcloud.

--skip-postconfig

Skip the overcloud post-deployment configuration.

--force-postconfig

Force the overcloud post-deployment configuration.

--skip-deploy-identifier

Skip generation of a unique identifier for the DeployIdentifier parameter. The software configuration deployment steps only trigger if there is an actual change to the configuration. Use this option with caution and only if you are confident you do not need to run the software configuration, such as scaling out certain roles.

--answers-file ANSWERS_FILE

Path to a YAML file with arguments and parameters.

--rhel-reg

Register overcloud nodes to the Customer Portal or Satellite 6.

--reg-method

Registration method to use for the overcloud nodes. satellite for Red Hat Satellite 6 or Red Hat Satellite 5, portal for Customer Portal.

--reg-org [REG_ORG]

Organization to use for registration.

--reg-force

Register the system even if it is already registered.

--reg-sat-url [REG_SAT_URL]

The base URL of the Satellite server to register overcloud nodes. Use the Satellite’s HTTP URL and not the HTTPS URL for this parameter. For example, use http://satellite.example.com and not https://satellite.example.com. The overcloud creation process uses this URL to determine whether the server is a Red Hat Satellite 5 or Red Hat Satellite 6 server. If the server is a Red Hat Satellite 6 server, the overcloud obtains the katello-ca-consumer-latest.noarch.rpm file, registers with subscription-manager, and installs katello-agent. If the server is a Red Hat Satellite 5 server, the overcloud obtains the RHN-ORG-TRUSTED-SSL-CERT file and registers with rhnreg_ks.

--reg-activation-key [REG_ACTIVATION_KEY]

Activation key to use for registration.

Run the following command to view a full list of options:

(undercloud) $ openstack help overcloud deploy

Some command line parameters are outdated or deprecated in favor of Heat template parameters, which you include in the parameter_defaults section of an environment file. The following table maps deprecated parameters to their Heat template equivalents.

--neutron-physical-bridge

An Open vSwitch bridge to create on each hypervisor. This defaults to "br-ex". Typically, this should not need to be changed.

HypervisorNeutronPhysicalBridge

--neutron-bridge-mappings

The logical to physical bridge mappings to use. Defaults to mapping the external bridge on hosts (br-ex) to a physical name (datacentre). You would use this for the default floating network

NeutronBridgeMappings

--neutron-public-interface

Defines the interface to bridge onto br-ex for network nodes

NeutronPublicInterface

--neutron-network-type

The tenant network type for Neutron

NeutronNetworkType

--neutron-tunnel-types

The tunnel types for the Neutron tenant network. To specify multiple values, use a comma separated string

NeutronTunnelTypes

--neutron-tunnel-id-ranges

Ranges of GRE tunnel IDs to make available for tenant network allocation

NeutronTunnelIdRanges

--neutron-vni-ranges

Ranges of VXLAN VNI IDs to make available for tenant network allocation

NeutronVniRanges

--neutron-network-vlan-ranges

The Neutron ML2 and Open vSwitch VLAN mapping range to support. Defaults to permitting any VLAN on the datacentre physical network

NeutronNetworkVLANRanges

--neutron-mechanism-drivers

The mechanism drivers for the neutron tenant network. Defaults to "openvswitch". To specify multiple values, use a comma-separated string

NeutronMechanismDrivers

--neutron-disable-tunneling

Disables tunneling if you aim to use a VLAN-segmented network or flat network with Neutron

No parameter mapping.

--validation-errors-fatal

The overcloud creation process performs a set of pre-deployment checks. This option exits if any fatal errors occur from the pre-deployment checks. It is advisable to use this option as any errors can cause your deployment to fail.

No parameter mapping

These parameters are scheduled for removal in a future version of Red Hat OpenStack Platform.

6.13. Including environment files in an overcloud deployment

Use the -e option to include an environment file to customize your overcloud. You can include as many environment files as necessary. However, the order of the environment files is important as the parameters and resources defined in subsequent environment files take precedence. Use the following list as an example of the environment file order:

The number of nodes and the flavors for each role. It is vital to include this information for overcloud creation.

The location of the container images for containerized OpenStack services.

Any network isolation files, starting with the initialization file (environments/network-isolation.yaml) from the heat template collection, then your custom NIC configuration file, and finally any additional network configurations. See the following chapters in the Advanced Overcloud Customization guide for more information:

The validation requires the overcloud-resource-registry-puppet.yaml environment file to include overcloud-specific resources. Add any additional environment files to this command with the -e option. Also include the --show-nested option to resolve parameters from nested templates.
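A sketch of such a validation command, assuming the default template location and the node-info.yaml file created earlier; adjust the environment files to match your deployment:

(undercloud) $ openstack orchestration template validate --show-nested \
    --template /usr/share/openstack-tripleo-heat-templates/overcloud.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/overcloud-resource-registry-puppet.yaml \
    -e /home/stack/templates/node-info.yaml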

The validation command identifies any syntax errors in the template. If the template syntax validates successfully, the command returns a preview of the resulting overcloud template.

6.15. Overcloud deployment output

Once the overcloud creation completes, the director provides a recap of the Ansible plays executed to configure the overcloud:

6.16. Accessing the overcloud

The director generates a script to configure and help authenticate interactions with your overcloud from the director host. The director saves this file, overcloudrc, in your stack user’s home directory. Run the following command to use this file:

(undercloud) $ source ~/overcloudrc

This loads environment variables necessary to interact with your overcloud from the director host’s CLI. The command prompt changes to indicate this:

(overcloud) $

To return to interacting with the director’s host, run the following command:

(overcloud) $ source ~/stackrc
(undercloud) $

Each node in the overcloud also contains a heat-admin user. The stack user has SSH access to this user on each node. To access a node over SSH, find the IP address of the desired node:

(undercloud) $ openstack server list

Then connect to the node using the heat-admin user and the node’s IP address:

6.17. Next steps

Chapter 7. Configuring a Basic Overcloud using Pre-Provisioned Nodes

This chapter contains basic configuration procedures for using pre-provisioned nodes to configure an OpenStack Platform environment. This scenario differs from the standard overcloud creation scenarios in several ways:

You can provision nodes using an external tool and let the director control the overcloud configuration only.

You can use nodes without relying on the director’s provisioning methods. This is useful if you want to create an overcloud without power management control or use networks with DHCP/PXE boot restrictions.

The director does not use OpenStack Compute (nova), OpenStack Bare Metal (ironic), or OpenStack Image (glance) for managing nodes.

Pre-provisioned nodes can use a custom partitioning layout that does not rely on the QCOW2 overcloud-full image.

This scenario includes only basic configuration with no custom features. However, you can add advanced configuration options to this basic overcloud and customize it to your specifications using the instructions in the Advanced Overcloud Customization guide.

Important

Combining pre-provisioned nodes with director-provisioned nodes in an overcloud is not supported.

A set of bare metal machines for your nodes. The number of nodes required depends on the type of overcloud you intend to create. These machines must comply with the requirements set for each node type. These nodes require Red Hat Enterprise Linux 7.6 or later installed as the host operating system. Red Hat recommends using the latest version available.

One network connection for managing the pre-provisioned nodes. This scenario requires uninterrupted SSH access to the nodes for orchestration agent configuration.

One network connection for the Control Plane network. There are two main scenarios for this network:

Using the Provisioning Network as the Control Plane, which is the default scenario. This network is usually a layer-3 (L3) routable network connection from the pre-provisioned nodes to the director. The examples for this scenario use the following IP address assignments:

Table 7.1. Provisioning Network IP Assignments

Node Name

IP Address

Director

192.168.24.1

Controller 0

192.168.24.2

Compute 0

192.168.24.3

Using a separate network. In situations where the director’s Provisioning network is a private non-routable network, you can define IP addresses for nodes from any subnet and communicate with the director over the Public API endpoint. There are certain caveats to this scenario, which this chapter examines later in Section 7.5, “Using a Separate Network for Overcloud Nodes”.

All other network types in this example also use the Control Plane network for OpenStack services. However, you can create additional networks for other network traffic types.

7.1. Creating a User for Configuring Nodes

When configuring an overcloud with pre-provisioned nodes, the director requires SSH access to the overcloud nodes as the stack user. To create the stack user, complete the following steps:

On each overcloud node, create the stack user and set a password. For example, run the following commands on the Controller node:
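A sketch of the commands, which also grant the stack user passwordless sudo access that the configuration process typically requires:

[root@controller-0 ~]# useradd stack
[root@controller-0 ~]# passwd stack  # specify a password
[root@controller-0 ~]# echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
[root@controller-0 ~]# chmod 0440 /etc/sudoers.d/stack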

After creating and configuring the stack user on all pre-provisioned nodes, copy the stack user’s public SSH key from the director node to each overcloud node. For example, to copy the director’s public SSH key to the Controller node, run the following command:
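For example, assuming the Controller node uses the Control Plane IP address 192.168.24.2 from Table 7.1:

[stack@director ~]$ ssh-copy-id stack@192.168.24.2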

7.3. Configuring SSL/TLS Access to the Director

If the director uses SSL/TLS, the pre-provisioned nodes require the certificate authority file used to sign the director’s SSL/TLS certificates. If using your own certificate authority, perform the following actions on each overcloud node:

Copy the certificate authority file to the /etc/pki/ca-trust/source/anchors/ directory on each pre-provisioned node.

Run the following command on each overcloud node:

[root@controller-0 ~]# sudo update-ca-trust extract

These steps ensure the overcloud nodes can access the director’s Public API over SSL/TLS.

7.4. Configuring Networking for the Control Plane

The pre-provisioned overcloud nodes obtain metadata from the director using standard HTTP requests. This means all overcloud nodes require L3 access to either:

The director’s Control Plane network, which is the subnet defined with the network_cidr parameter in your undercloud.conf file. The overcloud nodes require either direct access to this subnet or routable access to the subnet.

The director’s Public API endpoint, specified as the undercloud_public_host parameter in your undercloud.conf file. This option is available if you do not have an L3 route to the Control Plane or you aim to use SSL/TLS communication. See Section 7.5, “Using a Separate Network for Overcloud Nodes” for additional information about configuring your overcloud nodes to use the Public API endpoint.

The director uses the Control Plane network to manage and configure a standard overcloud. For an overcloud with pre-provisioned nodes, your network configuration might require some modification to accommodate communication between the director and the pre-provisioned nodes.

Using Network Isolation

You can use network isolation to group services to use specific networks, including the Control Plane. There are multiple network isolation strategies in the Advanced Overcloud Customization guide. You can also define specific IP addresses for nodes on the control plane. For more information about isolating networks and creating predictable node placement strategies, see the following sections in the Advanced Overcloud Customization guide:

If you use network isolation, ensure that your NIC templates do not include the NIC used for undercloud access. These templates can reconfigure the NIC, which introduces connectivity and configuration problems during deployment.

Assigning IP Addresses

If you do not use network isolation, you can use a single Control Plane network to manage all services. This requires manual configuration of the Control Plane NIC on each node to use an IP address within the Control Plane network range. If using the director’s Provisioning network as the Control Plane, ensure the chosen overcloud IP addresses fall outside of the DHCP ranges for both provisioning (dhcp_start and dhcp_end) and introspection (inspection_iprange).

During standard overcloud creation, the director creates OpenStack Networking (neutron) ports and automatically assigns IP addresses to the overcloud nodes on the Provisioning / Control Plane network. However, this can cause the director to assign IP addresses that differ from the ones you configure manually for each node. In this situation, use a predictable IP address strategy to force the director to use the pre-provisioned IP assignments on the Control Plane.

For example, you can use an environment file ctlplane-assignments.yaml with the following IP assignments to implement a predictable IP strategy:
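A sketch of such a file, using the IP addresses from Table 7.1 and assuming the default deployed-server port template shipped with the core Heat template collection:

resource_registry:
  OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml

parameter_defaults:
  DeployedServerPortMap:
    controller-0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.24.2
      subnets:
        - cidr: 192.168.24.0/24
    compute-0-ctlplane:
      fixed_ips:
        - ip_address: 192.168.24.3
      subnets:
        - cidr: 192.168.24.0/24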

In this example, the OS::TripleO::DeployedServer::ControlPlanePort resource passes a set of parameters to the director and defines the IP assignments of our pre-provisioned nodes. The DeployedServerPortMap parameter defines the IP addresses and subnet CIDRs that correspond to each overcloud node. The mapping defines the following attributes:

The name of the assignment, which follows the format <node_hostname>-<network> where the <node_hostname> value matches the short hostname for the node and <network> matches the lowercase name of the network. For example: controller-0-ctlplane for controller-0.example.com and compute-0-ctlplane for compute-0.example.com.

The IP assignments, which use the following parameter patterns:

fixed_ips/ip_address - Defines the fixed IP addresses for the control plane. Use multiple ip_address parameters in a list to define multiple IP addresses.

subnets/cidr - Defines the CIDR value for the subnet.

A later section in this chapter uses the resulting environment file (ctlplane-assignments.yaml) as part of the openstack overcloud deploy command.

7.5. Using a Separate Network for Overcloud Nodes

By default, the director uses the Provisioning network as the overcloud Control Plane. However, if this network is isolated and non-routable, nodes cannot communicate with the director’s Internal API during configuration. In this situation, you might need to define a separate network for the nodes and configure them to communicate with the director over the Public API.

You must define an accessible fully qualified domain name (FQDN) for the director. This FQDN must resolve to a routable IP address for the director. Use the undercloud_public_host parameter in the undercloud.conf file to set this FQDN.

The examples in this section use IP address assignments that differ from the main scenario:

Table 7.2. Provisioning Network IP Assignments

Node Name

IP Address or FQDN

Director (Internal API)

192.168.24.1 (Provisioning Network and Control Plane)

Director (Public API)

10.1.1.1 / director.example.com

Overcloud Virtual IP

192.168.100.1

Controller 0

192.168.100.2

Compute 0

192.168.100.3

The following sections provide additional configuration for situations that require a separate network for overcloud nodes.

The RedisVipPort resource is mapped to network/ports/noop.yaml. This mapping is necessary because the default Redis VIP address comes from the Control Plane. In this situation, we use a noop to disable this Control Plane mapping.

The EC2MetadataIp and ControlPlaneDefaultRoute parameters are set to the value of the Control Plane virtual IP address. The default NIC configuration templates require these parameters and you must set them to use a pingable IP address to pass the validations performed during deployment. Alternatively, customize the NIC configuration so they do not require these parameters.

7.6. Mapping pre-provisioned node hostnames

When configuring pre-provisioned nodes, you must map Heat-based hostnames to their actual hostnames so that ansible-playbook can reach a resolvable host. Use the HostnameMap parameter to map these values.

Procedure

Create an environment file, for example hostname-map.yaml, and include the HostnameMap parameter and the hostname mappings. Use the following syntax:
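A minimal sketch of hostname-map.yaml; the Heat-based names on the left follow the default <stack>-<role>-<index> hostname format, and the names on the right are placeholder hostnames of your pre-provisioned nodes:

parameter_defaults:
  HostnameMap:
    overcloud-controller-0: controller-00-rack01
    overcloud-novacompute-0: compute-00-rack01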

Using the example export command, set the OVERCLOUD_HOSTS variable to a space-separated list of IP addresses of the overcloud hosts intended to be used as Ceph clients (such as the Compute, Block Storage, Image, File System, Telemetry services, and so forth). The enable-ssh-admin.sh script configures a user on the overcloud nodes that Ansible uses to configure Ceph clients.
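For example, a sketch using placeholder overcloud node addresses and the enable-ssh-admin.sh script shipped with the core Heat template collection; adjust the host list and script path to your environment:

(undercloud) $ export OVERCLOUD_HOSTS="192.168.24.2 192.168.24.3"
(undercloud) $ /usr/share/openstack-tripleo-heat-templates/deployed-server/scripts/enable-ssh-admin.sh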

environments/deployed-server-bootstrap-environment-rhel.yaml - Environment file to execute a bootstrap script on the pre-provisioned servers. This script installs additional packages and includes basic configuration for overcloud nodes.

environments/deployed-server-pacemaker-environment.yaml - Environment file for Pacemaker configuration on pre-provisioned Controller nodes. The namespace for the resources registered in this file uses the Controller role name from deployed-server/deployed-server-roles-data.yaml, which is ControllerDeployedServer by default.

deployed-server/deployed-server-roles-data.yaml - An example custom roles file. This file replicates the default roles_data.yaml but also includes the disable_constraints: True parameter for each role. This parameter disables orchestration constraints in the generated role templates. These constraints are for services that pre-provisioned infrastructure does not use.

If you want to use a custom roles file, ensure you include the disable_constraints: True parameter for each role:
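For example, a shortened sketch of a role entry with the parameter set; the service list is abbreviated and the role name assumes the default pre-provisioned roles file:

- name: ControllerDeployedServer
  disable_constraints: True
  ServicesDefault:
    - OS::TripleO::Services::CACerts
    - OS::TripleO::Services::CertmongerUser
    - OS::TripleO::Services::Clustercheck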

The --overcloud-ssh-user and --overcloud-ssh-key options are used to SSH into each overcloud node during the configuration stage, create an initial tripleo-admin user, and inject an SSH key into /home/tripleo-admin/.ssh/authorized_keys. To inject the SSH key, specify the credentials for the initial SSH connection with --overcloud-ssh-user and --overcloud-ssh-key (defaults to ~/.ssh/id_rsa). To limit exposure to the private key you specify with the --overcloud-ssh-key option, the director never passes this key to any API service, such as Heat or Mistral, and only the director’s openstack overcloud deploy command uses this key to enable access for the tripleo-admin user.

7.9. Overcloud deployment output

Once the overcloud creation completes, the director provides a recap of the Ansible plays executed to configure the overcloud:

7.10. Accessing the Overcloud

The director generates a script to configure and help authenticate interactions with your overcloud from the director host. The director saves this file, overcloudrc, in your stack user’s home directory. Run the following command to use this file:

(undercloud) $ source ~/overcloudrc

This loads environment variables necessary to interact with your overcloud from the director host’s CLI. The command prompt changes to indicate this:

(overcloud) $

To return to interacting with the director’s host, run the following command:

(overcloud) $ source ~/stackrc
(undercloud) $

7.11. Scaling Pre-Provisioned Nodes

The process for scaling pre-provisioned nodes is similar to the standard scaling procedures in Chapter 10, Scaling overcloud nodes. However, the process for adding new pre-provisioned nodes differs since pre-provisioned nodes do not use the standard registration and management process from OpenStack Bare Metal (ironic) and OpenStack Compute (nova).

Scaling Up Pre-Provisioned Nodes

When scaling up the overcloud with pre-provisioned nodes, you must configure the orchestration agent on each node to correspond to the director’s node count.

In most scaling operations, you must obtain the UUID value of the node you want to remove and pass this value to the openstack overcloud node delete command. To obtain this UUID, list the resources for the specific role:
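For example, a sketch that lists the node resources for a pre-provisioned Compute role; the role-specific resource type and the stack name shown here are assumptions based on the default pre-provisioned roles file:

(undercloud) $ openstack stack resource list overcloud -n5 --filter type=OS::TripleO::ComputeDeployedServerServer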

The indices 0, 1, or 2 in the stack_name column correspond to the node order in the Heat resource group. Pass the corresponding UUID value from the physical_resource_id column to the openstack overcloud node delete command.
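For example, a sketch of the delete command, where <resource_id> is the physical_resource_id value from the previous listing:

(undercloud) $ openstack overcloud node delete --stack overcloud <resource_id>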

Once you have removed overcloud nodes from the stack, power off these nodes. In a standard deployment, the bare metal services on the director control this function. However, with pre-provisioned nodes, you must either manually shut down these nodes or use the power management control for each physical system. If you do not power off the nodes after removing them from the stack, they might remain operational and reconnect as part of the overcloud environment.

After powering off the removed nodes, reprovision them to a base operating system configuration so that they do not unintentionally join the overcloud in the future.

Note

Do not attempt to reuse nodes previously removed from the overcloud without first reprovisioning them with a fresh base operating system. The scale down process only removes the node from the overcloud stack and does not uninstall any packages.

7.12. Removing a Pre-Provisioned Overcloud

After removing the overcloud, power off all nodes and reprovision them to a base operating system configuration.

Note

Do not attempt to reuse nodes previously removed from the overcloud without first reprovisioning them with a fresh base operating system. The removal process only deletes the overcloud stack and does not uninstall any packages.

If your overcloud uses a different name, use the --plan argument to select an overcloud with a different name:

$ openstack overcloud status --plan my-deployment

8.2. Managing containerized services

OpenStack Platform runs services in containers on the undercloud and overcloud nodes. In certain situations, you might need to control the individual services on a host. This section contains information about some common docker commands you can run on a node to manage containerized services. For more comprehensive information about using docker to manage containers, see "Working with Docker formatted containers" in the Getting Started with Containers guide.

Listing containers and images

To list running containers, run the following command:

$ sudo docker ps

To include stopped or failed containers in the command output, add the --all option to the command:

$ sudo docker ps --all

To list container images, run the following command:

$ sudo docker images

Inspecting container properties

To view the properties of a container or container images, use the docker inspect command. For example, to inspect the keystone container, run the following command:

$ sudo docker inspect keystone

Managing basic container operations

To restart a containerized service, use the docker restart command. For example, to restart the keystone container, run the following command:

$ sudo docker restart keystone

To stop a containerized service, use the docker stop command. For example, to stop the keystone container, run the following command:

$ sudo docker stop keystone

To start a stopped containerized service, use the docker start command. For example, to start the keystone container, run the following command:

$ sudo docker start keystone

Note

Any changes to the service configuration files within the container revert after restarting the container. This is because the container regenerates the service configuration based on files on the node’s local file system in /var/lib/config-data/puppet-generated/. For example, if you edit /etc/keystone/keystone.conf within the keystone container and restart the container, the container regenerates the configuration using /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf on the node’s local file system, which overwrites any changes made within the container before the restart.

Monitoring containers

To check the logs for a containerized service, use the docker logs command. For example, to view the logs for the keystone container, run the following command:

$ sudo docker logs keystone

Accessing containers

To enter the shell for a containerized service, use the docker exec command to launch /bin/bash. For example, to enter the shell for the keystone container, run the following command:

$ sudo docker exec -it keystone /bin/bash

To enter the shell for the keystone container as the root user, run the following command:

$ sudo docker exec --user 0 -it <NAME OR ID> /bin/bash

To exit from the container, run the following command:

# exit

Enabling swift-ring-builder on undercloud and overcloud

For continuity considerations in Object Storage (swift) builds, the swift-ring-builder and swift_object_server commands are no longer packaged on the undercloud or overcloud nodes. However, the commands are still available in the containers. To run them inside the respective containers:
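For example, a sketch of running swift-ring-builder inside the swift_object_server container; the builder file path is an assumption based on the default Object Storage layout:

$ sudo docker exec -it swift_object_server swift-ring-builder /etc/swift/object.builder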

In this example, you create a network with the name public. The overcloud requires this specific name for the default floating IP pool. This name is also important for the validation tests in Section 8.8, “Validating the Overcloud”.
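A sketch of the network and subnet creation for this example; the flat provider type and the subnet addressing shown here are assumptions that you adjust for your external network:

(overcloud) $ source ~/overcloudrc
(overcloud) $ openstack network create public --external \
    --provider-network-type flat --provider-physical-network datacentre
(overcloud) $ openstack subnet create public --network public --no-dhcp \
    --allocation-pool start=10.1.1.51,end=10.1.1.250 \
    --gateway 10.1.1.1 --subnet-range 10.1.1.0/24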

This command also maps the network to the datacentre physical network. By default, datacentre maps to the br-ex bridge. Leave this option as the default unless you have used custom neutron settings during the overcloud creation.

Using a Non-Native VLAN

If you are not using the native VLAN, run the following commands to assign the network to a VLAN:
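For example, a sketch that creates the same public network on a VLAN instead; the VLAN ID and addresses are assumptions that you adjust for your environment:

(overcloud) $ openstack network create public --external \
    --provider-network-type vlan --provider-physical-network datacentre --provider-segment 104
(overcloud) $ openstack subnet create public --network public --no-dhcp \
    --allocation-pool start=10.1.1.51,end=10.1.1.250 \
    --gateway 10.1.1.1 --subnet-range 10.1.1.0/24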

8.5. Creating Additional Floating IP Networks

Floating IP networks can use any bridge, not just br-ex, as long as you meet the following conditions:

NeutronExternalNetworkBridge is set to "''" in your network environment file.

You have mapped the additional bridge during deployment. For example, to map a new bridge called br-floating to the floating physical network, include the NeutronBridgeMappings parameter in an environment file:
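A sketch of the parameter in an environment file, using the br-floating bridge and floating network names from this example:

parameter_defaults:
  NeutronBridgeMappings: "datacentre:br-ex,floating:br-floating"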

8.6. Creating the Overcloud Provider Network

A provider network is a network attached physically to an existing network outside of the deployed overcloud. This can be an existing infrastructure network or a network that provides external access directly to instances through routing instead of floating IPs.

When creating a provider network, you associate it with a physical network, which uses a bridge mapping. This is similar to floating IP network creation. You add the provider network to both the Controller and the Compute nodes because the Compute nodes attach VM virtual network interfaces directly to the attached network interface.

For example, if the desired provider network is a VLAN on the br-ex bridge, use the following command to add a provider network on VLAN 201:
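A sketch of such a command; the network name provider_network is an example:

(overcloud) $ openstack network create provider_network --external --share \
    --provider-network-type vlan --provider-physical-network datacentre --provider-segment 201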

This command creates a shared network. It is also possible to specify a tenant instead of specifying --share. The new network is available only to the specified tenant. If you mark a provider network as external, only the operator may create ports on that network.

Add a subnet to a provider network if you want neutron to provide DHCP services to the tenant instances:
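For example, a sketch with placeholder addressing that you replace with values appropriate for your provider network:

(overcloud) $ openstack subnet create provider-subnet --network provider_network --dhcp \
    --allocation-pool start=10.9.101.50,end=10.9.101.100 \
    --gateway 10.9.101.254 --subnet-range 10.9.101.0/24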

Attach other networks to this router. For example, run the following command to attach a subnet ‘subnet1’ to the router:

(overcloud) $ openstack router add subnet external subnet1

This command adds subnet1 to the routing table and allows traffic using subnet1 to route to the provider network.

8.7. Creating a basic Overcloud flavor

Validation steps in this guide assume that your installation contains flavors. If you have not already created at least one flavor, use the following commands to create a basic set of default flavors that have a range of storage and processing capabilities:
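A sketch of such a set of flavors; the names and sizes shown here are examples that you can adjust:

(overcloud) $ openstack flavor create m1.tiny --ram 512 --disk 0 --vcpus 1
(overcloud) $ openstack flavor create m1.small --ram 2048 --disk 10 --vcpus 1
(overcloud) $ openstack flavor create m1.medium --ram 3072 --disk 10 --vcpus 2
(overcloud) $ openstack flavor create m1.large --ram 8192 --disk 10 --vcpus 4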

Use the vcpus option to define the quantity of virtual CPUs for the flavor.

Use $ openstack flavor create --help to learn more about the openstack flavor create command.

8.8. Validating the Overcloud

The overcloud uses the OpenStack Integration Test Suite (tempest) tool set to conduct a series of integration tests. This section contains information about preparations for running the integration tests. For full instructions on using the OpenStack Integration Test Suite, see the OpenStack Integration Test Suite Guide.

Before Running the Integration Test Suite

If running this test from the undercloud, ensure that the undercloud host has access to the overcloud’s Internal API network. For example, add a temporary VLAN on the undercloud host to access the Internal API network (ID: 201) using the 172.16.0.201/24 address:
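A sketch of such a temporary configuration, assuming the undercloud's provisioning bridge is br-ctlplane:

$ source ~/stackrc
(undercloud) $ sudo ovs-vsctl add-port br-ctlplane vlan201 tag=201 -- set interface vlan201 type=internal
(undercloud) $ sudo ip l set dev vlan201 up
(undercloud) $ sudo ip addr add 172.16.0.201/24 dev vlan201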

After completing the validation, remove any temporary connections to the overcloud’s Internal API. In this example, use the following commands to remove the previously created VLAN on the undercloud:

$ source ~/stackrc
(undercloud) $ sudo ovs-vsctl del-port vlan201

8.9. Modifying the Overcloud Environment

Sometimes you might want to modify the overcloud to add additional features, or change the way it operates. To modify the overcloud, make modifications to your custom environment files and Heat templates, then rerun the openstack overcloud deploy command from your initial overcloud creation. For example, if you created an overcloud using Section 6.11, “Deployment command”, rerun the following command:

The director checks the overcloud stack in heat, and then updates each item in the stack with the environment files and heat templates. The director does not recreate the overcloud, but rather changes the existing overcloud.

If you aim to include a new environment file, add it to the openstack overcloud deploy command with the -e option. For example:
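A sketch that adds a hypothetical new-environment.yaml to a rerun of the deployment command; keep all environment files from your original deployment in the command as well:

(undercloud) $ openstack overcloud deploy --templates \
    -e /home/stack/templates/node-info.yaml \
    -e /home/stack/templates/new-environment.yaml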

This command includes the new parameters and resources from the environment file into the stack.

Important

It is not advisable to make manual modifications to the overcloud configuration as the director might overwrite these modifications later.

8.10. Running the dynamic inventory script

The director can run Ansible-based automation on your OpenStack Platform environment. The director uses the tripleo-ansible-inventory command to generate a dynamic inventory of nodes in your environment.

Procedure

To view a dynamic inventory of nodes, run the tripleo-ansible-inventory command after sourcing stackrc:

$ source ~/stackrc
(undercloud) $ tripleo-ansible-inventory --list

The --list option returns details about all hosts. This command outputs the dynamic inventory in a JSON format:

-u [USER] to change the SSH user that executes the Ansible automation. The default SSH user for the overcloud is automatically defined using the ansible_ssh_user parameter in the dynamic inventory. The -u option overrides this parameter.

-m [MODULE] to use a specific Ansible module. The default is command, which executes Linux commands.

-a [MODULE_ARGS] to define arguments for the chosen module.

Important

Custom Ansible automation on the overcloud is not part of the standard overcloud stack. Subsequent execution of the openstack overcloud deploy command might override Ansible-based configuration for OpenStack Platform services on overcloud nodes.

8.11. Importing Virtual Machines into the Overcloud

If you have an existing OpenStack environment and want to migrate its virtual machines to your Red Hat OpenStack Platform environment, complete the following steps:

Create a new image by taking a snapshot of a running server and download the image.
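A sketch of this step using standard OpenStack client commands; instance_name, image_name, and the output file name are placeholders, and the subsequent upload of the image into the new environment is not shown here:

$ openstack server image create instance_name --name image_name
$ openstack image save image_name --file exported_vm.qcow2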

These commands copy each VM disk from the existing OpenStack environment into the new Red Hat OpenStack Platform environment. Snapshots that use QCOW lose their original layering system.

8.12. Migrating instances from a Compute node

In some situations, you might perform maintenance on an overcloud Compute node. To prevent downtime, migrate the VMs on the Compute node to another Compute node in the overcloud.

The director configures all Compute nodes to provide secure migration. All Compute nodes also require a shared SSH key to provide each host’s nova user with access to other Compute nodes during the migration process. The director creates this key using the OS::TripleO::Services::NovaCompute composable service. This composable service is one of the main services included on all Compute roles by default (see "Composable Services and Custom Roles" in Advanced Overcloud Customization).

This process migrates all instances from a Compute node. You can now perform maintenance on the node without any instance downtime. To return the Compute node to an enabled state, run the following command:
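For example, a sketch of re-enabling the Compute service on a node; the hostname is a placeholder that you replace with the fully qualified name of your Compute node:

(overcloud) $ openstack compute service set overcloud-compute-0.localdomain nova-compute --enable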

8.13. Protecting the Overcloud from Removal

Heat contains a set of default policies in code that you can override by creating /etc/heat/policy.json and adding customized rules. Add the following policy to deny everyone permission to delete the overcloud:

{"stacks:delete": "rule:deny_everybody"}

This prevents removal of the overcloud with the heat client. To allow removal of the overcloud, delete the custom policy and save /etc/heat/policy.json.

Once the removal completes, follow the standard steps in the deployment scenarios to recreate your overcloud.

8.15. Review the Token Flush Interval

The Identity Service (keystone) uses a token-based system for access control against the other OpenStack services. Over time, the database accumulates a large number of unused tokens. A default cron job flushes the token table every day. It is recommended that you monitor your environment and adjust the token flush interval as needed.

To adjust the interval, include the KeystoneCronToken parameter in an environment file. For more information, see the Overcloud Parameters guide.

Chapter 9. Configuring the overcloud with Ansible

Ansible is the main method to apply the overcloud configuration. This chapter provides steps on how to interact with the overcloud’s Ansible configuration.

Although the director generates the Ansible playbooks automatically, it is a good idea to familiarize yourself with Ansible syntax. See https://docs.ansible.com/ for more information about how to use Ansible.

Note

Ansible also uses the concept of roles, which are different from OpenStack Platform director roles.

9.1. Ansible-based overcloud configuration (config-download)

The config-download feature is the director’s method of configuring the overcloud. The director uses config-download in conjunction with OpenStack Orchestration (heat) and OpenStack Workflow Service (mistral) to generate the software configuration and apply the configuration to each overcloud node. Although Heat creates all deployment data from SoftwareDeployment resources to perform the overcloud installation and configuration, Heat does not apply any of the configuration. Heat only provides the configuration data through the Heat API. When the director creates the stack, a Mistral workflow queries the Heat API to obtain the configuration data, generates a set of Ansible playbooks, and applies the playbooks to the overcloud.

As a result, when running the openstack overcloud deploy command, the following process occurs:

The director creates a new deployment plan based on openstack-tripleo-heat-templates and includes any environment files and parameters to customize the plan.

The director uses Heat to interpret the deployment plan and create the overcloud stack and all descendant resources. This includes provisioning nodes through OpenStack Bare Metal (ironic).

Heat also creates the software configuration from the deployment plan. The director compiles the Ansible playbooks from this software configuration.

The director generates a temporary user (tripleo-admin) on the overcloud nodes specifically for Ansible SSH access.

The director downloads the Heat software configuration and generates a set of Ansible playbooks using Heat outputs.

The director applies the Ansible playbooks to the overcloud nodes using ansible-playbook.

9.2. config-download working directory

The director generates a set of Ansible playbooks for the config-download process. These playbooks are stored in a working directory within /var/lib/mistral/. The directory is named after the overcloud, which is overcloud by default.

The working directory contains a set of sub-directories named after each overcloud role. These sub-directories contain all tasks relevant to the configuration of the nodes in the overcloud role. These sub-directories also contain additional sub-directories named after each specific node. These sub-directories contain node-specific variables to apply to the overcloud role tasks. As a result, the overcloud roles within the working directory use the following structure:
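As a sketch, assuming an overcloud named overcloud with the default Controller and Compute roles, the layout looks similar to the following; the role and node directory names vary with your roles_data file and hostnames:

/var/lib/mistral/overcloud/Controller/
/var/lib/mistral/overcloud/Controller/overcloud-controller-0/
/var/lib/mistral/overcloud/Compute/
/var/lib/mistral/overcloud/Compute/overcloud-novacompute-0/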

Each working directory is a local Git repository that records changes after each deployment operation. This helps you track configuration changes between each deployment.

9.3. Enabling access to config-download working directories

The mistral user in the OpenStack Workflow Service (mistral) containers owns all files in the /var/lib/mistral/ working directories. You can grant the stack user on the undercloud access to all files in this directory. This helps with performing certain operations within the directory.

Procedure

Use the setfacl command to grant the stack user on the undercloud access to the files in the /var/lib/mistral directory:

$ sudo setfacl -R -m u:stack:rwx /var/lib/mistral

This command retains mistral user access to the directory.

9.4. Checking config-download log

During the config-download process, Ansible creates a log file on the undercloud in the config-download working directory.

Procedure

View the log with the less command within the config-download working directory. The following example uses the overcloud working directory:

$ less /var/lib/mistral/overcloud/ansible.log

9.5. Running config-download manually

The working directory in /var/lib/mistral/overcloud contains the playbooks and scripts necessary to interact with ansible-playbook directly. This procedure shows how to interact with these files.

Procedure

Change to the directory of the Ansible playbook:

$ cd /var/lib/mistral/overcloud/

Run the ansible-playbook-command.sh command to reproduce the deployment:

$ ./ansible-playbook-command.sh

You can pass additional Ansible arguments to this script, which are then passed unchanged to the ansible-playbook command. This makes it possible to take further advantage of Ansible features, such as check mode (--check), limiting hosts (--limit), or overriding variables (-e). For example:

$ ./ansible-playbook-command.sh --limit Controller

The working directory contains a playbook called deploy_steps_playbook.yaml, which runs the overcloud configuration. To view this playbook, run the following command:

$ less deploy_steps_playbook.yaml

The playbook uses various task files contained within the working directory. Some task files are common to all OpenStack Platform roles and some are specific to certain OpenStack Platform roles and servers.

The working directory also contains sub-directories that correspond to each role defined in your overcloud’s roles_data file. For example:

$ ls Controller/

Each OpenStack Platform role directory also contains sub-directories for individual servers of that role type. The directories use the composable role hostname format. For example:

$ ls Controller/overcloud-controller-0

The Ansible tasks are tagged. To see the full list of tags, use the --list-tags CLI argument with ansible-playbook:
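For example, a sketch run from the config-download working directory, using the inventory file stored there:

$ ansible-playbook -i tripleo-ansible-inventory.yaml --list-tags deploy_steps_playbook.yaml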

Then apply tagged configuration using the --tags, --skip-tags, or --start-at-task arguments with the ansible-playbook-command.sh script. For example:

$ ./ansible-playbook-command.sh --tags overcloud

When config-download configures Ceph, Ansible executes ceph-ansible from within the config-download external_deploy_steps_tasks playbook. When you run config-download manually, the second Ansible execution does not inherit the ssh_args argument. To pass Ansible environment variables to this execution, use a heat environment file. For example:
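A sketch of such an environment file; the specific Ansible variables shown here are assumptions that you adjust to the settings you need to pass to the ceph-ansible execution:

parameter_defaults:
  CephAnsibleEnvironmentVariables:
    ANSIBLE_HOST_KEY_CHECKING: 'False'
    ANSIBLE_PRIVATE_KEY_FILE: '/home/stack/.ssh/id_rsa'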

When using ansible-playbook CLI arguments such as --tags, --skip-tags, or --start-at-task, do not run or apply deployment configuration out of order. These CLI arguments are a convenient way to rerun previously failed tasks or to iterate over an initial deployment. However, to guarantee a consistent deployment, you must run all tasks from deploy_steps_playbook.yaml in order.

9.6. Performing Git operations on the working directory

The config-download working directory is a local Git repository. Each time a deployment operation runs, the director adds a Git commit to the working directory with the relevant changes. You can perform Git operations to view configuration for the deployment at different stages and compare the configuration with different deployments.

Be aware of the limitations of the working directory. For example, using Git to revert to a previous version of the config-download working directory only affects the configuration in the working directory. It does not affect the following configurations:

The overcloud data schema: Applying a previous version of the working directory software configuration does not undo data migration and schema changes.

The hardware layout of the overcloud: Reverting to previous software configuration does not undo changes related to overcloud hardware, such as scaling up or down.

The Heat stack: Reverting to earlier revisions of the working directory has no effect on the configuration stored in the Heat stack. The Heat stack creates a new version of the software configuration that applies to the overcloud. To make permanent changes to the overcloud, modify the environment files applied to the overcloud stack prior to rerunning openstack overcloud deploy.

Complete the following steps to compare different commits of the config-download working directory.

Procedure

Change to the config-download working directory for your overcloud. In this case, the working directory is for the overcloud named overcloud:

$ cd /var/lib/mistral/overcloud

Run the git log command to list the commits in your working directory. You can also format the log output to show the date:
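For example, a command along these lines prints each commit hash, commit date, and subject:

$ git log --format=format:"%h%x09%cd%x09%s"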

Run the git diff command against two commit hashes to see all changes between the deployments:

$ git diff a7e9063 dfb9d12

9.7. Creating config-download files manually

In certain circumstances, you might generate your own config-download files outside of the standard workflow. For example, you can generate the overcloud Heat stack using the --stack-only option with the openstack overcloud deploy command so that you can apply the configuration separately. Complete the following steps to create your own config-download files manually.

9.8. config-download top level files

The following files are important top-level files within a config-download working directory.

Ansible configuration and execution

The following files are specific to configuring and executing Ansible within the config-download working directory.

ansible.cfg

Configuration file used when running ansible-playbook.

ansible.log

Log file from the last run of ansible-playbook.

ansible-errors.json

JSON-structured file that contains any deployment errors.

ansible-playbook-command.sh

Executable script to rerun the ansible-playbook command from the last deployment operation.

ssh_private_key

Private SSH key that Ansible uses to access the overcloud nodes.

tripleo-ansible-inventory.yaml

Ansible inventory file that contains hosts and variables for all the overcloud nodes.

overcloud-config.tar.gz

Archive of the working directory.

Playbooks

The following files are playbooks within the config-download working directory.

deploy_steps_playbook.yaml

Main deployment steps. This playbook performs the main configuration operations for your overcloud.

pre_upgrade_rolling_steps_playbook.yaml

Pre-upgrade steps for the major upgrade.

upgrade_steps_playbook.yaml

Major upgrade steps.

post_upgrade_steps_playbook.yaml

Post-upgrade steps for the major upgrade.

update_steps_playbook.yaml

Minor update steps.

fast_forward_upgrade_playbook.yaml

Fast forward upgrade tasks. Use this playbook only when upgrading from one long-life version of OpenStack Platform to the next. Do not use this playbook for this release of OpenStack Platform.

9.9. config-download tags

The playbooks use tagged tasks to control the tasks applied to the overcloud. Use tags with the ansible-playbook CLI arguments --tags or --skip-tags to control which tasks to execute. The following list contains information about the tags that are enabled by default:

facts

Fact gathering operations.

common_roles

Ansible roles common to all nodes.

overcloud

All plays for overcloud deployment.

pre_deploy_steps

Deployments that happen before the deploy_steps operations.

host_prep_steps

Host preparation steps.

deploy_steps

Deployment steps.

post_deploy_steps

Steps that happen after the deploy_steps operations.

external

All external deployment tasks.

external_deploy_steps

External deployment tasks that run on the undercloud only.

9.10. config-download deployment steps

The deploy_steps_playbook.yaml playbook is used to configure the overcloud. This playbook applies all software configuration necessary to deploy a full overcloud based on the overcloud deployment plan.

This section contains a summary of the different Ansible plays used within this playbook. The play names in this section are the same names used within the playbook and displayed in the ansible-playbook output. This section also contains information about the Ansible tags that are set on each play.

Gather facts from undercloud

Fact gathering for the undercloud node.

Tags: facts

Gather facts from overcloud

Fact gathering for the overcloud nodes.

Tags: facts

Load global variables

Loads all variables from global_vars.yaml.

Tags: always

Common roles for TripleO servers

Applies common Ansible roles to all overcloud nodes, including tripleo-bootstrap for installing bootstrap packages and tripleo-ssh-known-hosts for configuring SSH known hosts.

Tags: common_roles

External deployment step [1,2,3,4,5]

Applies tasks from the external_deploy_steps_tasks template interface. Ansible runs these tasks against the undercloud node only.

Tags: external, external_deploy_steps

Overcloud deploy step tasks for [1,2,3,4,5]

Applies tasks from the deploy_steps_tasks template interface.

Tags: overcloud, deploy_steps

Overcloud common deploy step tasks [1,2,3,4,5]

Applies the common tasks performed at each step, including puppet host configuration, docker-puppet.py, and paunch (container configuration).

Tags: overcloud, deploy_steps

Server Post Deployments

Applies server specific Heat deployments for configuration performed after the 5-step deployment process.

Tags: overcloud, post_deploy_steps

External deployment Post Deploy tasks

Applies tasks from the external_post_deploy_steps_tasks template interface. Ansible runs these tasks against the undercloud node only.

Tags: external, external_deploy_steps

9.11. Next Steps

You can now continue your regular overcloud operations.

Chapter 10. Scaling overcloud nodes

Warning

Do not use openstack server delete to remove nodes from the overcloud. Read the procedures defined in this section to properly remove and replace nodes.

There might be situations where you need to add or remove nodes after the creation of the overcloud. For example, you might need to add more Compute nodes to the overcloud. This situation requires updating the overcloud.

Use the following table to determine support for scaling each node type:

Scaling the overcloud requires that you edit the environment file that contains your node counts and re-deploy the overcloud. For example, to scale your overcloud to 5 Compute nodes, edit the ComputeCount parameter:

parameter_defaults:
...
ComputeCount: 5
...

Rerun the deployment command with the updated file, which in this example is called node-info.yaml:
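One possible form of the command, assuming node-info.yaml is in /home/stack/templates; substitute your own environment files and options:

$ source ~/stackrc
(undercloud) $ openstack overcloud deploy --templates \
  -e /home/stack/templates/node-info.yaml \
  [OTHER ENVIRONMENT FILES AND OPTIONS]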

If you passed any extra environment files when you created the overcloud, pass them here again using the -e or --environment-file option to avoid making undesired manual changes to the overcloud.

Ensure that the openstack overcloud node delete command runs to completion before you continue. Use the openstack stack list command and check that the overcloud stack has reached an UPDATE_COMPLETE status.

Important

If you intend to redeploy the Compute service using the same host name, then you need to use the existing service records for the redeployed node. If this is the case, skip the remaining steps in this procedure, and proceed with the instructions detailed in Redeploying the Compute service using the same host name.

10.4. Replacing Ceph Storage nodes

10.5. Replacing Object Storage nodes

Follow the instructions in this section to understand how to replace Object Storage nodes while maintaining the integrity of the cluster. This example involves a three-node Object Storage cluster in which the node overcloud-objectstorage-1 must be replaced. The goal of the procedure is to add one more node and then remove overcloud-objectstorage-1, effectively replacing it.

Procedure

Increase the Object Storage count using the ObjectStorageCount parameter. This parameter is usually located in node-info.yaml, which is the environment file containing your node counts:

parameter_defaults:
ObjectStorageCount: 4

The ObjectStorageCount parameter defines the quantity of Object Storage nodes in your environment. This example scales from 3 to 4 nodes.

Run the deployment command with the updated ObjectStorageCount parameter:
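For illustration, assuming node-info.yaml contains the updated count and that you pass your other environment files as before:

(undercloud) $ openstack overcloud deploy --templates -e /home/stack/templates/node-info.yaml [OTHER ENVIRONMENT FILES AND OPTIONS]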

After the deployment command completes, the overcloud contains an additional Object Storage node.

Replicate data to the new node. Before removing a node (in this case, overcloud-objectstorage-1), wait for a replication pass to finish on the new node. Check the replication pass progress in the /var/log/swift/swift.log file. When the pass finishes, the Object Storage service should log entries similar to the following example:

To remove the old node from the ring, reduce the ObjectStorageCount parameter to omit the old node. In this case, reduce it to 3:

parameter_defaults:
ObjectStorageCount: 3

Create a new environment file named remove-object-node.yaml. This file identifies and removes the specified Object Storage node. The following content specifies the removal of overcloud-objectstorage-1:
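A possible form of this file, assuming the ObjectStorageRemovalPolicies parameter and that the trailing index 1 in the node name maps to resource index 1:

parameter_defaults:
  ObjectStorageRemovalPolicies:
    [{'resource_list': ['1']}]

Include this file together with node-info.yaml and the rest of your environment files when you rerun the deployment command.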

The director deletes the Object Storage node from the overcloud and updates the rest of the nodes on the overcloud to accommodate the node removal.

Important

Make sure to include all environment files and options from your initial overcloud creation. This includes the same scale parameters for non-Compute nodes.

10.6. Blacklisting nodes

You can exclude overcloud nodes from receiving an updated deployment. This is useful in scenarios where you aim to scale new nodes while excluding existing nodes from receiving an updated set of parameters and resources from the core Heat template collection. In other words, the blacklisted nodes are isolated from the effects of the stack operation.

Use the DeploymentServerBlacklist parameter in an environment file to create a blacklist.

Setting the Blacklist

The DeploymentServerBlacklist parameter is a list of server names. Write a new environment file, or add the parameter value to an existing custom environment file and pass the file to the deployment command:
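For example, an environment file along these lines, where the host names are placeholders for the servers you want to exclude:

parameter_defaults:
  DeploymentServerBlacklist:
    - overcloud-compute-0
    - overcloud-compute-1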

Heat blacklists any servers in the list from receiving updated Heat deployments. After the stack operation completes, any blacklisted servers remain unchanged. You can also power off or stop the os-collect-config agents during the operation.

Warning

Exercise caution when blacklisting nodes. Only use a blacklist if you fully understand how to apply the requested change with a blacklist in effect. It is possible to create a hung stack or configure the overcloud incorrectly using the blacklist feature. For example, if a cluster configuration change applies to all members of a Pacemaker cluster, blacklisting a Pacemaker cluster member during this change can cause the cluster to fail.

Do not use the blacklist during update or upgrade procedures. Those procedures have their own methods for isolating changes to particular servers. See the Upgrading Red Hat OpenStack Platform documentation for more information.

When you add servers to the blacklist, further changes to those nodes are not supported until you remove the servers from the blacklist. This includes updates, upgrades, scale up, scale down, and node replacement.

Clearing the Blacklist

To clear the blacklist for subsequent stack operations, edit the DeploymentServerBlacklist to use an empty array:

parameter_defaults:
DeploymentServerBlacklist: []

Warning

Do not just omit the DeploymentServerBlacklist parameter. If you omit the parameter, the overcloud deployment uses the previously saved value.

Chapter 11. Replacing Controller Nodes

In certain circumstances a Controller node in a high availability cluster might fail. In these situations, you must remove the node from the cluster and replace it with a new Controller node.

Complete the steps in this section to replace a Controller node. The Controller node replacement process involves running the openstack overcloud deploy command to update the overcloud with a request to replace a Controller node.

Important

The following procedure applies only to high availability environments. Do not use this procedure if you use only one Controller node.

11.1. Preparing for Controller replacement

Before attempting to replace an overcloud Controller node, it is important to check the current state of your Red Hat OpenStack Platform environment. Checking the current state can help avoid complications during the Controller replacement process. Use the following list of preliminary checks to determine if it is safe to perform a Controller node replacement. Run all commands for these checks on the undercloud.

Procedure

Check the current status of the overcloud stack on the undercloud:

$ source stackrc
(undercloud) $ openstack stack list --nested

The overcloud stack and its subsequent child stacks should have either a CREATE_COMPLETE or UPDATE_COMPLETE status.

11.2. Removing a Ceph Monitor daemon

Follow this procedure to remove a ceph-mon daemon from the storage cluster. If your Controller node is running a Ceph monitor service, complete the following steps to remove the ceph-mon daemon. This procedure assumes the Controller is reachable.

Note

Adding a new Controller to the cluster also adds a new Ceph monitor daemon automatically.

Procedure

Connect to the Controller you want to replace and become root:

# ssh heat-admin@192.168.0.47
# sudo su -

Note

If the controller is unreachable, skip steps 1 and 2 and continue the procedure at step 3 on any working controller node.

As root, stop the monitor:

# systemctl stop ceph-mon@<monitor_hostname>

For example:

# systemctl stop ceph-mon@overcloud-controller-1

Disconnect from the controller to be replaced.

Connect to one of the existing controllers.

# ssh heat-admin@192.168.0.46
# sudo su -

Remove the monitor from the cluster:

# ceph mon remove <mon_id>

On all Controller nodes, remove the monitor entry from /etc/ceph/ceph.conf. For example, if you remove controller-1, then remove the IP and hostname for controller-1.

The director updates the ceph.conf file on the relevant overcloud nodes when you add the replacement controller node. Normally, director manages this configuration file exclusively and you should not edit the file manually. However, you can edit the file manually to ensure consistency in case the other nodes restart before you add the new node.

Optionally, archive the monitor data and save the archive on another server:
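For example, a command along these lines, where <cluster> and <daemon_id> are placeholders for your cluster name and monitor ID:

# mv /var/lib/ceph/mon/<cluster>-<daemon_id> /var/lib/ceph/mon/removed-<cluster>-<daemon_id>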

In case the old node is physically unavailable or stopped, it is not necessary to perform the previous operation, as pacemaker is already stopped on that node.

After stopping Pacemaker on the old node, delete the old node from the Corosync configuration on each node and restart Corosync. To check the status of Pacemaker on the old node, run the pcs status command and verify that the status is Stopped.

The following example command logs in to overcloud-controller-0 and overcloud-controller-2 to remove overcloud-controller-1:
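One possible form of this command, assuming heat-admin SSH access and that the Controller IP addresses resolve through openstack server list; adjust the node names to your environment:

(undercloud) $ for NAME in overcloud-controller-0 overcloud-controller-2; do IP=$(openstack server list -c Networks -f value --name $NAME | cut -d "=" -f 2); ssh heat-admin@$IP "sudo pcs cluster localnode remove overcloud-controller-1; sudo pcs cluster reload corosync"; done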

The overcloud database must continue to run during the replacement procedure. To ensure Pacemaker does not stop Galera during this procedure, select a running Controller node and run the following command on the undercloud using the Controller node’s IP address:
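For example, a command along these lines, where 192.168.0.47 stands in for the running Controller node's IP address and the Galera resource name (galera-bundle here) might differ in your deployment:

(undercloud) $ ssh heat-admin@192.168.0.47 "sudo pcs resource unmanage galera-bundle"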

11.4. Replacing a Controller node

To replace a Controller node, identify the index of the node that you want to replace.

If the node is a virtual node, identify the node that contains the failed disk and restore the disk from a backup. Ensure that the MAC address of the NIC used for PXE boot on the failed server remains the same after disk replacement.

If the node is a bare metal node, replace the disk, prepare the new disk with your overcloud configuration, and perform a node introspection on the new hardware.

Complete the following example steps to replace the overcloud-controller-1 node with the overcloud-controller-3 node. The overcloud-controller-3 node has the ID 75b25e9a-948d-424a-9b3b-f0ef70a6eacf.

Important

To replace the node with an existing ironic node, enable maintenance mode on the outgoing node so that the director does not automatically reprovision the node.

Chapter 12. Rebooting Nodes

You might need to reboot the nodes in the undercloud and overcloud. Use the following procedures to understand how to reboot different node types. Be aware of the following notes:

If rebooting all nodes in one role, it is advisable to reboot each node individually. If you reboot all nodes in a role simultaneously, you might encounter service downtime during the reboot operation.

If rebooting all nodes in your OpenStack Platform environment, reboot the nodes in the following sequential order:

Recommended Node Reboot Order

Reboot the undercloud node

Reboot Controller and other composable nodes

Reboot standalone Ceph MON nodes

Reboot Ceph Storage nodes

Reboot Compute nodes

12.1. Rebooting the undercloud node

Complete the following steps to reboot the undercloud node.

Procedure

Log into the undercloud as the stack user.

Reboot the undercloud:

$ sudo reboot

Wait until the node boots.

12.2. Rebooting controller and composable nodes

Complete the following steps to reboot controller nodes and standalone nodes based on composable roles, excluding Compute nodes and Ceph Storage nodes.

Procedure

Select a node to reboot. Log into the node and stop the cluster before rebooting:

[heat-admin@overcloud-controller-0 ~]$ sudo pcs cluster stop

Reboot the node:

[heat-admin@overcloud-controller-0 ~]$ sudo reboot

Wait until the node boots.

Re-enable the cluster for the node:

[heat-admin@overcloud-controller-0 ~]$ sudo pcs cluster start

Log into the node and check the services:

If the node uses Pacemaker services, check the node has rejoined the cluster:

[heat-admin@overcloud-controller-0 ~]$ sudo pcs status

If the node uses Systemd services, check all services are enabled:

[heat-admin@overcloud-controller-0 ~]$ sudo systemctl status

If the node uses containerized services, check all containers on the node are active:

[heat-admin@overcloud-controller-0 ~]$ sudo docker ps

12.3. Rebooting standalone Ceph MON nodes

Procedure

Log into a Ceph MON node.

Reboot the node:

$ sudo reboot

Wait until the node boots and rejoins the MON cluster.

Repeat these steps for each MON node in the cluster.

12.4. Rebooting a Ceph Storage (OSD) cluster

Complete the following steps to reboot a cluster of Ceph Storage (OSD) nodes.
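The initial steps of this procedure are not shown here. Typically you log in to a Ceph MON or Controller node and temporarily disable cluster rebalancing before rebooting the first Ceph Storage node; these commands mirror the unset commands later in this procedure:

$ sudo ceph osd set noout
$ sudo ceph osd set norebalance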

Log out of the node, reboot the next node, and check its status. Repeat this process until you have rebooted all Ceph storage nodes.

When complete, log into a Ceph MON or Controller node and enable cluster rebalancing again:

$ sudo ceph osd unset noout
$ sudo ceph osd unset norebalance

Perform a final status check to verify the cluster reports HEALTH_OK:

$ sudo ceph status

12.5. Rebooting compute nodes

Complete the following steps to reboot Compute nodes. To ensure minimal downtime of instances in your OpenStack Platform environment, this procedure also includes instructions about migrating instances from the Compute node you want to reboot. This involves the following workflow:

Select and disable the Compute node you want to reboot so that it does not provision new instances.

After the introspection completes, the node changes to an available state.

13.2. Performing Node Introspection after Initial Introspection

After an initial introspection, all nodes should enter an available state due to the --provide option. To perform introspection on all nodes after the initial introspection, set all nodes to a manageable state and run the bulk introspection command:
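For example, commands along these lines, assuming you want to re-introspect every registered node:

(undercloud) $ for node in $(openstack baremetal node list --fields uuid -f value) ; do openstack baremetal node manage $node ; done
(undercloud) $ openstack overcloud node introspect --all-manageable --provide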

After the introspection completes, all nodes change to an available state.

13.3. Performing Network Introspection for Interface Information

Network introspection retrieves link layer discovery protocol (LLDP) data from network switches. The following commands show a subset of LLDP information for all interfaces on a node, or full information for a particular node and interface. This can be useful for troubleshooting. The director enables LLDP data collection by default.
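For example, commands along these lines, where [NODE UUID] and [INTERFACE] are placeholders for your node and interface names:

(undercloud) $ openstack baremetal introspection interface list [NODE UUID]
(undercloud) $ openstack baremetal introspection interface show [NODE UUID] [INTERFACE]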

The Bare Metal service hardware inspection extras (inspection_extras) is enabled by default to retrieve hardware details. You can use these hardware details to configure your overcloud. See Configuring the Director for details on the inspection_extras parameter in the undercloud.conf file.

For example, the numa_topology collector is part of these hardware inspection extras and includes the following information for each NUMA node:

RAM (in kilobytes)

Physical CPU cores and their sibling threads

NICs associated with the NUMA node

Use the openstack baremetal introspection data save <UUID> | jq .numa_topology command to retrieve this information, replacing <UUID> with the UUID of the bare-metal node.

The following example shows the retrieved NUMA information for a bare-metal node:

Chapter 14. Automatically Discover Bare Metal Nodes

You can use auto-discovery to register overcloud nodes and generate their metadata, without first having to create an instackenv.json file. This improvement can help reduce the time spent initially collecting information about a node. For example, auto-discovery removes the need to collate the IPMI IP addresses and subsequently create the instackenv.json file.

14.1. Requirements

The BMCs of all overcloud nodes must be configured to be accessible to the director through IPMI.

All overcloud nodes must be configured to PXE boot from the NIC connected to the undercloud control plane network.

14.2. Enable Auto-discovery

Enable Bare Metal auto-discovery in undercloud.conf:

enable_node_discovery = True
discovery_default_driver = ipmi

enable_node_discovery - When enabled, any node that boots the introspection ramdisk using PXE will be enrolled in ironic.

discovery_default_driver - Sets the driver to use for discovered nodes. For example, ipmi.

Add your IPMI credentials to ironic:

Add your IPMI credentials to a file named ipmi-credentials.json. You must replace the username and password values in this example to suit your environment:
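A minimal sketch of such a rules file, assuming the auto_discovered introspection flag and standard set-attribute actions; substitute your own credentials:

[
    {
        "description": "Set default IPMI credentials",
        "conditions": [
            {"op": "eq", "field": "data://auto_discovered", "value": true}
        ],
        "actions": [
            {"action": "set-attribute", "path": "driver_info/ipmi_username", "value": "SampleUsername"},
            {"action": "set-attribute", "path": "driver_info/ipmi_password", "value": "RedactedSecurePassword"},
            {"action": "set-attribute", "path": "driver_info/ipmi_address", "value": "{data[inventory][bmc_address]}"}
        ]
    }
]

You can then import the file into ironic with the openstack baremetal introspection rule import command, in the same way as the DRAC rules in the next section.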

14.4. Use Rules to Discover Different Vendor Hardware

If you have a heterogeneous hardware environment, you can use introspection rules to assign credentials and remote management credentials. For example, you might want a separate discovery rule to handle your Dell nodes that use DRAC:

You must replace the username and password values in this example to suit your environment:
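A possible sketch of such a rules file, assuming the system_vendor.manufacturer inventory field distinguishes Dell hardware; substitute your own credentials:

[
    {
        "description": "Set the vendor driver for Dell hardware",
        "conditions": [
            {"op": "eq", "field": "data://auto_discovered", "value": true},
            {"op": "eq", "field": "data://inventory.system_vendor.manufacturer", "value": "Dell Inc."}
        ],
        "actions": [
            {"action": "set-attribute", "path": "driver", "value": "idrac"},
            {"action": "set-attribute", "path": "driver_info/drac_username", "value": "SampleUsername"},
            {"action": "set-attribute", "path": "driver_info/drac_password", "value": "RedactedSecurePassword"},
            {"action": "set-attribute", "path": "driver_info/drac_address", "value": "{data[inventory][bmc_address]}"}
        ]
    }
]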

Import the rule into ironic:

$ openstack baremetal introspection rule import dell-drac-rules.json

Chapter 15. Creating virtualized control planes

This chapter explains how to virtualize the control plane using Red Hat OpenStack Platform and Red Hat Virtualization.

15.1. Virtualized control planes

A virtualized control plane is a control plane located on virtual machines (VMs) rather than on bare metal. A virtualized control plane reduces the number of bare metal machines required for the control plane.

You can virtualize your Red Hat OpenStack Platform control plane for the overcloud using Red Hat Virtualization, by deploying virtualized controllers as the control plane nodes. The OpenStack Platform director supports provisioning an overcloud using Controller nodes deployed in a Red Hat Virtualization cluster.

Note

Virtualized Controller nodes are supported only on Red Hat Virtualization.

To deploy a virtualized control plane, distribute the overcloud with the Controller nodes running on VMs on Red Hat Virtualization, and Compute and storage nodes on bare metal, as illustrated in the following architecture diagram.

The OpenStack Bare Metal Provisioning (ironic) service includes a driver for Red Hat Virtualization VMs, staging-ovirt, that you can use to manage virtual nodes within a Red Hat Virtualization environment. Use this driver to deploy overcloud controllers as virtual machines within a Red Hat Virtualization environment.

You can allocate resources to the virtualized controllers dynamically, using hot add and hot remove to scale CPU and memory as required, preventing downtime and facilitating increased capacity as the platform grows.

You can deploy additional infrastructure virtual machines on the same Red Hat Virtualization cluster, thereby minimizing the server footprint in the data center and maximizing efficiency of the physical nodes.

You can use composable roles to define more complex Red Hat OpenStack Platform control planes, allowing you to allocate resources to specific components of the control plane.

You can leverage the virtual machine live migration feature, and maintain systems without service interruption.

You can integrate third party or custom tools supported by Red Hat Virtualization.

Block Storage (cinder) image-to-volume is not supported for back ends that use Fibre Channel. Red Hat Virtualization does not support N_Port ID Virtualization (NPIV), therefore Block Storage (cinder) drivers that need to map LUNs from a storage back end to the controllers, where cinder-volume runs by default, will not work. Red Hat recommends creating a dedicated role for cinder-volume rather than including it on the virtualized controllers. See Composable Services and Custom Roles for details on how to do this.

Ensure that the virtualized Controller nodes are prepared. The requirements for virtualized Controller nodes are the same as for bare-metal Controller nodes. For more information, see Controller node requirements.

Register the VMs hosted on Red Hat Virtualization with director by specifying them in the overcloud node definition template, for instance, nodes.json. See Registering Nodes for the Overcloud for details. Use the following key:value pairs to define aspects of the virtual machines that you want to deploy with your overcloud. A sample node definition follows the list:

pm_type

Set to the OpenStack Bare Metal Provisioning (ironic) service driver for oVirt/RHV VMs, staging-ovirt.

pm_user

Set to the Red Hat Virtualization Manager username.

pm_password

Set to the Red Hat Virtualization Manager password.

pm_addr

Set to the hostname or IP of the Red Hat Virtualization Manager server.

pm_vm_name

Set to the name of the virtual machine in Red Hat Virtualization Manager where the controller is created.
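For illustration, a minimal nodes.json entry for one virtualized Controller; the names, addresses, credentials, and MAC address are placeholders:

{
    "nodes": [
        {
            "name": "overcloud-controller-0",
            "pm_type": "staging-ovirt",
            "pm_user": "admin@internal",
            "pm_password": "password",
            "pm_addr": "rhvm.example.com",
            "pm_vm_name": "controller-0",
            "mac": ["00:1a:4a:16:01:56"]
        }
    ]
}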

Configure an affinity group in Red Hat Virtualization with "soft negative affinity" to ensure high availability is implemented for your controller VMs. See Affinity Groups for details.

Map each VLAN to a separate logical vNIC in the controller VMs using the Red Hat Virtualization Manager interface.

Disable the MAC spoofing filter on the networks attached to the controller VMs by setting no_filter in the vNIC of the director and controller VMs, and restarting the VMs. See Virtual Network Interface Cards for further details.

Deploy the overcloud to include the new virtualized controller nodes in your environment:
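A minimal form of the deployment command, assuming your registered Red Hat Virtualization-hosted nodes and custom environment files are already in place:

(undercloud) $ openstack overcloud deploy --templates [YOUR ENVIRONMENT FILES AND OPTIONS]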

Chapter 16. Configuring Direct Deploy

When provisioning nodes, the director mounts the overcloud base operating system image on an iSCSI mount and then copies the image to disk on each node. Direct deploy is an alternative method that writes disk images from an HTTP location directly to disk on bare metal nodes.

16.1. Configuring the direct deploy interface on the undercloud

The iSCSI deploy interface is the default deploy interface. However, you can enable the direct deploy interface to download an image from an HTTP location to the target disk.

Note

Your overcloud node memory tmpfs must have at least 6GB of RAM.

Procedure

Create or modify a custom environment file /home/stack/undercloud_custom_env.yaml and specify the IronicDefaultDeployInterface parameter:

parameter_defaults:
IronicDefaultDeployInterface: direct

If you register your nodes with iscsi, you must retain the iscsi value in the IronicDefaultDeployInterface parameter:

parameter_defaults:
IronicDefaultDeployInterface: direct,iscsi

Include the custom environment file in the DEFAULT section of the undercloud.conf file:

custom_env_files = /home/stack/undercloud_custom_env.yaml

Perform the undercloud installation:

$ openstack undercloud install

Specify the deploy interface for a node:

$ openstack baremetal node set <NODE> --deploy-interface direct

Part V. Troubleshooting

Chapter 17. Troubleshooting Director Issues

Errors can occur at certain stages of the director’s processes. This section contains some information about diagnosing common problems.

Note the common logs for the director’s components:

The /var/log directory contains logs for many common OpenStack Platform components as well as logs for standard Red Hat Enterprise Linux applications.

ironic-inspector also stores the ramdisk logs in /var/log/ironic-inspector/ramdisk/ as gz-compressed tar files. Filenames contain date, time, and the IPMI address of the node. Use these logs to diagnose introspection issues.

17.1. Troubleshooting Node Registration

Issues with node registration usually arise from incorrect node details. In this case, use ironic to fix problems with the registered node data. Here are a few examples:
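For example, commands along these lines update the power management details of a registered node; [NODE UUID] and the credential values are placeholders:

$ openstack baremetal node set [NODE UUID] --driver-info ipmi_username=[USERNAME] --driver-info ipmi_password=[PASSWORD]
$ openstack baremetal node set [NODE UUID] --driver-info ipmi_address=[ADDRESS]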

17.2. Troubleshooting Hardware Introspection

The introspection process must run to completion. However, the ironic discovery daemon (ironic-inspector) times out after a default one hour period if the discovery ramdisk does not respond. Sometimes this might indicate a bug in the discovery ramdisk but usually this time-out occurs due to an environment misconfiguration, particularly BIOS boot settings.

This section contains information about common scenarios where environment misconfiguration occurs and advice about how to diagnose and resolve them.

Errors with Starting Node Introspection

Normally the introspection process uses the openstack overcloud node introspect command. However, if running the introspection directly with ironic-inspector, the introspection might fail to discover nodes in the AVAILABLE state, which is meant for deployment and not for discovery. In this situation, change the node status to the MANAGEABLE state before discovery:
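For example, a command along these lines, where [NODE UUID] is a placeholder for the node:

(undercloud) $ openstack baremetal node manage [NODE UUID]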

You can also wait until the process times out. If necessary, change the timeout setting in /etc/ironic-inspector/inspector.conf to another duration in minutes.

Accessing the Introspection Ramdisk

The introspection ramdisk uses a dynamic login element. This means you can provide either a temporary password or an SSH key to access the node during introspection debugging. Complete the following procedure to configure ramdisk access:

Run the openssl passwd -1 command with a temporary password to generate an MD5 hash:

$ openssl passwd -1 mytestpassword
$1$enjRSyIw$/fYUpJwr6abFy/d.koRgQ/

Edit the /httpboot/inspector.ipxe file, find the line starting with kernel, and append the rootpwd parameter and the MD5 hash. For example:
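For illustration, an abbreviated kernel line with the parameter appended; the URLs and the other arguments are placeholders for whatever your inspector.ipxe already contains, and the hash comes from the previous step:

kernel http://192.168.24.1:8088/agent.kernel ipa-inspection-callback-url=http://192.168.24.1:5050/v1/continue [EXISTING ARGUMENTS] rootpwd="$1$enjRSyIw$/fYUpJwr6abFy/d.koRgQ/" selinux=0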

Start the introspection and identify the IP address from either the arp command or the DHCP logs:

$ arp
$ sudo journalctl -u openstack-ironic-inspector-dnsmasq

SSH as a root user with the temporary password or the SSH key.

$ ssh root@192.168.24.105

Checking Introspection Storage

The director uses OpenStack Object Storage (swift) to save the hardware data obtained during the introspection process. If this service is not running, the introspection can fail. Check all services related to OpenStack Object Storage to ensure the service is running:

$ sudo docker ps --filter name=".*swift.*"

17.3. Troubleshooting Workflows and Executions

The OpenStack Workflow (mistral) service groups multiple OpenStack tasks into workflows. Red Hat OpenStack Platform uses a set of these workflows to perform common functions across the director, including bare metal node control, validations, plan management, and overcloud deployment.

For example, when you run the openstack overcloud deploy command, the OpenStack Workflow service executes two workflows: one uploads the deployment plan and another starts the overcloud deployment. OpenStack Workflow uses the following objects to keep track of the workflow state:

Actions

A particular instruction that OpenStack performs once an associated task runs. Examples include running shell scripts or performing HTTP requests. Some OpenStack components have in-built actions that OpenStack Workflow uses.

Tasks

Defines the action to run and the result of running the action. These tasks usually have actions or other workflows associated with them. Once a task completes, the workflow directs to another task, usually depending on whether the task succeeded or failed.

Workflows

A set of tasks grouped together and executed in a specific order.

Executions

Defines a particular action, task, or workflow running.

Workflow Error Diagnosis

OpenStack Workflow also provides robust logging of executions, which helps you identify issues with certain command failures. For example, if a workflow execution fails, you can identify the point of failure. List the workflow executions that are in the ERROR state:
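For example, a command along these lines:

$ openstack workflow execution list | grep "ERROR"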

These commands return information about the failed task in the execution. The openstack workflow execution show command also displays the workflow used for the execution (for example, tripleo.plan_management.v1.publish_ui_logs_to_swift). You can view the full workflow definition using the following command:
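For example, a command along these lines, using the workflow name shown in the execution details:

$ openstack workflow definition show tripleo.plan_management.v1.publish_ui_logs_to_swift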

17.4. Troubleshooting Overcloud Creation

The overcloud deployment can fail at one of three layers:

Orchestration (heat and nova services)

Bare Metal Provisioning (ironic service)

Post-Deployment Configuration (Ansible and Puppet)

If an overcloud deployment has failed at any of these levels, use the OpenStack clients and service log files to diagnose the failed deployment. You can also run the following command to display details of the failure:

$ openstack stack failures list <OVERCLOUD_NAME> --long

Replace <OVERCLOUD_NAME> with the name of your overcloud.

17.4.1. Accessing deployment command history

Understanding historical director deployment commands and arguments can be useful for troubleshooting and support. You can view this information in /home/stack/.tripleo/history.

17.4.2. Orchestration

In most cases, Heat shows the failed overcloud stack after the overcloud creation fails:
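For example, a command along these lines lists the overcloud stack and its child stacks so that you can look for a *_FAILED status:

$ source ~/stackrc
(undercloud) $ openstack stack list --nested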

If the stack list is empty, this indicates an issue with the initial Heat setup. Check your Heat templates and configuration options, and check for any error messages that appeared after you ran openstack overcloud deploy.

17.4.3. Bare Metal Provisioning

Check the bare metal service to see all registered nodes and their current status:
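For example, a command along these lines:

(undercloud) $ openstack baremetal node list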

Here are some common issues that can occur during the provisioning process:

Review the Provision State and Maintenance columns in the resulting table. Check for the following:

An empty table, or fewer nodes than you expect

Maintenance is set to True

Provision State is set to manageable. This usually indicates an issue with the registration or discovery processes. For example, if Maintenance sets itself to True automatically, the nodes are usually using the wrong power management credentials.

If Provision State is available, then the problem occurred before the bare metal deployment even started.

If Provision State is active and Power State is power on, the bare metal deployment has finished successfully. This means that the problem occurred during the post-deployment configuration step.

If Provision State is wait call-back for a node, the bare metal provisioning process has not yet finished for this node. Wait until this status changes; otherwise, connect to the virtual console of the failed node and check the output.

If Provision State is error or deploy failed, then bare metal provisioning has failed for this node. Check the bare metal node’s details:

(undercloud) $ openstack baremetal node show [NODE UUID]

Look for the last_error field, which contains the error description. If the error message is vague, you can use logs to clarify it:
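One way to do this, assuming the ironic services run as the systemd units named below; adjust for containerized services if necessary:

(undercloud) $ sudo journalctl -u openstack-ironic-conductor -u openstack-ironic-api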

17.5. Troubleshooting IP Address Conflicts on the Provisioning Network

Discovery and deployment tasks will fail if the destination hosts are allocated an IP address that is already in use. To prevent these failures, you can perform a port scan of the Provisioning network to determine whether the discovery IP range and host IP range are free.

Perform the following steps from the undercloud host:

Install nmap:

$ sudo yum install nmap

Use nmap to scan the IP address range for active addresses. This example scans the 192.168.24.0/24 range; replace this with the IP subnet of the Provisioning network (using CIDR bitmask notation):

$ sudo nmap -sn 192.168.24.0/24

Review the output of the nmap scan:

For example, you should see the IP address(es) of the undercloud, and any other hosts that are present on the subnet. If any of the active IP addresses conflict with the IP ranges in undercloud.conf, you will need to either change the IP address ranges or free up the IP addresses before introspecting or deploying the overcloud nodes.

17.6. Troubleshooting "No Valid Host Found" Errors

Sometimes the /var/log/nova/nova-conductor.log contains the following error:

NoValidHost: No valid host was found. There are not enough hosts available.

This error occurs when the Compute Scheduler cannot find a bare metal node suitable for booting the new instance. This usually means there is a mismatch between resources that the Compute service expects to find and resources that the Bare Metal service advertised to Compute. Check the following in this case:

Ensure that the introspection succeeds. If the introspection fails, check that each node contains the required ironic node properties:
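For example, a command along these lines displays the properties of a node; [NODE UUID] is a placeholder:

(undercloud) $ openstack baremetal node show [NODE UUID]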

Check that the properties JSON field has valid values for the cpus, cpu_arch, memory_mb, and local_gb keys.

Check that the Compute flavor used does not exceed the node properties above for a required number of nodes:

(undercloud) $ openstack flavor show [FLAVOR NAME]

Run the openstack baremetal node list command to ensure that there are sufficient nodes in the available state. Nodes in the manageable state usually signify a failed introspection.

Run the openstack baremetal node list command to check that the nodes are not in maintenance mode. If a node changes to maintenance mode automatically, the likely cause is an issue with incorrect power management credentials. Check the power management credentials and then remove maintenance mode:

(undercloud) $ openstack baremetal node maintenance unset [NODE UUID]

If you are using the Automated Health Check (AHC) tools to perform automatic node tagging, check that you have enough nodes corresponding to each flavor/profile. Run the openstack baremetal node show command on a node and check the capabilities key in the properties field. For example, a node tagged for the Compute role should contain profile:compute.

It takes some time for node information to propagate from Bare Metal to Compute after introspection. However, if you performed some steps manually, there might be a short period of time when nodes are not available to nova. Use the following command to check the total resources in your system:

(undercloud) $ openstack hypervisor stats show

17.7. Troubleshooting the Overcloud after Creation

After creating your overcloud, you might want to perform certain overcloud operations in the future. For example, you might want to scale your available nodes, or replace faulty nodes. Certain issues might arise when performing these operations. This section contains information to consider when diagnosing and troubleshooting failed post-creation operations.

17.7.1. Overcloud Stack Modifications

Problems can occur when you modify the overcloud stack through the director. Examples of stack modifications include the following operations:

Scaling Nodes

Removing Nodes

Replacing Nodes

Modifying the stack is similar to the process of creating the stack, in that the director checks the availability of the requested number of nodes, provisions additional nodes or removes existing nodes, and then applies the Puppet configuration. Use the guidelines in the following sections when you modify the overcloud stack. These sections contain information to consider when diagnosing issues on specific node types.

17.7.2. Controller Service Failures

The overcloud Controller nodes contain the bulk of Red Hat OpenStack Platform services. Likewise, you might use multiple Controller nodes in a high availability cluster. If a certain service on a node is faulty, the high availability cluster provides a certain level of failover. However, to ensure your overcloud operates at full capacity you must diagnose the faulty service.

The Controller nodes use Pacemaker to manage the resources and services in the high availability cluster. The Pacemaker Configuration System (pcs) command is a tool that manages a Pacemaker cluster. Run the pcs command on a Controller node in the cluster to perform configuration and monitoring functions. Use the following commands to troubleshoot overcloud services on a high availability cluster:

pcs status

Provides a status overview of the entire cluster including enabled resources, failed resources, and online nodes.

pcs resource show

Shows a list of resources and the respective nodes for each resource.

pcs resource disable [resource]

Stop a particular resource.

pcs resource enable [resource]

Start a particular resource.

pcs cluster standby [node]

Place a node in standby mode. The node is no longer available in the cluster. This is useful for performing maintenance on a specific node without affecting the cluster.

pcs cluster unstandby [node]

Remove a node from standby mode. The node becomes available in the cluster again.

Use these Pacemaker commands to identify the faulty component and/or node. After identifying the component, view the respective component log file in /var/log/.

17.7.3. Containerized Service Failures

If a containerized service fails during or after overcloud deployment, use the following commands to determine the root cause for the failure:

Checking the container logs

Each container retains standard output from its main process. Use this output as a log to help determine what actually occurs during a container run. For example, to view the log for the keystone container, use the following command:

$ sudo docker logs keystone

In most cases, this log contains information about the cause of a container’s failure.

Inspecting the container

In some situations, you might need to verify information about a container. For example, use the following command to view keystone container data:

$ sudo docker inspect keystone

This command returns a JSON object containing low-level configuration data. You can pipe the output to the jq command to parse specific data. For example, to view the container mounts for the keystone container, run the following command:

$ sudo docker inspect keystone | jq .[0].Mounts

You can also use the --format option to parse data to a single line, which is useful for running commands against sets of container data. For example, to recreate the options used to run the keystone container, use the following inspect command with the --format option:
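For illustration, a sketch of such a command that reconstructs the environment variables, volume mounts, and image of the keystone container; adjust the Go template to the fields you need:

$ sudo docker inspect --format='{{range .Config.Env}} -e "{{.}}" {{end}} {{range .Mounts}} -v {{.Source}}:{{.Destination}}{{if .Mode}}:{{.Mode}}{{end}}{{end}} -ti {{.Config.Image}}' keystone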

In some cases, you might need to obtain information from within a container through a specific Bash command. In this situation, use the following docker command to execute commands within a running container. For example, run the docker exec command to run a command inside the keystone container:

$ sudo docker exec -ti keystone <COMMAND>

Note

The -ti options run the command through an interactive pseudoterminal.

Replace <COMMAND> with the command you want to run. For example, each container has a health check script to verify the service connection. You can run the health check script for keystone with the following command:

$ sudo docker exec -ti keystone /openstack/healthcheck

To access the container’s shell, run docker exec using /bin/bash as the command you want to run inside the container:

$ sudo docker exec -ti keystone /bin/bash

Exporting a container

When a container fails, you might need to investigate the full contents of the file. In this case, you can export the full file system of a container as a tar archive. For example, to export the keystone container’s file system, run the following command:

$ sudo docker export keystone -o keystone.tar

This command creates the keystone.tar archive, which you can extract and explore.

17.7.4. Compute Service Failures

Compute nodes use the Compute service to perform hypervisor-based operations. This means the main diagnosis for Compute nodes revolves around this service. For example, to view the status of the nova_compute container, run the following command:

$ sudo docker ps -f name=nova_compute

The primary log file for Compute nodes is /var/log/containers/nova/nova-compute.log. If issues occur with Compute node communication, this log file is usually a good place to start a diagnosis.

17.9. Important Logs for Undercloud and Overcloud

Use the following logs to find out information about the undercloud and overcloud when troubleshooting.

Table 17.1. Important Logs for the Undercloud

Information | Log Location
OpenStack Compute log | /var/log/containers/nova/nova-compute.log
OpenStack Compute API interactions | /var/log/nova/nova-api.log
OpenStack Compute Conductor log | /var/log/nova/nova-conductor.log
OpenStack Orchestration log | heat-engine.log
OpenStack Orchestration API interactions | heat-api.log
OpenStack Orchestration CloudFormations log | /var/log/heat/heat-api-cfn.log
OpenStack Bare Metal Conductor log | ironic-conductor.log
OpenStack Bare Metal API interactions | ironic-api.log
Introspection | /var/log/ironic-inspector/ironic-inspector.log
OpenStack Workflow Engine log | /var/log/mistral/engine.log
OpenStack Workflow Executor log | /var/log/mistral/executor.log
OpenStack Workflow API interactions | /var/log/mistral/api.log

Table 17.2. Important Logs for the Overcloud

Information | Log Location
Cloud-Init log | /var/log/cloud-init.log
Overcloud configuration (summary of last Puppet run) | /var/lib/puppet/state/last_run_summary.yaml
Overcloud configuration (report from last Puppet run) | /var/lib/puppet/state/last_run_report.yaml
Overcloud configuration (all Puppet reports) | /var/lib/puppet/reports/overcloud-*/*
Overcloud configuration (stdout from each Puppet run) | /var/run/heat-config/deployed/*-stdout.log
Overcloud configuration (stderr from each Puppet run) | /var/run/heat-config/deployed/*-stderr.log
High availability log | /var/log/pacemaker.log

Part VI. Appendices

Appendix A. SSL/TLS Certificate Configuration

You can configure the undercloud to use SSL/TLS for communication over public endpoints. However, if you want to use an SSL certificate with your own certificate authority, you must complete the following configuration steps.

A.1. Initializing the Signing Host

The signing host is the host that generates and signs new certificates with a certificate authority. If you have never created SSL certificates on the chosen signing host, you might need to initialize the host so that it can sign new certificates.

The /etc/pki/CA/index.txt file contains records of all signed certificates. Check if this file exists. If it does not exist, create an empty file:

$ sudo touch /etc/pki/CA/index.txt

The /etc/pki/CA/serial file identifies the next serial number to use for the next certificate to sign. Check if this file exists. If the file does not exist, create a new file with a new starting value:

$ echo '1000' | sudo tee /etc/pki/CA/serial

A.2. Creating a Certificate Authority

Normally you sign your SSL/TLS certificates with an external certificate authority. In some situations, you might want to use your own certificate authority. For example, you might want to have an internal-only certificate authority.

Generate a key and certificate pair to act as the certificate authority:
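For example, commands along these lines create a 4096-bit CA key and a self-signed CA certificate; adjust the validity period and file names as required:

$ openssl genrsa -out ca.key.pem 4096
$ openssl req -key ca.key.pem -new -x509 -days 7300 -extensions v3_ca -out ca.crt.pem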

A.7. Using the Certificate with the Undercloud

Run the following command to combine the certificate and key:

$ cat server.crt.pem server.key.pem > undercloud.pem

This command creates an undercloud.pem file. Specify the location of this file for the undercloud_service_certificate option in your undercloud.conf file. This .pem file also requires a special SELinux context so that the HAProxy tool can read it. To configure the SELinux context, complete the following example steps:
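One possible sequence, assuming /etc/pki/instack-certs as the certificate directory:

$ sudo semanage fcontext -a -t etc_t "/etc/pki/instack-certs(/.*)?"
$ sudo mkdir -p /etc/pki/instack-certs
$ sudo cp ~/undercloud.pem /etc/pki/instack-certs/.
$ sudo restorecon -R /etc/pki/instack-certs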

In addition, ensure you add your certificate authority from Section A.2, “Creating a Certificate Authority” to the undercloud’s list of trusted Certificate Authorities so that different services within the undercloud have access to the certificate authority:
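For example, commands along these lines, assuming the CA certificate file from Section A.2 is named ca.crt.pem:

$ sudo cp ca.crt.pem /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust extract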

Appendix B. Power Management Drivers

Although IPMI is the main method the director uses for power management control, the director also supports other power management types. This appendix contains a list of the power management features that the director supports. Use these power management settings for Section 6.1, “Registering Nodes for the Overcloud”.

B.1. Intelligent Platform Management Interface (IPMI)

The standard power management method using a baseboard management controller (BMC).

pm_type

Set this option to ipmi.

pm_user; pm_password

The IPMI username and password.

pm_addr

The IP address of the IPMI controller.

pm_port (Optional)

The port to connect to the IPMI controller.

B.2. Redfish

A standard RESTful API for IT infrastructure developed by the Distributed Management Task Force (DMTF).

pm_type

Set this option to redfish.

pm_user; pm_password

The Redfish username and password.

pm_addr

The IP address of the Redfish controller.

pm_system_id

The canonical path to the system resource. This path must include the root service, version, and the path/unique ID for the system. For example: /redfish/v1/Systems/CX34R87.

B.3. Dell Remote Access Controller (DRAC)

DRAC is an interface that provides out-of-band remote management features including power management and server monitoring.

pm_type

Set this option to idrac.

pm_user; pm_password

The DRAC username and password.

pm_addr

The IP address of the DRAC host.

B.4. Integrated Lights-Out (iLO)

iLO from Hewlett-Packard is an interface that provides out-of-band remote management features including power management and server monitoring.

pm_type

Set this option to ilo.

pm_user; pm_password

The iLO username and password.

pm_addr

The IP address of the iLO interface.

To enable this driver, add ilo to the enabled_hardware_types option in your undercloud.conf and rerun openstack undercloud install.

The director also requires an additional set of utilities for iLO. Install the python-proliantutils package and restart the openstack-ironic-conductor service:
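For example, commands along these lines:

$ sudo yum install python-proliantutils
$ sudo systemctl restart openstack-ironic-conductor.service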

HP nodes must have a minimum iLO firmware version of 1.85 (May 13 2015) for successful introspection. The director has been successfully tested with nodes using this iLO firmware version.

Using a shared iLO port is not supported.

B.5. Cisco Unified Computing System (UCS)

UCS from Cisco is a data center platform that combines compute, network, storage access, and virtualization resources. This driver focuses on the power management for bare metal systems connected to the UCS.

pm_type

Set this option to cisco-ucs-managed.

pm_user; pm_password

The UCS username and password.

pm_addr

The IP address of the UCS interface.

pm_service_profile

The UCS service profile to use. Usually takes the format of org-root/ls-[service_profile_name]. For example:

"pm_service_profile": "org-root/ls-Nova-1"

To enable this driver, add cisco-ucs-managed to the enabled_hardware_types option in your undercloud.conf and rerun the openstack undercloud install command.

The director also requires an additional set of utilities for UCS. Install the python-UcsSdk package and restart the openstack-ironic-conductor service:
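For example, commands along these lines:

$ sudo yum install python-UcsSdk
$ sudo systemctl restart openstack-ironic-conductor.service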

B.6. Fujitsu Integrated Remote Management Controller (iRMC)

Fujitsu’s iRMC is a Baseboard Management Controller (BMC) with integrated LAN connection and extended functionality. This driver focuses on the power management for bare metal systems connected to the iRMC.

Important

iRMC S4 or higher is required.

pm_type

Set this option to irmc.

pm_user; pm_password

The username and password for the iRMC interface.

pm_addr

The IP address of the iRMC interface.

pm_port (Optional)

The port to use for iRMC operations. The default is 443.

pm_auth_method (Optional)

The authentication method for iRMC operations. Use either basic or digest. The default is basic.

pm_client_timeout (Optional)

Timeout (in seconds) for iRMC operations. The default is 60 seconds.

pm_sensor_method (Optional)

Sensor data retrieval method. Use either ipmitool or scci. The default is ipmitool.

To enable this driver, add irmc to the enabled_hardware_types option in your undercloud.conf and rerun the openstack undercloud install command.

If you enable SCCI as the sensor method, you must also install an additional set of utilities. Install the python-scciclient package and restart the openstack-ironic-conductor service:
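For example, commands along these lines:

$ sudo yum install python-scciclient
$ sudo systemctl restart openstack-ironic-conductor.service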

B.7. Red Hat Virtualization

This driver provides control over virtual machines in Red Hat Virtualization through its RESTful API.

pm_type

Set this option to staging-ovirt.

pm_user; pm_password

The username and password for your Red Hat Virtualization environment. The username also includes the authentication provider. For example: admin@internal.

pm_addr

The IP address of the Red Hat Virtualization REST API.

pm_vm_name

The name of the virtual machine to control.

mac

A list of MAC addresses for the network interfaces on the node. Use only the MAC address for the Provisioning NIC of each system.

To enable this driver, add staging-ovirt to the enabled_hardware_types option in your undercloud.conf and rerun the openstack undercloud install command.

B.8. Manual Management

Use the manual-management driver to control bare metal devices that do not have power management. The director does not control the registered bare metal devices, and you must perform manual power operations at certain points in the introspection and deployment processes.

Important

This option is only available for testing and evaluation purposes. It is not recommended for Red Hat OpenStack Platform enterprise environments.

pm_type

Set this option to manual-management.

This driver does not use any authentication details because it does not control power management.

To enable this driver, add manual-management to the enabled_hardware_types option in your undercloud.conf and rerun the openstack undercloud install command.

When performing overcloud deployment, check the node status with the ironic node-list command. Wait until the node status changes from deploying to deploy wait-callback and then manually start the nodes.

After the overcloud provisioning process completes, reboot the nodes. To check the completion of provisioning, check the node status with the ironic node-list command, wait until the node status changes to active, then manually reboot all overcloud nodes.

Appendix C. Whole Disk Images

The main overcloud image is a flat partition image that contains no partitioning information or bootloader on the image itself. The director uses a separate kernel and ramdisk when booting nodes and creates a basic partitioning layout when writing the overcloud image to disk. However, you can create a whole disk image, which includes a partitioning layout, bootloader, and hardened security.

Important

The following process uses the director’s image building feature. Red Hat only supports images built using the guidelines contained in this section. Custom images built outside of these specifications are not supported.

A security hardened image includes extra security measures necessary for Red Hat OpenStack Platform deployments where security is an important feature. Consider the following list of recommendations when you create a security hardened image:

The /tmp directory is mounted on a separate volume or partition and has the rw, nosuid, nodev, noexec, and relatime flags

The /var, /var/log and the /var/log/audit directories are mounted on separate volumes or partitions, with the rw and relatime flags

The /home directory is mounted on a separate partition or volume and has the rw, nodev, and relatime flags

Include the following changes to the GRUB_CMDLINE_LINUX setting:

To enable auditing, add the audit=1 kernel boot flag.

To disable the kernel support for USB using boot loader configuration, add nousb.

Remove any insecure packages, such as kdump (installed by kexec-tools) and telnet, from the image because they are installed by default

Add the new screen package necessary for security

To build a security hardened image, complete the following steps:

Download a base Red Hat Enterprise Linux 7 image

Set the environment variables specific to registration

Customize the image by modifying the partition schema and the size

Create the image

Upload the image to director

The following sections contain procedures to achieve these tasks.

C.1. Downloading the Base Cloud Image

Before building a whole disk image, you must download an existing cloud image of Red Hat Enterprise Linux to use as a basis. Navigate to the Red Hat Customer Portal and select the KVM Guest Image to download. For example, the KVM Guest Image for the latest Red Hat Enterprise Linux 7 release is available from the Red Hat Enterprise Linux product download page on the Customer Portal.

C.2. Disk Image Environment Variables

As a part of the disk image building process, the director requires a base image and registration details to obtain packages for the new overcloud image. Define these attributes with the following Linux environment variables.

Note

The image building process temporarily registers the image with a Red Hat subscription and unregisters the system once the image building process completes.

To build a disk image, set Linux environment variables that suit your environment and requirements:

DIB_LOCAL_IMAGE

Sets the local image that you want to use as the basis for your whole disk image.

REG_ACTIVATION_KEY

Use an activation key instead of login details as part of the registration process.

REG_AUTO_ATTACH

Defines whether to attach the most compatible subscription automatically.

REG_BASE_URL

The base URL of the content delivery server containing packages for the image. The default Customer Portal Subscription Management process uses https://cdn.redhat.com. If you use a Red Hat Satellite 6 server, set this parameter to the base URL of your Satellite server.

REG_ENVIRONMENT

Registers to an environment within an organization.

REG_METHOD

Sets the method of registration. Use portal to register a system to the Red Hat Customer Portal. Use satellite to register a system with Red Hat Satellite 6.

REG_ORG

The organization where you want to register the images.

REG_POOL_ID

The pool ID of the product subscription information.

REG_PASSWORD

Gives the password for the user account that registers the image.

REG_REPOS

A comma-separated string of repository names. Each repository in this string is enabled through subscription-manager.

Use the repositories required for a security hardened whole disk image. A representative set of repository names is shown in the example commands after this list.

REG_SERVER_URL

Gives the hostname of the subscription service to use. The default is the Red Hat Customer Portal at subscription.rhn.redhat.com. If you use a Red Hat Satellite 6 server, set this parameter to the hostname of your Satellite server.

REG_USER

Gives the user name for the account that registers the image.

Use the following example commands to export the environment variables and temporarily register a local QCOW2 image to the Red Hat Customer Portal:
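The image file name, credentials, and repository names in this sketch are placeholders and assumptions for Red Hat OpenStack Platform 14 on Red Hat Enterprise Linux 7; replace them with values that match your environment and subscription:

$ export DIB_LOCAL_IMAGE=./rhel-server-7.6-x86_64-kvm.qcow2
$ export REG_METHOD=portal
$ export REG_USER="[your username]"
$ export REG_PASSWORD="[your password]"
$ export REG_AUTO_ATTACH=true
$ export REG_REPOS="rhel-7-server-rpms,rhel-7-server-extras-rpms,rhel-ha-for-rhel-7-server-rpms,rhel-7-server-openstack-14-rpms"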

C.3. Customizing the Disk Layout

The default security hardened image size is 20G and uses predefined partitioning sizes. However, you must modify the partitioning layout to accommodate overcloud container images. Complete the steps in the following sections to increase the image size to 40G. You can modify the partitioning layout and disk size to further suit your needs.

To modify the partitioning layout and disk size, perform the following steps:

Modify the partitioning schema using the DIB_BLOCK_DEVICE_CONFIG environment variable.

Modify the global size of the image by updating the DIB_IMAGE_SIZE environment variable.

C.3.1. Modifying the Partitioning Schema

You can modify the partitioning schema to alter the partitioning size, create new partitions, or remove existing ones. You can define a new partitioning schema with the DIB_BLOCK_DEVICE_CONFIG environment variable.

Use the following sample YAML content as a basis for your image’s partition schema. Modify the partition sizes and layout to suit your needs.
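The following is a minimal, illustrative sketch rather than a complete security hardened schema; the full schema also defines separate volumes for /tmp, /var, /var/log, /var/log/audit, and /home with the mount flags listed earlier in this appendix:

$ export DIB_BLOCK_DEVICE_CONFIG='''
- local_loop:
    name: image0

- partitioning:
    base: image0
    label: mbr
    partitions:
      - name: root
        flags: [ boot, primary ]
        size: 40G
        # Define mkfs, mount, and fstab configuration for each partition or
        # volume, plus any additional partitions that your layout requires.
'''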

Note

You must define the correct partition sizes for the image as you cannot resize them after the deployment.

C.3.2. Modifying the Image Size

The global sum of the modified partitioning schema might exceed the default disk size (20G). In this situation, you might need to modify the image size. To modify the image size, edit the configuration files that create the image.

Create a copy of the /usr/share/openstack-tripleo-common/image-yaml/overcloud-hardened-images.yaml file:
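For example (the destination path is an assumption):

$ cp /usr/share/openstack-tripleo-common/image-yaml/overcloud-hardened-images.yaml \
    /home/stack/overcloud-hardened-images-custom.yaml

In the copied file, change the DIB_IMAGE_SIZE value to the new global size, for example from '20' to '40'.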

When you deploy the overcloud, the director creates a RAW version of the overcloud image. This means your undercloud must have enough free space to accommodate the RAW image. For example, if you increase the security hardened image size to 40G, you must have 40G of space available on the undercloud’s hard disk.

Important

When the director writes the image to the physical disk, the director creates a 64MB configuration drive primary partition at the end of the disk. When you create your whole disk image, ensure the size of the physical disk accommodates this extra partition.

C.4. Creating a Security Hardened Whole Disk Image

After you have set the environment variables and customized the image, create the image using the openstack overcloud image build command:
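The following is a minimal sketch that assumes the custom configuration file from the previous section; depending on your environment, you might need to pass additional --config-file arguments:

$ openstack overcloud image build \
    --image-name overcloud-hardened-full \
    --config-file /home/stack/overcloud-hardened-images-custom.yaml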

The --config-file option in this example specifies the custom configuration file that contains the new disk size from Section C.3.2, “Modifying the Image Size”. If you are not using a different custom disk size, use the original /usr/share/openstack-tripleo-common/image-yaml/overcloud-hardened-images.yaml file instead.

This command creates an image called overcloud-hardened-full.qcow2, which contains all the necessary security features.

C.5. Uploading a Security Hardened Whole Disk Image

Upload the image to the OpenStack Image (glance) service and start using it from the Red Hat OpenStack Platform director. To upload a security hardened image, complete the following steps:

Rename the newly generated image and move the image to your images directory:
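For example (the destination directory and file name are assumptions; the upload command expects the image in the directory that you pass with --image-path):

$ mv overcloud-hardened-full.qcow2 /home/stack/images/overcloud-full.qcow2

Then upload the whole disk image to the director:

$ openstack overcloud image upload --image-path /home/stack/images --whole-disk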

op

Defines the operation to use for the evaluation. Use one of the following values:

eq - Equal to

ne - Not equal to

lt - Less than

gt - Greater than

le - Less than or equal to

ge - Greater than or equal to

in-net - Checks that an IP address is in a given network

matches - Requires a full match against a given regular expression

contains - Requires a value to contain a given regular expression.

is-empty - Checks that the field is empty.

invert

Boolean value to define whether to invert the result of the evaluation.

multiple

Defines the evaluation to use if multiple results exist. This parameter includes the following attributes:

any - Requires any result to match

all - Requires all results to match

first - Requires the first result to match

value

Defines the value in the evaluation. If the field and operation result in the value, the condition returns a true result. Otherwise, the condition returns a false result.

Example:

"conditions": [
{
"field": "local_gb",
"op": "ge",
"value": 1024
}
],

Actions

If a condition is true, the policy performs an action. The action uses the action key and additional keys depending on the value of action:

fail - Fails the introspection. Requires a message parameter for the failure message.

set-attribute - Sets an attribute on an Ironic node. Requires a path field, which is the path to an Ironic attribute (e.g. /driver_info/ipmi_address), and a value to set.

set-capability - Sets a capability on an Ironic node. Requires name and value fields, which are the name and the value for a new capability. The existing value for this same capability is replaced. For example, use this to define node profiles.

extend-attribute - The same as set-attribute but treats the existing value as a list and appends value to it. If the optional unique parameter is set to True, nothing is added if the given value is already in a list.
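For example, the following rules.json sketch combines the condition and action syntax described above; it is an illustration only and implements just the first two of the behaviors described below:

[
    {
        "description": "Fail introspection for nodes with less than 4096 MiB of RAM",
        "conditions": [
            {"op": "lt", "field": "memory_mb", "value": 4096}
        ],
        "actions": [
            {"action": "fail", "message": "Memory too low, expected at least 4 GiB"}
        ]
    },
    {
        "description": "Assign the swift-storage profile to nodes with a root disk of 1 TiB or more",
        "conditions": [
            {"op": "ge", "field": "local_gb", "value": 1024}
        ],
        "actions": [
            {"action": "set-capability", "name": "profile", "value": "swift-storage"}
        ]
    }
]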

Fail introspection if memory is lower than 4096 MiB. You can apply these types of rules if you want to exclude certain nodes from your cloud.

Nodes with a hard drive of 1 TiB or larger are assigned the swift-storage profile unconditionally.

Nodes with a hard drive smaller than 1 TiB but larger than 40 GiB can be either Compute or Controller nodes. You can assign two capabilities (compute_profile and control_profile) so that the openstack overcloud profiles match command can later make the final choice. For this process to succeed, you must remove the existing profile capability; otherwise, the existing profile capability takes priority.

The profile matching rules do not change any other nodes.

Note

Using introspection rules to assign the profile capability always overrides the existing value. However, [PROFILE]_profile capabilities are ignored for nodes that already have a profile capability.

E.3. Importing Policy Files

To import policy files to the director, complete the following steps.

Import the policy file into the director:

$ openstack baremetal introspection rule import rules.json

Run the introspection process.

$ openstack overcloud node introspect --all-manageable

After introspection completes, check the nodes and their assigned profiles:

$ openstack overcloud profiles list

If you made a mistake in introspection rules, run the following command to delete all rules:

$ openstack baremetal introspection rule purge

E.4. Automatic Profile Tagging Properties

Automatic Profile Tagging evaluates the following node properties for the field attribute for each condition:


memory_mb

The amount of memory for the node in MB.

cpus

The total number of threads for the node CPU.

cpu_arch

The architecture of the node CPU.

local_gb

The total storage space of the node’s root disk. See Defining the root disk for more information about setting the root disk for a node.

Appendix F. Security Enhancements

The following sections contain information to consider when you want to harden the security of your undercloud.

Before you run openstack undercloud install, set the hieradata_override parameter in the undercloud.conf file to the hieradata override file that you created:

[DEFAULT]
...
hieradata_override = haproxy-hiera-overrides.yaml
...

Appendix G. Red Hat OpenStack Platform for POWER

In a fresh Red Hat OpenStack Platform installation, you can now deploy overcloud Compute nodes on POWER (ppc64le) hardware. For the Compute node cluster, you can choose to use the same architecture or a combination of x86_64 and ppc64le systems. The undercloud, Controller nodes, Ceph Storage nodes, and all other systems are supported only on x86_64 hardware. You can find installation details for each system in the previous sections of this guide.

G.1. Ceph Storage

When configuring access to external Ceph in a multi-architecture cloud, set the CephAnsiblePlaybook parameter to /usr/share/ceph-ansible/site.yml.sample along with your client key and other Ceph-specific parameters.
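A minimal environment file sketch might look like the following; apart from CephAnsiblePlaybook, the parameter names and placeholder values (such as CephClientKey and CephExternalMonHost) are assumptions based on the standard external Ceph integration and must match your cluster:

parameter_defaults:
  CephAnsiblePlaybook: /usr/share/ceph-ansible/site.yml.sample
  CephClientKey: <client key for your external Ceph cluster>
  CephClusterFSID: <FSID of your external Ceph cluster>
  CephExternalMonHost: <comma-separated list of Ceph monitor IP addresses>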

G.2. Composable Services

The following services typically form part of the Controller node and are available for use in custom roles as Technology Preview:

Cinder

Glance

Keystone

Neutron

Swift

Note

Red Hat does not support features in Technology Preview.

For more information, see the documentation for composable services and custom roles. Use the following example to understand how to move the listed services from the Controller node to a dedicated ppc64le node:
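The following is a minimal sketch; the role name ControllerPpc64le, the file paths, and the exact service list are assumptions, and your roles data file must keep every other service that your deployment needs:

$ cp /usr/share/openstack-tripleo-heat-templates/roles_data.yaml /home/stack/templates/roles_data.yaml

In the copied file, remove the listed services from the Controller role and add a new role that places them on ppc64le hardware, for example:

- name: ControllerPpc64le
  description: |
    Controller services that run on POWER (ppc64le) nodes.
  ServicesDefault:
    - OS::TripleO::Services::CinderApi
    - OS::TripleO::Services::CinderScheduler
    - OS::TripleO::Services::GlanceApi
    - OS::TripleO::Services::Keystone
    - OS::TripleO::Services::NeutronApi
    - OS::TripleO::Services::SwiftProxy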

Legal Notice

The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.

Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.

Linux® is the registered trademark of Linus Torvalds in the United States and other countries.

Java® is a registered trademark of Oracle and/or its affiliates.

XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.

MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.

Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.

The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.