Demo: OpenStack + Cumulus VX 3.1 "Rack-on-a-Laptop" Part I (L2+MLAG)

Important: This demo is not approved for use in a production environment and is for demonstration purposes only.

This demo illustrates the dynamic provisioning of VLANs using a virtual simulation of two Cumulus VX leaf switches and two CentOS 7 (RDO Project) servers; together they comprise an OpenStack environment. For simplicity, the controller node, dashboard node, network node, and compute node have been combined in a single server instance. In a production environment, these roles are typically split across separate servers.

{{table_of_contents}}

Overview

The Cumulus Networks Modular Layer 2 (ML2) mechanism driver for OpenStack resides on the OpenStack controller node and provisions VLANs on demand. The ML2 driver queries the HTTP API server residing on the Cumulus Linux switch. As a result, instances (virtual machines) can communicate with each other across multiple switches without any pre-configuration of the top of rack switch. Without the ML2 mechanism driver, VLANs would need to be preconfigured on Cumulus Linux top of rack switches and on the layer 2 inter-switch links. In this case, the VLAN range defined on the top of rack switch mirrors the range defined in the /etc/neutron/plugins/ml2/ml2_conf.ini file on the OpenStack network node.
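As an illustration of that coupling, the VLAN pool on the network node might be defined as follows. This is a sketch only: the physical network name, the VLAN range, and the `cumulus` mechanism driver alias are assumptions, not values taken from this demo.

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (sketch)
[ml2]
type_drivers = vlan
tenant_network_types = vlan
# linuxbridge plus the Cumulus driver; driver alias assumed
mechanism_drivers = linuxbridge,cumulus

[ml2_type_vlan]
# Neutron allocates tenant VLANs from this pool; the top of rack
# switch must permit the same range
network_vlan_ranges = physnet1:100:200
```

Whatever range is configured here must match the range allowed on the Cumulus Linux switches, or provisioning requests will fail.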

Preparing the Environment

Note: If you are reinstalling the demo, make sure you delete the VMs and all associated files before you import the OVA file.

Accept the default values when prompted during the configuration of the appliance.

Important: Ensure the following default option is unchecked when prompted: Reinitialize the MAC address of all network cards.

Note: The import process can take up to 2-3 minutes, depending on the hardware.

Start all four virtual machines imported into VirtualBox.

Note: The start order does not matter. However, you should wait a couple of minutes for the four VMs to complete the boot process.

Virtual Machine Information

| VM Name | OS | Purpose |
| ------- | -- | ------- |
| RDO Server1 | CentOS 7/RDO (Liberty release) | RDO Project network/controller/compute node. Uses the Linux bridge rather than the OVS bridge for simplicity. |
| RDO Server2 | CentOS 7/RDO (Liberty release) | RDO Project compute node. |
| CL31_Leaf1 | Cumulus VX 3.1.0 | Top of rack switch 1. |
| CL31_Leaf2 | Cumulus VX 3.1.0 | Top of rack switch 2. |

Note: For simplicity, the network node, controller, and a compute node have been combined on a single server (RDO Server1). These are typically separate in a more realistic environment.

Once all four VMs are running, open the following browser tabs to log in to each server and switch using the usernames and passwords provided:

| Tab URL | Application | Authentication |
| ------- | ----------- | -------------- |
| http://localhost:8080 | Horizon Dashboard | demo/cumulus |
| http://localhost:8800 | Server1 - Controller / Network Node / Compute Node | cumulus/cumulus |
| http://localhost:8801 | Server2 - Compute Node | cumulus/cumulus |
| http://localhost:8802 | Cumulus Leaf 1 | cumulus/cumulus |
| http://localhost:8803 | Cumulus Leaf 2 | cumulus/cumulus |

Note: These browser tabs are provided for your convenience so that you do not need to use the consoles provided by VirtualBox.

Important: If you do not log in through the browser, it may time out after 60 seconds.

Note: Browser-based access to the switches and Linux compute nodes is provided here for demo simplicity and convenience; it would not be typical or secure at a customer site. The Horizon Dashboard, however, is a browser-based UI that is used by OpenStack customers.

Run the following commands on the Cumulus leaf switches to confirm the baseline configuration:
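A baseline check on each leaf could use netshow (used later in this demo) together with the interfaces file and LLDP neighbor output. The specific command list here is an assumption, not the original list; bond0 and swp4 are the inter-switch link and server-facing port names from the demo topology.

```shell
netshow interface            # port and bridge state at a glance
cat /etc/network/interfaces  # bond0, swp4, and bridge definitions
lldpctl                      # LLDP neighbors on the server-facing ports
```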

Running the Demo

Demo One: Single Tenant, One Network

In this scenario, OpenStack Heat is used to create one broadcast domain that spans two compute nodes, and each compute node has one OpenStack VM in the broadcast domain. A broadcast domain is created by OpenStack; it picks a VLAN number from a range provided by Neutron. On the compute nodes (server1 and server2), a new bridge is created that contains the interface to the OpenStack VM and an interface on the switch-facing port.
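On the compute nodes, the result can be inspected with standard Linux bridge tooling. The interface names mentioned in the comments are assumptions based on the linuxbridge agent's naming conventions, not output captured from this demo.

```shell
# List bridges; the linuxbridge agent names them brq<network-id prefix>
brctl show
# Each tenant bridge should contain the VM's tap interface and a VLAN
# subinterface of the switch-facing port (for example, eth1.<vlan>)
bridge link show
```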

On each Cumulus Linux switch, a corresponding VLAN interface is automatically created on the inter-switch link (bond0) and the OpenStack server-facing port (swp4).
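In traditional bridge mode, the stanza rendered on the switch might resemble the following. VLAN 100 and the bridge name are illustrative only, since Neutron picks the actual VLAN number at run time from its configured range.

```
# Hypothetical auto-generated stanza in /etc/network/interfaces
auto br-100
iface br-100
    bridge-ports bond0.100 swp4.100
```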

The following diagram illustrates what is occurring in the configuration:

Using the provided demo script, you can start, verify, and destroy tenant networks. The demo script walks through the steps you need to take to view the demo in action:

1. Log into the Horizon Dashboard and console into the VMs after they are created.

2. Ping the VMs.

3. Watch the bridge changes on the switch using the Linux watch command together with netshow (for example, watch netshow interface).

Verification

The following screenshots from the OpenStack Horizon Dashboard and from a leaf switch show what happens after provisioning a single subnet in a single tenant.

Demo Two: Single Tenant, Two Networks

This demo is similar to the previous one, but instead of creating only one broadcast domain, it creates two subnets in a single tenant and places two OpenStack VMs on each subnet. It also uses OpenStack Heat to perform this task.

The following diagram illustrates what is occurring in the configuration:

Follow the instructions in the Message of the Day that is displayed via the /etc/MOTD file.

```shell
cd $HOME/cumulus_demo
./two_tenant_subnets_demo.sh
```

Using the demo script, you can start, verify, and destroy tenants and subnets. The demo script walks through the steps required:

1. Log in to the Horizon Dashboard and console into the VMs after they are created.

2. Ping the VMs.

3. Watch the bridge changes on the switch using the Linux watch command together with netshow.

Verification

The following screenshots are from the OpenStack Horizon Dashboard. They show the results after provisioning two networks in a single tenant:

Caveats

This demo is for demonstration purposes only and is not intended for production use. Current testing by Cumulus Networks has identified known issues with the Cumulus Networks ML2 mechanism driver, listed below.

Do not restart the REST server daemon running on each Cumulus VX instance, or the Cumulus VX instances themselves. If either is restarted, re-run the demo by selecting the Start option (press 1) from the demo menu.

This demo covers single-attached servers only; it does not include MLAG.

The Cumulus Networks ML2 mechanism driver depends on LLDP to discover the switch ports connected to an OpenStack server. Bond interfaces do not carry LLDP information, and the driver does not include logic to inspect the LLDP switch information and determine whether the interface with the matching ifName is part of a bond.

LLDP is used to determine which switch port needs a VLAN added or removed. Because configuring LLDP on CentOS/Red Hat can be complicated, Cumulus Networks recommends setting up PTM with a topology.dot file to confirm that LLDP is correctly set on all ports facing the OpenStack servers.

The demo supports bridges in traditional mode only, as the Cumulus Networks ML2 mechanism driver does not support VLAN-aware configurations.

The Cumulus Networks ML2 mechanism driver stores the state in RAM. Rebooting either the REST API server on the Cumulus VX instance or the instance itself requires restarting the demo. Run the demo again using the Start option in the Demo menu. The script destroys the previous OpenStack environment and then recreates it.

Q: In the Single Tenant, Two Networks demo, when I run netshow interface or ip addr show, the default gateways of the OpenStack instances (VMs) are not present. Where are the default gateways of the subnets?

A: The default gateways of the subnets are located in an IP namespace on the network node (server1). To view the router configuration, run the following command:
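A sketch of that verification on server1, assuming Neutron's usual qrouter-<router-uuid> namespace naming; the UUID placeholder must be replaced with the value reported on your system.

```shell
# List the IP namespaces Neutron created on the network node
ip netns list
# Show the subnet gateway addresses held inside the router namespace
ip netns exec qrouter-<router-uuid> ip addr show
```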