Installing and configuring OVS and API server for Neutron - OpenStack


Installing and configuring OVS for Neutron

To create a Software Defined Network layer in OpenStack, we first need to install the software on our Network node. This node will run Open vSwitch, the switch we will use and control when defining our networks in OpenStack. Open vSwitch, or OVS, is a production-quality, multilayer switch. The following diagram shows the required nodes in our environment: a Controller node, a Compute node, and a Network node. For this section, we are configuring the Network node.

Getting ready

Ensure you are logged onto the Network node and that it has Internet access to allow us to install the required packages in our environment for running OVS and Neutron. If you created this node with Vagrant, you can execute the following:

vagrant ssh network

How to do it…

To configure our OpenStack Network node, carry out the following steps:

- When we started our Network node using Vagrant, we assigned an IP address to the fourth interface (eth3). We no longer want an IP assigned to it, but we do require the interface to be up and listening for use with OVS and Neutron. Instead, we will assign that IP address to the bridge interface we create later in this section. Perform the following steps to remove this IP from our interface:

On a physical server running Ubuntu, we would configure this in our /etc/network/interfaces file as follows:

auto eth3
iface eth3 inet manual
    up ip link set $IFACE up
    down ip link set $IFACE down

- We then update the packages installed on the node.

sudo apt-get update
sudo apt-get -y upgrade

Next, we install the kernel headers package as the installation will compile some new kernel modules.

sudo apt-get -y install linux-headers-$(uname -r)

Now we need to install some supporting applications and utilities.
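The package list itself is not shown above. On an Ubuntu node of this vintage, a typical set of supporting utilities for a Neutron network node (package names here are assumptions, not taken from the original text) would be installed as follows:

```shell
# Assumed supporting utilities for a Neutron network node on Ubuntu:
# VLAN support, bridge management, and dnsmasq (used by the DHCP agent).
sudo apt-get -y install vlan bridge-utils dnsmasq-base dnsmasq-utils
```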

- We are now ready to install Open vSwitch itself.
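The install command is not reproduced above. On Ubuntu releases of this era it would look something like the following (the openvswitch-datapath-dkms package name is an assumption; it is the DKMS package that compiles the kernel module, which is why the kernel headers were installed in the previous step):

```shell
# Install the Open vSwitch userspace tools and the DKMS package that
# builds the openvswitch kernel module against the installed headers.
sudo apt-get -y install openvswitch-switch openvswitch-datapath-dkms
```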

- After this has installed and configured some kernel modules, we can simply start our OVS service.

sudo service openvswitch-switch start

Now we will proceed to install the Neutron components that run on this node, which are the Quantum DHCP Agent, Quantum L3 Agent, the Quantum OVS Plugin, and the Quantum OVS Plugin Agent.
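The corresponding install command is not shown above; with the Quantum-era package names (an assumption based on the component names just listed), it would look like this:

```shell
# Install the DHCP agent, L3 agent, OVS plugin, and OVS plugin agent.
sudo apt-get -y install quantum-dhcp-agent quantum-l3-agent \
    quantum-plugin-openvswitch quantum-plugin-openvswitch-agent
```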

- With the installation of the required packages complete, we can now configure our environment. To do this, we first configure our OVS switch service. We need to configure a bridge that we will call br-int. This is the integration bridge that glues our bridges together within our SDN environment.

sudo ovs-vsctl add-br br-int

Next, add an external bridge that is used on our external network. This will be used to route traffic to and from the outside of our environment onto our SDN network.

sudo ovs-vsctl add-br br-ex
sudo ovs-vsctl add-port br-ex eth3

We now assign the IP address that was previously assigned to our eth3 interface to this bridge:

sudo ifconfig br-ex 192.168.100.202 netmask 255.255.255.0

Tip

This address is on the network that we will use for accessing instances within OpenStack. We assigned this range as 192.168.100.0/24, as described in the Vagrantfile.

We need to ensure that we have IP forwarding on within our Network node.
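The command itself is not shown above; on Linux, IP forwarding is enabled with sysctl, for example:

```shell
# Enable IPv4 forwarding for the running kernel.
sudo sysctl -w net.ipv4.ip_forward=1
# To persist across reboots, set net.ipv4.ip_forward=1 in
# /etc/sysctl.conf and reload with: sudo sysctl -p
```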

- Next, we will edit the Neutron configuration files. In a similar way to configuring other OpenStack services, the Neutron services have a configuration file and a paste ini file. The first file to edit will be the /etc/quantum/api-paste.ini to configure Keystone authentication.
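The contents of that section are not reproduced above. For the Quantum-era Keystone middleware, a [filter:authtoken] section would take roughly the following shape (the auth host and credentials below are assumptions matching the rest of this example environment):

```ini
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 172.16.0.200
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = quantum
```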

- Next, we must ensure that our Neutron server configuration points at the correct RabbitMQ server in our environment. Edit /etc/quantum/quantum.conf, locate the following line, and edit it to suit our environment:

rabbit_host = 172.16.0.200

- We need to edit the familiar [keystone_authtoken] section located at the bottom of the file to match our Keystone environment:
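The section's values are not shown above; they would mirror those of the other OpenStack services, along the lines of the following sketch (credentials are assumptions for this example environment):

```ini
[keystone_authtoken]
auth_host = 172.16.0.200
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = quantum
signing_dir = /var/lib/quantum/keystone-signing
```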

- The DHCP agent file, /etc/quantum/dhcp_agent.ini needs a value change to tell Neutron that we are using namespaces to separate our networking. Locate this and change this value (or insert the new line). This allows all of our networks in our SDN environment to have a unique namespace to operate in and allows us to have overlapping IP ranges within our OpenStack Networking environment:

use_namespaces = True

- With this done, we can proceed to edit the /etc/quantum/l3_agent.ini file to include these additional following values:
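The values themselves are not listed above. For an OVS-backed L3 agent of this era, they would typically comprise the interface driver and the namespace flag (a sketch, assuming the Quantum-era driver path):

```ini
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
```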

How it works…

We have now built and configured a new node in our environment that runs the software networking components of our SDN environment. This includes the OVS switch service and the various Neutron components that interact with it and with OpenStack through the notion of plugins. While we have used Open vSwitch in our example, there are also many vendor plugins, including Nicira and Cisco UCS/Nexus, among others.

The first thing we did was configure an interface on this Network node that would serve an external network. In OpenStack Networking terms, this is called the Provider Network. Outside of a VirtualBox environment, this would be a publicly routable network that allows access to the instances created within our SDN environment. This interface is created without an IP address so that our OpenStack environment can control it by bridging new networks to it.

A number of packages were installed on this Network node. The list of packages that we specify for installation (excluding dependencies) is as follows:
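The list itself is missing from the text; reconstructed from the installation steps above (using the package names assumed in those steps), it would be:

```
linux-headers-$(uname -r)
openvswitch-switch
openvswitch-datapath-dkms
quantum-dhcp-agent
quantum-l3-agent
quantum-plugin-openvswitch
quantum-plugin-openvswitch-agent
```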

Once we installed our application and service dependencies and started the services, we configured our environment by creating a bridge that acts as the integration bridge joining our instances to the rest of the network, as well as a bridge to our last interface on the Provider Network, where traffic flows from the outside into our instances.

A number of files were configured to connect to our OpenStack cloud using the Identity (Keystone) services.

An important part of how Neutron works with our OVS environment is configured by editing the OVS plugin configuration file.

Here we configure the Neutron services to use the database we have created in MySQL:
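The connection string itself is not reproduced above; it would take roughly this form (the quantum database name, user, and password are assumptions, and 172.16.0.200 is the Controller address used elsewhere in this environment):

```ini
[DATABASE]
sql_connection = mysql://quantum:openstack@172.16.0.200/quantum
```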

[OVS]
tenant_network_type=gre

We're configuring our networking type to be GRE (Generic Routing Encapsulation) tunnels. This allows our SDN environment to encapsulate a wide range of protocols over the tunnels we create, as follows:

tunnel_id_ranges=1:1000

This defines a range of tunnels that can exist in our environment, where each will be assigned an ID from 1 to 1000:

network_vlan_ranges =

As we are using tunnel ranges, we explicitly unset the VLAN ranges within our environment:

integration_bridge=br-int

This is the name of the integration bridge:

tunnel_bridge=br-tun

This is the tunnel bridge name that will be present in our environment:

local_ip=172.16.0.202

This is the IP address of our Network node:

enable_tunneling=True

This informs Neutron that we will be using tunneling to provide our software defined networking.

The service that proxies metadata requests from instances within Neutron to our nova-api metadata service is the Metadata Agent. Configuration of this service is done in the /etc/quantum/metadata_agent.ini file, which describes how this service connects to Keystone and provides a key for the service in the metadata_proxy_shared_secret = foo line. This key must match the same random keyword that we will eventually configure in /etc/nova/nova.conf on our Controller node as follows:

quantum_metadata_proxy_shared_secret=foo

The step that defines the network plumbing of our Provider Network (the external network) is achieved by creating another bridge on our node, and this time we assign it the physical interface that connects our Network node to the rest of our network or the Internet. In this case, we assign this external bridge, br-ex, to the interface eth3. This will allow us to create a floating IP Neutron network range that is accessible from our host machine running VirtualBox. On a physical server in a datacenter, this interface would be connected to the network that routes to the rest of our physical servers. The assignment of this network is described in the Creating an external Neutron network recipe.

Installing and configuring the Neutron API server

The Neutron Service provides an API for our services to access and define our software defined networking. In our environment, we install the Neutron service on our Controller node. The following diagram describes the environment we are creating and the nodes that are involved. In this section, we are configuring the services that operate on our Controller node.

Getting ready

Ensure that you are logged on to the Controller node. If you created this node with Vagrant, you can access this with the following command:

vagrant ssh controller

How to do it…

To configure our OpenStack Controller node, carry out the following steps:

- First update the packages installed on the node.

sudo apt-get update
sudo apt-get -y upgrade

- We are now ready to install the Neutron service and the relevant OVS plugin.

- We can now configure the relevant configuration files for Neutron. The first configures Neutron to use Keystone. To do this, we edit the /etc/quantum/api-paste.ini file.
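The install command for the Neutron service and OVS plugin is not shown above; with the Quantum-era package names (an assumption), it would be:

```shell
# Install the Neutron (Quantum) API server and the OVS plugin.
sudo apt-get -y install quantum-server quantum-plugin-openvswitch
```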

Finally, we ensure that there is a section called [SECURITYGROUP] that tells Neutron which security group firewall driver to utilize. This allows us to define security groups in Neutron as well as with Nova commands:

This is the driver used to create Ethernet devices on our Linux hosts.

firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

This is the driver to use when managing the firewalls.

service_quantum_metadata_proxy=true

This allows us to utilize the meta-data proxy service that passes requests from Neutron to the Nova-API service.

quantum_metadata_proxy_shared_secret=foo

In order to utilize the proxy service, we set a random key, in this case foo, that must match on all nodes running this service to ensure a level of security when passing proxy requests.


About The Author

Ravindra Savaram is a Content Lead at Mindmajix.com. His passion lies in writing articles on the most popular IT platforms including Machine learning, DevOps, Data Science, Artificial Intelligence, RPA, Deep Learning, and so on. You can stay up to date on all these technologies by following him on LinkedIn and Twitter.
