Introduction

KVM hypervisor networking for CloudStack can sometimes be a challenge, considering KVM doesn’t quite have the mature guest networking model found in the likes of VMware vSphere and Citrix XenServer. In this blog post we look at the options for networking KVM hosts using bridges and VLANs, and dive a bit deeper into the configuration for these options. Installation of the hypervisor and CloudStack agent is well covered in the CloudStack installation guide, so we’ll not spend too much time on this.

Network bridges

On a linux KVM host, guest networking is accomplished using network bridges. These are similar to vSwitches on a VMware ESXi host or networks on a XenServer host (in fact networking on a XenServer host is also accomplished using bridges).

A linux network bridge is a Layer-2 software device which allows traffic to be forwarded between ports internally on the bridge and the physical network uplinks. The traffic flow is controlled by MAC address tables maintained by the bridge itself, which determine which hosts are connected to which bridge port. Bridges allow for traffic segregation using traditional Layer-2 VLANs as well as Layer-3 SDN overlay networks.

Linux bridges vs OpenVswitch

The bridging on a KVM host can be accomplished using traditional linux bridge networking or by adopting an OpenVswitch back end. Traditional linux bridges have been implemented in the linux kernel since version 2.2, and have been maintained through the 2.4 and 2.6 kernels. Linux bridges provide all the basic Layer-2 networking required for a KVM hypervisor back end, but they lack some automation options and are configured on a per-host basis.

OpenVswitch was developed to address this, and provides additional automation as well as new networking capabilities like Software Defined Networking (SDN). OpenVswitch allows for centralised control and distribution across physical hypervisor hosts, similar to distributed vSwitches in VMware vSphere. Distributed switch control does require additional controller infrastructure like OpenDaylight, Nicira, VMware NSX, etc. – which we won’t cover in this article as it’s not a requirement for CloudStack.

It is also worth noting that Citrix started using an OpenVswitch back end in XenServer 6.0.

Network configuration overview

For this example we will configure the following networking model, assuming a linux host with four network interfaces which are bonded for resilience. We also assume all switch ports are trunk ports:

Network interfaces eth0 + eth1 are bonded as bond0.

Network interfaces eth2 + eth3 are bonded as bond1.

Bond0 provides the physical uplink for the bridge “cloudbr0”. This bridge carries the untagged host network interface / IP address, and will also be used for the VLAN tagged guest networks.

Bond1 provides the physical uplink for the bridge “cloudbr1”. This bridge handles the untagged public traffic.

The CloudStack zone networks will then be configured as follows:

Management and guest traffic is configured to use KVM traffic label “cloudbr0”.

Public traffic is configured to use KVM traffic label “cloudbr1”.

In addition to the above it’s important to remember CloudStack itself requires internal connectivity from the hypervisor host to system VMs (Virtual Routers, SSVM and CPVM) over the link local 169.254.0.0/16 subnet. This is done over a host-only bridge “cloud0”, which is created by CloudStack when the host is added to a CloudStack zone.

Linux bridge configuration

CentOS

In CentOS the linux bridge configuration is done with configuration files in /etc/sysconfig/network-scripts.

Each of the four individual NIC interfaces is configured as follows (eth0 / eth1 / eth2 / eth3 are all configured the same way, apart from the bond they are enslaved to):
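
A minimal sketch for eth0 (eth2 and eth3 would use MASTER=bond1):

# vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no
HOTPLUG=no
MASTER=bond0
SLAVE=yes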

The bond configurations are specified in the equivalent ifcfg-bond scripts and specify bonding options as well as the upstream bridge name. In this case we’re just setting a basic active-passive bond (mode=1) with status monitoring every 100ms (miimon=100):
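
A sketch for bond0 (bond1 follows the same pattern with BRIDGE=cloudbr1):

# vi /etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=1 miimon=100"
BRIDGE=cloudbr0
NM_CONTROLLED=no
HOTPLUG=no

The bridge cloudbr0 carries the host IP address (all addresses used here are example values):

# vi /etc/sysconfig/network-scripts/ifcfg-cloudbr0

DEVICE=cloudbr0
ONBOOT=yes
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.168.100.20
NETMASK=255.255.255.0
GATEWAY=192.168.100.1
NM_CONTROLLED=no
DELAY=0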

cloudbr1 does not have an IP address configured, hence the configuration is simpler:

# vi /etc/sysconfig/network-scripts/ifcfg-cloudbr1

DEVICE=cloudbr1
ONBOOT=yes
TYPE=Bridge
NM_CONTROLLED=no
DELAY=0

Optional tagged interface for storage traffic

If a dedicated VLAN tagged IP interface is required for e.g. storage traffic, this can be accomplished by creating a VLAN tagged bond and tying this to a dedicated bridge. In this case we create a new bridge on bond0 using VLAN 100:
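
A sketch of the two configuration files, again with example addressing:

# vi /etc/sysconfig/network-scripts/ifcfg-bond0.100

DEVICE=bond0.100
ONBOOT=yes
BOOTPROTO=none
VLAN=yes
BRIDGE=cloudbr100
NM_CONTROLLED=no

# vi /etc/sysconfig/network-scripts/ifcfg-cloudbr100

DEVICE=cloudbr100
ONBOOT=yes
TYPE=Bridge
BOOTPROTO=static
IPADDR=10.0.100.20
NETMASK=255.255.255.0
NM_CONTROLLED=no
DELAY=0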

Internal bridge cloud0

When using linux bridge networking there is no requirement to configure the internal “cloud0” bridge; this is all handled by CloudStack.

Network startup

Note – once all network startup scripts are in place and the network service is restarted you may lose connectivity to the host if there are any configuration errors in the files, hence make sure you have console access to rectify any issues.
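
To make the configuration live restart the network service:

# service network restart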

Ubuntu

To use bonding and linux bridge networking in Ubuntu first install the following:

# apt-get install ifenslave-2.6 bridge-utils

Also add the bonding and bridge modules to the kernel modules to be loaded at boot time:

# vi /etc/modules

# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
loop
lp
rtc
bonding
bridge

Before continuing make sure the correct hostname and FQDN are set in /etc/hostname and /etc/hosts respectively, then add the following lines to /etc/sysctl.conf:
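
The following settings stop bridged guest traffic from being processed by the host iptables rules, a common baseline for KVM bridge networking (adjust to local firewall requirements):

net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0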

All interface, bond and bridge configuration is done in /etc/network/interfaces. As for CentOS we are configuring basic active-passive bonds (mode=1) with status monitoring every 100ms (miimon=100), and configuring bridges on top of these. As before the host IP address is tied to cloudbr0:
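
A sketch of the complete file, using example addressing:

# vi /etc/network/interfaces

auto lo
iface lo inet loopback

# Physical interfaces enslaved to the bonds
auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto eth2
iface eth2 inet manual
    bond-master bond1

auto eth3
iface eth3 inet manual
    bond-master bond1

# Active-passive bonds (mode=1) with 100ms link monitoring
auto bond0
iface bond0 inet manual
    bond-mode active-backup
    bond-miimon 100
    bond-slaves none

auto bond1
iface bond1 inet manual
    bond-mode active-backup
    bond-miimon 100
    bond-slaves none

# cloudbr0 carries the host IP address (example values)
auto cloudbr0
iface cloudbr0 inet static
    address 192.168.100.20
    netmask 255.255.255.0
    gateway 192.168.100.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0

# cloudbr1 carries the untagged public traffic and has no host IP address
auto cloudbr1
iface cloudbr1 inet manual
    bridge_ports bond1
    bridge_stp off
    bridge_fd 0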

Optional tagged interface for storage traffic

Dedicated VLAN tagged IP interface for e.g. storage traffic is again accomplished by creating a VLAN tagged bond and tying this to a dedicated bridge. As above we add the following to /etc/network/interfaces to create a new bridge on bond0 using VLAN 100:
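
A sketch of the additional stanzas, assuming the vlan package is installed and again using example addressing:

# vi /etc/network/interfaces

auto bond0.100
iface bond0.100 inet manual
    vlan-raw-device bond0

auto cloudbr100
iface cloudbr100 inet static
    address 10.0.100.20
    netmask 255.255.255.0
    bridge_ports bond0.100
    bridge_stp off
    bridge_fd 0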

Internal bridge cloud0

When using linux bridge networking the internal “cloud0” bridge is again handled by CloudStack, i.e. there’s no need for specific configuration to be specified for this.

Network startup

Note – once all network startup scripts are in place and the network service is restarted you may lose connectivity to the host if there are any configuration errors in the files, hence make sure you have console access to rectify any issues.
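
To make the configuration live restart networking:

# /etc/init.d/networking restart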

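OVS bridge configuration

CentOS

As an alternative to linux bridges the same networking model can be built with an OpenVswitch back end. First install the OpenVswitch packages, enable and start the service, then create the two cloud bridges with ovs-vsctl (package and repository names may vary between CentOS versions):

# yum install openvswitch
# chkconfig openvswitch on
# service openvswitch start

# ovs-vsctl add-br cloudbr0
# ovs-vsctl add-br cloudbr1
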
This will configure the bridges in the OVS database, but the settings will not be persistent. To make the settings persistent we need to configure the network configuration scripts in /etc/sysconfig/network-scripts/, similar to when using linux bridges.

Each individual network interface has a generic configuration – note there is no reference to bonds at this stage. The following ifcfg-eth script applies to all interfaces:
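
A minimal sketch for eth0 (eth1 / eth2 / eth3 are identical apart from the device name):

# vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no
HOTPLUG=no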

The bonds reference the interfaces as well as the upstream bridge. In addition the bond configuration specifies the OVS specific settings for the bond (active-backup, no LACP, 100ms status monitoring):
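
A sketch for bond0 and its upstream bridge cloudbr0 (bond1 and cloudbr1 follow the same pattern; addresses are examples):

# vi /etc/sysconfig/network-scripts/ifcfg-bond0

DEVICE=bond0
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSBond
OVS_BRIDGE=cloudbr0
BOOTPROTO=none
BOND_IFACES="eth0 eth1"
OVS_OPTIONS="bond_mode=active-backup lacp=off other_config:bond-detect-mode=miimon other_config:bond-miimon-interval=100"
HOTPLUG=no

# vi /etc/sysconfig/network-scripts/ifcfg-cloudbr0

DEVICE=cloudbr0
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.100.20
NETMASK=255.255.255.0
GATEWAY=192.168.100.1
HOTPLUG=no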

Internal bridge cloud0

In addition to the above we also need to configure the internal only cloud0 bridge. This is only required when using OVS bridging, i.e. when the linux bridge kernel module has been disabled. If the module is enabled CloudStack will configure the internal bridge using linux bridge, whilst allowing all other bridges to be configured using OVS. Note the CloudStack agent will create this bridge, hence there is no need to configure it using the ovs-vsctl command.
Since there is no routing involved for the internal bridge we simply configure this with IP address 169.254.0.1/16:
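
A sketch of the configuration script:

# vi /etc/sysconfig/network-scripts/ifcfg-cloud0

DEVICE=cloud0
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=169.254.0.1
NETMASK=255.255.0.0
HOTPLUG=no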

Optional tagged interface for storage traffic

If a dedicated VLAN tagged IP interface is required for e.g. storage traffic this is accomplished by creating a VLAN tagged fake bridge on top of one of the cloud bridges. In this case we add it to cloudbr0 with VLAN 100:
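
A sketch with example addressing; the OVS_OPTIONS value is passed through to ovs-vsctl add-br, creating the fake bridge on top of cloudbr0 with VLAN tag 100:

# vi /etc/sysconfig/network-scripts/ifcfg-cloudbr100

DEVICE=cloudbr100
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSBridge
OVS_OPTIONS="cloudbr0 100"
BOOTPROTO=static
IPADDR=10.0.100.20
NETMASK=255.255.255.0
HOTPLUG=no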

VLAN problems when using OVS

Due to bugs in legacy network interface drivers, VLAN traffic can in certain circumstances fail to propagate between KVM hosts. This is a known issue, and the OpenVswitch VLAN FAQ is a useful place to start any troubleshooting.

One workaround for this issue is to configure the “VLAN splinters” setting on the network interfaces. This is accomplished with the following command:

# ovs-vsctl set interface eth0 other-config:enable-vlan-splinters=true

The problem arises when trying to make this setting persistent across reboots, as the command cannot be run as part of the normal ifcfg-eth scripts.

One way to make this persistent is to add the following lines to the end of the ifup-ovs script:
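
For example, the following loop can be appended to the end of the script (a sketch; adjust the interface list to match the host):

# vi /etc/sysconfig/network-scripts/ifup-ovs

# Enable VLAN splinters on all physical interfaces
for IFACE in eth0 eth1 eth2 eth3; do
        ovs-vsctl set interface ${IFACE} other-config:enable-vlan-splinters=true
done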

This will ensure the settings are applied to each network interface at every reboot and after each network service restart.

Network startup

Note – as mentioned for linux bridge networking – once all network startup scripts are in place and the network service is restarted you may lose connectivity to the host if there are any configuration errors in the files, hence make sure you have console access to rectify any issues.

To make the configuration live restart the network service:

# service network restart

To check the bridge configuration use the ovs-vsctl show command. The optional cloudbr100 should show up as a fake bridge on top of cloudbr0 with VLAN tag 100:
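
# ovs-vsctl show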

Ubuntu

First of all install the bonding utilities and make sure the bonding kernel module is added to /etc/modules such that it is loaded at boot time:

# apt-get install ifenslave-2.6

# vi /etc/modules

# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
loop
lp
rtc
bonding

To ensure the linux bridge module isn’t loaded at boot time blacklist this module in /etc/modprobe.d/blacklist.conf:

# vi /etc/modprobe.d/blacklist.conf

Add the following line to the end of the file:

blacklist bridge

Ensure the correct hostname and FQDN are set in /etc/hostname and /etc/hosts respectively, and add the following lines to /etc/sysctl.conf:
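
A minimal sketch, mirroring the sysctl settings from the linux bridge section earlier in this article:

net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0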

Bridge startup in Ubuntu 12.04

The OpenVswitch implementation in Ubuntu 12.04 is slightly lacking compared to 14.04 and later. One thing which is missing is startup scripts for bringing the OVS bridges online. This can be accomplished by adding a custom bridge startup script similar to the following:
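
One simple approach is to add the ovs-vsctl commands to /etc/rc.local so the bridges are created at boot (a sketch; the --may-exist flag makes the commands safe to re-run):

# vi /etc/rc.local

Add the following before the final "exit 0" line:

ovs-vsctl --may-exist add-br cloudbr0
ovs-vsctl --may-exist add-br cloudbr1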

Internal bridge cloud0

In Ubuntu there is no requirement to add additional configuration for the internal cloud0 bridge; CloudStack manages this.

Optional tagged interface for storage traffic

Additional VLAN tagged interfaces are again created by adding a VLAN tagged fake bridge on top of one of the cloud bridges. In this case we add it to cloudbr0 with VLAN 100:

# ovs-vsctl add-br cloudbr100 cloudbr0 100

To make the fake bridge persistent across reboots, the equivalent --may-exist command can be added to the bridge startup additions in /etc/rc.local described above.

Conclusion

As KVM is becoming more stable and mature, more people are going to start looking at using it rather than the more traditional XenServer or vSphere solutions, and we hope this article will assist in configuring host networking. As always we’re happy to receive feedback, so please get in touch with any comments, questions or suggestions.

About The Author

Dag Sonstebo is a Cloud Architect at ShapeBlue, The Cloud Specialists. Dag spends most of his time designing, implementing and automating IaaS solutions based on Apache CloudStack.