Network Configuration

The first step of OpenStack/Gram configuration is establishing the networks described above.

We need to define a range of VLANs for the data network (say, 1000-2000) and separate VLANs for the external, control, and management networks (say, 5, 6, and 7) on the management switch.
The external and control network ports should be configured untagged, and the management port should be configured tagged.

The Control, External and Management networks are connected between the rack management switch and ethernet interfaces on the Controller or Compute nodes.

The Data network is connected between the rack OpenFlow switch and an ethernet interface on the Control and Compute nodes.

The ports on the OpenFlow switch to which data network interfaces have been connected need to be configured to trunk the VLANs of the data network. How this is done varies from switch to switch, but typical commands look something like:

conf t
vlan <vlanid>
tagged <ports>
exit
exit
write memory

On the OpenFlow switch, for each VLAN used in the data network (1000-2000), set the controller to point to the VMOC running on the control node. The command will vary from switch to switch but this is typical:
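As an illustration only (this is not the syntax of any particular hardware switch): on a software data path built with Open vSwitch the controller is set per bridge as shown below, whereas hardware OpenFlow switches typically attach a controller to an OpenFlow instance or VLAN using vendor-specific commands. The bridge name, address, and port here are placeholders.

ovs-vsctl set-controller br-data tcp:<control_address>:<openflow-port>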

Management Switch

The ports on the management switch to which management network interfaces have been connected need to be configured to trunk the VLAN of the management network. How this is done varies from switch to switch, but typical commands look something like:
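For example, mirroring the data-network example above (syntax varies by vendor; the VLAN ID and port list are placeholders):

conf t
vlan <management vlanid>
tagged <ports>
exit
exit
write memory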

Install the gram package (use the control or compute package depending on which machine type is being installed):

sudo gdebi gram_<control/compute>.deb

Edit /etc/gram/config.json. NOTE: This is the most critical step of the process. This file specifies your passwords and network configuration so that OpenStack will be configured properly. [See the section "Configuring config.json" below for details on the variables in that file.]

Run the GRAM installation script (again, specify control or compute depending on which machine type is being installed):

sudo /etc/gram/install_gram.sh <control/compute>

Configure the OS and network. You will lose network connectivity during this step, so it is recommended that the following command be run directly on the machine or under the Linux 'screen' program.

sudo /tmp/install/install_operating_system_[control/compute].sh

Configure everything else. Use a root shell:

/tmp/install/install_[control/compute].sh

This last command will do a number of things:

Install all required apt dependencies

Configure the OpenStack configuration files based on values set in config.json

Start all OpenStack services

Start all GRAM services

If something goes wrong (you will see errors in the output stream), look at the scripts being run in /tmp/install/install*.sh (install_compute.sh or install_control.sh). You can usually run the commands by hand to get things working, or at least see where things went wrong (often a problem in the configuration file).

Set up the namespace only on the control node. Use a root shell.

Check that 'sudo ip netns' lists two entries; the qrouter-* namespace is the important one.

If the qdhcp-* namespace is not there, restart the DHCP agent: sudo service quantum-dhcp-agent restart

If you still cannot get 2 entries, try restarting all the quantum services:
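For example, on a Grizzly-era Ubuntu installation the relevant services can be restarted as follows (the exact service names are an assumption and may differ on your system):

sudo service quantum-server restart
sudo service quantum-dhcp-agent restart
sudo service quantum-l3-agent restart
sudo service quantum-plugin-openvswitch-agent restart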

If using the local gcf clearinghouse, set up gcf_config:
In ~/.gcf/gcf_config, change hostname to the fully qualified domain name of the control host in both the clearinghouse section and the aggregate manager section (two places), e.g.,

host=boscontroller.gram.gpolab.bbn.com

Change the base_name to reflect the service token (the same service token used in config.json). Use the FQDN of the control node for the token.

Certificate generation has to be done twice: the first run creates certificates for the aggregate manager and the clearinghouse; the second creates the user certificates based on those certificates.
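A sketch of the two runs, assuming the stock gcf tools are installed under /opt/gcf (as in the testing section below):

cd /opt/gcf/src
python gen-certs.py   # first run: clearinghouse and aggregate manager certificates
python gen-certs.py   # second run: user certificates based on the certificates above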

Generate a public/private key pair:

ssh-keygen -t rsa -C "gram@bbn.com"

Modify ~/.gcf/omni_config to reflect the service token used in config.json (currently the FQDN is used as the token):

authority=geni:boscontroller.gram.gpolab.bbn.com:gcf

Set the IP addresses of the ch and sa to the external IP address of the controller.
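A sketch of the relevant omni_config lines, assuming the gcf framework section is named my_gcf and the clearinghouse/slice authority run on the stock gcf port 8000 (adjust to your deployment; the IP is a placeholder):

[my_gcf]
ch = https://<controller-external-IP>:8000
sa = https://<controller-external-IP>:8000
authority = geni:boscontroller.gram.gpolab.bbn.com:gcf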

Configuring config.json

The config.json file (in /etc/gram) is a JSON file that is parsed by GRAM code at configure/install time as well as at run time.

JSON is a format for expressing dictionaries of name/value pairs where the values can be constants, lists, or dictionaries. There are no comments, per se, in JSON, but the file as provided has some 'dummy' variables (e.g. "000001") against which comments can be added.
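For example, a fragment of config.json using a dummy variable to carry a comment might look like this (all values shown are illustrative only):

{
  "000001"             : "external_interface is the NIC facing the internet",
  "external_interface" : "eth0",
  "external_netmask"   : "255.255.255.0"
}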

The following is a list of all the configuration variables that can be set in the config.json file. For some, defaults are provided in the code, but it is advised that the values of these parameters be set explicitly.

default_VM_flavor: Name of the default VM flavor (if not provided in the request RSpec), e.g. 'm1.small'

default_OS_image: Name of the default VM image (if not provided in the request RSpec), e.g. 'ubuntu-12.04'

default_OS_type: Name of the OS of the default VM image, e.g. 'Linux'

default_OS_version: Version of the OS of the default VM image, e.g. '12'

external_interface: Name of the NIC connected to the external network (internet), e.g. eth0. GRAM configures this interface with a static IP address to be specified by the user.

external_address: IP address of the interface connected to the external network

external_netmask: Netmask associated with the above IP address

control_interface: Name of the NIC that is to be on the control plane

control_address: IP address of the control interface. This should be a private address.

data_interface: Name of the NIC that is to be on the data plane

data_address: IP address of the data interface

internal_vlans: Set of VLAN tags for internal links and networks (not for stitching); this must match the OpenFlow switch configuration

management_interface: Name of the NIC that is to be on the management plane

management_address: IP address of the management interface

management_network_name: Quantum will create a network with this name to provide an interface to the VMs through the controller

management_network_cidr: The CIDR of the Quantum management network. It is recommended that this address space be different from the addresses used on the physical interfaces (control, management, data) of the control and compute nodes.

management_network_vlan: The VLAN used on the management switch to connect the management interfaces of the compute/control nodes

Port on which to communicate with the VMOC interface manager, default = 7001

vmoc_slice_autoregister: Should GRAM automatically register slices with VMOC? Default = True

vmoc_set_vlan_on_untagged_packet_out: Should VMOC set the VLAN on an untagged outgoing packet? Default = False

vmoc_set_vlan_on_untagged_flow_mod: Should VMOC set the VLAN on an untagged outgoing flow mod? Default = True

vmoc_accept_clear_all_flows_on_startup: Should VMOC clear all flows on startup? Default = True

control_host_address: The IP address of the controller node's control interface (used to set /etc/hosts on the compute nodes)

mgmt_ns: DO NOT set this field; it is set during installation and is the name of the namespace containing the Quantum management network. This namespace can be used to access the VMs using their management addresses.

disk_image_metadata: A dictionary mapping names of images (as registered in Glance) to tags for 'os' (operating system of the image), 'version' (version of the OS of the image), and 'description' (human-readable description of the image), e.g.
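For example (the image name and tag values are illustrative only):

"disk_image_metadata": {
    "ubuntu-12.04": {
        "os": "Linux",
        "version": "12.04",
        "description": "Ubuntu 12.04 server image"
    }
}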

Installing Operations Monitoring

Monitoring can be installed after testing the initial installation of GRAM. Most supporting infrastructure was installed by
the steps above. Some steps, however, still need to be done by hand and the instructions can be found here: Installing Monitoring on GRAM

Testing GRAM installation

# Restart gram-am and clearinghouse
sudo service gram-am restart
sudo service gram-ch restart
# check omni/gcf config
cd /opt/gcf/src
./omni.py getusercred
# allocate and provision a slice
# I created an rspec in /home/gram called 2n-1l.rspec
./omni.py -V 3 -a http://130.127.39.170:5001 allocate a1 ~/2n-1l.rspec
./omni.py -V 3 -a http://130.127.39.170:5001 provision a1 ~/2n-1l.rspec
# check that the VMs were created
nova list --all-tenants
# check that the VMs booted, using the VM IDs from the above command:
nova console-log <ID>
# look at the 192.x.x.x IP in the console log
# find the namespace for the management plane:
sudo ip netns list
# look at each qrouter-* namespace for one that has both the external (130.x) and management (192.x) addresses
sudo ip netns exec qrouter-78c6d3af-8455-4c4a-9fd3-884f92c61125 ifconfig
# using this namespace, ssh into the VM:
sudo ip netns exec qrouter-78c6d3af-8455-4c4a-9fd3-884f92c61125 ssh -i ~/.ssh/id_rsa gramuser@192.168.10.4
# verify that the data plane is working by pinging across VMs on the 10.x.x.x addresses
# The above VM has 10.0.21.4 and the other VM I created has 10.0.21.3
ping 10.0.21.3

Turn off Password Authentication on the Control and Compute Nodes

Generate an rsa ssh key pair on the control node for the gram user, or use the one previously generated if it exists (i.e. ~gram/.ssh/id_rsa and ~gram/.ssh/id_rsa.pub):

ssh-keygen -t rsa -C "gram@address"

Generate a dsa ssh key pair on the control node for the gram user, or use the one previously generated if it exists (i.e. ~gram/.ssh/id_dsa and ~gram/.ssh/id_dsa.pub). Some components only handle DSA keys well, so access from the control node to other resources on the rack should use the DSA key.

ssh-keygen -t dsa -C "gram@address"

Copy the DSA public key (id_dsa.pub) to the compute nodes.

On the control and compute nodes, append it to the authorized keys: cat id_dsa.pub >> ~/.ssh/authorized_keys

As root (or via sudo), edit /etc/ssh/sshd_config and ensure that these entries are set this way:
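The entries in question are typically the following (a sketch; confirm against your site's security policy):

PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes

Then restart the SSH daemon so the change takes effect: sudo service ssh restart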