
Proteus: A network service control platform for service evolution in a mobile software defined infrastructure

Profile Overview

This tutorial is based on the work done for the Proteus platform, published in MobiCom 2016. Proteus is a mobile network service control platform that enables safe and rapid evolution of services in a mobile software defined infrastructure (SDI). Proteus allows network service and network component functionality to be specified in templates, which the Proteus orchestrator then uses to realize and modify service instances based on the specifics of a service creation request and the availability of resources in the mobile SDI. Proteus also allows dynamic evolution of services, e.g. letting users instantiate a basic OpenEPC service and then dynamically modify it to support SDN-based selective low-latency traffic offloading functionality.

For questions or comments, contact: aisha.syed@utah.edu

Profile Instantiation

Create a new PhantomNet experiment by logging in to the PhantomNet web UI. If you do not have any experiments currently running, you should land on the instantiate page by default. (Otherwise, click on "Actions" and select "Start Experiment" from the drop-down menu.)

Click on the "Change Profile" button. To find the profile we will use for this tutorial, type "Proteus" into the search box and select "Proteus" from the resulting list by clicking on it. This will show a description of the selected profile. Next, click on the "Select Profile" button, which will take you back to the "1. Select a Profile" page. Click "Next" to reach the "2. Parameterize" page. For this tutorial we will stay with the default options, so simply select "Next" to reach the "3. Finalize" page.

The "3. Finalize" page shows a diagram of the topology that will be created for your experiment. On this page you need to select the "Project" in which the experiment should be created (in case you have access to more than one project). You may optionally also give your experiment a name at this point by typing into the "Name" field. Click "Finish".

PhantomNet will now go through the process of creating and setting up your experiment. This will take a couple of minutes, so please be patient. When your experiment goes beyond the "created" state, the web interface will show more information about the resources allocated for the experiment and the current state of each node. For example, the "Topology View" tab will show the topology of your experiment, and hovering over a node will show its current state.

Note that you have to wait for your experiment as a whole to be in the "Ready" state before you can proceed with the tutorial.

The Proteus profile will set up an infrastructure service provider "whitespace" upon which different (possibly co-located) EPC service instances and their variants can be created. The orchestrator node "orch" contains a README in the /opt/proteus directory describing how to set up these service instances.

Proteus Usage

This tutorial will walk you through a more advanced experiment that uses an orchestrator to create an EPC service and add selective low-latency offloading functionality to it, using a service template built from components used in the OpenEPC and SMORE tutorials. The tutorial will also show how to orchestrate a basic EPC service and dynamically modify, shut down, and recreate it.

Log in to the orch node and run the following commands.

cd /opt/proteus
tcsh

Start the knowledge graph (KG) database process:

python init.py

The init command must be run at least once per experiment, but it can be rerun as many times as needed. For example, if the orch node is restarted for any reason, the config-tool.py script used below will give a connection refused error because the KG database process is no longer running; rerunning init.py fixes this.

Populate KG with resource information (takes about a minute to finish).

python config-tool.py

The KG web interface can be accessed at the public IP of the orch node (http://publicIP:7474). Note that this only works when not using VMs, since VM nodes usually do not have a public IP in PhantomNet.

Some example queries (using the Cypher query language) that can be run using the KG web interface are shown below.

To get all nodes in inventory, type the following query in the textbox at the top of the KG webpage.
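The query itself is not reproduced above. As a minimal example (not taken from the Proteus documentation), the standard Cypher query that returns every node in the graph is:

```cypher
MATCH (n) RETURN n
```

The KG web interface will render the matched nodes and their relationships graphically; you can add a `LIMIT` clause (e.g. `MATCH (n) RETURN n LIMIT 25`) if the inventory is large.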

To print an overview of all existing services (SID and service name) as well as all available clients and their hostnames at any time, use:

python orch_rpm.py print

To print resources associated with a specific service (e.g. EPC1) at any time:

python orch_rpm.py resources EPC1

where arg1 is the service name.

To test the EPC instance, we need to configure the ANDSF IP of the EPC instance inside the client UE. To do this, run:

python orch_rpm.py setupClient client1 EPC1

where arg1=CLIENT_NAME is a client name shown by the 'python orch_rpm.py print' command, and arg2=EPC_NAME is the name of the EPC instance to which we want to attach the client.

Next, ssh into the client using the hostname shown by the 'print' command earlier:

ssh client.experiment.project.emulab.net

Now try attaching client to EPC using the command below:

/opt/OpenEPC/bin/mm.attach.sh

Alternatively, the following command can be used:

/opt/OpenEPC/bin/attach_wharf.pl mm

In the console that opens up as a result of running one of these commands, run

mm.connect_l3 LTE

If connect does not work on the first try, disconnect and try connecting again:

mm.disconnect_l3 LTE
mm.connect_l3 LTE

The output in the console will show the UE connecting to the configured ANDSF IP for this EPC instance (e.g. 192.168.1.31). This IP was output by the orchestrator after EPC instance creation, and can also be observed by running the following command in the orch node shell:

python orch_rpm.py resources EPC1

where arg1 is the service name.

Test service connectivity by pinging an Internet server.

ping 8.8.8.8

Starting SMORE by augmenting the EPC instance

Edit the sID parameter inside /opt/proteus/smore_params.yaml with the sID of the EPC instance created earlier. Also update the client parameter as needed inside smore_params.yaml with one or more of the client names (e.g. client1) of that EPC service.

In a shell window WIN1 on the orch node, start monitor for ping measurements:

python monitorPingMeasurements.py smore_params.yaml SERVICE_NAME

where arg1 is the SMORE specification file and arg2=SERVICE_NAME is a unique service name (e.g. SMORE1) that we want to associate with the new SMORE service that will be created based on the trigger from the measurements.

Keep the previous command running in WIN1, start another shell window WIN2 on the orch node, and run the following command to insert ping measurements. A request to create the SMORE service is triggered when the ping value is greater than or equal to 20 ms.

python insertPingMeasurements.py 20

where arg1 is the ping value (in ms).

This will initiate modification of our EPC instance created earlier by adding SMORE, a basic low-latency traffic offloading service. The output will show the offload cloud server's IP.

The insertPingMeasurements.py script can be modified to insert multiple ping measurements with timestamps attached in a loop, so that a new SMORE instance is only added when the average over a given period of time stays above the given threshold.
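As a rough sketch of that modification (the class and parameter names below are hypothetical, not part of the Proteus code), a windowed-average trigger could look like:

```python
# Sketch: accumulate timestamped ping samples and decide whether to
# trigger SMORE creation only when the average over a recent time
# window meets or exceeds a threshold. Names here are illustrative.
import time
from collections import deque


class PingTrigger:
    def __init__(self, threshold_ms=20.0, window_s=60.0):
        self.threshold_ms = threshold_ms
        self.window_s = window_s
        self.samples = deque()  # (timestamp, ping_ms) pairs

    def add(self, ping_ms, now=None):
        """Insert one measurement; return True if SMORE should be created."""
        now = time.time() if now is None else now
        self.samples.append((now, ping_ms))
        # Drop samples older than the averaging window.
        while self.samples and now - self.samples[0][0] > self.window_s:
            self.samples.popleft()
        avg = sum(p for _, p in self.samples) / len(self.samples)
        return avg >= self.threshold_ms
```

Each True result would correspond to issuing the same service creation request that insertPingMeasurements.py issues for a single measurement.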

To test whether the SMORE offloading functionality is working correctly, SSH into client2 (or client1, depending on the client parameter set inside smore_params.yaml, which defaults to client2) and attach to the EPC:

/opt/OpenEPC/bin/mm.attach.sh
mm.connect_l3 LTE

If connect does not work on the first try, disconnect and connect again once or twice:

mm.disconnect_l3 LTE
mm.connect_l3 LTE

Now, test ping with both Internet and cloud server:

ping 8.8.8.8
ping cloudIP

Once you are done testing, you can explicitly delete the SMORE functionality (essentially reversing the SMORE addition to our EPC instance) using the service name you chose when creating it. For example, if you used SMORE1 as the name:

python orch_rpm.py delete SMORE1

This will shut down SMORE and revert the EPC instance to its original form, i.e. with no selective low-latency offloading.

Instead of explicitly deleting SMORE this way, read the insertPingMeasurements.py file and create a similar file that issues a delete request only when ping measurements fall below a certain threshold (e.g. 20 ms). Execution of the new file could look like this:

python insertPingMeasurements.py 15 smore_sID

where 15 is the ping value.

Optionally, try creating a new EPC service instance, EPC2 (in parallel or co-located with the other existing one we created earlier):

python orch_rpm.py new epc_params.yaml EPC2

Explicitly add a new SGW to this EPC2:

python orch_rpm.py addResource epc_params_new_SGW.yaml EPC2

where EPC2 is the name we assigned to the EPC instance we just created.

The output will show the hostname of the SGW. Log in to the SGW node and attach to its console if needed:

/opt/OpenEPC/bin/sgw.attach.sh

Type gw_bindings.print inside the console to print the attached clients for this SGW.

SSH into a client node, e.g. client3, and attach:

/opt/OpenEPC/bin/mm.attach.sh
mm.connect_l3 LTE

If connect does not work on the first try, disconnect and connect again once or twice:

mm.disconnect_l3 LTE
mm.connect_l3 LTE

Now, test ping to the Internet:

ping 8.8.8.8

Instead of explicitly adding the SGW, a script similar to the ping-measurement one can be created that instead inserts values for a different variable, such as the number of EPC clients, and requests creation of a new SGW based on that value.
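The trigger logic for such a script could be sketched as below (function names and the per-SGW capacity are hypothetical, not part of Proteus); each additional SGW it calls for would correspond to one 'python orch_rpm.py addResource' request as shown above.

```python
# Sketch: decide how many SGWs a service needs for its current client
# count, and how many additional SGWs to request. The capacity value
# (clients_per_sgw) is an illustrative assumption.
def sgws_needed(num_clients, clients_per_sgw=10):
    """Return how many SGWs are needed for num_clients (at least 1)."""
    return max(1, -(-num_clients // clients_per_sgw))  # ceiling division


def scale_sgws(num_clients, current_sgws, clients_per_sgw=10):
    """Return the number of additional SGWs to request (0 if none)."""
    return max(0, sgws_needed(num_clients, clients_per_sgw) - current_sgws)
```

A monitoring loop would feed the inserted client-count values into scale_sgws and issue one addResource request per additional SGW returned.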

To explicitly delete EPC1 and EPC2 instances, use their service names chosen during creation:

python orch_rpm.py delete EPC1
python orch_rpm.py delete EPC2

Finally, use the deleteState command to clear all persistent state and start over with the tutorial:

python orch_rpm.py deleteState

If there is a resource shortage or an IP allocation error when creating new services, existing services (found using the 'python orch_rpm.py print' command) can be deleted with the 'delete' command, followed by 'deleteState' to clear out the orch state.