Experimentation with GENI

1 Why GENI?

GENI might be right for you if your experiment requires:

More resources than would ordinarily be found in your lab. Since GENI is a suite of infrastructures, it can potentially provide you with more resources than are typically found in any one laboratory. This is especially true for compute resources: GENI provides access to large testbeds with hundreds of PCs and to cloud computing resources.

Non-IP connectivity across resources. Some GENI aggregates allow you to set up Layer 2 connections between resources within the aggregate. Experimenters may install and run their own Layer 3 and above protocols on these resources. It is also possible to set up Layer 2 connections between many GENI aggregates that connect to GENI backbone networks (Internet2 and NLR). You can even set up your network to route through experimenter-programmable switches in the GENI backbone.
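To make the idea of a non-IP protocol concrete, the sketch below assembles a raw Ethernet II frame carrying a custom payload under an experimental EtherType, so it is not IP traffic at all. This is purely illustrative: the MAC addresses and payload are made up, and actually transmitting such a frame on a sliver would require a raw socket and appropriate privileges.

```python
import struct

def build_frame(dst_mac: bytes, src_mac: bytes, ethertype: int, payload: bytes) -> bytes:
    """Assemble a raw Ethernet II frame: 6-byte destination MAC,
    6-byte source MAC, 2-byte EtherType, then the payload."""
    return dst_mac + src_mac + struct.pack("!H", ethertype) + payload

# 0x88B5 is an IEEE EtherType reserved for local experimental use,
# so frames like this will never be mistaken for IP (0x0800) traffic.
frame = build_frame(
    dst_mac=bytes.fromhex("ffffffffffff"),   # broadcast
    src_mac=bytes.fromhex("020000000001"),   # locally administered address (made up)
    ethertype=0x88B5,
    payload=b"hello over layer 2",
)
```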

A deeply programmable network. GENI has switches in the backbone and at the edges that you can program to set up the network topologies you need and to control flows in your network.

Geographically distributed resources. Some GENI resources are distributed around the world.

Reproducibility. You can get exclusive access to certain GENI resources including CPU resources and network resources. This gives you control over your experiment's environment and hence the ability for you and others to repeat experiments under identical or very similar conditions.

Benefits of using GENI include:

Unified access to a large number of resources. Your GENI experimenter credentials will give you access to a number of resources owned and operated by different organizations without your having to get separate accounts from each of them.

Unified tools and services. Many GENI experiment control and measurement tools and services work across diverse resources owned and operated by different organizations. You do not have to switch between tools to use resources from different organizations.

Operations support. A GENI-wide meta-operations team coordinates GENI operations, security, and network stitching (setting up Layer 2 VLANs over GENI backbone networks). For assistance with any of these issues, contact gpo-infra@geni.net.

2 An Experimenter's View of GENI

GENI is a suite of infrastructures for networking and distributed systems experimentation. GENI supports at-scale experimentation on shared, heterogeneous, highly instrumented infrastructure and enables deep programmability throughout the network.

As an experimenter you will need to know about GENI clearinghouses, GENI aggregates and GENI slices.

GENI aggregates provide resources to experimenters with GENI credentials. GENI has a number of different aggregates that provide a variety of resources for experimentation. An important aspect of planning your experiment is deciding what resources you need (resource types and numbers) and which aggregates might be able to provide you these resources.

A GENI slice holds a collection of computing and communications resources capable of running an experiment or a wide area service. An experiment is a researcher-defined use of resources in a slice; an experiment runs in a slice. A researcher may run multiple experiments using resources in a slice, concurrently or over time.
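Resources are requested for a slice using an RSpec, an XML document describing the desired nodes and links. The sketch below builds a minimal request RSpec; it assumes the GENI RSpec v3 namespace, and real requests typically carry more detail (sliver types, disk images, links) that varies by aggregate.

```python
import xml.etree.ElementTree as ET

RSPEC_NS = "http://www.geni.net/resources/rspec/3"  # GENI RSpec v3 namespace

def request_rspec(node_ids):
    """Build a minimal request RSpec asking for one bare node per client_id."""
    ET.register_namespace("", RSPEC_NS)
    rspec = ET.Element(f"{{{RSPEC_NS}}}rspec", {"type": "request"})
    for client_id in node_ids:
        # client_id is the experimenter-chosen name for the node within the slice
        ET.SubElement(rspec, f"{{{RSPEC_NS}}}node", {"client_id": client_id})
    return ET.tostring(rspec, encoding="unicode")

rspec_xml = request_rspec(["node0", "node1"])
```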

The following figure illustrates the role of GENI clearinghouses and aggregates:

3 GENI Resources

GENI has a number of aggregates that make different kinds of resources available for use by experimenters. Examples of such resources include:

Backbone networks. Geographically distributed GENI resources may be connected to one another using Internet2, National Lambda Rail (NLR) or the public Internet. Many aggregates can be connected using Layer 2 VLANs over Internet2 and NLR. Most aggregates can be connected using IP.

Programmable hosts. GENI provides a wide array of programmable hosts: entire PCs from the ProtoGENI aggregate that can be booted with an experimenter-specified operating system; operating system virtual machines that can host experimenter software from the PlanetLab and ProtoGENI aggregates; programming language virtual machines from the Million Node GENI aggregate; and cloud computing resources from the GENICloud aggregate.

See Section 8 for a listing of GENI aggregates along with a description of the resources they provide.

4 Picking Resources for Your Experiment

As you plan your experiment you will want to consider:

The degree of control you need over your experiment. Do you need to tightly control the resources (CPU, bandwidth, etc.) allocated to your experiment, or will best-effort suffice? If you need a tightly controlled environment you might want to consider one of the ProtoGENI aggregates, which allocate entire PCs that can be connected in arbitrary topologies.

The desired network topology. Does your experiment have to be geographically distributed? What kinds of connectivity do you need between these geographically distributed locations? Almost all aggregates can connect using IP over the Internet. Many aggregates connect to one of the GENI backbones and allow you to set up IP connections with other resources on the backbone; this gives you a bit more control over the network. Some aggregates provide Layer 2 connectivity over a GENI backbone, i.e., you can set up VLANs between these aggregates and other resources on the backbone network. This allows you to run non-IP protocols between the aggregate and other resources.

The desired control over network flows. If you need to manage network traffic to/from an aggregate, you might want to use aggregates that connect to a GENI backbone using OpenFlow switches, or set up VLANs to these aggregates through the ProtoGENI Backbone Nodes or the SPP Nodes.
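The kind of flow control an OpenFlow switch gives you can be pictured as a priority-ordered table of match rules mapping packet header fields to actions. The toy table below is purely illustrative (the field names echo OpenFlow 1.0 match fields, but the ports and EtherTypes are invented), showing how, for example, experimental Layer 2 traffic could be steered to a different port than IPv4 traffic.

```python
# A toy flow table in the spirit of OpenFlow: each rule matches on
# (in_port, dl_type) and maps to an action; None acts as a wildcard.
# Rules are checked in priority order; the last rule is the table-miss.
FLOW_TABLE = [
    ({"in_port": 1,    "dl_type": 0x0800}, "output:2"),  # IPv4 arriving on port 1 -> port 2
    ({"in_port": None, "dl_type": 0x88B5}, "output:3"),  # experimental frames -> port 3
    ({"in_port": None, "dl_type": None},   "drop"),      # table-miss: drop everything else
]

def lookup(in_port: int, dl_type: int) -> str:
    """Return the action of the first rule matching the packet's headers."""
    for match, action in FLOW_TABLE:
        if match["in_port"] in (None, in_port) and match["dl_type"] in (None, dl_type):
            return action
    return "drop"
```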

The number of resources you need from an aggregate. Aggregates vary from small installations such as the GPO Lab ProtoGENI aggregate that consists of eleven nodes to the PlanetLab and ProtoGENI aggregates that consist of hundreds of nodes.

If the aggregate accepts GENI credentials. You will likely be able to use resources from these aggregates with a credential issued by a GENI clearinghouse; you do not have to contact the aggregate owner to get an account for the aggregate. Additionally, aggregates that accept GENI credentials typically implement the GENI Aggregate Manager API. A growing number of GENI experiment control tools support this API, i.e., these tools can be used to create slices, add resources from aggregates that support the GENI API, and so on. Examples of such tools include Flack, Omni and Gush.

The GENI Project Office is happy to help find the best match of resources for your experiments. Please contact help@geni.net for assistance.

5 Experimenter Tools

5.1 Experiment Control Tools

GENI experiment control tools are used to create slices, add resources to or remove resources from slices, and delete slices. Some tools may also help with installing experimenter-specified software on resources in slices; starting, pausing, resuming and stopping the execution of an experiment; and monitoring the resources in slices for failures. Examples of GENI experiment control tools include Gush, Omni, PlanetLab SFI and Flack.
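As a sketch of the slice lifecycle these tools manage, the snippet below lays out a typical sequence of Omni invocations, from creating the slice through deleting it. The commands use Omni's documented verbs (createslice, createsliver, sliverstatus, deletesliver, deleteslice), but the aggregate URL and RSpec filename are placeholders, exact flags may differ across Omni versions, and the commands are built as argument lists rather than executed.

```python
# Placeholders for illustration only: this aggregate manager URL does not exist,
# and request.rspec stands in for an experimenter-written RSpec file.
AM_URL = "https://www.example-aggregate.net/xmlrpc/am"
SLICE = "myslice"
RSPEC = "request.rspec"

def lifecycle(slice_name, am_url, rspec_path):
    """Return the sequence of omni command lines for a typical experiment:
    create the slice, reserve resources, check status, then tear down."""
    return [
        ["omni.py", "createslice", slice_name],
        ["omni.py", "-a", am_url, "createsliver", slice_name, rspec_path],
        ["omni.py", "-a", am_url, "sliverstatus", slice_name],
        ["omni.py", "-a", am_url, "deletesliver", slice_name],
        ["omni.py", "deleteslice", slice_name],
    ]

steps = lifecycle(SLICE, AM_URL, RSPEC)
```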

In addition to these experiment control tools, individual aggregates provide experimenters with additional tools to install and manage software on their resources. For example, the Million Node GENI aggregate provides a set of tools to manage the virtual machines it provides as computing resources.

5.2 Instrumentation and Measurement Tools

GENI instrumentation tools are currently aggregate specific. Examples of such tools include Instrumentation Tools for the Kentucky ProtoGENI aggregate, Owl for the PlanetLab aggregate and OMF/OML for the ORBIT aggregate.

8 GENI Aggregates Currently Available to Experimenters

8.1 Backbone Networks

Internet2 provides the U.S. research and education community a dynamic hybrid optical and packet network. GENI experimenters have access to 1Gbps of dedicated bandwidth from Internet2. Experimenters may create their own topologies using Layer 2 VLANs.

NLR provides a testbed for advanced research at over 280 universities and private and U.S. government laboratories. GENI experimenters have access to up to 30Gbps of non-dedicated bandwidth on NLR. Experimenters may create their own topologies using Layer 2 VLANs.

GENI network core is a set of OpenFlow-capable switches in NLR and Internet2. There are currently two standing VLANs (3715 and 3716) carried on the ten switches in the core. Experimenters may use the GENI network core without having to coordinate with Internet2 or NLR operations to set up VLANs for their experiments. They will however have to coordinate with their campus and/or regional networks to connect to the GENI core. The two standing VLANs in the network core also bridge between the Internet2 and NLR networks.

Description: Over 500 co-located PCs that can be loaded with an experimenter-specified OS image and connected in arbitrary topologies. Includes 60 nodes with 2 WiFi cards each, plus software-defined radio peripherals (USRP2).

Compute resources: Complete PCs or virtual machines on PCs.

Network connectivity: PCs can be set up as routers, plus experimenter-controllable switches (HP ProCurves).

8.3 Programmable Networks

Each aggregate in this section is described in terms of its description, compute resources, whether it accepts GENI credentials, its network connectivity, and its experimenter tools.

Supercharged PlanetLab Platform (SPP) Nodes

Five high-performance PlanetLab nodes at Internet2 co-location sites. Nodes incorporate high-performance server and network processor blades to support service delivery over high speed overlay networks.

Experimenters program the General-Purpose Processing Engines (GPEs) and Network Processor Blades (NPE) of the SPP nodes.

Sensor networking testbed consisting of 96 nodes. Each node has one XSM, four TelosB motes, and one iMote2, all of which are attached to a Stargate. The Stargates are connected using both wired and wireless Ethernet. The nodes have 802.11, 802.15.4, and 900 MHz Chipcon CC1000 radios.

9 GENI Aggregates: Coming Soon

Network testbed centered on a Midwest US regional optical network between The University of Kansas, Kansas State University, University of Nebraska – Lincoln, and University of Missouri – Kansas City, supported with optical switches from Ciena interconnected by Qwest fiber infrastructure.

Network consisting of several segments of dark fiber; it includes a reconfigurable fiber switch (Layer 0) to generate different physical topologies, out-of-band network management to access equipment at PoPs, and remote power management for resetting and powering down experimental equipment.