Workshop Program

The ANRW ’16 took place in the Charlottenburg II/III room at the InterContinental in Berlin, Germany, the venue of the IETF-96 meeting, on Saturday, July 16, 2016.

The ANRW ’16 was recorded and live-streamed. The recordings are available below.

Recordings

The recordings from the workshop are available on YouTube in five parts.

Program

9:00

Opening

9:15

Session: Multipath

An enhanced socket API for Multipath TCP

Benjamin Hesmans (UCL) and Olivier Bonaventure (UCL)

Multipath TCP is a TCP extension that enables hosts to send data belonging to a single TCP connection over different paths. It was designed as an incrementally deployable evolution of TCP. For this reason, the Multipath TCP specification assumes that applications use the unmodified socket interface. Given the growing interest in using Multipath TCP for specific applications, there is a demand for an advanced API that enables application developers to control the operation of the Multipath TCP stack. In keeping with the incremental deployment objectives of Multipath TCP, we propose a simple but powerful socket API that uses new socket options to control the operation of the underlying stack. We implement this extension in the reference implementation of Multipath TCP in the Linux kernel and illustrate its usefulness in several use cases.
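The pattern the abstract describes, new socket options steering the stack, can be sketched in a few lines. This is our own illustration: the MPTCP option name and number below are assumptions, not part of the authors' API or of a stock kernel, so the helper degrades gracefully when the option is unsupported.

```python
import socket

# Hypothetical option number for illustration only; the real values come
# from the authors' patched Multipath TCP kernel, not stock Linux.
MPTCP_OPEN_SUBFLOW = 1001  # assumption: ask the stack to open a new subflow

def set_stack_option(sock, level, optname, value):
    """Best-effort setsockopt: True if the kernel accepted the option,
    False if the underlying stack does not support it."""
    try:
        sock.setsockopt(level, optname, value)
        return True
    except OSError:
        return False

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# The same pattern with a standard option (TCP_NODELAY) always works:
set_stack_option(s, socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
# The hypothetical MPTCP option simply fails gracefully on a stock kernel:
mptcp_ok = set_stack_option(s, socket.IPPROTO_TCP, MPTCP_OPEN_SUBFLOW, 1)
print("MPTCP subflow control available:", mptcp_ok)
s.close()
```

Because unsupported options are reported via an error rather than silently ignored, an application can probe for the extended API and fall back to plain TCP behaviour, which matches the incremental-deployment goal.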

Multipath bonding at Layer 3

Recent work has applied Multipath TCP proxies to the problem of bonding a customer’s multiple access interfaces to the Internet, in order to augment available bandwidth, especially in areas with marginal fixed connectivity. However, such proxies only apply to TCP traffic, and while UDP-based media streams can be tunneled through bonded TCP connections, this would lose the advantages of loss-tolerant media-oriented transports. We therefore propose an approach to interface bonding at layer 3, design a scheduling algorithm to shift traffic between fixed and mobile lines, implement Linux-based bonding gateways, and test them within a testbed on Swisscom’s production DSL and LTE networks.
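As a rough illustration of the scheduling decision described above, here is a toy per-interval link chooser that spills overflow from the fixed line to the mobile line. The capacity-budget heuristic and all names are our assumptions, not the paper's algorithm.

```python
def pick_link(bytes_queued_dsl, dsl_capacity_bps, interval_s=0.1):
    """Decide which link carries the next packet in this scheduling interval.
    Prefer the fixed (DSL) line; shift to the mobile (LTE) line only when
    the DSL queue exceeds what the line can drain within the interval."""
    budget_bytes = dsl_capacity_bps * interval_s / 8
    return "dsl" if bytes_queued_dsl < budget_bytes else "lte"

# An 8 Mbit/s DSL line can drain 100 kB per 100 ms interval:
print(pick_link(0, 8_000_000))        # empty queue stays on DSL
print(pick_link(500_000, 8_000_000))  # backlog overflows to LTE
```

A real scheduler would also react to delay and loss on both lines, but the core idea, keep the fixed line full and use the mobile line for overflow, is the same.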

Towards a Multipath TCP Aware Load Balancer

Multipath TCP has recently been introduced to improve resource utilization and user quality of experience. This is achieved by allowing a connection between two hosts to use multiple subflows. However, with the rise of middleboxes and the inherent ossification of the Internet, large-scale deployment of this TCP extension is difficult. In particular, a load balancer at the entry point of a data center may forward subflows to different servers, thus cancelling the advantages of Multipath TCP. In this paper, we introduce MpLb, a Multipath TCP aware load balancer that fixes this issue without any modification to the Multipath TCP protocol itself. We demonstrate the advantages of MpLb through a proof-of-concept.
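One way to realize the idea sketched above is to key the backend choice on the 32-bit connection token that every MP_JOIN subflow carries, so all subflows of one connection hash to the same server. This is our own illustration of the general technique, not MpLb's actual mechanism.

```python
import hashlib

SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # example backend pool

def pick_backend(mptcp_token: int) -> str:
    """Map a Multipath TCP connection token to a backend server.
    Every subflow of a connection carries the same token, so hashing it
    keeps all subflows on the same server, unlike 5-tuple hashing."""
    digest = hashlib.sha256(mptcp_token.to_bytes(4, "big")).digest()
    return SERVERS[int.from_bytes(digest[:4], "big") % len(SERVERS)]

token = 0xDEADBEEF
# Two subflows of the same connection land on the same backend:
assert pick_backend(token) == pick_backend(token)
```

Classic load balancers hash the 5-tuple, which differs per subflow; keying on a per-connection identifier is what makes the balancer "Multipath TCP aware".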

Multi-Homed on a Single Link: Using Multiple IPv6 Access Networks

Small companies and branch offices often have bandwidth demands and redundancy needs that go beyond the commercially available Internet access products in their price range. One way to overcome this problem is to bundle existing Internet access products. In effect, they become multi-homed often without running BGP or even getting an AS number. Currently, these users rely on proprietary L4 load balancing routers, proprietary multi-channel VPN routers, or sometimes LISP, to bundle their “cheaper” Internet access network links, e.g., via (v)DSL, DOCSIS, HSDPA, or LTE. While most products claim transport-layer transparency they add complexity via middleboxes, map each TCP connection to a single interface, and have limited application support. Thus, in this paper we propose an alternative: Auto-configuration of multiple IPv6 prefixes on a single L2 link. We discuss how this enables applications to take advantage of combining multiple access networks with minimal system changes.

Short Paper. Philipp S. Tiesel (TU-Berlin), Bernd May (TU-Berlin), and Anja Feldmann (TU-Berlin).

Session: SDN, Routing and Peering

Towards Decentralized Fast Consistent Updates

Updating data plane state to adapt to dynamic conditions is a fundamental network control operation. Software-Defined Networking (SDN) offers abstractions for updating network state while preserving consistency properties. However, realizing these abstractions in a purely centralized fashion is inefficient due to the inherent delays between switches and the SDN controller. We therefore argue for delegating the responsibility for coordinated updates to the switches. To make our case, we propose ez-Segway, a mechanism that enables decentralized network updates while preventing forwarding anomalies and avoiding link congestion. In our architecture, the controller is only responsible for computing the intended network configuration. This information is distributed to the switches, which use partial knowledge and direct message passing to efficiently schedule and implement the update. This separation of concerns has the key benefit of improving update performance as the communication and computation bottlenecks at the controller are removed. Our extensive simulations show update speedups up to a factor of 2.
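A drastically simplified illustration of one anomaly-free ordering (not ez-Segway's actual scheduling, which also avoids congestion and coordinates via message passing between switches): installing a new path from the egress backwards guarantees that any packet reaching an already-updated switch finds its next-hop rule in place, preventing transient blackholes.

```python
def install_order(new_path):
    """Return the order in which switches on a new path should install
    their forwarding rules: egress first, ingress last. This is one of
    the classic consistency-preserving orderings for route updates."""
    return list(reversed(new_path))

# Updating traffic from path s1->s2->s3 installs rules at s3, then s2, then s1:
print(install_order(["s1", "s2", "s3"]))
```

The point of decentralizing this is that each switch only needs to know when its downstream neighbor is ready, which peers can signal directly without a round trip to the controller.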

In this paper, we define the notion of composition for software-defined network applications, present theoretical and practical approaches to composition in software-defined networks, and explain the challenges associated with it. We explore the feasibility of OpenFlow as an Application Programming Interface (API) for a composition engine and argue that its design as a southbound controller interface makes it unsuitable for this task.

The growing relevance of Internet eXchange Points (IXPs), where an increasing number of networks exchange routing information, poses fundamental questions regarding the privacy guarantees of confidential business information. To facilitate the exchange of routes among their members, IXPs provide Route Server (RS) services to dispatch the routes according to each member’s export policies. Nowadays, to make use of RSes, these policies must be disclosed to the IXP. This state of affairs raises privacy concerns among network administrators and even deters some networks from subscribing to RS services. We design SIXPACK (which stands for “Securing Internet eXchange Points Against Curious onlooKers”), a RS service that leverages Secure Multi-Party Computation (SMPC) techniques to keep export policies confidential, while maintaining the same functionalities as today’s RSes. We assess the effectiveness and scalability of our system by evaluating our prototype implementation and using traces of data from one of the largest IXPs in the world.

Computing Customer Cones of Peering Networks

We present a method to compute the customer cones of peering networks using PCH data. Our method computes location dependent customer cones (LDCCs) for networks that are present at more than one IXP instead of computing a single customer cone for each network. We use our method to compute 5753 LDCCs for 3290 IXP participants. Our preliminary analysis of the LDCCs reveals that IXP participants often have different customer cones at different locations.
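The customer cone itself has a standard recursive definition: an AS together with everything reachable by repeatedly following customer links. A minimal sketch of that computation, with the AS-relationship input and the per-location (per-IXP) restriction that distinguishes LDCCs left out of scope:

```python
def customer_cone(asn, customers):
    """Compute the customer cone of `asn`: the AS itself plus all ASes
    reachable by following provider-to-customer links transitively.
    `customers` maps each AS to the list of its direct customers."""
    cone, stack = {asn}, [asn]
    while stack:
        for c in customers.get(stack.pop(), []):
            if c not in cone:
                cone.add(c)
                stack.append(c)
    return cone

rels = {"A": ["B", "C"], "B": ["D"]}
print(customer_cone("A", rels))  # A plus B, C, and B's customer D
```

A location-dependent customer cone would run the same traversal but only over customer links visible at a given IXP, which is why one network can have several different cones.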

11:55

Lunch & Posters

13:00

Session: Transport Quality and “Happy Eyeballs”

Measuring the Effects of Happy Eyeballs

The IETF has developed protocols that promote a healthy IPv4 and IPv6 co-existence. The Happy Eyeballs (HE) algorithm, for instance, prevents bad user experience in situations where IPv6 connectivity is broken. Using an active test (happy) that measures TCP connection establishment times, we evaluate the effects of the HE algorithm. The happy test measures against the Alexa top 10K websites from 80 SamKnows probes connected to dual-stacked networks representing 58 different ASes. Using a three-year (2013–2016) dataset, we show that TCP connect times to popular websites over IPv6 have considerably improved over time. As of May 2016, 18% of these websites are faster over IPv6, with 91% of the rest at most 1 ms slower. The historical trend shows that only around 1% of the TCP connect times over IPv6 were ever above the HE timer value (300 ms), which leaves around a 2% chance for IPv4 to win a HE race towards these websites. As such, 99% of these websites prefer IPv6 connections more than 98% of the time. We show that although absolute TCP connect times (in ms) are not that far apart in both address families, HE with a 300 ms timer value tends to prefer slower IPv6 connections in around 90% of the cases. We show that lowering the HE timer value to 150 ms gives a marginal benefit of 10% while retaining the same preference levels over IPv6.
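The race the abstract analyzes can be captured in a few lines. A sketch of the decision logic, using the simplified model in which IPv4 is only attempted once the HE timer expires:

```python
def happy_eyeballs_winner(v6_connect_ms, v4_connect_ms, timer_ms=300):
    """Which address family wins a Happy Eyeballs race?
    IPv6 is tried first; IPv4 starts only after the timer fires,
    so its clock runs `timer_ms` behind IPv6's."""
    if v6_connect_ms <= timer_ms:
        return "IPv6"  # IPv6 connected before IPv4 was even attempted
    return "IPv6" if v6_connect_ms <= timer_ms + v4_connect_ms else "IPv4"

print(happy_eyeballs_winner(20, 10))    # slower-starting IPv4 never races
print(happy_eyeballs_winner(350, 40))   # IPv6 misses the timer and loses
```

This model makes the abstract's observations concrete: a 300 ms head start means a slower IPv6 connection still wins whenever it completes within the timer, and shrinking the timer narrows that preference only modestly.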

Full Paper. Vaibhav Bajpai (Jacobs University Bremen) and Jürgen Schönwälder (Jacobs University Bremen).

Concerns have been raised in the past several years that introducing new transport protocols on the Internet has become increasingly difficult, not least because there is no agreed-upon way for a source end host to find out if a transport protocol is supported all the way to a destination peer. A solution to a similar problem—finding out support for IPv6—has been proposed and is currently being deployed: the Happy Eyeballs (HE) mechanism. HE has also been proposed as an efficient way for an application to select an appropriate transport protocol. Still, there are few, if any, performance evaluations of transport HE. This paper demonstrates that transport HE could indeed be a feasible solution to the transport support problem. The paper evaluates HE between TCP and SCTP using TLS encrypted and unencrypted traffic, and shows that although there is indeed a cost in terms of CPU load to introduce HE, the cost is relatively small, especially in comparison with the cost of using TLS encryption. Moreover, our results suggest that HE has a marginal impact on memory usage. Finally, by introducing caching of previous connection attempts, the additional cost of transport HE could be significantly reduced.

Start Me Up: Determining and Sharing TCP’s Initial Congestion Window

When multiple TCP connections are used between the same host pair, they often share a common bottleneck – especially when they are encapsulated together, e.g. in VPN scenarios. Then, all connections after the first should not have to guess the right initial value for the congestion window, but rather get the appropriate value from other connections. This allows short flows to complete much faster – but it can also lead to large bursts that cause problems on their own. Prior work used timer-based pacing methods to alleviate this problem; we introduce a new algorithm that “paces” packets by instead correctly maintaining the ACK clock, and show its positive impact in combination with a previously presented congestion coupling algorithm.
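The idea of letting later connections skip the guess can be sketched as a per-destination cache. The cache structure and the policy of reusing the previous flow's final congestion window are our illustrative assumptions, not the paper's coupling algorithm, which additionally paces the resulting burst via the ACK clock.

```python
# Per-destination cache of congestion-window state shared across connections.
iw_cache = {}
DEFAULT_IW = 10  # segments; the widely used initial window from RFC 6928

def initial_window(dst):
    """New connections to a known destination start from the cached value
    instead of the conservative default."""
    return iw_cache.get(dst, DEFAULT_IW)

def on_close(dst, final_cwnd):
    """Record the congestion window a finished connection had reached."""
    iw_cache[dst] = final_cwnd

print(initial_window("peer.example"))  # first flow: default
on_close("peer.example", 40)
print(initial_window("peer.example"))  # later flows start larger
```

Starting a short flow with 40 segments instead of 10 can save several round trips, which is exactly the burst that then needs pacing so it does not overwhelm the shared bottleneck.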

14:05

Session: Measurement

Revisiting Benchmarking Methodology for Interconnect Devices

Daniel Raumer (Technical University of Munich), Sebastian Gallenmüller (Technical University of Munich), Florian Wohlfart (Technical University of Munich), Paul Emmerich (Technical University of Munich), Patrick Werneck (Technical University of Munich), and Georg Carle (Technical University of Munich)

Ever growing demand for network bandwidth makes computer networks an area of constant development and fast adjustments. The steady change makes good performance assessments equally necessary and challenging. This development motivated us to revisit the established benchmarking methodology. We provide an overview of the state-of-the-art in router benchmarking, the currently available benchmarking tools, and challenges for benchmarks. A discussion of benchmarking results for three different devices (routers based on Linux and FreeBSD, and a MikroTik router) reveals properties currently not covered by standardized benchmarks. We conclude by adding tests reflecting these properties to the common benchmarking methodology, making the results more valuable. The prototype software implementation of our own benchmarking tool and its measurement reports are publicly available.


PATHspider: A tool for active measurement of path transparency

In today’s Internet we see an increasing deployment of middleboxes. While middleboxes provide in-network functionality that is necessary to keep networks manageable and economically viable, any packet mangling – whether essential for the needed functionality or accidental as an unwanted side effect – makes it more and more difficult to deploy new protocols or extensions of existing protocols. For the evolution of the protocol stack, it is important to know which network impairments exist and potentially need to be worked around. While classical network measurement tools are often focused on absolute performance values, we present a new measurement tool, called PATHspider, that performs A/B testing between two different protocols or different protocol extensions to perform controlled experiments of protocol-dependent connectivity problems as well as differential treatment. PATHspider is a framework for performing and analyzing these measurements, while the actual A/B test can be easily customized. This paper describes the basic design approach and architecture of PATHspider and gives guidance on how to use and customize it.
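The A/B methodology boils down to a per-target comparison of two connection attempts, one with and one without the feature under test. The classification labels below are our own illustration, not PATHspider's output format.

```python
def classify(baseline_ok: bool, experimental_ok: bool) -> str:
    """Compare the outcome of a baseline connection (A) with one using
    the protocol feature under test (B) towards the same target."""
    if baseline_ok and experimental_ok:
        return "no impairment"
    if baseline_ok and not experimental_ok:
        return "feature-dependent breakage"  # a path element likely mangles it
    if not baseline_ok and experimental_ok:
        return "inconsistent; retest"        # transient failure, rerun first
    return "target unreachable"              # says nothing about the feature

print(classify(True, False))  # e.g. plain TCP works, ECN-negotiated TCP fails
```

Running the A case alongside the B case is what separates middlebox interference from targets that are simply down, which a single-protocol measurement cannot do.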

Diurnal and Weekly Cycles in IPv6 Traffic

Stephen D. Strowes (Yahoo)

IPv6 activity is commonly reported as a fraction of network traffic per day. Within this traffic, however, are daily and weekly characteristics, driven by non-uniform IPv6 deployment across ISPs and regions. This paper discusses some of the more apparent patterns we observe today.

How to say that you’re special: Can we use bits in the IPv4 header?

The IP header should be the ideal part of a packet that an end system could use to ask the network for special treatment. Recently, there has been renewed interest in using bits of this header – e.g. the ECN and the DSCP fields. But can we really use these bits? Or should we try to use other bits? We contribute to the body of work that tries to answer these questions by reporting on IPv4 measurements regarding the DSCP field and the Evil bit. Our findings also confirm recent results on IP Options and ECN.
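Setting the DSCP field from an application uses the standard IP_TOS socket option: DSCP occupies the upper six bits of the former IPv4 TOS byte, with the lower two bits belonging to ECN. A minimal sketch that sets and reads back the value locally (what survives on the path is exactly what measurements like the above have to establish):

```python
import socket

def set_dscp(sock, dscp):
    """Set the DSCP codepoint on an IPv4 socket via the TOS byte.
    DSCP is the upper six bits; the lower two bits are the ECN field."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
set_dscp(s, 46)  # 46 = EF (Expedited Forwarding)
tos = s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(f"TOS byte 0x{tos:02x}, DSCP {tos >> 2}")
s.close()
```

The local kernel accepts the marking unconditionally; whether routers on the path reset, remap, or honor it is the open question the paper measures.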

Short Paper. Runa Barik (University of Oslo, Norway), Michael Welzl (University of Oslo, Norway), and Ahmed Elmokashfi (Simula Research Laboratory, Norway).

The Internet is full of middleboxes that change packets and flows. In fact, there is probably no IP or TCP header that is not affected by at least one middlebox. Obviously, middleboxes impede path transparency, i.e., the idea that an exchange of messages results in more or less the same packets, no matter what path the packets take. But no one seems to have a truly global view of what middleboxes do to packets on what Internet paths, which would be essential knowledge for new transport protocols to be successfully deployed. We address these concerns in the MAMI project by building an observatory of path transparency measurements. The project hosts an extensive set of path transparency measurements; we believe it to be the first dataset to deal specifically with middlebox involvement. In this paper, we describe the Observatory and a number of questions that we want to address with its data. Eventually, the project will provide public access to the Observatory so that researchers and the interested public can ask their own questions about path transparency issues and middlebox involvement.

Making Google Congestion Control robust over Wi-Fi networks using packet grouping

Google congestion control (GCC) has been proposed for delay-sensitive traffic (i.e., video conferencing) in the WebRTC framework. In this paper we analyze the effect of wireless channel outages on GCC. We have observed that, when a channel outage ends, packets arrive at the receiver in a burst. This behavior impairs the delay-based controller employed by GCC, resulting in throughput degradation. We propose a solution that makes GCC robust to channel outages. In particular, by grouping packets that arrive in a burst, the delay-based controller avoids misinterpreting a burst as network congestion. To prove the effectiveness of the proposed solution, we have carried out a trace-driven experimental evaluation in a loaded Wi-Fi scenario.
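The packet-grouping idea can be sketched as merging arrivals whose inter-arrival gap stays below a threshold, so the delay-based controller sees one group instead of a post-outage burst. The 5 ms threshold here is our illustrative choice, not the paper's parameter.

```python
def group_bursts(arrival_ms, gap_ms=5):
    """Partition packet arrival times (ms) into groups: consecutive packets
    closer than `gap_ms` are treated as one burst, e.g. the back-to-back
    arrivals drained after a Wi-Fi channel outage ends."""
    groups, current = [], [arrival_ms[0]]
    for t in arrival_ms[1:]:
        if t - current[-1] <= gap_ms:
            current.append(t)
        else:
            groups.append(current)
            current = [t]
    groups.append(current)
    return groups

# Three packets released together after an outage, then normal arrivals:
print(group_bursts([0, 1, 2, 100, 101]))
```

Feeding the controller one delay sample per group, rather than one per packet, keeps the burst's compressed inter-arrival times from being read as a congestion signal.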

Implementing Real-Time Transport Services over an Ossified Network

Real-time applications require a set of transport services not currently provided by widely-deployed transport protocols. Ossification prevents the deployment of novel protocols, restricting solutions to protocols using either TCP or UDP as a substrate. We describe the transport services required by real-time applications. We show that, in the short-term (i.e., while UDP is blocked at current levels), TCP offers a feasible substrate for providing these services. Over the longer term, protocols using UDP may reduce the number of networks blocking UDP, enabling a shift towards its use as a demultiplexing layer for novel transport protocols.