Summary

The UHD-on-5G project is an associated team between researchers at Inria and NICT; it is financed by Inria and the Japan Society for the Promotion of Science (JSPS). On the Inria side, the associated team formally started in January 2016 for a duration of three years, ending in December 2018; on the Japanese side, it started in April 2016, also for a duration of three years.

Main objectives

The aim of this collaboration is to design and develop efficient mechanisms for streaming UHD video on 5G networks and to evaluate them in a realistic and reproducible way using novel experimental testbeds. In a nutshell, our approach leverages and extends, when necessary, ICN and SDN technologies to enable very high-quality video streaming at large scale. We also plan to use Virtual Network Functions (VNFs) to easily and dynamically place different functions (e.g., transcoding, caching) at strategic locations within the network. Specifically, the placement of these functions will be decided by SDN controllers to optimize the quality of experience (QoE) of users. Moreover, we plan to integrate ICN functionalities (e.g., name-based forwarding and multipath transport using in-network caching) with SDN/NFV to provide users with better QoE and mobility support than traditional IP architectures. Monitoring mechanisms such as the Contrace tool we developed in the SIMULBED associated team will help provide the SDN controllers with an accurate view of the network. In addition, we will build a large-scale evaluation environment for reproducible experiments by combining two testbeds: the wired CUTEi ICN testbed developed by NICT and the wireless FIT R2lab testbed developed by Inria.

Context

With the tremendous bandwidth improvements made in access links and backbone networks, end users now expect very high-quality multimedia applications to work anywhere and on a variety of heterogeneous devices, e.g., ultra-high-definition (UHD) video streaming on smartphones and 4K or 8K TVs. Recently, Information-Centric Networking (ICN) has been proposed as an evolutionary network architecture to facilitate applications such as content dissemination and high-quality streaming services. However, several challenges, including cooperation with in-network caching, multipath forwarding, and quality or congestion control, must still be addressed before services can be deployed in the Internet. Indeed, most related works rely on restrictive assumptions: for example, that the target applications are only non-real-time applications, or that the target network is a small managed network or a wired-only (or wireless-only) network. In parallel, Software-Defined Networking and Network Functions Virtualization (SDN/NFV) are remarkable technologies for controlling not only communication flows but also service functions in the network. They have recently been developed by international vendors and brought to market. However, as they are designed to work with traditional point-to-point IP networks, usually limited to a single autonomous system, they require extensions to enable very high-quality video streaming to a large number of heterogeneous users across the wide Internet. Furthermore, the design of such communication mechanisms requires rigorous evaluation on realistic testbeds before deployment. The collaboration leverages the two platforms developed by the partners: the CUTEi ICN testbed at NICT and the FIT R2lab wireless testbed at Inria, which makes it possible to reproduce wireless experiments in an anechoic chamber.

Scientific Contributions

Traceroute facility for Content-Centric Networks

Concerning the activity on name-based routing, we have published an Internet-Draft in the IRTF Information-Centric Networking Research Group (ICNRG) describing the specification of Contrace, an active network measurement tool for investigating the path and caching conditions in Content-Centric Networks. In CCN/NDN, while consumers generally do not need to know which content forwarder is transmitting the content to them, operators and developers may want to identify the content forwarder and observe the forwarding path information per name prefix for troubleshooting or for investigating network conditions. The well-known traceroute tool does not help here, as an IP-based tool cannot trace the name-prefix paths used in CCN/NDN. Moreover, given a source-rooted forwarding path per name prefix, specifying a forwarding source (i.e., router or publisher) for a given content is difficult, because we do not always know which branch of the source tree the consumer is on. Additionally, it is not feasible to flood the entire source-rooted tree to find the path from a source to a consumer. Furthermore, IP-based tools do not allow the state of the in-network caches to be discovered. The Contrace tool is designed to overcome these difficulties and investigates in particular: 1) the forwarding path information per name prefix, device name, and function/application, 2) the round-trip time (RTT) between content forwarder and consumer, and 3) the state of the in-network caches per name prefix.
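To illustrate the idea, the following Python sketch simulates a Contrace-style trace that follows the FIB for a name prefix hop by hop and records, at each hop, whether the prefix is cached. The node names, the FIB structure, and the cache model are illustrative assumptions, not the actual Contrace implementation.

```python
# Hypothetical sketch of per-name-prefix path discovery in a tiny CCN
# topology. Everything here (node names, data structures) is made up
# for illustration only.

class CCNNode:
    def __init__(self, name):
        self.name = name
        self.fib = {}       # name prefix -> next-hop CCNNode
        self.cache = set()  # name prefixes currently cached here

    def trace(self, prefix, path=None):
        """Follow the FIB for `prefix`, recording each hop and whether
        the prefix is cached there, until no next hop exists."""
        path = (path or []) + [(self.name, prefix in self.cache)]
        nxt = self.fib.get(prefix)
        return path if nxt is None else nxt.trace(prefix, path)

# A 3-hop topology: consumer-side router -> core router -> publisher.
consumer_rtr, core, publisher = CCNNode("r1"), CCNNode("core"), CCNNode("pub")
consumer_rtr.fib["/video/a"] = core
core.fib["/video/a"] = publisher
core.cache.add("/video/a")  # the core router holds a cached copy

hops = consumer_rtr.trace("/video/a")
# hops -> [("r1", False), ("core", True), ("pub", False)]
```

The trace output exposes exactly the kind of information the tool targets: the forwarders on the path for the prefix and the caching state at each of them.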

Very High Quality Streaming using In-Network Coding and Caching

Another important task in the same activity domain is to enhance streaming quality. Owing to the rapid growth of high-quality video streaming over the Internet, preserving robustness against data loss and low latency while maintaining high data transmission rates is becoming an increasingly important issue for high-quality, delay-sensitive real-time streaming. We have proposed L4C2, a low-latency, low-loss streaming mechanism specialized for high-quality delay-sensitive streaming. With L4C2, nodes in the network estimate the acceptable delay and packet loss probability on their uplinks, aiming at retrieving lost data packets from in-network caches and/or coded data packets produced by in-network coding within an acceptable delay, by extending the Content-Centric Networking (CCN) approach. Furthermore, L4C2 naturally supports multipath and multicast delivery to utilize network resources efficiently, while sharing them fairly with competing data flows by adjusting the video quality as necessary. We validated through comprehensive simulations that L4C2 achieves a high success probability of data transmission within the acceptable one-way delay and outperforms the existing solution. This work was presented at the IEEE INFOCOM conference in May 2017. We then designed a new mechanism enabling efficient video content delivery using dynamic adaptive streaming. It exploits the in-network coding and caching offered by data-centric networks and uses a rate-based adaptation mechanism with an innovative probing-interest scheme that allows consumers that want to increase their playback bitrate to request coded data for multiple bitrate segments. We validated our proposal through simulations, and the performance results show a higher QoE and an improved cache hit ratio compared to existing buffer-based adaptation mechanisms. This work is under submission to a conference.
In the same context, we recently described in an Internet-Draft the current research outcomes regarding Network Coding (NC) for Content-Centric Networking (CCN) / Named Data Networking (NDN), clarifying the requirements and challenges for applying NC to CCN/NDN.
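The rate-based adaptation decision with a probing step can be sketched as follows. This is only loosely inspired by the mechanism described above: the bitrate ladder, the probing rule, and the 0.8 safety margin are assumed values, not the actual design.

```python
# Illustrative sketch of a rate-based bitrate decision with probing.
# All numbers below are assumptions for illustration.

BITRATES = [1.0, 2.5, 5.0, 8.0, 16.0]  # Mb/s ladder, up to UHD-class rates

def next_bitrate(current, measured_throughput, probe_ok):
    """Pick the next segment bitrate: step up one rung only if the
    measured throughput leaves headroom AND the probe for coded data of
    the higher representation succeeded; step down if starved."""
    i = BITRATES.index(current)
    if measured_throughput < current:        # cannot sustain: step down
        return BITRATES[max(i - 1, 0)]
    if i + 1 < len(BITRATES):
        target = BITRATES[i + 1]
        if probe_ok and measured_throughput * 0.8 >= target:
            return target                    # safe to step up
    return current

# A consumer at 2.5 Mb/s measuring 12 Mb/s with a successful probe steps up;
# a consumer at 5.0 Mb/s measuring only 3 Mb/s steps down.
up = next_bitrate(2.5, 12.0, probe_ok=True)    # -> 5.0
down = next_bitrate(5.0, 3.0, probe_ok=True)   # -> 2.5
```

The point of the probe is that the consumer only commits to a higher representation once coded data for it has been shown to be retrievable, rather than guessing from throughput alone.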

Scalable Multicast Service in Software Defined ISP networks

In the context of the SDN-based multicast mechanisms activity, we have proposed an architectural solution to provide a scalable multicast service in ISP networks. New applications in which anyone can broadcast video are becoming very popular on smartphones. With the advent of high-definition video, ISPs may take the opportunity to propose new high-quality broadcast services to their clients. Because of its centralized control plane, Software-Defined Networking (SDN) seems an ideal way to deploy such a service in a flexible and bandwidth-efficient way. But deploying large-scale multicast services on SDN requires smart group membership management and a bandwidth reservation mechanism supporting QoS guarantees that neither wastes bandwidth nor impacts best-effort traffic too severely. We have proposed a Network Function Virtualization based solution for software-defined ISP networks to implement scalable multicast group management. In the same paper, we also propose a routing algorithm called Lazy Load-Balancing Multicast (L2BM) for sharing the network capacity in a friendly way between guaranteed-bandwidth multicast traffic and best-effort traffic. Our framework implementation, based on Floodlight controllers and Open vSwitches, is used to study the performance of L2BM. This work was presented at the IEEE ICC conference in May 2017, and an extended version has been published in the IEEE Transactions on Network and Service Management journal in December 2017. Furthermore, we have recently tested the proposed NFV multicast mechanism on the R2lab platform; see the demo code available at https://github.com/fit-r2lab/r2lab-demos/tree/master/l2bm. This code uses the Floodlight SDN controller along with OVS switches running on R2lab nodes.
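The "lazy" flavor of the load balancing can be sketched as a threshold-based path search: prefer links whose guaranteed-bandwidth load stays under a low threshold, and only raise the threshold when no such path exists. The graph model, the threshold ladder, and the BFS search below are illustrative assumptions, not the published L2BM algorithm.

```python
# Schematic sketch of lazy, threshold-based link selection in the spirit
# of L2BM. Topology, loads and thresholds are made-up example values.

from collections import deque

def find_path(links, src, dst, load, thresholds=(0.3, 0.6, 0.9)):
    """links: dict node -> list of neighbours; load: dict (u, v) -> link
    utilization in [0, 1]. Try each threshold in turn (lazily) and run a
    BFS restricted to links whose load is below that threshold."""
    for t in thresholds:
        parent, queue = {src: None}, deque([src])
        while queue:
            u = queue.popleft()
            if u == dst:                      # rebuild path src -> dst
                path = []
                while u is not None:
                    path.append(u)
                    u = parent[u]
                return path[::-1], t
            for v in links.get(u, []):
                if v not in parent and load[(u, v)] < t:
                    parent[v] = u
                    queue.append(v)
    return None, None

links = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
load = {("a", "b"): 0.5, ("b", "d"): 0.1, ("a", "c"): 0.1, ("c", "d"): 0.2}
path, used_t = find_path(links, "a", "d", load)
# The lightly loaded a-c-d branch is found already at the 0.3 threshold,
# leaving headroom on a-b-d for best-effort traffic.
```

Raising the threshold only on demand is what keeps the sharing "friendly": guaranteed-bandwidth multicast traffic concentrates on lightly loaded links first instead of spreading over, and squeezing, best-effort traffic everywhere.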

Towards Unifying Content level and Network level operations

Making the network programmable so that it can provide content-level operations is highly desirable. With the advent of virtualization and network function softwarization, the networking world is shifting to Software-Defined Networking (SDN), and OpenFlow is one of the most suitable candidates for implementing the southbound API. Meanwhile, the generalization of broadband Internet has led to massive content consumption. However, while content is usually retrieved via layer-7 protocols, OpenFlow operates at lower layers (layer 4 or below), making the protocol ill-suited to dealing with content. To address this issue, we defined an abstraction that unifies network-level and content-level operations and presented a straw-man, logically centralized architecture to support it. Our implementation demonstrates the feasibility of the solution and its advantage over a fully centralized approach. This work, carried out by a Master's student, was published at the CoNEXT Student Workshop in December 2016, and a demonstration was presented at the IEEE SDN/NFV conference in November 2016.

Impact of Caching on HTTP Adaptive Streaming Decisions

In the context of the scalable and reliable high-quality video streaming activity, we have studied the impact of caching on HTTP Adaptive Streaming (HAS). The interplay between caching and HAS is known to be intricate, and possibly detrimental to QoE. We studied this topic and made the case for caching-aware rate decision algorithms at the client side, which do not require any collaboration with the cache or the server. To this end, we introduced an optimization model that computes the optimal rate decisions in the presence of a cache, and compared the current main representatives of HAS algorithms, rate-based adaptation (RBA) and buffer-based adaptation (BBA), against this optimum. This allows us to assess how far from the optimum these algorithms are, and which of them is the better basis for a caching-aware rate decision algorithm. The results were published at the IEEE INFOCOM Student Workshop and the IEEE Globecom Workshop on QoE for Multimedia Communications in December 2016.
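For readers unfamiliar with the heuristics being compared against the optimum, the following is a minimal sketch of a BBA-style rate map, which picks the bitrate from the playback buffer occupancy alone. The reservoir/cushion sizes and the bitrate ladder are assumed values, not those of any published BBA variant.

```python
# Minimal sketch of buffer-based adaptation (BBA-style): map buffer
# occupancy to a bitrate. All numeric values are illustrative.

BITRATES = [1.0, 2.5, 5.0, 8.0]  # Mb/s ladder
RESERVOIR, CUSHION = 5.0, 15.0   # seconds of buffered video

def bba_rate(buffer_s):
    """Lowest bitrate below the reservoir, highest above
    reservoir + cushion, linear interpolation in between (snapped down
    to the nearest ladder rung)."""
    if buffer_s <= RESERVOIR:
        return BITRATES[0]
    if buffer_s >= RESERVOIR + CUSHION:
        return BITRATES[-1]
    frac = (buffer_s - RESERVOIR) / CUSHION
    level = BITRATES[0] + frac * (BITRATES[-1] - BITRATES[0])
    return max(r for r in BITRATES if r <= level)

low = bba_rate(3.0)    # in the reservoir -> lowest bitrate
mid = bba_rate(12.5)   # halfway through the cushion
high = bba_rate(25.0)  # past the cushion -> highest bitrate
```

A cache sitting between client and server distorts the throughput such a heuristic implicitly relies on (cached segments arrive fast, uncached ones do not), which is precisely why the interplay studied above can hurt QoE.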

Assessing Video Streaming Performance on Mobile Networks

Gauging network performance with respect to video Quality of Experience (QoE) is of paramount importance to both telecom operators and regulators. Modern video streaming systems have huge catalogs of billions of videos that vary significantly in content type. Owing to this variation, the QoE of different videos as perceived by end users can differ for the same network Quality of Service (QoS). We proposed a methodology for benchmarking the performance of mobile operators with respect to Internet video that takes this variation in QoE into account. We took a data-driven approach, building a predictive model using supervised machine learning (ML) that accounts for a wide range of videos and network conditions. To that end, we first built and analyzed a large catalog of YouTube videos. We then proposed and demonstrated a framework of controlled experimentation on R2lab, based on active learning, to build the training data for the targeted ML model. Using this model, we devised YouScore, an estimate of the percentage of YouTube videos that may play out smoothly under a given network condition. Finally, to demonstrate the benchmarking utility of YouScore, we applied it to an open dataset of real-user mobile network measurements to compare the performance of mobile operators for video streaming. This work will be presented at the 21st ACM MSWiM conference in October 2018.
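The YouScore idea itself is easy to state as code. In the toy sketch below, a simple per-video throughput threshold stands in for the trained ML model, and all numbers are made up; the actual system predicts smoothness from a richer set of features.

```python
# Toy sketch of the YouScore metric: the percentage of catalog videos
# expected to play smoothly under a given network condition. A plain
# throughput threshold replaces the real supervised ML predictor here.

def youscore(catalog_required_mbps, available_mbps):
    """Percentage of catalog videos whose (assumed) required throughput
    fits within the measured available throughput."""
    if not catalog_required_mbps:
        return 0.0
    smooth = sum(1 for r in catalog_required_mbps if r <= available_mbps)
    return 100.0 * smooth / len(catalog_required_mbps)

# Eight illustrative videos with very different bandwidth demands (Mb/s):
catalog = [0.8, 1.2, 2.5, 4.0, 6.0, 9.0, 15.0, 25.0]
score = youscore(catalog, available_mbps=5.0)  # -> 50.0
```

Aggregating over the whole catalog is what makes the score comparable across operators: two networks with the same mean throughput can still yield different YouScores if their conditions cut the catalog at different points.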

Resiliency in Service Function Chaining (SFC)

In the context of the activity on dynamic placement of Virtual Network Functions in the network, we have studied the importance of resiliency in service function chaining. When deploying network service function chains, the focus is usually put on metrics such as cost, latency, or energy, and it is assumed that the underlying cloud infrastructure provides resiliency mechanisms to cope with disruptions occurring in the physical infrastructure. In a first work, we advocate that while the usual performance metrics are essential when deciding on the deployment of network service function chains, the notion of resiliency should not be neglected, as the choice of virtual-to-physical placement may dramatically improve the ability of the service chains to cope with infrastructure failures without requiring complex resiliency mechanisms. A position paper on this topic was published at the PROCON Workshop in September 2016. We then considered the problem of SFC placement in the cloud. Such service chains are used for critical services like e-health or autonomous transportation systems and thus require high availability. Respecting a given availability level is hard in general, but it becomes even harder if the operator of the service is not aware of the physical infrastructure that will support the service, which is the case when SFCs are deployed in multi-tenant data centers. We proposed an algorithm that places topology-oblivious SFC demands such that the placed SFCs respect the availability constraints imposed by the tenant. The algorithm leverages fat-tree properties to remain computationally tractable in an online setting. Our simulation results show that it satisfies as many demands as possible by spreading the load between replicas and improving network resource utilization. This second work will be presented at the IEEE CloudNet conference in October 2018.
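The availability constraint underlying this placement problem follows standard series/parallel reliability arithmetic: a function survives if any of its replicas is up, and the chain survives only if every function does. The sketch below illustrates this; the host availability values are made up, and the real algorithm of course does much more than evaluate this formula.

```python
# Back-of-the-envelope availability of a replicated service chain,
# assuming independent host failures. Values are illustrative only.

def chain_availability(per_function_replicas):
    """per_function_replicas: one inner list of host availabilities per
    function of the chain. A function is up if any replica is up
    (parallel); the chain is up only if every function is up (series)."""
    avail = 1.0
    for hosts in per_function_replicas:
        down = 1.0
        for a in hosts:
            down *= (1.0 - a)   # probability all replicas are down
        avail *= (1.0 - down)   # probability this function is up
    return avail

# One function on a single 0.99-available host vs. two such replicas:
single = chain_availability([[0.99]])            # ~0.99
replicated = chain_availability([[0.99, 0.99]])  # ~0.9999 ("four nines")
```

This also shows why topology matters: the independence assumption breaks if two replicas share a rack or a switch, which is exactly the information a topology-oblivious tenant lacks.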

4G/5G support on R2lab

We have deployed the OpenAirInterface 4G/5G cellular stack on commercial off-the-shelf (COTS) hardware in R2lab. It is now possible to run end-to-end scenarios involving 4G/5G operator networks. This will be very useful in the future to test the quality of experience of streaming applications on LTE networks. A reproducible demonstration was given at the R2lab inauguration ceremony, and a demo showing how to deploy a 5G network in R2lab in less than 5 minutes was presented at the ACM SIGCOMM conference in August 2017. We are currently integrating the Mosaic5G suite into R2lab in order to easily run various scenarios, such as FlexRAN (a flexible and programmable platform for software-defined radio access networks), and to automatically support the latest OpenAirInterface (OAI) versions.

Efficient Experiment Control and Orchestration for R2lab

Experimentation is an essential step in the realistic evaluation of wireless network protocols. The evaluation methodology entails controllable environment conditions and rigorous, efficient experiment control and orchestration for a variety of scenarios. Existing experiment control tools such as OMF often lack efficiency in terms of resource management and rely on abstractions that hide the details of the wireless setup. We developed nepi-ng, an efficient experiment control tool that leverages a job-oriented programming model and efficient single-threaded execution of parallel programs using asyncio. nepi-ng provides an efficient and modular fine-grained synchronization mechanism for networking experiments with a light software dependency footprint. We present and discuss our design choices and compare nepi-ng to state-of-the-art tools, mainly OMF. This work will be presented at the ACM WiNTECH workshop in November 2018. We will also demonstrate at this venue how to use nepi-ng to easily run and analyze the performance of wireless mesh routing protocols.
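The job-oriented model with single-threaded asyncio execution can be illustrated with plain asyncio: each experiment step is a coroutine, and ordering is expressed through the completion events of required jobs. This mimics the idea only and is not the nepi-ng API itself; the job names below are invented.

```python
# Plain-asyncio sketch of a job-oriented experiment: three steps run
# under one event loop, ordered purely by declared dependencies.

import asyncio

log = []

async def job(name, done, requires=()):
    for ev in requires:   # wait for every required job to complete
        await ev.wait()
    log.append(name)      # the step itself (e.g. an ssh command on a node)
    done.set()            # signal completion to dependent jobs

async def experiment():
    ev = {n: asyncio.Event() for n in ("load", "start", "measure")}
    await asyncio.gather(
        job("load-image", ev["load"]),
        job("start-nodes", ev["start"], requires=[ev["load"]]),
        job("run-measure", ev["measure"], requires=[ev["start"]]),
    )

asyncio.run(experiment())
# log -> ["load-image", "start-nodes", "run-measure"]
```

Note that all three jobs are submitted concurrently; the dependency events, not thread scheduling, enforce the execution order, which is what makes the single-threaded model both efficient and easy to reason about.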

Slice orchestration for emerging 5G networks

Ultra-dense networks (UDNs) are a natural deployment evolution for handling the tremendous traffic increase related to emerging 5G services, especially in urban environments. However, the associated infrastructure cost may become prohibitive. The evolving paradigm of network slicing can tackle this challenge while optimizing network resource usage, enabling multi-tenancy, and facilitating resource sharing and efficient service-oriented communications. Indeed, network slicing in UDN deployments can offer the desired degree of customization not only in vanilla radio access network (RAN) designs, but also in the case of disaggregated multi-service RANs. We proposed a novel multi-service RAN environment capable of supporting slice orchestration procedures and enabling flexible customization of slices according to tenant needs. Each network slice can exploit a number of services, which can be either dedicated or shared between multiple slices over a common RAN. We presented results for a disaggregated UDN deployment where the RAN runtime is used to support slice-based multi-service chain creation and chain placement, with an auto-scaling mechanism to increase performance. This work was published in IEEE Communications Magazine in August 2018.