User-facing services deployed in data centers must respond quickly to user actions, making the measurement of network latencies of paramount importance. Recently, a new family of compact data structures has been proposed to estimate one-way latencies. To achieve scalability, these methods rely on timestamp aggregation. Unfortunately, this approach suffers from serious accuracy problems in the presence of packet loss and reordering, since a single lost or out-of-order packet may invalidate a huge number of aggregated samples. In this paper, we cast the problem of detecting lost and reordered packets within the set reconciliation framework. Although the set reconciliation approach and the data structures for aggregating packet timestamps were previously known, the combination of these two principles is novel. We present a space-efficient synopsis called the reconcilable difference aggregator (RDA). RDA maximizes the percentage of packets useful for latency measurement by mapping packets to multiple banks and repairing aggregated samples that have been damaged by lost and reordered packets. RDA simultaneously obtains the average and the standard deviation of the latency. We provide a formal performance guarantee and derive optimized parameters. We further design and implement a user-space passive latency measurement system that addresses the practical issues of integrating RDA into the network stack. Our extensive evaluation shows that, compared with existing methods, our approach improves the relative error of the average latency estimate by 10-15 orders of magnitude, and that of the standard deviation by 0.5-6 orders of magnitude.
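The aggregation principle can be illustrated with a toy sketch (a deliberately simplified, hypothetical rendition of bank-based timestamp aggregation, not the actual RDA algorithm): packets are hashed to banks, both endpoints keep per-bank packet counts and timestamp sums, and a lost packet invalidates only its own bank rather than the whole aggregate.

```python
# Toy sketch of bank-based timestamp aggregation (illustrative only;
# bank count and data layout are invented for this example).
NUM_BANKS = 8

def make_banks():
    # each bank: [packet count, sum of timestamps]
    return [[0, 0.0] for _ in range(NUM_BANKS)]

def record(banks, pkt_id, timestamp):
    b = hash(pkt_id) % NUM_BANKS
    banks[b][0] += 1
    banks[b][1] += timestamp

def average_delay(sender, receiver):
    # Use only banks whose packet counts agree on both sides: a lost
    # packet invalidates just its own bank, not the whole aggregate.
    n, total = 0, 0.0
    for (cs, ss), (cr, sr) in zip(sender, receiver):
        if cs == cr and cs > 0:
            n += cs
            total += sr - ss
    return total / n if n else None
```

With 100 packets and one loss, only the lost packet's bank is discarded and the average one-way delay is still recovered exactly from the surviving banks.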

Space Division Multiplexing (SDM) is a key technology to cope with the bandwidth limitations of single-mode fibers. Multi-Core Fibers (MCFs) are considered a promising candidate technology to implement SDM due to their low inter-core crosstalk (ICXT), experimentally proven in laboratory prototypes. Among the different channel allocation options making use of the newly enabled space dimension, the so-called spatial super-channel (Spa-SCh) is the most likely solution to be implemented, given the inherent cost reduction of the joint-switching operation (i.e., jointly switching a spectrum portion in all MCF cores at once). This work targets cost-effective Spa-SCh allocation over MCF-enabled Flex-Grid optical core networks. To this goal, state-of-the-art 22-core MCFs are assumed, although the proposed solutions are applicable to any MCF type. In particular, we propose and evaluate partial-core assignment as a cost-effective strategy to improve spectrum utilization and save Capital Expenditure (CapEx) costs by minimizing the number of optical transceivers used per Spa-SCh. Numerical results reveal that reductions of up to 44% and 33% in the number of active transceivers can be obtained in national and continental backbone networks, respectively, without affecting the network Grade-of-Service (GoS), measured in terms of Bandwidth Blocking Probability (BBP). To evaluate the impact of ICXT, we also compare the performance of the MCF scenarios under study against equivalent Multi-Fiber (MF) ones. The obtained results show that ICXT in MCF scenarios requires the use of less efficient modulation formats, which reduces the admissible offered network load by up to 17% for a 1% BBP target. Furthermore, this lower spectral efficiency also demands an increase of up to 26% in the symbol rate per sub-channel, a key indicator of the modulator's electronic complexity.

The research community has considered in the past the application of Machine Learning (ML) techniques to control and operate networks; a notable example is the Knowledge Plane proposed by D. Clark et al. However, such techniques have not been extensively prototyped or deployed in the field yet. In this thesis, we explore the reasons for this lack of adoption and posit that the rise of two recent paradigms, Software-Defined Networking (SDN) and Network Analytics (NA), will facilitate the adoption of ML techniques in the context of network operation and control. We describe a new paradigm that accommodates and exploits SDN, NA and ML, and present relevant use-cases that illustrate its applicability and benefits. We refer to this new paradigm as Knowledge-Defined Networking (KDN).
In this context, ML can be used as a network modeling technique to build models that estimate network performance. Network modeling is a central technique to many networking functions, for instance in the field of optimization. One of the objectives of this thesis is to answer the following question: can neural networks accurately model the performance of a computer network as a function of the input traffic? We focus mainly on modeling the average delay, but also on estimating the jitter and the packet loss. For this, we treat the network as a black box that takes a traffic matrix as input and produces the desired performance matrix as output. We then train different regressors, including deep neural networks, and evaluate their accuracy under different fundamental network characteristics: topology, size, traffic intensity and routing. Moreover, we also study the impact of having multiple traffic flows between each pair of nodes.
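The black-box formulation can be sketched as follows (a toy example with synthetic data: a linear regressor trained by gradient descent stands in for the deep neural networks used in the thesis, and the M/M/1-flavoured ground-truth delay is purely illustrative).

```python
# Toy black-box setup: features are the flattened traffic matrix, the
# target is the network's average delay. All data is synthetic.
import random

random.seed(0)

def synth_sample(n_nodes=4, capacity=10.0):
    tm = [random.uniform(0.1, 1.5) for _ in range(n_nodes * n_nodes)]
    load = sum(tm)
    delay = 1.0 / (capacity * n_nodes - load)  # M/M/1-flavoured ground truth
    return tm, delay

data = [synth_sample() for _ in range(200)]

def mse(w, b):
    return sum((b + sum(wi * xi for wi, xi in zip(w, x)) - y) ** 2
               for x, y in data) / len(data)

# linear regressor trained by batch gradient descent (NN stand-in)
w = [0.0] * 16
b = 0.0
lr = 1e-4
for _ in range(500):
    gw = [0.0] * 16
    gb = 0.0
    for x, y in data:
        err = b + sum(wi * xi for wi, xi in zip(w, x)) - y
        gb += 2 * err
        for i, xi in enumerate(x):
            gw[i] += 2 * err * xi
    b -= lr * gb / len(data)
    w = [wi - lr * gi / len(data) for wi, gi in zip(w, gw)]
```

Training reduces the fit error against the untrained model, illustrating the regression-on-traffic-matrix setup even though a real deployment would use a deep network and measured delays.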
We also explore the use of ML techniques in other network-related fields. One relevant application is traffic forecasting: accurate forecasts enable scaling resources up or down to efficiently accommodate the traffic load. Such forecasts are typically based on traditional ARMA or ARIMA time series models. We propose a new methodology that combines an ARIMA model with an Artificial Neural Network (ANN). The neural network greatly improves the ARIMA estimation by modeling complex and nonlinear dependencies, particularly for outliers. To train the neural network and improve the estimation of outliers, we use external information: weather, events, holidays, etc. The main hypothesis is that network traffic depends on the behavior of the end users, which in turn depends on external factors. We evaluate the accuracy of our methodology using real-world data from an egress Internet link of a campus network. The analysis shows that the model works remarkably well, outperforming standard ARIMA models.
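The hybrid decomposition can be sketched in a toy example (all data is synthetic; a per-class residual mean driven by a holiday flag stands in for the ANN, and a closed-form AR(1) fit stands in for ARIMA).

```python
# Hybrid forecasting sketch: linear autoregressive forecast corrected by a
# residual model driven by external information. Illustrative only.

# synthetic traffic: baseline AR structure plus drops on "holidays"
traffic = []
x = 100.0
for t in range(200):
    holiday = (t % 50) in (10, 11)          # exogenous factor
    x = 0.9 * x + 10.0 + (-30.0 if holiday else 0.0)
    traffic.append((x, holiday))

# 1) fit AR(1): x_t ~ a * x_{t-1} + c  (closed-form least squares)
xs = [v for v, _ in traffic]
n = len(xs) - 1
mx = sum(xs[:-1]) / n
my = sum(xs[1:]) / n
a = sum((xs[t] - mx) * (xs[t + 1] - my) for t in range(n)) / \
    sum((xs[t] - mx) ** 2 for t in range(n))
c = my - a * mx

# 2) model the residuals with the exogenous flag (ANN stand-in)
res = {True: [], False: []}
for t in range(1, len(traffic)):
    res[traffic[t][1]].append(xs[t] - (a * xs[t - 1] + c))
corr = {k: sum(v) / len(v) for k, v in res.items() if v}

def forecast(prev, holiday):
    return a * prev + c + corr.get(holiday, 0.0)
```

On the synthetic holiday points, the exogenous correction removes most of the error the plain linear model makes, which is the essence of the proposed hybrid.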
Another relevant application is Network Function Virtualization (NFV). The NFV paradigm makes networks more flexible by using Virtual Network Functions (VNFs) instead of dedicated hardware. The main advantage is the flexibility offered by these virtual elements; however, the use of virtual nodes makes such networks harder to model. This problem may be addressed with ML techniques, used to model or to control such networks. As a first step, we focus on modeling the performance of single VNFs as a function of the input traffic. In this thesis, we demonstrate that the CPU consumption of a VNF can be estimated solely from the characteristics of the input traffic.

In this paper we design and evaluate a Deep Reinforcement Learning agent that optimizes routing. Our agent adapts automatically to current traffic conditions and proposes tailored configurations that attempt to minimize network delay. Experiments show very promising performance. Moreover, this approach provides important operational advantages with respect to traditional optimization algorithms.

Given the current expansion of cloud computing, the expected advent of the Internet of Things, and the requirements of future fifth-generation network infrastructures, significantly larger pools of computational and storage resources will soon be required. This emphasizes the need for more scalable data centers that are capable of providing such an amount of resources in a cost-effective way. A quick look into today's commercial data centers shows that they tend to rely on variations of well-defined leaf-spine/Clos data center network (DCN) topologies, offering low latency, ultrahigh bisectional bandwidth, and enhanced reliability against concurrent failures. However, DCNs are typically restricted by the use of the Transmission Control Protocol/Internet Protocol (TCP/IP) suite, thus suffering from limited routing scalability. In this work, we study the benefits that replacing TCP/IP with the recursive internetwork architecture (RINA) can bring to commercial DCNs, focusing on forwarding and routing scalability. We quantitatively evaluate the benefits that RINA solutions can yield against those based on TCP/IP and highlight how, by deploying RINA, topological routing solutions can further improve the efficiency of the network. To this goal, we propose a rule-and-exception forwarding policy tailored to the characteristics of several DCN variants, enabling fast forwarding decisions with merely neighbors' information. Upon failures, only a few exceptions are necessary, and their computation can also profit from the known topology. Extensive numerical results show that the proposed policy's requirements depend mainly on the number of neighbors and concurrent failures in the DCN rather than on its size, dramatically reducing the amount of forwarding and routing information stored at DCN nodes.
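The decision procedure can be sketched as follows (a hypothetical illustration of the rule-and-exception idea, not the policy proposed in the paper; the integer node addresses and the distance-based rule are invented).

```python
# Rule-and-exception forwarding sketch: a compact topological rule computed
# from neighbour information serves most packets, while a small exception
# table (installed upon failures) overrides it. Illustrative only.

def topological_rule(dest, neighbours):
    # default rule: pick the neighbour whose address is closest to the
    # destination (stand-in for a real topology-aware rule)
    return min(neighbours, key=lambda nb: abs(nb - dest))

def forward(node, dest, neighbours, exceptions):
    # exceptions take precedence over the topological rule
    if (node, dest) in exceptions:
        return exceptions[(node, dest)]
    return topological_rule(dest, neighbours)
```

The point of the structure is that the per-node state is just the neighbour list plus a handful of exceptions, independent of the DCN size.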

This is the peer reviewed version of the following article: Leon Gaixas S, Perelló J, Careglio D, Grasa E, López DR, Aranda PA. Scalable topological forwarding and routing policies in RINA-enabled programmable data centers. Trans Emerging Tel Tech. 2017;28:e3256, DOI 10.1002/ett.3256, which has been published in final form at DOI: 10.1002/ett.3256. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving

We consider Web services defined by orchestrations in the Orc language and two natural quality-of-service measures: the number of outputs and a discrete version of the first response time. We first analyse those subfamilies of finite orchestrations in which the measures are well defined and consider their evaluation in both reliable and probabilistically unreliable environments. On the subfamilies in which the QoS measures are well defined, we consider a set of natural related problems and analyse their computational complexity. In general, our results paint a clear picture of the difficulty of computing the proposed QoS measures with respect to the expressiveness of the subfamilies of Orc. Only in a few cases are the problems solvable in polynomial time, which underlines the computational difficulty of evaluating QoS measures even in simplified models.

To effectively keep pace with the global IP traffic growth forecasted for the years to come, flex-grid over multi-core fiber (MCF) networks can bring superior spectrum utilization flexibility, as well as bandwidth scalability far beyond the non-linear Shannon limit. In such a network scenario, however, full node switching re-configurability would require enormous node complexity, pushing the limits of current optical device technologies with prohibitive capital expenditures. Therefore, cost-effective node solutions will most probably be the key enablers of flex-grid/MCF networks, at least in the short to mid-term. In this context, this paper proposes a cost-effective reconfigurable optical add/drop multiplexer (ROADM) architecture for flex-grid/MCF networks, called CCC-ROADM, which reduces technological requirements (and associated costs) in exchange for requiring core continuity along the end-to-end communication. To assess the performance of the proposed CCC-ROADM against a fully flexible ROADM (i.e., a fully non-blocking ROADM, called FNB-ROADM in this work) in large-scale network scenarios, we present a novel lightweight heuristic to solve the route, modulation, core, and spectrum assignment problem in flex-grid/MCF networks, and successfully validate it against optimal ILP formulations previously proposed for the same goal. The numerical results obtained in a significant number of representative network topologies with MCF configurations of 7, 12, and 19 cores show almost identical network performance in terms of maximum network throughput when deploying CCC-ROADMs versus FNB-ROADMs, while decreasing network capital expenditures to a large extent.

On-chip communication remains a key research issue at the gates of the manycore era. In response, novel interconnect technologies have opened the door to new Network-on-Chip (NoC) solutions with greater scalability and architectural flexibility. In particular, wireless on-chip communication has garnered considerable attention due to its inherent broadcast capabilities, low latency, and system-level simplicity. This work presents ORTHONOC, a wired-wireless architecture that differs from existing proposals in that both network planes are decoupled and driven by traffic-steering policies enforced at the network interfaces. With these and other design decisions, ORTHONOC seeks to emphasize the ordered-broadcast advantage offered by the wireless technology. The performance and cost of ORTHONOC are first explored using synthetic traffic, showing substantial improvements with respect to other wired-wireless designs with a similar number of antennas. Then, the applicability of ORTHONOC in the multiprocessor scenario is demonstrated through the evaluation of a simple architecture that implements fast synchronization via ordered broadcast transmissions. Simulations reveal significant execution time speedups and communication energy savings for 64-threaded benchmarks, proving that the value of ORTHONOC goes beyond simply improving the performance of the on-chip interconnect.

The present document defines the profiles of the ETSI EN 319 532-3 [8] specification, taking into account the concepts and semantics defined in ETSI EN 319 532-1 and ETSI EN 319 532-2 and addressing issues relating to authentication, authenticity and integrity of the information. Its purpose is to achieve interoperability across REM service providers implemented according to the aforementioned specifications and using the same or different formats and/or transport protocols.

This document provides the binding of the ERD messages for Common Services Interfaces, whose semantics are defined in ETSI EN 319 522-2 and whose format is defined in ETSI EN 319 522-3, to specific transmission protocols.

The present document provides the format for the semantic content that flows across the different interfaces of ERD systems as defined in ETSI EN 319 522-2: "Electronic Registered Delivery Services. Part 2: Semantic contents".

Network performance anomalies can be defined as abnormal and significant variations in a network's traffic levels. Detecting such anomalies is critical for both network operators and end users. However, accurate detection without raising false alarms becomes challenging when the traffic exhibits high variance. To address this problem, we present a novel methodology for detecting performance anomalies based on contextual information. The proposed method is compared with the state of the art and achieves high accuracy on both synthetic and real network traffic.

Web tracking is currently recognized as one of the most important privacy threats on the Internet. Over recent years, many methodologies have been developed to uncover web trackers. Most of them are based on static code analysis and the use of predefined blacklists. However, our main hypothesis is that web tracking has started to use obfuscated programming, a transformation of code that renders previous detection methodologies ineffective and easy to evade. In this paper, we propose a new methodology based on dynamic code analysis that monitors the actual JavaScript calls made by the browser and compares them to the original source code of the website in order to detect obfuscated tracking. The main advantage of this approach is that detection cannot be evaded by code obfuscation. We applied this methodology to detect the use of canvas-font tracking and canvas fingerprinting on the top-10K most visited websites according to Alexa's ranking. Canvas-based tracking is a fingerprinting method based on JavaScript that uses the HTML5 canvas element to uniquely identify a user. Our results show that 10.44% of the top-10K websites use canvas-based tracking (canvas-font and canvas fingerprinting), while obfuscation was used in 2.25% of them. These results confirm our initial hypothesis that obfuscated programming in web tracking is already in use. Finally, we argue that canvas-based tracking may be more prevalent in secondary pages than in the home pages of websites.
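The core of the detection idea can be sketched as follows (a simplified illustration: the call names and the source snippet below are invented, and the actual methodology intercepts the calls inside the browser rather than receiving them as a list).

```python
# Dynamic-analysis sketch: compare the JavaScript calls observed at runtime
# with the literal call names present in the page's source. A call that was
# executed but never appears verbatim in the source suggests obfuscation.

# canvas-related calls commonly associated with fingerprinting
CANVAS_CALLS = {"toDataURL", "getImageData", "measureText"}

def detect_canvas_tracking(runtime_calls, page_source):
    observed = CANVAS_CALLS & set(runtime_calls)
    obfuscated = {c for c in observed if c not in page_source}
    return observed, obfuscated
```

For example, a page whose source contains a literal `measureText` call but which also triggers `toDataURL` at runtime (e.g. via string-assembled property names) would be flagged as using obfuscated canvas tracking.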

Obtaining flow-level measurements, similar to those provided by NetFlow/IPFIX, with OpenFlow is challenging, as it requires the installation of an entry per flow in the flow tables. This approach does not scale well with the number of concurrent flows in the traffic, as the number of entries in the flow tables is limited. Flow monitoring rules may also interfere with forwarding or other rules already present in the switches, which are often defined at different granularities than the flow level. In this paper, we present a transparent and scalable flow-based monitoring solution that is fully compatible with current off-the-shelf OpenFlow switches. As in NetFlow/IPFIX, we aggregate packets into flows directly in the switches and asynchronously send traffic reports to an external collector. To reduce the overhead, we implement two different traffic sampling methods, depending on the OpenFlow features available in the switch. We developed our complete flow monitoring solution within OpenDaylight and evaluated its accuracy in a testbed with Open vSwitch. Our experimental results using real-world traffic traces show that the proposed sampling methods are accurate and can effectively reduce the resource requirements of flow measurements in OpenFlow.
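The aggregation and sampling logic can be sketched as follows (an illustrative simplification: field names and the 1-in-N sampling scheme are generic, not the exact methods implemented in OpenDaylight).

```python
# NetFlow/IPFIX-style aggregation sketch: packets are optionally sampled
# with probability 1/N, grouped into flows by their 5-tuple, and the
# per-flow counters are scaled by N when exported.
import random

def collect(packets, sampling_rate=1, rng=random.random):
    flows = {}
    for pkt in packets:
        if sampling_rate > 1 and rng() >= 1.0 / sampling_rate:
            continue  # packet not sampled
        key = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
        pkts, octets = flows.get(key, (0, 0))
        flows[key] = (pkts + 1, octets + pkt["bytes"])
    # scale counters to estimate the unsampled totals
    return {k: (p * sampling_rate, o * sampling_rate)
            for k, (p, o) in flows.items()}
```

With `sampling_rate=1` the records are exact; with sampling they become unbiased estimates, trading accuracy for switch and collector resources.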

Privacy seems to be the Achilles' heel of today's web. Most web services make continuous efforts to track their users and to obtain as much personal information as they can from the things they search, the sites they visit, the people they contact, and the products they buy. This information is mostly used for commercial purposes, which go far beyond targeted advertising. Although many users are already aware of the privacy risks involved in the use of internet services, the particular methods and technologies used for tracking them are much less known. In this survey, we review the existing literature on the methods used by web services to track users online, as well as their purposes, implications, and possible user defenses. We present five main groups of methods used for user tracking, which are based on sessions, client storage, client cache, fingerprinting, and other approaches. A special focus is placed on mechanisms that use web caches, operational caches, and fingerprinting, as they are usually very rich in terms of using various creative methodologies. We also show how users can be identified on the web and associated with their real names, e-mail addresses, phone numbers, or even street addresses. We show why tracking is being used and its possible implications for the users. For each of the tracking methods, we present possible defenses. Some of them are specific to a particular tracking approach, while others are more universal (blocking more than one threat). Finally, we present the future trends in user tracking and show that they can potentially pose significant threats to the users' privacy.

Graphene supports surface plasmon polaritons with comparatively slow propagation velocities in the THz region, a region that is becoming increasingly interesting for future communication technologies. This ability can be used to realize compact antennas that are up to two orders of magnitude smaller than their metallic counterparts. For these antennas to function properly, some minimum material requirements have to be fulfilled, which are presently difficult to achieve, since the fabrication and transfer technologies for graphene are still evolving. In this work we experimentally analyze available graphene materials and extract their intrinsic characteristics at THz frequencies, in order to predict the dependence of the THz signal emission threshold on the graphene relaxation time Tr and the chemical potential µc.

Virtual Data Centre (VDC) allocation requires the provisioning of both computing and network resources. Their joint provisioning allows for an optimal utilization of the physical Data Centre (DC) infrastructure resources. However, traditional DCs can suffer from computing resource underutilization due to the rigid capacity configurations of the server units, resulting in high computing resource fragmentation across the DC servers. To overcome these limitations, the disaggregated DC paradigm has recently been introduced. Thanks to resource disaggregation, it is possible to allocate the exact amount of resources needed to provision a VDC instance. In this paper, we focus on the static planning of a shared optically interconnected disaggregated DC infrastructure to support a known set of VDC instances to be deployed on top. To this end, we provide optimal and sub-optimal techniques to determine the necessary capacity (in terms of both computing and network resources) required to support the expected set of VDC demands. Next, we quantitatively evaluate the benefits yielded by the disaggregated DC paradigm compared with traditional DC architectures, considering various VDC profiles and Data Centre Network (DCN) topologies.

In this paper, we describe some scenarios and technologies that have been proposed to cope with the requirements of current and next-generation data centre infrastructure. In particular, we discuss the extensions that have been implemented at both the orchestration and control levels to efficiently manage data centre resources. We focus on the integration between the Orchestrator and the SDN Controller, describing the communication interfaces and their interaction to provision optimized Virtual Data Centre (VDC) instances over novel data centre infrastructure, with special mention of the different solutions adopted to manage multiple optical technologies at the data plane.

Looking at the volume of publications dealing with Optical Packet Switching (OPS) and those dealing with Elastic Optical Networking (EON) in the period from 2000 to 2016, one can clearly observe a boom in OPS publications between 2004 and 2009, after which they start to diminish. At the same time, EON publications take off (around 2011) and grow steadily. EON is an emerging technology that may be considered a killer technology in the sense that its performance, flexibility, associated cost, ease of deployment and marketing seem to be displacing OPS from the optical networking arena. The purpose of this paper is to discuss whether EON is, effectively, the killer technology of OPS. To this end, besides reviewing the EON technology, the paper analyzes its actual deployment possibilities.

A wide range of social, technological and communication systems can be described as complex networks. Scale-free networks are one of the well-known classes of complex networks, in which node degrees follow a power-law distribution. The design of scalable, adaptive and resilient routing schemes in such networks is very challenging. In this article we present an overview of the required routing functionality, categorize the potential design dimensions of routing protocols among existing routing schemes, and analyze the experimental results and analytical studies performed so far to identify the main trends and trade-offs and draw the main conclusions. Besides traditional schemes such as hierarchical and shortest-path path-vector routing, the article pays attention to advances in compact routing and geometric routing, since they are known to significantly improve scalability in terms of memory space. The identified trade-offs and the outcomes of this overview enable more careful conclusions regarding the (un)suitability of different routing schemes for large-scale complex networks and provide a guideline for future routing research.

The research community has considered in the past the application of Artificial Intelligence (AI) techniques to control and operate networks; a notable example is the Knowledge Plane proposed by D. Clark et al. However, such techniques have not been extensively prototyped or deployed in the field yet. In this paper, we explore the reasons for this lack of adoption and posit that the rise of two recent paradigms, Software-Defined Networking (SDN) and Network Analytics (NA), will facilitate the adoption of AI techniques in the context of network operation and control. We describe a new paradigm that accommodates and exploits SDN, NA and AI, and provide use-cases that illustrate its applicability and benefits. We also present simple experimental results that support its feasibility for some relevant use-cases. We refer to this new paradigm as Knowledge-Defined Networking (KDN).

Graphene is enabling a plethora of applications in a wide range of fields due to its unique electrical, mechanical, and optical properties. In the realm of wireless communications, graphene shows great promise for the implementation of miniaturized and tunable antennas in the terahertz band. These unique advantages open the door to disruptive wireless applications in highly integrated scenarios where conventional communications means cannot be employed. In this paper, recent advances in plasmonic graphene antennas are presented. Wireless Network-on-Chip (WNoC) and Software-Defined Metamaterials (SDMs), two new area-constrained applications uniquely suited to the characteristics of graphene antennas, are then described. The challenges in terms of antenna design and channel characterization are outlined for both case scenarios.

In an increasingly competitive market environment with ever smaller product differentiation, continuously maximizing efficiency while guaranteeing the quality of the provided services remains a main objective for any telecom operator. In this work, we address the reduction of the operational costs of the optical transport network as one possible field of action to achieve this aim. We propose to apply cognitive science to reduce these costs, specifically by reducing operation margins. We base our work on the case-based reasoning technique, proposing several new schemes to reduce the operation margins established during the design and commissioning phases of optical link power budgets. The obtained results show that our cognitive proposal provides a feasible solution, allowing significant savings in transmitted power that can reach 49%. We show that there is a certain dependency on network conditions, achieving higher efficiency in lightly loaded networks, where improvements can rise up to 53%.

This is a post-peer-review, pre-copyedit version of an article published in Photonic Network Communications. The final authenticated version is available online at: https://doi.org/10.1007/s11107-017-0717-9.

Graphene is a unique material for the implementation of terahertz antennas due to the extraordinary properties of the resulting devices, such as tunability and compactness. Existing graphene antennas are based on pure plasmonic structures, which are compact but show moderate to high losses. To achieve higher efficiency at low cost, one can apply the theory behind dielectric resonator antennas widely used in millimeter-wave systems. This paper presents the concept of hybridization of surface plasmon and dielectric wave modes. Then, via an analysis of one-dimensional structures, a comparison of the potential capabilities of pure and hybrid plasmonic antennas is performed from the perspectives of radiation efficiency, tunability, and miniaturization. Additionally, the impact of the quality of graphene on the performance of the compared structures is evaluated. On the one hand, results show that hybrid structures deliver high gain with moderate miniaturization and tunability, rendering them suitable for applications requiring a delicate balance between the three aspects. On the other hand, pure plasmonic structures can provide higher miniaturization and tunability, yet with low efficiency, suggesting their use for application domains with high flexibility requirements or stringent physical constraints.

Among the different options to instantiate overlays, the Locator/ID Separation Protocol (LISP) [7] has gained significant traction among industry and academia [5], [6], [8]–[11], [14], [15]. Interestingly, LISP offers a standard, inter-domain, and dynamic overlay that enables low capital expenditure (CAPEX) innovation at the network layer [8]. LISP follows a map-and-encap approach where overlay identifiers are mapped to underlay locators. Overlay traffic is encapsulated into locator-based packets and routed through the underlay. LISP leverages a public database to store overlay-to-underlay mappings and a pull mechanism to retrieve those mappings on demand from the data plane. Therefore, LISP effectively decouples the control and data planes, since control plane policies are pushed to the database rather than to the data plane. Forwarding elements reflect control policies on the data plane by pulling them from the database. In that sense, LISP can be used as an SDN southbound protocol to enable programmable overlay networks [5].
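The map-and-encap workflow can be sketched as follows (a minimal illustration; the addresses and the API below are invented, and a real ingress tunnel router would issue Map-Request messages to the mapping system).

```python
# LISP-style map-and-encap sketch: the ingress tunnel router (ITR) resolves
# the destination EID prefix to an RLOC by pulling the mapping on demand,
# caches it, and encapsulates the packet in a locator-addressed outer header.

MAPPING_SYSTEM = {"10.1.0.0/16": "203.0.113.7"}  # EID prefix -> RLOC

class ITR:
    def __init__(self):
        self.map_cache = {}
        self.map_requests = 0

    def resolve(self, eid_prefix):
        if eid_prefix not in self.map_cache:       # cache miss: pull mapping
            self.map_requests += 1
            self.map_cache[eid_prefix] = MAPPING_SYSTEM[eid_prefix]
        return self.map_cache[eid_prefix]

    def encap(self, inner_packet, eid_prefix):
        rloc = self.resolve(eid_prefix)
        return {"outer_dst": rloc, "payload": inner_packet}
```

The pull-and-cache behaviour is what decouples the planes: policy changes are pushed to the mapping database, and forwarding elements pick them up on demand.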

Optical technologies nowadays play a crucial role in different network scenarios (e.g., data centres, metro and access networks), mainly due to their support for higher bandwidth and scalability in comparison to electronic-based opaque solutions. In this context, the deployment of the Software-Defined Networking (SDN) paradigm over optical networks has gained interest from both industry and research communities, spawning several implementations over multiple well-documented use cases. However, in Transparent Optical Networks (TONs), where nodes offer optical switching capabilities, adjacencies between neighbouring nodes have to be manually configured at both the data and control plane levels, a lengthy task that can potentially lead to misconfiguration. Alternative solutions rely on supervisory networks or dedicated channels. Nevertheless, implementing these methods requires both effort and resources to provide adjacency discovery, making the SDN controller's awareness of the correct underlying topology a tedious and occasionally very complex process. In this paper, we present a novel, cost-effective SDN-based topology discovery method that allows TONs to automatically learn the physical adjacencies between optical devices. In particular, this is achieved by means of a test-signal mechanism and the OpenFlow protocol. The SDN control plane and optical agent implementations, as well as the message exchange between the subsystems and the controller, are described. The proposed discovery mechanism is then experimentally assessed on an emulated TON test-bed, analysing the average time required for optical topology discovery in different network scenarios.

The majority of research studies on Flex-Grid over multi-core fiber (Flex-Grid/MCF) networks are built on the assumption of fully non-blocking ROADMs (FNB-ROADMs), able to switch any portion of the spectrum from any core of any input fiber to any core of any output fiber. Such flexibility comes at an enormous extra hardware cost. In this paper, we explore the trade-off of using ROADMs that impose the so-called core continuity constraint (CCC). Namely, a CCC-ROADM can switch spectrum from a core on an input fiber to a chosen output fiber, but cannot choose the specific output core. For instance, if all fibers have the same number of cores, the i-th core of an input fiber can only be switched to the i-th core of an output fiber. To evaluate the performance vs. cost trade-off of using CCC-ROADMs, we present two Integer Linear Programming (ILP) formulations for optimally allocating incoming demands in Flex-Grid/MCF networks, with and without the CCC constraint, respectively. A set of results is extracted by applying both schemes to two different backbone networks. Transmission reach estimations account for the fiber's linear and non-linear effects, as well as the inter-core crosstalk (ICXT) impairment introduced by laboratory MCF prototypes of 7, 12 and 19 cores. Our numerical evaluations show that the performance penalty of CCC is minimal, i.e., below 1% for 7- and 12-core MCFs and up to 10% for 19-core MCFs, while the cost reduction is large. In addition, the results reveal that the ICXT effect can be significant when the number of cores per MCF is high, to the point that equipping the network with 12-core MCFs can yield higher effective capacity than with 19-core MCFs.
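The core continuity constraint can be illustrated with a toy feasibility check (the data layout and function names are invented; a real allocation also involves route, modulation and spectrum assignment, which the ILP formulations handle jointly).

```python
# Contrast between the two switching models for a fixed route and spectrum
# slot: `free[link][core]` marks whether a (link, core) pair is available.

def feasible_ccc(path_links, free, num_cores):
    # core continuity: one core index must be free on ALL links of the path
    return any(all(free[l][c] for l in path_links) for c in range(num_cores))

def feasible_fnb(path_links, free, num_cores):
    # fully non-blocking: each link may independently use any free core
    return all(any(free[l][c] for c in range(num_cores)) for l in path_links)
```

Whenever CCC fails but FNB succeeds, a free core exists on every link but never with the same index end to end; the paper's results show how rarely this gap matters in practice.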

We show the first THz emission of a graphene-based plasmonic antenna structure. Furthermore, we present the minimum material requirements for an operational graphene antenna in terms of the chemical potential µc and the relaxation time τ.

Elastic optical networking (EON) is a viable solution to meet the future dynamic capacity requirements of Internet service provider and inter-datacenter networks. At the core of EON, wavelength selective switches (WSSs) are used to individually route optical circuits while assigning an arbitrary bandwidth to each circuit. Critically, the WSS control scheme and configuration time may delay the creation of each circuit in the network. In this paper, we first detail the WSS-based optical data-plane implementation of a metropolitan network test-bed. We then review a software-defined networking (SDN) application designed to enable dynamic and fast circuit setup, and introduce a WSS logical model that captures the WSS time sequence and is used to estimate the circuit-setup response time. Finally, we present two batch service policies that aim to reduce the circuit-setup response time by bundling multiple WSS reconfiguration steps into a single SDN command; the resulting performance gains are estimated through simulation.
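The benefit of batching can be sketched with a simple timing model (not the paper's policies or parameters): each SDN command carries a fixed overhead, so draining pending WSS reconfiguration requests in groups amortizes that overhead across the batch. `T_CMD` and `T_STEP` are hypothetical timing constants.

```python
from collections import deque

T_CMD = 50.0   # assumed fixed per-SDN-command overhead (ms)
T_STEP = 10.0  # assumed per-reconfiguration time inside the WSS (ms)

def total_time(n_requests, batch_size):
    """Total time to serve n_requests when up to batch_size WSS
    reconfiguration steps are bundled into one SDN command."""
    queue = deque(range(n_requests))
    t = 0.0
    while queue:
        take = min(batch_size, len(queue))
        for _ in range(take):
            queue.popleft()
        t += T_CMD + T_STEP * take  # one command overhead per batch
    return t

print(total_time(12, 1))  # 720.0 ms: one command per request
print(total_time(12, 4))  # 270.0 ms: command overhead paid only 3 times
```

A real batch service policy must also trade this gain against the extra queueing delay a request incurs while waiting for its batch to fill, which is what the simulations in the paper evaluate.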

We quantify the network capacity scaling from 7 to 30 spatial channels. While multi-fiber provides a 5x capacity increase, MCF limits it to 4x and 2x in national and continental backbone networks, respectively.

In this paper, we present a study on the performance of direct-oversampling correlator-type receivers in chaos-based direct-sequence code division multiple access (DS-CDMA) systems over frequency non-selective fading channels. At the input, the received signal is sampled at a rate higher than the chip rate. This oversampling step is used to precisely determine the delayed signal components arising from multipath fading channels, which can then be combined by a correlator to increase the SNR at its output. The main advantage of direct-oversampling correlator-type receivers is not only their low energy consumption, due to their simple structure, but also their ability to exploit the non-selective fading characteristic of multipath channels to improve overall system performance in scenarios with limited data rates and low energy requirements, such as low-rate wireless personal area networks. Mathematical models in the discrete-time domain are provided and described for the conventional transmitting side with multiple-access operation, the generalized non-selective Rayleigh fading channel, and the proposed receiver. A rough theoretical bit-error-rate (BER) expression is first derived by means of a Gaussian approximation. We then identify the main component of the expression and build its probability mass function (PMF) through numerical computation. The final BER estimate is obtained by integrating the rough expression over the possible discrete values of the PMF. To validate our findings, computer simulation is performed and the simulated performance is compared with the corresponding estimates. The obtained results show that the system performance improves as the number of paths in the channel increases.
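The Gaussian-approximation step can be illustrated with a generic DS-CDMA sketch. The effective-SNR expression below is the standard Gaussian approximation for random-sequence DS-CDMA, used here only as a stand-in for the paper's actual derivation; the spreading gain `N`, user count `K` and Eb/N0 values are illustrative.

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def rough_ber(N, K, ebn0_db):
    """Rough BER under the Gaussian approximation: multiple-access
    interference from K-1 users plus channel noise are both treated
    as Gaussian and folded into one effective SNR."""
    ebn0 = 10 ** (ebn0_db / 10)
    snr_eff = 1.0 / ((K - 1) / (3 * N) + 1 / (2 * ebn0))
    return q_func(math.sqrt(snr_eff))

# Single user reduces to the BPSK-like bound Q(sqrt(2*Eb/N0)); adding
# users degrades the BER through the interference term.
print(rough_ber(N=64, K=1, ebn0_db=8))
print(rough_ber(N=64, K=8, ebn0_db=8))
```

The paper refines this kind of rough expression by computing the PMF of its dominant component numerically and averaging the BER over that PMF, rather than treating every term as Gaussian.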

The book presents the state of the art in the emerging field of molecular and nanoscale communication. It gives special attention to fundamental models and to advanced methodologies and tools used in the field. It covers a wide range of applications, e.g., nanomedicine, nanorobot communication, bioremediation and environmental management. It addresses advanced graduate students, academics and professionals working at the forefront of their fields and at the interfaces between different areas of research, such as engineering, computer science, biology and nanotechnology.