Voice over IP (VoIP) can be defined as the ability to make phone calls and to send faxes (i.e., to do everything we can do today with the Public Switched Telephone Network, PSTN) over IP-based data networks with a suitable quality of service and potentially a superior cost/benefit ratio. There is a desire to provide VoIP with suitable security without affecting the performance of this technology. This becomes even more important when VoIP uses wireless technologies as the data networks (such as Wireless Local Area Networks, WLANs), given the bandwidth and other constraints of wireless environments and the processing costs of the security mechanisms. As for many other (secure) applications, we should consider security in Mobile VoIP as a chain, where every link, from the secure establishment to the secure termination of a call, must be secure in order to maintain the security of the entire process.

This document presents a solution to these issues, providing a secure model for Mobile VoIP that minimizes processing costs and bandwidth consumption. This is mainly achieved by making use of high-throughput, low-packet-expansion security protocols (such as the Secure Real-time Transport Protocol, SRTP) and high-speed encryption algorithms (such as the Advanced Encryption Standard, AES).

In the thesis I describe the problem and its alternative solutions in detail. I also describe in detail the selected solution and the protocols and mechanisms it utilizes, such as Transport Layer Security (TLS) for securing the Session Initiation Protocol (SIP), the Secure Real-time Transport Protocol (SRTP), a profile of the Real-time Transport Protocol (RTP), for securing the media transport, and Multimedia Internet KEYing (MIKEY) as the key-management protocol. Moreover, an implementation of SRTP, called MINIsrtp, is also provided. The oral presentation will provide an overview of these topics, with an in-depth examination of those parts which were the most significant or unexpectedly difficult.
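
As an illustration of why SRTP is a low-packet-expansion protocol: its default message authentication is HMAC-SHA1 with an 80-bit truncated tag, so integrity protection adds only 10 bytes per packet. The sketch below is a minimal Python illustration of that tag computation and verification, not code from MINIsrtp, and it omits the AES-CTR payload encryption that SRTP also applies:

```python
import hmac
import hashlib

AUTH_TAG_LEN = 10  # SRTP's default HMAC-SHA1-80 tag: 80 bits = 10 bytes

def srtp_style_protect(auth_key: bytes, rtp_packet: bytes) -> bytes:
    """Append a truncated HMAC-SHA1 authentication tag, as SRTP does.

    (AES-CTR encryption of the payload is omitted here, since the
    Python standard library provides no AES primitive.)
    """
    tag = hmac.new(auth_key, rtp_packet, hashlib.sha1).digest()[:AUTH_TAG_LEN]
    return rtp_packet + tag

def srtp_style_verify(auth_key: bytes, protected: bytes) -> bytes:
    """Check the tag in constant time and return the original packet."""
    packet, tag = protected[:-AUTH_TAG_LEN], protected[-AUTH_TAG_LEN:]
    expected = hmac.new(auth_key, packet, hashlib.sha1).digest()[:AUTH_TAG_LEN]
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return packet
```

The 10-byte expansion is small compared with, e.g., the per-packet cost of tunneling protocols, which is what makes SRTP attractive in bandwidth-constrained wireless environments.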

Regarding my implementation, evaluation, and testing of the model, this project is mainly focused on securing the media stream (SRTP). However, thorough theoretical work has also been performed and will be presented, covering other aspects such as the establishment and termination of the call (using SIP) and the key-management protocol (MIKEY).

Currently, the IPv4 protocol is heavily used by institutions, companies and individuals, but every day more devices, such as home appliances, mobile phones and tablets, are connected to the network. Each machine or device needs its own IP address to communicate with other machines connected to the Internet. This implies the need for multiple IP addresses for a single user, and the current protocol is beginning to show deficiencies due to IPv4 address space exhaustion. Therefore, for several years experts have been working on an update of the IP protocol: IPv6, whose 128-bit addresses can identify about 3.4 x 10^38 (roughly 340 undecillion) devices. With IPv6, every person on the planet could today have millions of devices simultaneously connected to the Internet.
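
The address-space arithmetic is easy to check; a quick sketch (the world population figure is a rough assumption for illustration):

```python
# Back-of-the-envelope check of the IPv6 address space quoted above.
ipv6_addresses = 2 ** 128
print(f"{ipv6_addresses:.3e}")          # about 3.4e+38 addresses in total

world_population = 8 * 10 ** 9          # rough figure, an assumption
per_person = ipv6_addresses // world_population
print(f"{per_person:.3e}")              # ~4e+28 addresses per person
```

By comparison, IPv4's 32-bit space tops out at 2^32, i.e. fewer than 4.3 billion addresses, which is less than one per person.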

The choice of IP protocol version affects the performance of the UMTS mobile network and of web browsers as well. The aim of the project is to measure how the IPv6 protocol performs compared to the older IPv4 protocol. It is expected that the IPv6 protocol generates less signalling and that less time is required to fully load a web page. We analysed several KPIs (IP data volume, signalling, web page load time and battery consumption) in a lab environment using smartphones, to observe the behaviour of both the network and the device. The main conclusion of the thesis is that IPv6 behaves as expected and reduces signalling, although the volume of IP data generated is larger due to the bigger headers. However, much work remains, as only the most popular webpages and the applications with a high level of market penetration operate well over IPv6.

The model and design of a generic security provider provides a comprehensive set of security services, mechanisms, encapsulation methods, and security protocols for Java applications. The model is structured in four layers; each layer provides services to the layer above it, and the top layer provides services to applications. The services reflect security requirements derived from a wide range of applications, from small desktop applications to large distributed enterprise environments. Based on the abstract model, this paper describes the design and implementation of an instance of the provider comprising various generic security modules: symmetric key cryptography, asymmetric key cryptography, hashing, encapsulation, certificate management, creation and verification of signatures, and various network security protocols. This paper also describes the extensibility, flexibility, abstraction, and compatibility properties of the Java security provider.

Elastic optical networks (EONs) can help overcome the flexibility challenges imposed by emerging heterogeneous and bandwidth-intensive applications. Among the different solutions for flexible optical nodes, optical white box switches implemented by architecture on demand (AoD) have the capability to dynamically adapt their architecture and module configuration to the switching and processing requirements of the network traffic. Such adaptability allows for unprecedented flexibility in balancing the number of required nodal components in the network, spectral resource usage, and length of the established paths. To investigate these trade-offs and achieve cost-efficient network operation, we formulate the routing, modulation, and spectrum assignment (RMSA) problem in AoD-based EONs and propose three RMSA strategies aimed at optimizing a particular combination of these performance indicators. The strategies rely on a newly proposed internal node configuration matrix that models the structure of optical white box nodes in the network, thus facilitating hardware-aware routing of connection demands. The proposed strategies are evaluated in terms of the number of required modules and the related cost, spectral resource usage, and average path length. Extensive simulation results show that the proposed RMSA strategies can achieve remarkable cost savings by requiring fewer switching modules than the benchmarking approaches, at a favorable trade-off with spectrum usage and path length.

In this paper, we study the performance of device-to-device (D2D) based range extension in terms of sum rate and power efficiency when a relaying user equipment (UE) helps to improve the coverage for cell-edge UEs. In our design, the relaying UE has its own traffic to transmit to and receive from the cellular base station (BS), can operate in either amplify-and-forward (AF) or decode-and-forward (DF) mode, and can make use of either digital or analogue (PHY layer) network coding. In this rather general setting, we propose mode selection, resource allocation and power control schemes and study their performance by means of system simulations. We find that the performance of the DF scheme with network coding is superior to both the traditional cellular and the AF-based relaying schemes, including AF with two-slot or three-slot PHY layer network coding.

This paper addresses the problem of weighted sum-rate maximization and mean squared error (MSE) minimization for the multiple-input multiple-output (MIMO) interference channel. Specifically, we consider a weighted minimum MSE architecture where each receiver employs successive interference cancellation (SIC) to separate the received data streams, and we derive a hybrid beamforming scheme, where the transmitters operate with fewer radio frequency chains than antennas, which is particularly suited for millimeter-wave channels and 5G applications. To derive our proposed schemes, we first study the relationship between sum-rate maximization and weighted MSE minimization when using SIC receivers, assuming fully digital beamforming. Next, we consider the important (and, as it turns out, highly non-trivial) case where the transmitters employ hybrid digital/analog beamforming, developing a distributed joint hybrid precoding and SIC-based combining algorithm. Moreover, for practical implementation, we propose a signaling scheme that utilizes a common broadcast channel and facilitates the acquisition of channel state information, assuming minimal assistance from a central node such as a cellular base station. Numerical results show that both of the proposed weighted MMSE-SIC schemes exhibit great advantages over their linear counterparts in terms of complexity, feedback information, and performance.

The performance of the uplink of multiuser multiple-input multiple-output (MIMO) systems depends critically on the receiver architecture and on the quality of the acquired channel state information. A popular approach is to design linear receivers that minimize the mean squared error (MSE) of the received data symbols. Unfortunately, most of the literature does not take the presence of channel state information errors into account in the MSE minimization. In this letter we develop a linear minimum MSE (MMSE) receiver that employs the noisy instantaneous channel estimates to minimize the MSE, and we highlight the dependence of the receiver performance on the pilot-to-data power ratio. By invoking the theory of random matrices, we calculate the users' signal-to-interference-plus-noise ratio as a function of the number of antennas and of the pilot-to-data power ratio of all users. Numerical results indicate that this new linear receiver outperforms the classical mismatched MMSE receiver.
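
For context, a minimal sketch of the linear MMSE receiver in standard notation (the symbols here are generic, not taken from the letter itself): for a received uplink signal $\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}$ with noise variance $\sigma^2$, the MMSE estimate of the transmitted symbols is

$$
\hat{\mathbf{x}} = \mathbf{H}^{H}\left(\mathbf{H}\mathbf{H}^{H} + \sigma^{2}\mathbf{I}\right)^{-1}\mathbf{y}.
$$

The classical mismatched receiver simply substitutes the channel estimate $\hat{\mathbf{H}}$ for $\mathbf{H}$ in this expression, whereas the receiver developed in the letter takes the channel estimation errors into account when minimizing the MSE.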

Although network assisted device-to-device (D2D) communications is known to improve the spectral and energy efficiency of proximal communications, its performance is less understood when employed to extend the coverage of cellular networks. In this paper, we study the performance of D2D based range extension in terms of sum rate and power efficiency when a relaying user equipment (UE) helps to improve the coverage for cell-edge UEs. In our design, the relaying UE has its own traffic to transmit to and receive from the cellular base station (BS), can operate in either amplify-and-forward (AF) or decode-and-forward (DF) mode, and can make use of either digital or analogue (PHY layer) network coding. In this rather general setting, we propose mode selection, resource allocation and power control schemes and study their performance by means of system simulations. We find that the performance of the DF scheme with network coding is superior to both the traditional cellular and the AF based relaying schemes, including AF with two-slot or three-slot PHY layer network coding.

This paper presents and evaluates an Inter-Access Point Coordination protocol for dynamic channel selection in IEEE 802.11 WLANs. It addresses an open issue for the implementation of many distributed and centralized dynamic channel selection policies proposed to mitigate interference problems in Wireless LANs (WLANs). The presented protocol provides services to a wide range of policies that require different levels of coordination among APs by enabling them to actively communicate and exchange information. An Intra-Cell protocol that enables interaction between the AP and its accommodated stations to handle channel switching within the same cell is also presented.

Long-Term Evolution (LTE) is the latest development in wide-area cellular mobile network technology. In contrast with earlier generations of circuit-switched mobile networks, LTE is an all-IP packet-switched network. Both voice and data are sent inside IP packets, and Voice over IP (VoIP) is used to provide voice service to LTE users. The speech frames are encapsulated into Real-time Transport Protocol (RTP) packets and sent over the network. The underlying UDP and IP layers prepend their headers to this small RTP packet, resulting in relatively high overhead. The small size of the RTP packets carrying voice/audio leads to an overhead problem, as this protocol overhead comes in addition to the large LTE frame overhead, thus wasting network resources. This master’s thesis project proposes multiplexing RTP and data packets at the user’s device as a solution to reduce the overhead. Moreover, the capability of modern user devices to switch between several interfaces (such as LTE and WLAN) is taken into account, and the multiplexing of multiple traffic flows or a single traffic flow is studied in the case of a vertical handover. Performance and cost metrics are used to evaluate different potential demultiplexing points, and then the best possible demultiplexing point is identified. The results of this evaluation show that several demultiplexing points can be used, depending on the operator’s needs. The increased packet payload size increases the energy efficiency of LTE and may avoid the need for the UE to switch to WLAN to save power. In addition, to ensure high quality of service for VoIP traffic, the simultaneous use of multiple interfaces is efficient if the multiplexer is enabled. The multiplexing solution proposed by this thesis is also fully compatible with any virtual private network encapsulation protocol.
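
To see the scale of the overhead problem, a short back-of-the-envelope sketch (the speech frame size and the exact multiplexing scheme are illustrative assumptions, not the thesis's measured values):

```python
# Rough per-packet overhead for VoIP, using standard header sizes.
RTP, UDP, IPV4 = 12, 8, 20            # header sizes in bytes
SPEECH_FRAME = 32                     # assumed speech payload size (bytes)

def overhead_ratio(n_frames: int) -> float:
    """Header overhead when n_frames speech frames share one IP/UDP packet.

    The simple multiplexer modeled here is an assumption for illustration:
    it pays one IP/UDP header per packet but keeps one RTP header per frame.
    """
    headers = IPV4 + UDP + n_frames * RTP
    payload = n_frames * SPEECH_FRAME
    return headers / (headers + payload)

print(f"{overhead_ratio(1):.0%}")   # one frame per packet: over half is headers
print(f"{overhead_ratio(4):.0%}")   # four multiplexed frames: noticeably less
```

Even in this simplified model, multiplexing several frames into one IP/UDP packet cuts the header share substantially, which is the effect the thesis exploits.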

Displaying the architecture of a software system is not a simple task. Showing all of the available information will unnecessarily complicate the view, while showing too little might render the view unhelpful. Furthermore, showing the dynamics of the operation of such a system is even more challenging.

This thesis project describes the development of a graphical tool that can display both the configuration of an advanced authentication, authorization, and accounting (AAA) system and the messages passed between nodes in the system. The solution described uses force-based graph layouts coupled with adaptive filters, as well as vector-based rendering, to deliver a view of the status of the system. The force-based layout spreads out the nodes in an adaptive fashion. The adaptive filters start by showing what is most often the most relevant information, but can be configured by the user. Finally, the vector-based rendering offers unlimited zoom into the individual nodes of the graph in order to display additional detailed information.
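
A one-iteration sketch of the force-based layout idea (a simplified Fruchterman-Reingold-style update, assumed for illustration; not the tool's actual implementation):

```python
import math

def layout_step(pos, edges, k=1.0, step=0.05):
    """One iteration of a basic force-directed graph layout.

    pos:   {node: (x, y)}, edges: iterable of (u, v) pairs.
    Every pair of nodes repels (k^2 / d); connected nodes attract (d^2 / k).
    """
    disp = {n: [0.0, 0.0] for n in pos}
    nodes = list(pos)
    for i, u in enumerate(nodes):            # pairwise repulsion
        for v in nodes[i + 1:]:
            dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
            d = math.hypot(dx, dy) or 1e-9
            f = k * k / d
            disp[u][0] += f * dx / d; disp[u][1] += f * dy / d
            disp[v][0] -= f * dx / d; disp[v][1] -= f * dy / d
    for u, v in edges:                       # attraction along edges
        dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
        d = math.hypot(dx, dy) or 1e-9
        f = d * d / k
        disp[u][0] -= f * dx / d; disp[u][1] -= f * dy / d
        disp[v][0] += f * dx / d; disp[v][1] += f * dy / d
    return {n: (pos[n][0] + step * disp[n][0],
                pos[n][1] + step * disp[n][1]) for n in pos}
```

Iterating this update spreads unconnected nodes apart while pulling connected ones together, which is what gives force-based layouts their adaptive, self-organizing look.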

Unified Modeling Language (UML) sequence charts are used to display the message flow inside the system (both between nodes and inside individual nodes).

To validate the results of this thesis project, each iteration of the design was evaluated through meetings with the staff at Aptilo Networks. These meetings provided feedback on the direction the project was taking and provided input (such as ideas for features to implement).

The result of this thesis project shows a way to display the status of an AAA system with multiple properties displayed at the same time. It combines this with a view of the flow of messages and the application of policies in the network via a dynamically generated UML sequence diagram. As a result, human operators are able to see both the system’s architecture and the dynamics of its operation in the same user interface. This integrated view should enable more effective management of the AAA system and facilitate responding to problems and attacks.

Large investments are currently being made in the infrastructure for voice and data services. The network providers’ revenues consist of fees from the users of the network. Until now it has been difficult to charge for actual usage; instead, so-called flat-rate charging has been applied.

Dynamic synchronous Transfer Mode (DTM) is a circuit-switched technique designed to meet the requirements of future multimedia services, including support for real-time applications and high throughput. The service is provided by physically separated channels whose capacity can easily be adjusted to the users’ demands. The channels and their guaranteed service make DTM a very interesting technique when it comes to charging for network usage, as the characteristics of a channel can be described by relatively few parameters.

This thesis describes how accounting management can be applied to a DTM network. It identifies the parameters that need to be collected and describes how to gather these parameters into call data records (CDRs). Furthermore, it outlines how the CDRs can be transported to the network provider’s billing system.
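
A minimal sketch of what such a CDR might contain (the field names and the charging formula are illustrative assumptions; the thesis identifies the actual parameter set):

```python
from dataclasses import dataclass

@dataclass
class CallDataRecord:
    """Hypothetical CDR for a DTM channel. The field set is illustrative,
    reflecting the idea that a channel is described by few parameters."""
    caller: str
    callee: str
    capacity_slots: int     # reserved channel capacity, in DTM slots
    duration_s: float       # seconds the channel was held

def usage_charge(cdr: CallDataRecord, rate_per_slot_second: float) -> float:
    """A usage-based charge (capacity x time), in contrast to flat-rate billing."""
    return cdr.capacity_slots * cdr.duration_s * rate_per_slot_second
```

The point of the example is that once capacity and duration are recorded per channel, usage-based charging reduces to simple arithmetic over the CDRs.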

The recent increased focus on patient safety in hospitals has yielded a flood of new technologies and tools seeking to improve the quality of patient care at the point of care. Hospitals are complex institutions by nature, and are constantly challenged to improve the quality of healthcare delivered to patients while trying to reduce the rate of medical errors and improve patient safety. Here a simple mistake, such as patient misidentification, specimen misidentification, wrong medication, or wrong blood transfusion, can cost a patient’s life. Misidentification of patients is a common problem that many hospitals face on a daily basis. Patient misidentification is one of the leading causes of medical errors and medical malpractice in hospitals, and it has been recognised as a serious risk to patient safety.

Recent studies have shown that an increasing number of medical errors are primarily caused by adverse drug events that stem, directly or indirectly, from incorrect patient identification. In recognition of this increasing threat to patient safety, it is important for hospitals to prevent these medical errors by adopting a suitable patient identification system that can improve upon current safety procedures.

The focus of this master’s thesis is the design, implementation, and evaluation of a handheld-based patient identification system that uses radio frequency identification (RFID) and IEEE 802.11b wireless local area networks to identify patients. In this solution, each patient is given an RFID wristband which contains the patient’s demographic information (patient ID number, ward number, hospital code, etc.). A handheld device equipped with IEEE 802.11b wireless local area network connectivity and an RFID reader is then used by the medical staff to read the patient’s wristband, identify the patient, and access the relevant records for this patient.
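
A hypothetical sketch of how the wristband's demographic fields could be laid out and parsed (the real tag format is not specified here; the field widths and the 4-character hospital code are assumptions for illustration):

```python
import struct

# Hypothetical fixed-width layout for the demographic data on the wristband.
# ">I H 4s": patient ID (uint32), ward number (uint16), 4-char hospital code.
WRISTBAND_FMT = ">I H 4s"

def encode_wristband(patient_id: int, ward: int, hospital: str) -> bytes:
    """Pack the demographic fields into the 10 bytes stored on the tag."""
    return struct.pack(WRISTBAND_FMT, patient_id, ward, hospital.encode("ascii"))

def decode_wristband(data: bytes):
    """Unpack the tag contents read by the handheld's RFID reader."""
    pid, ward, hosp = struct.unpack(WRISTBAND_FMT, data)
    return pid, ward, hosp.decode("ascii")
```

A compact fixed-width layout like this matters because passive RFID tags offer very little storage; the handheld only needs enough on-tag data to key into the hospital's records over the wireless network.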

This work was carried out at the Department of Medical Physics and Bioengineering at the University College Hospital Galway (UCHG), Ireland and the National University of Ireland, Galway.

The increased focus on patient safety in hospitals has yielded a flood of new technologies and tools seeking to improve the quality of patient care at the point-of-care. Hospitals are complex institutions by nature, and are constantly challenged to improve the quality of healthcare delivered to patients while trying to reduce the rate of medical errors and improve patient safety. Here a simple mistake, such as patient misidentification, specimen misidentification, wrong medication, or wrong blood transfusion, can cost a patient's life. The focus of this paper is the implementation and evaluation of a handheld-based patient identification system that uses radio frequency identification (RFID) and 802.11b wireless networks to identify patients. In this approach, each patient is given an RFID wristband which contains the patient's demographic information (patient ID number, patient summary, hospital code). A handheld device equipped with 802.11b wireless connectivity and an RFID reader is then used by the medical staff to read the patient's wristband and identify the patient. This work was carried out at the Department of Medical Physics and Bioengineering at the University College Hospital Galway, Ireland and in co-operation with the National University of Ireland, Galway.

This textbook provides the reader with a basic understanding of the design and analysis of wireless and mobile communication systems. It deals with the most important techniques, models and tools used today in the design of mobile wireless links and gives an introduction to the design of wireless networks. Topics covered include: fundamentals of radio propagation and antennas; transmission schemes, including modulation, coding and equalising schemes for broadband wireless communications; diversity systems; wireless data transmission; and an introduction to wireless network design and resource management. The fundamentals are illustrated by examples from state-of-the-art technologies such as OFDM, WCDMA, WLANs and others. The book contains a significant number of worked examples and more than 160 problems with answers. It is intended for use in a first graduate course in wireless communications, and the reader should be familiar with the fundamentals of probability and communication theory.

This article presents a compact multi-band antenna based on a CPW ground plane. FR-4 with a thickness of 1.6 mm is used as the substrate for the proposed antenna. The antenna is capable of operating at 1.56 GHz (Global Positioning System), 2.45 GHz (Wireless Local Area Network) and 4.49 GHz (Aeronautical Mobile Telemetry (AMT) fixed services). The efficiency at 1.56, 2.45, and 4.49 GHz is 79.7, 76.9 and 76.7%, respectively. The VSWR of the presented antenna is less than 1.5 at all the desired resonance modes, which confirms its good impedance matching. The performance of the proposed antenna is evaluated in terms of VSWR, return loss, radiation pattern and efficiency. CST® MWS® software is used for the simulations. In order to validate the simulation results, a prototype of the designed antenna was fabricated, and good agreement is found between the simulated and measured results.
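
The stated VSWR bound translates directly into return loss via the standard relation between VSWR and the reflection coefficient; a quick sketch:

```python
import math

def return_loss_db(vswr: float) -> float:
    """Return loss implied by a given VSWR: RL = -20*log10(|Gamma|),
    where the reflection coefficient is |Gamma| = (VSWR - 1) / (VSWR + 1)."""
    gamma = (vswr - 1) / (vswr + 1)
    return -20 * math.log10(gamma)

# VSWR = 1.5 gives |Gamma| = 0.2, i.e. only 4% of incident power reflected.
print(round(return_loss_db(1.5), 2))   # 13.98 dB
```

So "VSWR less than 1.5 at all resonance modes" means a return loss better than about 14 dB at each operating band, which is why it indicates good impedance matching.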

Optical networks based on Wavelength Division Multiplexing (WDM) technology offer clear benefits in terms of high capacity, flexibility and low power consumption. All these benefits make WDM networks the preferred choice for today’s and future transport solutions, which are strongly driven by a plethora of emerging online services.

In such a scenario, the capability to provide high capacity during the service provisioning phase is of course very important, but it is not the only requirement that plays a central role. Traffic dynamicity is another essential aspect to consider, because in many scenarios, e.g., real-time multimedia services, connections are expected to be provisioned and torn down quickly and relatively frequently. High traffic dynamicity may put a strain on the network control and management operations that coordinate any provisioning mechanism (i.e., the overhead due to control message exchange can grow rapidly). Furthermore, survivability in the presence of new failure scenarios that go beyond the single-failure assumption is still of the utmost importance to minimize network disruptions and data losses. In other words, protection against any possible future failure scenario, where multiple faults may strike simultaneously, calls for highly reliable provisioning solutions.

The above considerations have general validity, i.e., they apply equally to any network segment and are not limited to the core. Thus, we also address the problem of service provisioning in the access segment. Long-reach Passive Optical Networks (PONs) are gaining popularity due to their cost, reach, and bandwidth advantages in the access region. In a PON, the design of an efficient mechanism for sharing the upstream bandwidth among multiple subscribers is crucial. In addition, Long-Reach PONs (LR-PONs) introduce additional challenges in terms of packet delay and network throughput due to their extended reach. It becomes apparent that effective solutions to the connection provisioning problem in both core and access optical networks, with respect to the considerations made above, can ensure truly optimal end-to-end connectivity while making efficient use of resources.

The first part of this thesis focuses on a control and management framework specifically designed for concurrent resource optimization in WDM-based optical networks in a highly dynamic traffic scenario. The framework and the proposed provisioning strategies are specifically designed with the objectives of: (i) reducing the blocking probability and the control overhead in a Path Computation Element (PCE)-based network architecture, (ii) optimizing resource utilization for a traffic scenario that requires services with diverse survivability requirements, achieved by means of dedicated and shared path protection, and (iii) designing provisioning mechanisms that guarantee high connection availability levels in Double Link Failure (DLF) scenarios. The presented results show that the proposed dynamic provisioning approach can significantly improve the network blocking performance while making efficient use of primary/backup resources whenever protection is required by the provisioned services. Furthermore, the proposed DLF schemes show good performance in terms of minimizing disruption periods and allow for enhanced network robustness when specific services require high connection availability levels.

In the second part of this thesis, we propose efficient resource provisioning strategies for LR-PONs. The objective is to optimize the bandwidth allocation in LR-PONs, in particular to: (i) identify the performance limitations associated with traditional (short-reach) TDM-PON-based Dynamic Bandwidth Allocation (DBA) algorithms when employed in long-reach scenarios, and (ii) devise efficient DBA algorithms that can mitigate the performance limitations imposed by an extended reach. Our proposed schemes show noticeable performance gains when compared with conventional DBA algorithms for short-reach PONs, as well as with approaches specifically devised for long reach.

Maximizing connection availability in WDM networks is critical, because even small disruptions can cause huge data losses. However, there is a trade-off between the level of network survivability and the cost of the backup resources to be provided. Survivability of 100% can be achieved by dedicated path protection with multiple pre-reserved protection paths for each provisioned connection, i.e., DPP (1:N). Unfortunately, the blocking probability of DPP (1:N) is negatively affected by the large number of pre-reserved backup wavelengths standing by unutilized. On the other hand, path restoration (PR)-based solutions ensure good blocking performance at the expense of lower connection availability.
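
The availability side of this trade-off follows from simple arithmetic, assuming independent path failures with a common per-path availability (an illustrative assumption, not the paper's availability model):

```python
# Illustrative availability arithmetic behind the DPP (1:N) trade-off.
def connection_availability(a: float, n_backups: int) -> float:
    """The connection is down only if the primary and all n_backups
    dedicated protection paths are down simultaneously, assuming
    independent failures with per-path availability a."""
    return 1 - (1 - a) ** (1 + n_backups)

a = 0.999                                   # assumed per-path availability
for n in (0, 1, 2):
    print(n, connection_availability(a, n))
```

Each extra backup multiplies the unavailability by another factor of (1 - a), but also pre-reserves another wavelength that sits idle, which is exactly why DPP (1:N) trades blocking probability for availability.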

The work in this paper aims at finding hybrid network survivability strategies that combine the benefits of both techniques (i.e., high availability with a low blocking rate). More specifically, the paper focuses on a double link failure scenario and proposes two strategies. The first couples dedicated path protection DPP (1:1) with path restoration (referred to as DPP + PR) to minimize the number of dropped connections. The second scheme adds the concept of backup reprovisioning (BR), and is referred to as DPP + BR + PR, in order to further increase the connection availability achieved by DPP + PR. Integer Linear Programming (ILP) models for the implementation of the proposed schemes are formulated. Extensive performance evaluation conducted in a PCE-based WDM network scenario shows that DPP + BR + PR and DPP + PR can achieve significantly lower blocking probability than DPP (1:2) without compromising too much in terms of connection availability.

Sensors are lightweight, low-powered devices that measure some aspect of a physical or virtual environment and transmit this information in some format. This thesis describes how to integrate a sensor into a device to enable network connectivity.

The phrase “internet of things” suggests that within a few years many devices will be connected to the Internet. Devices, including common household appliances, will transmit and receive data over a network. The CEO of Ericsson has stated that there will be more than 50 billion connected devices by 2020 [1]. These devices could be microwaves, fridges, lights, or temperature sensors. Devices that are usually not associated with internet connectivity will be integrated into networks and play a larger role in providing information and controlling other devices. Sensors will have a major role in “the internet of things”. These small computers could be integrated into almost any appliance and transmit data over the network. The sensors’ low power, low cost, and light weight make them very attractive to integrate into many devices. The goal of this thesis project is to build upon this trend toward “the internet of things” by integrating a sensor into a bathroom scale, thus enabling the scale to have network connectivity. The sensor will be low cost and simple. It should transmit the current weight it measures, wirelessly or via USB, to a receiver (specifically a gateway). This gateway will forward the message over the network to a website or mobile phone for visual presentation of the data. This thesis describes different techniques and approaches for developing this sensor. The thesis also evaluates these different choices in order to select one technique to implement. This solution is evaluated in terms of its cost and ease of integration into an existing commercially produced scale.
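
A minimal sketch of the reading such a scale sensor could send to the gateway (the message format and field names are illustrative assumptions, not the protocol implemented in the thesis):

```python
import json
import time

def weight_message(scale_id: str, kg: float) -> str:
    """Hypothetical reading format the scale's sensor could send to the
    gateway, which then forwards it over the network for presentation."""
    return json.dumps({"scale": scale_id, "kg": round(kg, 1),
                       "ts": int(time.time())})

def parse_weight_message(msg: str) -> float:
    """Extract the weight from a received reading, e.g. on the website side."""
    return json.loads(msg)["kg"]
```

Keeping the message this small matters for a low-power sensor: the payload is a few dozen bytes, so it can be carried over a cheap wireless link or USB without taxing the device.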

Demand for high data rates is increasing rapidly for future wireless generations, due to the requirement of ubiquitous coverage for wireless broadband services. More base stations are needed to deliver these services, in order to cope with the increased capacity demand and the inherently unreliable nature of the wireless medium. However, this directly corresponds to high infrastructure cost and energy consumption in cellular networks. Nowadays, high power consumption in the network is becoming a matter of concern for operators, from both an environmental and an economic point of view.

Cooperative communication, which can be regarded as a virtual multi-input multi-output (MIMO) channel, can be very efficient in combating fading multi-path channels and improves coverage with low complexity and cost. With its distributed structure, cooperative communication can also contribute to the energy efficiency of wireless systems and to the green radio communications of the future. Using network coding on top of cooperative communication utilizes the network resources more efficiently.

Here we look at the large-scale use of low-cost relays as a way of making links reliable, which directly corresponds to a reduction in transmission power at the nodes. A lot of research has focused on the gains achieved by using network coding in cooperative transmissions. However, certain areas are not yet fully explored. For instance, the detection scheme used at the receiver and its impact on link performance has not been addressed. The thesis compares the performance of different detection schemes and also proposes how to group users at the relay to ensure mutual benefit for the cooperating users. Using constellation selection at the nodes, the augmented signal space formed at the receiver is exploited to make the links more reliable. The network and channel coding schemes are represented as a single product code, which allows us to exploit the redundancy present in these schemes efficiently; powerful coding schemes can also be designed to improve link performance.

Heterogeneous network deployments and adaptive power management have been used to reduce the overall energy consumption in a cellular network. However, the distributed structure of the nodes deployed in the network has not been exploited in this regard. Here we highlight the significance of cooperative relaying schemes in reducing the overall energy consumption of a cellular network. The role of different transmission and adaptive resource allocation strategies in downlink scenarios is investigated in this regard. It is observed that adaptive relaying schemes can significantly reduce the total energy consumption compared to conventional relaying schemes. Moreover, network coding in these adaptive relaying schemes helps minimize the energy consumption further. The balance between the number of base stations and relays that minimizes the energy consumption is also investigated for each relaying scheme.

The rising demand for high data rates in future cellular systems is directly linked with the power consumption at the transmitting nodes. Due to various economic and environmental factors, it is becoming difficult to maintain the current rate of power consumption per unit of data for the upcoming generations of cellular systems. This has shifted the focus of many researchers towards the energy efficiency of cellular systems, and power consumption has become an important design parameter in recent works. In this article, we propose the use of cooperative communications with low-cost fixed relays in order to reduce the energy consumption at the transmitting nodes for a given quality-of-service requirement. We investigate how different factors, such as cell radius, relay position, number of relays, and target data rate, affect the area energy consumption for the different relaying schemes. It is shown that cooperative relaying schemes with adaptive resource allocation provide minimum energy consumption along with better coverage compared to non-adaptive cooperative relaying schemes.
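As a rough intuition for why relays can cut transmission power, consider a toy distance-based path-loss model (this is an illustration only, not the article's system model; the exponent `alpha`, the constant `c`, and the relay placement are assumed values):

```python
# Toy illustration: under a d**alpha path-loss model, the transmit energy
# per bit needed to cover distance d scales as E ~ c * d**alpha. Splitting
# one long hop into two shorter hops via a relay reduces the total
# radiated energy whenever alpha > 1.

def tx_energy(distance_m, alpha=3.5, c=1e-9):
    """Radiated energy per bit (J) needed to cover `distance_m`."""
    return c * distance_m ** alpha

def direct(d):
    """Direct transmission from base station to user."""
    return tx_energy(d)

def two_hop(d, relay_fraction=0.5):
    """Relay placed at `relay_fraction` of the source-destination distance."""
    return tx_energy(d * relay_fraction) + tx_energy(d * (1 - relay_fraction))

d = 1000.0  # illustrative cell radius in metres
print(direct(d) > two_hop(d))  # True: relaying saves radiated energy
```

The model ignores circuit power, relay overhead, and fading, which is why the article's question of balancing base stations against relays is non-trivial in practice.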

Employing network coding at the relay in a cooperative relay system can improve the system throughput. However, XOR-based network coding does not provide a strong error correction capability that can be used at the base station (receiver) when decoding the information of the cooperating users. Instead, a block code can be used to combine the received user packets at the relay station, where only the extra redundancy of the block code is forwarded by the relay station. With this structure, a better error correction capability is embedded into the network coding scheme, providing better help to the receiver when decoding the user information. Combining this network coding structure with the individual block codes of the users, an overall product code is obtained, which makes it possible to generate more powerful overall codes and increases the correction capability of the decoding process at the receiver. The obtained results show that the proposed scheme outperforms the conventional XOR-based network coding scheme and makes it possible to combine more users in the cooperation process.
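A minimal sketch of the idea, assuming a systematic (7,4) Hamming code across four cooperating users (the actual code construction in the paper may differ): the relay stacks the four user packets, encodes each bit position column-wise, and forwards only the three parity packets, whereas plain XOR network coding would forward a single combined packet.

```python
# Hypothetical sketch: relay-side network coding with a systematic (7,4)
# Hamming code applied column-wise across 4 user packets; the relay
# forwards only the parity rows.

def hamming74_parity(d):
    """Parity bits of a systematic (7,4) Hamming code for data bits d[0..3]."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, p3]

def relay_encode(user_packets):
    """Encode 4 equal-length user packets column-wise; return 3 parity packets."""
    assert len(user_packets) == 4
    n = len(user_packets[0])
    parity_packets = [[0] * n for _ in range(3)]
    for col in range(n):
        column = [pkt[col] for pkt in user_packets]
        for row, bit in enumerate(hamming74_parity(column)):
            parity_packets[row][col] = bit
    return parity_packets

def xor_encode(user_packets):
    """Conventional XOR network coding: a single combined packet."""
    n = len(user_packets[0])
    return [sum(pkt[col] for pkt in user_packets) % 2 for col in range(n)]

pkts = [[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 1]]
print(len(relay_encode(pkts)))  # 3 parity packets instead of 1 XOR packet
```

Because the receiver also sees the four systematic packets over the direct links, each bit position forms a (7,4) codeword, so a single error per position can be corrected rather than merely detected, which is the embedded error-correction capability the abstract refers to.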

Future wireless systems are being designed for extremely high data rates and are directly contributing to the global energy consumption. This trend is undesirable not only due to environmental concerns, but also due to cost, since energy costs are becoming a significant part of the operating expenditures of telecom operators. Recently, energy-efficient wireless systems have become a new research paradigm. Cooperative communication has shown good potential in improving coverage, providing robust radio links, and reducing infrastructure cost, and it has the possibility to reduce the total system energy consumption. This paper looks at possible deployment strategies for wireless networks that can reduce the energy consumption. We look at the tradeoff between the number of relay nodes and the number of base stations and their implications for the total energy consumption of wireless networks. The obtained results show that adaptive resource allocation between the base station and the relay node is an efficient way of reducing the energy consumption of the system. Furthermore, this reduction in energy consumption grows with the system's target spectral efficiency.

In this paper we investigate the impact that the introduction of new Ambient Networks (AN) functionality will have on the usage of system resources and on connection delay. The signalling load for multiple attachment and negotiation procedures is assessed by modelling signalling sequences for a WLAN system enabled with AN technology. The load is computed for varying numbers of users and for users with different levels of "willingness to evaluate and negotiate offers". The results show that the most important parameter is the number of attachment attempts per time unit, which is an indicator of user activity level. In the investigated scenarios, the relative signalling load is 0.1-1.0% of the transferred user data. The delay depends on the current load situation of the network.

Recent years have witnessed remarkable developments in wireless positioning systems to satisfy the market's need for real-time services. At Arlanda airport in Stockholm, the LFV department of research and development wanted to invest in an indoor positioning system to deliver services to customers at the correct time and place.

In this thesis, three different technologies (WLAN, Bluetooth, and RFID) and their combinations are investigated for this purpose. Several approaches are considered and two positioning algorithms are compared, namely trilateration and RF fingerprinting. The proposed approaches rely on the existing WLAN infrastructure already deployed at the airport. The performance of the different solutions considered in these approaches is quantified by means of simulations.

This thesis work has shown that RF fingerprinting provides more accurate results than the trilateration algorithm, especially in indoor environments, and that infrastructures combining WLAN and Bluetooth technologies result in a lower average error than infrastructures that use WLAN alone.
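The trilateration approach can be sketched as follows (a generic textbook formulation, not the thesis's implementation; in practice the distances would be estimated from RSSI measurements and the system solved in a least-squares sense over more than three anchors):

```python
# Illustrative 2-D trilateration: recover a position from distances to
# three known anchors by subtracting the first circle equation from the
# other two, which yields a 2x2 linear system A [x, y]^T = b.

def trilaterate(anchors, dists):
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # non-zero if anchors are not collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # e.g. WLAN access points
true_pos = (3.0, 4.0)
dists = [((true_pos[0] - ax) ** 2 + (true_pos[1] - ay) ** 2) ** 0.5
         for ax, ay in anchors]
print(trilaterate(anchors, dists))  # ~ (3.0, 4.0)
```

RF fingerprinting, by contrast, skips the distance model entirely and matches a measured RSSI vector against a pre-recorded radio map, which is why it copes better with indoor multipath.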

This thesis evaluated the performance of four different virtual private networks (VPNs): IP security (IPsec), OpenVPN, SSH port forwarding, and SSH using virtual interfaces. To evaluate these VPNs, three comparative performance tests were carried out in which the maximum throughput of each VPN was measured. In every test, a specific parameter was varied to observe how it affected the VPNs' throughput. The parameters varied were the type of transport layer protocol used, the encryption algorithm used, and whether the VPN used compression or not. The results showed, among other things, that when TCP traffic was transferred through the VPN and AES-128 was used as the encryption algorithm in a Gigabit Ethernet network, the throughput was 168 Mbit/s for SSH port forwarding, 165 Mbit/s for IPsec, 95.0 Mbit/s for SSH using virtual interfaces, and 83.3 Mbit/s for OpenVPN. These results can be compared to the throughput measured when no VPN was used: 940 Mbit/s. Three conclusions are drawn from the results of the performance tests. The first conclusion is that the throughput of a VPN depends on the technology the VPN solution is based on, the encryption method used, and the type of data sent over the VPN. The second conclusion is that IPsec and SSH port forwarding are the most effective of the VPNs compared in this thesis, while OpenVPN and SSH using virtual interfaces are less effective. Lastly, it is concluded that although the different parameters affected the throughput of each VPN, the relation between the VPNs is the same in almost every test. In other words, a VPN that performs well in one test performs well in every test.

As a potential candidate architecture for 5G systems, the cloud radio access network (CRAN) enhances the system's capacity by centralizing the processing and coordination at the central cloud. However, this centralization imposes stringent bandwidth and delay requirements on the fronthaul segment of the network that connects the centralized baseband processing units (BBUs) to the radio units (RUs). Hence, hybrid CRAN has been proposed to alleviate the fronthaul bandwidth requirement. The concept of hybrid CRAN supports splitting/virtualizing the processing of BBU functions between the central cloud (a central office with large processing capacity and efficiency) and the edge cloud (an aggregation node that is closer to the user, but usually has less processing efficiency). In our previous work, we studied the impact of different split points on the system's energy and fronthaul bandwidth consumption. In this study, we analyze the delay performance of the end user's request. We propose an end-to-end (from the central cloud to the end user) delay model (per user's request) for different function split points. In this model, different delay requirements enforce different function splits and hence affect the system's energy consumption. Therefore, we propose several research directions to incorporate the proposed delay model into the problem of minimizing energy and bandwidth consumption in the network. We found that the function split decision required to achieve minimum delay is significantly affected by the processing power efficiency ratio between the processing units of the edge cloud and the central cloud. A high processing efficiency ratio leads to a significant delay improvement when more baseband functions are processed at the edge cloud.

KTH Royal Institute of Technology and Scania are entering the GCDC 2011 under the name Scoop (Stockholm Cooperative Driving). This paper is an introduction to their team and to the technical approach they are using in their prototype system for GCDC 2011.

The objective of the SQUID project is to develop and verify in flight a miniature version of a wire boom deployment mechanism to be used for electric field measurements in the ionosphere. In February 2011 a small ejectable payload, built by a team of students from the Royal Institute of Technology (KTH), was launched from Esrange on board the REXUS-10 sounding rocket. The payload separated from the rocket, deployed and retracted the wire booms, landed with a parachute, and was subsequently recovered. Here the design of the experiment and the post-flight analysis are presented.

Providing passenger internet on board trains with continuous connectivity at high speeds and over large rural distances is a challenging issue. A frequently used solution to the problem is to use an on-board WiFi network connected to the 3G or 4G networks deployed outside the train. In order to provide the capacity and the data rates that tomorrow's business travelers are expecting, it has been suggested to use a combination of MIMO and carrier aggregation in the LTE-Advanced standard. In this study, we practically investigate the plausibility of using MIMO functionality in an LTE 900 system when the receive antennas are mounted on a train roof about 4 m above ground and the base station antennas are on average placed 2.3 km away from the track in towers with an average height of 45 m and, hence, most of the time in line of sight. It is found that along our test route MIMO is in practice supported by the radio channel around 70% of the time when the train is travelling at an average speed of 185 km/h and the MIMO antennas are mounted 10.5 m apart.

Providing broadband passenger internet on board trains with continuous connectivity at high speeds and over large rural distances is a challenging issue. One solution to the problem is to use an on-board WiFi network connected to multiple 3G and 4G networks deployed outside the train and aggregate their combined capacity at the IP protocol level. In order to provide the capacity and the data rates that tomorrow's travelers are expecting, the future 4G standard (LTE-Advanced) uses a combination of high-order MIMO and carrier aggregation. In this study we use the Swedish company Icomera's passenger internet system for our investigation. The system provides aggregation of multiple carriers and handover at the IP level. For about 10 years the system has, in Sweden, primarily been using multiple 3G communication links. However, here we present analysis and on-board measurements of a 2×2 MIMO channel to a fast-moving train in a live LTE 900 network. The results indicate that MIMO works surprisingly well, and we discuss how combining 8×8 MIMO with carrier aggregation in future releases of 4G may make it possible to bring gigabit internet connections to trains.

This paper provides an overview of recent advances in the modeling, analysis, and measurements of interactions between antennas and the propagation channel in multiple antenna systems based on the spherical vector wave mode expansion of the electromagnetic field and the antenna scattering matrix. It demonstrates the importance and usefulness of this approach to gain further insights into a variety of topics such as physics-based propagation channel modeling, mean effective gain, channel correlation, propagation channel measurements, antenna measurements and testing, the number of degrees of freedom of the radio propagation channel, channel throughput, and diversity systems. The paper puts particular emphasis on the unified approach to antenna-channel analysis at the same time as the antenna and the channel influence are separated. Finally, the paper provides the first bibliography on the application of the spherical vector wave mode expansion of the electromagnetic field to antenna-channel interactions.

3G networks and services are being launched all over the world. The basic investments in equipment have already been made according to preliminary traffic forecasts. However, if mobile data traffic acquires "internet-like" proportions, network capacity shortage will become a reality in densely populated areas, such as city centers and business parks. In that case smart antennas may be the solution. Based on this assumption, the financial aspects of the deployment of smart antenna systems in 3G UMTS networks have been evaluated. We have evaluated the potential CAPEX and OPEX savings provided by such a system compared to more conventional antenna systems. Despite their indicative nature, our calculations show that cost savings of the order of 10% to 25% are feasible if the cost increase of the smart antenna equipment is of the order of 100% to 50% of the conventional antenna equipment costs, respectively.

Nowadays, telecommunication services are commonly deployed in private networks, which are controlled and maintained by the telecommunication operators themselves, by co-location service providers, or, to some extent, by their hardware and software providers. However, with the present development of cloud computing resources, one might consider whether these services could and should be implemented in the Cloud, thus taking advantage of cloud computing's high availability, geographic distribution, and ease of usage. Additionally, this migration could reduce the telecommunication operators' concerns in terms of hardware and network maintenance, leaving those to the Cloud computing providers, who will need to supply a highly available and consistent service to fulfill the telecommunication services' requirements. Furthermore, virtualization provides the possibility of easily and rapidly changing the Cloud network topology, facilitating the addition and removal of machines and services, and allowing telecommunication service providers to adapt to their demands on the fly.

The aim of this thesis project is to analyze and evaluate the level of performance, from the network point of view, that can be achieved when using Cloud computing resources to implement a telecommunication service, carrying out practical experiments both in laboratory and real environments. These measurements and analyses were conducted using an Ericsson prototype mobile switching center server (MSC-S) application, although the results obtained could be adapted to other applications with similar requirements.

In order to potentially test this approach in a real environment, a prior survey of providers was used to evaluate their services against our requirements in terms of hardware and network characteristics, and thus to select a suitable candidate environment for our purposes. One cloud provider was selected and its service was further evaluated based on the MSC-S application requirements. We report the results of our benchmarking process in this environment and compare them to the results of testing in a laboratory environment. The results of both sets of testing were well correlated and indicate potential for hosting telecommunication services in a Cloud environment, provided the Cloud meets the requirements imposed by the telecom services.

This project began with a microprocessor platform developed by two master's students: Albert López and Francisco Javier Sánchez. Their platform was designed as a gateway for sensing devices operating in the 868 MHz band. The platform consists of a Texas Instruments MSP430F5437A microcontroller and a Microchip ENC28J60 Ethernet controller connected to the MSP430 processor via a Serial Peripheral Interface.

Javier Lara Peinado implemented prototype white space sensors using the platform developed by the earlier two students. As part of his effort, he partially implemented a Trivial File Transfer Protocol (TFTP) system for loading programs into the flash memory of the microcontroller using Microchip's TCP/IP stack. However, he was not successful in loading programs into the flash, as the TFTP transfer got stuck at the first block.

The first purpose of this project was to find and fix the error(s) in the TFTP loading of programs into the MSP430’s flash memory. The second purpose of this project was to evaluate Microchip’s TCP/IP stack in depth. This report describes measurements of UDP transmission rates. Additionally, the TFTP processing rate is measured and the TFTP program loading code is documented. The report concludes with suggestions for possible improvements of this system.
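For context, TFTP's framing (RFC 1350) is deliberately simple, and a transfer that stalls at the very first block typically points at the block-numbering or ACK handling. A minimal sketch of the relevant packet formats (illustrative Python, not the project's embedded C code):

```python
import struct

# TFTP opcodes per RFC 1350
RRQ, WRQ, DATA, ACK, ERROR = 1, 2, 3, 4, 5

def wrq(filename, mode="octet"):
    """Write request: | 02 | filename | 0 | mode | 0 |"""
    return (struct.pack("!H", WRQ)
            + filename.encode() + b"\x00" + mode.encode() + b"\x00")

def data(block, payload):
    """Data packet: | 03 | block # | up to 512 bytes |. Blocks start at 1."""
    return struct.pack("!HH", DATA, block) + payload

def parse_ack(pkt):
    """ACK packet: | 04 | block # |. ACK 0 acknowledges the WRQ itself."""
    opcode, block = struct.unpack("!HH", pkt[:4])
    assert opcode == ACK
    return block

# The server answers a WRQ with ACK 0; the first DATA packet must then
# carry block number 1, and each DATA n is answered by ACK n. Mixing up
# this off-by-one is a classic way for a transfer to stall on block 1.
print(parse_ack(struct.pack("!HH", ACK, 0)))  # 0
```

A final DATA packet shorter than 512 bytes signals the end of the transfer, so a file whose size is an exact multiple of 512 must be terminated with a zero-length DATA packet.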

Near field communication (NFC) is a short-range wireless communication technology envisioned to support a large gamut of smart-device applications, such as payment and ticketing. Although two NFC devices need to be in close proximity to communicate (up to 10 cm), adversaries can use a fast and transparent communication channel to relay data and, thus, force an NFC link between two distant victims. Since relay attacks can bypass the NFC requirement for short-range communication cheaply and easily, it is important to evaluate the security of NFC applications. In this work, we present a general framework that exploits formal analysis, and especially model checking, as a means of verifying the resiliency of the NFC protocol against relay attacks. Toward this goal, we built a continuous-time Markov chain (CTMC) model using the PRISM model checker. First, we took into account NFC protocol parameters, and then we enhanced our model with networking parameters, which include both mobile environment and security-aware characteristics. Combining NFC specifications with an adversary's characteristics, we produced the relay attack model, which is used for extracting our security analysis results. Through these results, we can explain how a relay attack could be prevented and discuss potential countermeasures.

Near Field Communication (NFC) is a short-range wireless communication technology envisioned to support a large gamut of smart-device applications, such as payment and ticketing. Two NFC-enabled devices need to be in close proximity, typically less than 10 cm apart, in order to communicate. However, adversaries can use a secret and fast communication channel to relay data between two distant victim NFC-enabled devices and thus force an NFC link between them. Relay attacks may have tremendous consequences for security, as they can bypass the NFC requirement for short-range communication and, even worse, they are cheap and easy to launch. Therefore, it is important to evaluate the security of NFC applications and countermeasures to support the emergence of this new technology. In this work we present a probabilistic model checking approach to verify the resiliency of the NFC protocol against relay attacks, based on protocol-, channel-, and application-specific parameters that affect the success of the attack. We perform our formal analysis within the probabilistic model checking environment PRISM to support automated security analysis of NFC applications. Finally, we demonstrate how the attack can be thwarted and we discuss the effectiveness of potential countermeasures.
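One widely discussed countermeasure in the relay-attack literature is timing-based detection (distance bounding); a relay necessarily adds forwarding latency to the challenge-response round trip. The sketch below is a generic illustration of that idea, not necessarily the countermeasure analyzed in these papers, and the timing values are assumed:

```python
# Illustrative timing-based relay detection: a genuine tag within 10 cm
# answers a challenge within a tight round-trip-time (RTT) bound; a relay
# channel adds forwarding delay that pushes the RTT past the bound.

C = 3e8  # speed of light, m/s

def rtt_bound(max_distance_m, processing_s):
    """Maximum acceptable RTT for a tag within `max_distance_m`."""
    return 2 * max_distance_m / C + processing_s

def is_relay_suspected(measured_rtt_s, max_distance_m=0.10, processing_s=1e-3):
    """Flag the session if the measured RTT exceeds the proximity bound."""
    return measured_rtt_s > rtt_bound(max_distance_m, processing_s)

genuine_rtt = 2 * 0.05 / C + 1e-3   # tag 5 cm away, nominal processing
relayed_rtt = genuine_rtt + 5e-3    # extra relay forwarding delay (assumed)
print(is_relay_suspected(genuine_rtt), is_relay_suspected(relayed_rtt))
# False True
```

In practice the propagation component is negligible at 10 cm, so the defense hinges on how tightly the tag's processing time can be bounded, which is precisely the kind of parameter a probabilistic model can explore.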

Intensive efforts in industry, academia and standardization bodies have brought vehicular communications (VC) one step before commercial deployment. In fact, future vehicles will become significant mobile platforms, extending the digital life of individuals with an ecosystem of applications and services. To secure these services and to protect the privacy of individuals, it is necessary to revisit and extend the vehicular Public Key Infrastructure (PKI)-based approach towards a multi-service security architecture. This is exactly what this work does, providing a design and a proof-of-concept implementation. Our approach, inspired by long-standing standards, is instantiated for a specific service, the provision of short-term credentials (pseudonyms). Moreover, we elaborate on its operation across multiple VC system domains, and craft a roadmap for further developments and extensions that leverage Web-based approaches. Our current results already indicate our architecture is efficient and can scale, and thus can meet the needs of the foreseen broad gamut of applications and services, including the transportation and safety ones.

Vehicular Communications (VC) are reaching a near-deployment phase and will play an important role in improving road safety, driving efficiency, and comfort. Industry and academia have reached a consensus on the need for a Public Key Infrastructure (PKI) in order to achieve security, identity management, and vehicle authentication, as well as to preserve vehicle privacy. Moreover, a gamut of proprietary and safety applications, such as location-based services and pay-as-you-drive systems, will be offered to vehicles. These emerging applications pose new challenges for existing Vehicular Public Key Infrastructure (VPKI) architectures to support Authentication, Authorization, and Accountability (AAA) without exposing vehicle privacy. In this work we present an implementation of a VPKI that is compatible with the VC standards. We propose the use of tickets as cryptographic tokens to provide AAA and also to preserve vehicle privacy against adversaries and against the VPKI itself. Finally, we present the efficiency results of our implementation to demonstrate its applicability.

In this paper we describe an access control model for multilevel-security documents, that is, documents structured into multiple sections based on security classifications. Our access control system uses XACML policies to allow documents, whose contents have varying sensitivity levels, to be created, viewed, and edited by groups whose members have varying clearance levels, while enforcing the required security constraints.
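The enforcement idea can be illustrated with a simplified sketch (the actual system expresses this as XACML policies; the level names and document structure below are hypothetical):

```python
# Simplified multilevel-security view check: a subject may read a section
# only if their clearance dominates the section's classification label.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_view(clearance, section_classification):
    """True if the clearance level dominates the section label."""
    return LEVELS[clearance] >= LEVELS[section_classification]

def visible_sections(clearance, document):
    """Filter a multilevel document down to the sections a user may read."""
    return [body for label, body in document if can_view(clearance, label)]

doc = [("unclassified", "Overview"),
       ("secret", "Operational details"),
       ("top_secret", "Sources")]
print(visible_sections("secret", doc))  # ['Overview', 'Operational details']
```

An XACML policy set encodes the same dominance comparison declaratively, with separate rules for the create and edit actions rather than only view.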

Cloud computing is a new buzzword in the modern information technology world. Today cloud computing can be considered a service, similar to the way that electricity is considered a service in urban areas. A cloud user can utilize different computing resources (e.g. network, storage, software applications) whenever required, without being concerned with the complex underlying technology and infrastructure architecture. The most important feature is that the computing resources are available whenever they are needed. Additionally, users pay only for the resources they actually use. As a result, cloud users can easily scale their information technology infrastructure based on their business policy and requirements. This scalability makes the business process more agile.

The motivation for this thesis was the need for a suitable set of security guidelines for ifoodbag (and similar companies) when implementing web applications in the cloud. The goal of this thesis is to provide security in a system, being developed in another Master’s thesis project, to implement the ifoodbag web application in a cloud. To achieve this goal, we began by identifying the risks, threats, and vulnerabilities in the system model proposed by these other students for their implementation. A study was made of several different security mechanisms that might reduce or eliminate risks and secure the most vulnerable points in the proposed system’s design. Tests of these alternatives were conducted to select a set of mechanisms that could be applied to the proposed system’s design. Justification for why these specific mechanisms were selected is given. The tests allowed the evaluation of how each of these different security mechanisms affected the performance of the system. This thesis presents the test results and their analysis. From this analysis a set of mechanisms were identified that should be included in the prototype of the system. In conclusion, we found that DNSSEC, HTTPS, VPN, AES, Memcached with SASL authentication, and elliptic curve cryptography gave the most security, while minimizing the negative impact on the system. Additionally, client & server mutual authentication and a multi-level distributed database security policy were essential to provide the expected security and privacy that users would expect under the Swedish Data Protection law and other laws and regulations.

Online social networks have become a fast and efficient way of sharing information and experiences. Over the past few years the trend of using social networks has drastically increased with an enormous amount of users’ private contents injected into the providers’ data centers. This has raised concerns about how the users’ contents are protected and how the privacy of users is preserved by the service providers. Moreover, current social networks have been subject to much criticism over their privacy settings and access control mechanism. The providers own the users’ contents and these contents are subject to potential misuse. Many socially engineered attacks have exposed user contents due to the lack of sufficient privacy and access control. These security and privacy threats are addressed by Project Safebook, a distributed peer-to-peer online social networking solution leveraging real life trust. By design Safebook decentralizes data storage and thus the control over user content is no longer in the service provider’s hands. Moreover, Safebook uses an anonymous routing technique to ensure communication privacy between different users.

This thesis project addresses privacy-aware data management for Safebook users and a data access control solution to preserve users' data privacy and visibility utilizing a peer-to-peer paradigm. The solution focuses on three sub-problems: (1) preserving the user's ownership of user data, (2) providing an access control scheme which supports fine-grained access rights, and (3) secure key management. In our proposed system, the user profile is defined over a collection of small data artifacts. An artifact is the smallest logical entity of a profile. An artifact could be a user's status tweak, a text comment, photo album metadata, or multimedia content. These artifacts are then logically arranged to form a hierarchical tree, called the User Profile Hierarchy. The root of the profile hierarchy is the only entry point exposed by Safebook, from which the complete user profile can be traversed. The visibility of portions of the user profile can be defined by exposing a subset of the profile hierarchy. This requires limiting access to child artifacts by encrypting the connectivity information with specific access keys. Each artifact is associated with a dynamic access chain, which is an encrypted string containing the information regarding the child nodes. A dynamic access chain is generated using a stream cipher, where each child's unique identifier is encrypted with its specific access key and concatenated to form the dynamic access chain. The decryption process reveals only those child artifacts whose access keys are shared. The access keys are managed in a hierarchical manner over the profile hierarchy. Child artifacts inherit the parent's access key, or their access key can be overridden with a new key. In this way, fine-grained access rights can be achieved over a user's artifacts. Remote users can detect changes in a specific branch of a profile hierarchy and fetch new artifacts through our proposed profile hierarchy update service.
On top of the proposed access control scheme, any social networking abstraction (such as groups, circles, badges, etc.) can be easily implemented.
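The dynamic access chain described above can be sketched as follows. This is a minimal illustration of the mechanism, not Safebook's actual implementation: the identifier length, the hash-based keystream standing in for the stream cipher, and the `valid_ids` membership check standing in for proper integrity verification are all simplifying assumptions.

```python
import hashlib

ID_LEN = 8  # bytes per child identifier (assumed fixed-length IDs)

def keystream(key, n):
    """Derive an n-byte keystream from an access key (hash-based, for demo)."""
    return hashlib.sha256(key).digest()[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def build_chain(children):
    """children: list of (child_id: 8 bytes, access_key). Each child ID is
    encrypted under its own access key and the results are concatenated."""
    return b"".join(xor(cid, keystream(key, ID_LEN)) for cid, key in children)

def readable_children(chain, known_keys, valid_ids):
    """Try every held key on every slot; keep only IDs that verify."""
    out = []
    for i in range(0, len(chain), ID_LEN):
        slot = chain[i:i + ID_LEN]
        for key in known_keys:
            cid = xor(slot, keystream(key, ID_LEN))
            if cid in valid_ids:  # stand-in for an integrity check
                out.append(cid)
    return out

friends_key, public_key = b"friends-key", b"public-key"
kids = [(b"photo001", friends_key), (b"status01", public_key)]
chain = build_chain(kids)
ids = {b"photo001", b"status01"}
print(readable_children(chain, [public_key], ids))  # [b'status01']
```

A reader holding only the public key recovers only the public child's identifier; the friends-only slot decrypts to noise, so that branch of the hierarchy stays invisible, which is exactly the selective-visibility property the scheme targets.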

In this letter, both the number of participating nodes and the spatial dispersion are incorporated to establish a bi-objective optimization problem for maximizing the quality of aggregation under interference and delay constraints in tree-based wireless sensor networks (WSNs). The formulated problem is proved to be NP-hard with respect to weighted-sum scalarization, and a distributed heuristic aggregation scheduling algorithm, named SDMAX, is proposed. Simulation results show that SDMAX not only gives a close approximation of the Pareto-optimal solution, but also outperforms the best existing alternative proposed in the literature, to our knowledge.