Advanced search

Advanced search is divided into two main parts, each containing one or more groups. The main parts are the "Search for" (including) part and the "Remove from search" (excluding) part. (The excluding part might not be visible until you hit "NOT" for the first time.) You can add new groups to the including and the excluding part by using the "OR" and "NOT" buttons respectively, and you can add more search options to any group through the drop-down menu on its last row.

For a result to be included in the search result, it must match all parameters of at least one including group, and must not match all parameters of any excluding group. This system of two main parts and their groups makes it possible to combine two (or more) distinct searches into one search result, while staying flexible in removing results from the final list.
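As a sketch, the matching rule above can be expressed as a small boolean filter. The record fields and the example conditions below are made up for illustration; they are not actual search fields:

```python
def matches(result, including_groups, excluding_groups):
    """Keep a result if it satisfies every condition in at least one including
    group and does not satisfy every condition in any excluding group."""
    included = any(all(cond(result) for cond in group) for group in including_groups)
    excluded = any(all(cond(result) for cond in group) for group in excluding_groups)
    return included and not excluded

# Hypothetical query: (author contains "Kim" AND year >= 2015) OR (title contains "NFV"),
# NOT (language == "swedish")
including = [
    [lambda r: "Kim" in r["author"], lambda r: r["year"] >= 2015],
    [lambda r: "NFV" in r["title"]],
]
excluding = [
    [lambda r: r["language"] == "swedish"],
]
```

Each inner list is one group (conditions combined with AND); the groups in a part are combined with OR, and the excluding part then removes matches from the result.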

This paper addresses the problem of protecting cloud environments against targeted attacks, which have become a popular means of gaining access to organizations' confidential information and to the resources of cloud providers. In 2015 alone, eleven targeted attacks were discovered by Kaspersky Lab; one of them, Duqu 2.0, successfully attacked the Lab itself. In this context, security researchers show rising concern about protecting corporate networks and the cloud infrastructure used by large organizations against this type of attack. This article describes the possibility of applying a sandboxing method within a cloud environment to reinforce the security perimeter of the cloud.

Context. Network Function Virtualization (NFV) was recently proposed by the European Telecommunications Standards Institute (ETSI) to improve network service flexibility by virtualizing network services and applications that traditionally run on dedicated hardware. To virtualize network functions, the software is decoupled from the underlying physical hardware. NFV aims to transform industries by reducing capital investment in hardware through the use of commercial off-the-shelf (COTS) hardware, and it enables rapid innovation in telecom services through software-based service deployment.

Objectives. This thesis work aims to investigate how business organizations function, and the roles involved, when defining a service relationship model. The work also aims to define a service relationship model and to validate it via a proof of concept (PoC) using network function virtualization as a service. Finally, we apply lean principles to the defined service relationship model in order to reduce waste, and investigate how lean helps the model to be proven performance-service-oriented.

Methods. The essence of this work is to make a business organization lean by investigating its actions and applying lean principles. To elaborate, this thesis work involves a review of papers from IEEE, TMF, IETF and Ericsson. It results in the modelling of a PoC by following a requirement analysis methodology and by applying lean principles to eliminate unnecessary processes that do not add value.

Results. The results of the work include a full-fledged service relationship model that comprises three service levels, with roles that fit the requirement specifications of an NFV infrastructure. The results also show the functionalities of the service levels and the relationships between the roles. It has further been observed that the services that need to be standardized are defined with a syntax for describing network functions. Lean principles benefit the service relationship model by reducing waste factors, thereby providing a PoC that is performance-service-oriented.

Conclusions. We conclude that the defined roles fit the designed service relationship model. Moreover, we conclude that the model can sustain the flow of service by standardizing the sub-services and reducing the waste identified with lean principles, and that further use-case proof of the model in full-scale industry trials is needed. We also conclude that the ways of describing network function syntax should follow lean principles, which are essential for sub-service standardization. The PoC thus defined can serve as an assurance for the NFV infrastructure.

In recent years, there has been a significant growth in multimedia services such as mobile video streaming, Video-on-Demand and video conferencing. This has led to the development of various video coding techniques aiming to deliver high-quality video while using the available bandwidth efficiently. This upsurge in the usage of video applications has also made end-users more quality-conscious. In order to meet users' expectations, Quality of Experience (QoE) studies have gained utmost importance among both researchers and service providers. This thesis aims to compare the performance of the H.264/AVC, Xvid and WebM/VP8 video codecs in wired and wireless networks. The codec performance is evaluated for different packet loss and delay variation values, using both subjective and objective assessment methods. In the subjective assessment, video codec performance is evaluated using the ITU-T recommended Absolute Category Rating (ACR) method: perceptual video quality ratings are collected from the users and then averaged to obtain the Mean Opinion Score (MOS). These scores are used to analyze the performance of the encoded videos with respect to the users' perception. In addition, the quality of the encoded videos is measured with the objective metric SSIM (Structural Similarity). Based on the results, it was found that for lower packet loss and delay variation values H.264 showed better results than Xvid and WebM/VP8, whereas WebM/VP8 outperformed Xvid and H.264 for higher packet loss and delay variation values. On the whole, H.264 and WebM/VP8 performed better than Xvid. It was also found that all three video codecs performed better in the wired network than in the wireless network.
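The MOS mentioned above is simply the arithmetic mean of the ACR ratings collected for a clip. A minimal sketch, with made-up ratings:

```python
def mean_opinion_score(ratings):
    """Average per-user ACR ratings (5 = excellent ... 1 = bad) into a MOS."""
    if not ratings:
        raise ValueError("at least one rating is required")
    return sum(ratings) / len(ratings)

clip_ratings = [5, 4, 4, 3, 5, 4]  # hypothetical ratings for one test clip
mos = mean_opinion_score(clip_ratings)  # a value on the 1..5 scale
```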

Cloud computing stands as a revolution in the IT world in recent years. This technology facilitates resource sharing by reducing hardware costs for business users, and promises energy efficiency and better resource utilization to service providers. CPU utilization is a key metric considered in resource management across clouds.

The main goal of this thesis study is to investigate CPU utilization behavior with regard to host and guest, which would help us understand the relationship between them. An understanding of these relationships is expected to be helpful in resource management.

Working towards our goal, the methodology we adopted is experimental research. This involves experimental modeling, measurements and observations of the results. The experimental setup covers several complex scenarios, including a cloud and a standalone virtualization system. The results are further analyzed for a visual correlation.

Results show that CPU utilization in the cloud and virtualization scenarios coincides. More experimental scenarios were designed based on the first observations. The resulting data show irregular behavior between the PM and the VM under variable workload.

CPU utilization retrieved from the cloud and from a standalone system is similar. In 100% workload situations, CPU utilization is constant and no correlation coefficient can be obtained. Lower workloads showed (more/less) correlation in most of the cases in our correlation analysis. A larger number of iterations may possibly vary the output. Further analysis of these relationships for proper resource management techniques will be considered.
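The correlation analysis above can be illustrated with a plain Pearson coefficient over paired PM/VM utilization samples. Note that a constant series, as in the 100% workload case, has zero variance, so the coefficient is undefined; this sketch (sample values are hypothetical) returns None in that case:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equally long series, or None when
    either series is constant (zero variance), e.g. at 100% CPU load."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    if var_x == 0 or var_y == 0:
        return None
    return cov / (var_x * var_y) ** 0.5

pm_util = [35.0, 42.0, 50.0, 61.0]  # hypothetical physical-machine samples (%)
vm_util = [30.0, 40.0, 47.0, 58.0]  # hypothetical virtual-machine samples (%)
r = pearson(pm_util, vm_util)
```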

Each day, the dream of seamless networking and connectivity everywhere is getting closer to becoming a reality. In this regard, mobile ad-hoc networks (MANETs) have been a hot topic in the last decade, but MANET usage nowadays accounts for only a tiny percentage of all our everyday network connectivity, while connectivity through infrastructure networks holds the major share. On the other hand, since the future of networking is expected to belong to ad-hoc operation, for now we try to give our everyday infrastructure networks a taste of ad-hoc capability; such networks are called Wireless Mesh Networks (WMNs), and routing plays a vital role in their functionality. In this thesis we examine the functionality of three ad-hoc routing protocols, AODV, OLSR and GRP, through simulation in OPNET 17.5. For this goal we set up four different scenarios that vary in number of nodes, background traffic and node mobility. Performance is measured using network throughput, end-to-end delay of the transmitted packets and packet loss ratio as our metrics. After running the simulations and gathering the results, we study them comparatively, first per scenario and then per protocol. In conclusion, as former studies already suggest that AODV, OLSR and GRP are among the best routing protocols for WMNs, this research does not single out the best routing protocol based on the obtained results; instead, we discuss the network conditions under which each of these protocols performs best, and suggest the most suitable routing mechanism for different networks based on that analysis.

Storage vendors have their own standards for the management of their storage resources, which creates interoperability issues across storage products. With the recent advent of the new protocol named Storage Management Initiative-Specification (SMI-S), the Storage Networking Industry Association (SNIA) has taken a major step towards making storage management more effective and organized. SMI-S has replaced its predecessor, the Simple Network Management Protocol (SNMP), and has been categorized as an ISO standard. The main objective of SMI-S is to provide interoperable management of heterogeneous storage vendor systems by unifying Storage Area Network (SAN) management, hence making the dreams of network managers come true. SMI-S is a guide to building systems from modules that 'plug' together: SMI-S-compliant storage modules that use the CIM 'language' and adhere to the CIM schema interoperate in a system regardless of which vendor built them. SMI-S is object-oriented; any physical or abstract storage-related element can be defined as a CIM object. SMI-S can unify SAN management systems, works well in heterogeneous storage environments, and offers cross-platform, cross-vendor storage resource management. This thesis work discusses the use of SMI-S at Compuverde, a storage solution provider founded by Stefan Bernbo and located in Karlskrona, in the southeastern part of Sweden. Like other leading storage providers, Compuverde has decided to deploy SMI-S to manage their Storage Area Network (SAN) and to achieve interoperability. This work was done to help Compuverde deploy the SMI-S protocol for the management of the SAN which, among many of its features, would create alerts/traps in case of a disk failure in the SAN.
In this way, they would be able to keep their clients' data safe and secure, and keep their reputation for reliability in the storage industry. Since Compuverde regularly uses Microsoft Windows, and Microsoft has started to support SMI-S for storage provisioning in System Center 2012 Virtual Machine Manager (SCVMM), this work was done using SCVMM 2012 and Windows Server 2012. The SMI-S provider used for this work was a QNAP TS-469 Pro.

Cloud computing enables on-demand access to a shared pool of computing resources that can be easily provisioned, configured and released with minimal management cost and effort. OpenStack is an open-source cloud management platform aimed at providing private or public IaaS clouds on standard hardware. Since deploying OpenStack manually is tedious and time-consuming, there are several tools that automate the deployment of OpenStack. Usually, cloud admins choose a tool based on its level of automation, ease of use or interoperability with their existing tools. However, another desirable factor when choosing a deployment tool is its deployment speed. Cloud admins cannot select based on this factor, since there is no previous work comparing deployment tools on deployment time. This thesis aims to address this issue.

The main aim of the thesis is to evaluate the performance of OpenStack deployment tools with respect to operating system provisioning and OpenStack deployment time on physical servers. Furthermore, the effect on provisioning and deployment times of varying the number of nodes, the OpenStack architecture deployed, and the resources (cores and RAM) provided to the deployment node is also analyzed. The tools are additionally classified based on the stages of deployment and the method of deploying OpenStack services. In this thesis we evaluate the performance of MAAS, Foreman, Mirantis Fuel and Canonical Autopilot.

The performance of the tools is measured via an experimental research method. Operating system provisioning time and OpenStack deployment time are measured while varying the number of nodes/OpenStack architecture and the resources provided to the deployment node, i.e. cores and RAM.

Results show that the provisioning time of MAAS is lower than that of Mirantis Fuel, which in turn is lower than that of Foreman. Furthermore, for all three tools the provisioning time increases as the number of nodes increases, although the increase is smallest for MAAS. Similarly, results for bare-metal OpenStack deployment time show that Canonical Autopilot outperforms Mirantis Fuel by a significant margin for all OpenStack scenarios considered. Furthermore, as the number of nodes in an OpenStack scenario increases, the deployment time for both tools increases.

From the research, it is concluded that MAAS and Canonical Autopilot perform better as a provisioning tool and a bare-metal OpenStack deployment tool, respectively, than the other tools analyzed. Furthermore, from the analysis it can be concluded that an increase in the number of nodes/OpenStack architecture leads to an increase in both provisioning time and OpenStack deployment time for all the tools.

In this paper, we investigate session-related performance statistics of a Web-based Real-Time Communication (WebRTC) application called appear.in. We explore the characteristics of these statistics and how they may relate to users' Quality of Experience (QoE). More concretely, we have run a series of two-party tests according to different test scenarios, and collected real-time session statistics by means of Google Chrome's WebRTC-internals tool. Although the Chrome statistics have a number of limitations, our observations indicate that they are useful for QoE research when these limitations are known and carefully handled in post-processing analysis. The results from our initial tests show that a combination of performance indicators measured at the sender's and receiver's ends may help to identify severe video freezes (an important QoE killer) in the context of WebRTC-based video communication. The performance indicators used in this paper are significant drops in data rate, non-zero packet loss ratios, non-zero PLI values, and non-zero bucket delay.
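Combining the four indicators above into a freeze heuristic might look like the sketch below. The field names mimic values reported by Chrome's WebRTC-internals, but both the names and the 50% drop threshold are illustrative assumptions, not the exact rule used in the paper:

```python
def freeze_suspected(sample, baseline_bitrate):
    """Flag a per-interval stats sample as a possible severe freeze by
    combining the four indicators: a significant data-rate drop plus
    non-zero packet loss, PLI count and bucket delay."""
    significant_drop = sample["bitrate"] < 0.5 * baseline_bitrate  # assumed threshold
    return (significant_drop
            and sample["packet_loss_ratio"] > 0
            and sample["pli_count"] > 0
            and sample["bucket_delay_ms"] > 0)
```

In practice such a rule would be evaluated per reporting interval on both the sender's and the receiver's statistics.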

Over the past few years, Internet traffic has increased sharply, and most of it is video traffic. The latest Cisco forecast estimates that by 2017 online video will be a highly adopted service with a large customer base. As networks become increasingly ubiquitous, applications are turning equally intelligent. A typical video communication chain involves transmission of encoded raw video frames with subsequent decoding at the receiver side. One intelligent codec that is gaining large research attention is H.264/SVC, which can adapt dynamically to end-device configurations and network conditions. With such bandwidth-hungry video communications running over lossy mobile networks, it is extremely important to quantify end-user acceptability. This work primarily investigates problems at the player user interface level as compared to physical layer disturbances. We have chosen inter-frame time at the application layer to quantify the user experience (player UI) for varying lower-layer metrics such as noise and link power, with illustrative demonstrator cases. The results show that extreme noise and low link-level settings have an adverse effect on user experience in the temporal dimension: the videos are affected by frequent jumps and freezes.

There is a demand for video quality measurements in modern video applications, specifically in wireless and mobile communication. In real-time video streaming, the quality of the video often becomes low due to factors such as encoder and transmission errors. HEVC/H.265 is considered one of the most promising codecs for compression of ultra-high-definition videos. In this research, full-reference video quality assessment is performed. The raw-format reference videos were taken from the Texas database to build the test video data set. The videos were encoded in HEVC format using the HM9 reference software. Encoding errors were introduced during the encoding process by adjusting the QP values. To introduce packet loss into the videos, a real-time environment was created: videos were sent from one system to another over the UDP protocol using the NETCAT software, and packet loss with different loss ratios was induced using the NETEM software. After the compilation of the video data set, two kinds of analysis were performed to assess the video quality. Subjective analysis was carried out with different human subjects. Objective analysis was performed by applying five quality metrics: PSNR, SSIM, UIQI, VFI and VSNR. The objective measurement scores were compared with the subjective ones, and the final results were deduced using classical correlation methods.
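Of the objective metrics listed, PSNR is the simplest to state: it compares a distorted frame against the reference via the mean squared error. A minimal full-reference sketch over flattened pixel lists (frame handling is simplified for illustration):

```python
import math

def psnr(reference, distorted, max_value=255):
    """Peak signal-to-noise ratio in dB between two equally sized frames,
    given as flat lists of pixel values. Identical frames give infinity."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_value ** 2 / mse)
```

The other metrics (SSIM, UIQI, VIF-style and VSNR) are structurally more involved, but they are applied to the same reference/distorted frame pairs.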

Context: In the present world, there is an increase in the usage of communication services. The growth in usage and in services relying on the communication network has increased the energy consumption of all the resources involved, such as computers and other networking components. Energy consumption has become an important efficiency metric, so there is a need for efficient networking services in various fields, which can be obtained by using efficient networking components such as computers. For that purpose we have to know the energy usage behavior of each component. Similarly, as the use of large data centers grows, there is a huge requirement for computation resources. For efficient use of these resources we need a measurement of each component of the system and its contribution to the total power consumption of the system. This can be achieved by power profiling of different heterogeneous computers, for estimating and optimizing the usage of the resources.

Objectives: In this study, we investigate the power profiles of different heterogeneous computers at the level of each system component, using a predefined workload. The total power consumption of each system component is measured and evaluated using the Open Energy Monitor (OEM).
Methods: In order to perform the power profiling, an experimental test bed is implemented. Experiments with different workloads on each component are conducted on all the computers. The power of each system under test (SUT) is measured using the OEM, which is connected to each SUT.

Results: From the results obtained, the power profiles of the different SUTs are tabulated and analyzed. The power profiles are produced at component level under different workload scenarios for four different heterogeneous computers. From the results and analysis it can be stated that the power consumed by each component of a computer varies with its configuration. From the results we also evaluate the superposition property.
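The superposition check can be sketched as follows: the total power of a SUT is estimated as the idle power plus the per-component increments measured in isolation, and the estimate is then compared with a measured total. All numbers and component names below are illustrative, not measurements from the study:

```python
def superposition_estimate(idle_power_w, component_deltas_w):
    """Estimate total power (W) as idle power plus the sum of per-component
    power increments measured in isolation (the superposition assumption)."""
    return idle_power_w + sum(component_deltas_w.values())

# Hypothetical values in watts for one SUT
estimate = superposition_estimate(20.0, {"cpu": 15.0, "disk": 5.0, "nic": 2.0})
deviation = abs(estimate - 41.0)  # compare against a hypothetical measured total
```

If the deviation is small across workloads, the component-level profiles can be added to predict whole-system consumption.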

Context: There is continuous growth in data generation due to the wide usage of modern communication systems. Systems have to be designed that can handle the processing of these data volumes efficiently. Mediation systems serve this purpose, and databases form an integral part of them. The suitability of databases for such systems is the principal theme of this work.

Objectives: The objective of this thesis is to identify the key requirements for databases that can be used as part of mediation systems, to gain a thorough understanding of their various features and of the data models commonly used in databases, and to benchmark their performance.

Methods: Previous work on various databases is studied as part of a literature review. A test bed is set up as part of the experiment, and performance metrics such as throughput and total time taken are measured through a Java-based client. A thorough analysis has been carried out by varying parameters such as data volume and the number of threads in the client.

Results: Cassandra has very good write performance for event and batch operations. Cassandra also has slightly better read performance than MySQL Cluster, but this difference withers away when the client uses fewer threads.
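Throughput here means operations completed per unit of time, measured over a batch of client requests. The thesis client is Java-based; as a language-neutral illustration of the metric only, a minimal harness could look like this (the operation passed in is a stand-in for a database read or write):

```python
import time

def measure_throughput(operation, n_ops):
    """Run `operation` n_ops times and return (ops/second, total seconds).
    A stand-in for the benchmark loop; `operation` would be a DB call."""
    start = time.monotonic()
    for _ in range(n_ops):
        operation()
    total = max(time.monotonic() - start, 1e-9)  # guard against timer resolution
    return n_ops / total, total
```

Varying `n_ops`, the payload size, and the number of concurrent workers reproduces the parameter sweep described in the methods.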

Conclusions: On evaluation of MySQL Cluster and Cassandra, we conclude that both have several features that are suitable for mediation systems. On the other hand, Cassandra does not guarantee ACID transactions, while MySQL Cluster has good support for them. There is a need for further evaluation of new-generation databases, which are not yet mature.

In this paper, we study the influence of video stalling on QoE. We provide QoE models obtained in realistic scenarios on a smartphone, and propose energy-saving approaches for smartphones by leveraging the proposed QoE models in relation to energy. Results show that approximately 5 J is saved in a 3-minute video clip, with an acceptable Mean Opinion Score (MOS) level, when video frames are skipped. If video frames are not skipped, it is advisable to avoid freezes during a video stream, as freezes greatly increase the energy waste on smartphones.

The competence to manage risks related to health, security, fire and safety is a sought-after skill. This is especially noticeable in both business and public administration job postings for the recruitment of managers, administrators or coordinators to security departments. At the same time there is little specialist literature available in Swedish on the subject of risk management in the context of protecting assets and people from physical security threats. The lack of literature affects the study of risk management from a physical and procedural security perspective, particularly at an academic level where this is a relatively new topic. Moving forward and expanding the field of knowledge is an important step, not only for the scientific community but also for the industry. This bachelor thesis attempts to be an initial but significant contribution to a topic that is likely to grow. By mapping what has already been published on the subject in English, as well as summing up and analyzing the scientific knowledge from similar disciplines, the thesis has also had an additional goal: to reach out with knowledge to those dealing with risk management in practice, and thus raise their awareness and develop their professional skills. The purpose of this study is to present the current state of knowledge and at the same time to show the width and depth of the risk management process.
This is done by identifying similarities and differences in definitions, process descriptions, problems and best practice in the studied areas, while at the same time accounting for any criticism offered against risk management as a concept. The results show that there are more similarities than differences in the risk management process and methods, regardless of whether the purpose is to protect people and assets from health hazards, crime, fire or accidents. The paper has been conducted as a descriptive literature study and a comparative textual analysis. The risk management process has been described with reference to the generic ISO standard (31000:2009, Risk management - Principles and guidelines). Also, ten common risk analysis methods that cover all steps in the risk assessment process have been described. The narrative and related analysis follow the same order as the ISO-standard process description. The material has been supplemented and compared with guidelines and scientific papers from three types of risk management contexts: (1) health hazards, (2) fire and safety, and (3) security. The paper also provides examples of the inconsistent use of terms and definitions both between and within the different disciplines involved in risk management. One of the conclusions of the report is that creating a unified, universal terminology for the security context is probably impossible, as well as unnecessary. Instead, certain terminological misunderstandings can be avoided by providing clear definitions and explanations of their meaning in each particular case.

With the increase in computing power and advances in software engineering in past years, computer-based stochastic discrete-event simulation has become a very commonly used tool for evaluating the performance of various complex stochastic systems, such as telecommunication networks. It is used when analytical methods are too complex to solve, or cannot be applied at all. Stochastic simulation has also become a tool that researchers often use instead of experimentation, in order to save money and time. In this thesis, we focus on the statistical correctness of the final estimated results in the context of steady-state simulations performed for the mean analysis of performance measures of stable stochastic processes. Due to various approximations, the final experimental coverage can differ greatly from the assumed theoretical level, with the final confidence intervals covering the theoretical mean at a much lower frequency than expected from the preset theoretical confidence level. We present the results of coverage analysis for the methods of dynamic partially-overlapping batch means, spectral analysis, and mean-squared-error-optimal dynamic partially-overlapping batch means. The results show that the variants of dynamic partially-overlapping batch means that we propose as their modification under Akaroa2 perform acceptably well for the queueing processes, but perform very badly for the auto-regressive process. We compare the results of the modified mean-squared-error-optimal dynamic partially-overlapping batch means method to spectral analysis and show that the methods perform equally well.
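The notion of coverage used above (how often the produced confidence intervals actually contain the true mean) can be illustrated with a deliberately simplified experiment: i.i.d. exponential samples stand in for the correlated simulation output that the thesis analyzes, and the empirical coverage of nominal 95% intervals is counted. The distribution, sample sizes and seed are illustrative choices:

```python
import math
import random
import statistics

def coverage(true_mean, n_runs=2000, n_samples=50, z=1.96, seed=1):
    """Fraction of nominal 95% confidence intervals for the mean that actually
    cover the true mean, over repeated i.i.d. exponential experiments."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_runs):
        xs = [rng.expovariate(1.0 / true_mean) for _ in range(n_samples)]
        m = statistics.fmean(xs)
        half = z * statistics.stdev(xs) / math.sqrt(n_samples)
        if m - half <= true_mean <= m + half:
            hits += 1
    return hits / n_runs
```

With skewed or correlated data, the empirical coverage typically falls below the nominal 95%, which is exactly the discrepancy the coverage analysis in the thesis quantifies for batch-means and spectral methods.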

It has been realized that the success of multimedia services and applications relies on the analysis of the entire user experience (UX). The relevance of this paradigm ranges from Internet protocol television to video-on-demand systems for distributing and sharing professional television (TV) and user-generated content that is consumed and produced ubiquitously. To obtain a pleasurable user experience, a large number of aspects have to be taken into account. Major challenges in this context include the identification of relevant UX factors and the quantification of their influence on Quality of Experience (QoE). This special issue is dedicated to advances in tools, techniques and practices for multimedia QoE that tackle several of the aforementioned challenges.

This thesis is in the area of virtualization. We have studied how to improve load balancing in a data center by using automated live migration techniques. The main idea is to migrate virtual machine(s) automatically and efficiently from highly loaded hosts to less loaded hosts. A successful implementation can help data center administrators maintain a load-balanced environment with less effort than before. For example, such a system can automatically identify hotspots and coldspots in a large data center, and also decide which virtual machine to migrate and which host the machine should be migrated to. We have implemented the previously developed Push and Pull strategies on a real testbed for Xen and KVM. A new strategy, Hybrid, which is the combination of Push and Pull, has been created. All scripts applied in the experiments are Python-based for further integration into the orchestration framework OpenStack. By implementing the algorithms on a real testbed, we have solved a node failure problem in the algorithms which had not been detected previously through simulation. The results from the simulations and those from the testbed are similar. For example, the Push strategy responds quickly when the load is medium to high, while the Pull strategy responds quickly when the load is low to medium. The Hybrid strategy behaves like the Push strategy under high load and like the Pull strategy under low load, but with a greater number of migration attempts, and it responds quickly regardless of the load. The results also show that our strategies are able to handle different incidents such as bursts, drains, or fluctuations of load over time. The comparison of results from the different hypervisors, i.e. Xen and KVM, shows that both hypervisors behave in the same way when the same strategies are applied in the same environment, which means the strategies are valid for both of them. Xen seems to be faster in improving system performance: the migration attempts are similar, but KVM has far fewer migrations over time than Xen in the same scenario.
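The Hybrid strategy combines the two triggers: hot hosts push VMs away, cold hosts pull VMs in. A minimal sketch of that decision rule, with threshold values that are illustrative assumptions rather than the thesis's tuned parameters:

```python
def migration_action(host_load, low=0.3, high=0.8):
    """Hybrid strategy sketch: a host above `high` load pushes a VM to a less
    loaded host; a host below `low` pulls a VM from a loaded host; otherwise
    no migration is attempted. Thresholds are illustrative."""
    if host_load > high:
        return "push"   # hotspot: offload a VM
    if host_load < low:
        return "pull"   # coldspot: attract a VM
    return "none"
```

Pure Push corresponds to using only the first branch, pure Pull to using only the second, which matches the observation that Hybrid reacts quickly at both ends of the load range at the cost of more migration attempts.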

Context: Software testing is the process of assessing the quality of a software product to determine whether it matches the existing requirements of the customer. Software testing is one of the "Verification and Validation" (V&V) software practices. The two basic techniques of software testing are black-box testing and white-box testing. Black-box testing focuses solely on the outputs generated in response to the inputs supplied, neglecting the internal components of the software, whereas white-box testing focuses on the internal mechanism of the software. To explore the feasibility of black-box and white-box testing under a given set of conditions, a proper test automation framework needs to be deployed. Automation is deployed in order to reduce the manual effort and to perform testing continuously, thereby increasing the quality of the product.

Objectives: In this research, a cloud-hosted application is automated using the TestComplete tool. The objective of this thesis is to verify the functionality of the cloud application, known as the Test Data Library or Test Report Analyzer, through automation, and to measure the impact of the automation on the release cycles of the organization.

Methods: The automation is implemented using Scrum, an agile software development process. Using Scrum, working software can be delivered to customers incrementally and empirically, with its functionality updated along the way. The Test Data Library or Test Report Analyzer functionality of the cloud application is verified by deploying a testing device, after which the passed and failed test cases can be analyzed.

Results: The Test Report Analyzer functionality of the cloud-hosted application is automated using TestComplete, and the release cycles are shortened as a result. With automation, a change of nearly 24% in the release cycles can be observed, thereby reducing the manual effort and increasing the quality of delivery.

Conclusion: Automation of a cloud-hosted application eliminates manual effort, so that time can be utilized effectively and the application can be tested continuously, increasing its efficiency and quality.

Video streaming applications contribute a major share of Internet traffic. Consequently, monitoring and management of video streaming quality has gained significant importance in recent years. Disturbances in the video, such as the amount of buffering and bitrate adaptations, affect the user's Quality of Experience (QoE). Network operators usually monitor such events from network traffic with the help of Deep Packet Inspection (DPI). However, traffic encryption is making it increasingly difficult to monitor such events. To address this challenge, this thesis work makes two key contributions. First, it presents a test-bed which performs automated video streaming tests under controlled, time-varying network conditions and measures performance at the network and application levels. Second, it develops and evaluates machine learning models for the detection of video buffering and bitrate adaptation events, which rely on information extracted from packet headers. The findings of this work suggest that buffering and bitrate adaptation events within 60-second intervals can be detected using a Random Forest model with an accuracy of about 70%. Moreover, the results show that features based on time-varying patterns of downlink throughput and packet inter-arrival times play a distinctive role in the detection of such events.
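As an illustration of the kind of features mentioned above, the sketch below (a simplification, not the thesis's actual pipeline; all names are assumptions, only the 60-second interval comes from the text) computes per-window downlink throughput and packet inter-arrival statistics from packet-header records:

```python
# Illustrative feature extraction for ML-based detection of buffering /
# bitrate-adaptation events from packet headers (hypothetical sketch).
from statistics import mean, stdev

def window_features(packets, window=60.0):
    """packets: list of (timestamp_s, size_bytes) for downlink traffic.
    Returns one feature dict per `window`-second interval."""
    if not packets:
        return []
    packets = sorted(packets)
    start = packets[0][0]
    features, bucket = [], []
    for ts, size in packets:
        while ts - start >= window:          # close any completed windows
            features.append(_summarise(bucket, window))
            bucket = []
            start += window
        bucket.append((ts, size))
    features.append(_summarise(bucket, window))
    return features

def _summarise(bucket, window):
    times = [ts for ts, _ in bucket]
    sizes = [s for _, s in bucket]
    iats = [b - a for a, b in zip(times, times[1:])] or [0.0]
    return {
        "throughput_bps": 8 * sum(sizes) / window,       # downlink throughput
        "iat_mean": mean(iats),                           # inter-arrival mean
        "iat_std": stdev(iats) if len(iats) > 1 else 0.0, # inter-arrival spread
    }
```

Feature vectors of this shape could then be fed to a classifier such as a Random Forest, as the thesis describes.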

Context The usage of telecommunication networking services is increasing, and efficient networking services in various fields depend on efficient networking components. To choose such components, their parameters must be known; one of the most important parameters is the energy usage of the networking components. Therefore, there is a need for power profiling of network switches.

Objectives The objective of this research is to profile the power usage of different network components (switches) under various load scenarios. Power measurements are taken using the open energy monitoring tool emonpi.

Methods The research is carried out using an experimental test bed. Experiments with different configurations are conducted to obtain different load conditions between sources and destinations, with the traffic passing through the DUT (Device Under Test). The power usage of each DUT is measured with the monitoring tool emonpi. The experiments are then repeated for different load scenarios and different switches, and the results are discussed.

Conclusion From the results obtained, the power profiles of the different DUTs are tabulated and analyzed. This was done under different numbers of active ports and load scenarios for the Cisco 2950, Cisco 3560, and Netgear GS-724T. The results and analysis show that the Cisco 2950 has the highest power usage in all considered scenarios, with respect to both packet rate and number of active ports. The Netgear GS-724T has the lowest power usage of the three switches in all scenarios, owing to its green-switch characteristics. The Cisco 3560 lies between the other two, as it features Cisco's energy-efficient management. From these observations we propose a simple model for energy/power measurement.
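The proposed simple model is not given in closed form here; as a hedged illustration, a linear power model of the form P = P_idle + a·(active ports) + b·(packet rate) might look as follows (all coefficients are made-up placeholders, not measured values from the experiments):

```python
# Hypothetical linear switch power model (illustrative coefficients only;
# real values would be fitted from the emonpi measurements).
def switch_power(active_ports, packet_rate_pps,
                 p_idle=60.0, per_port_w=0.4, per_kpps_w=0.05):
    """Estimated power draw in watts for a given load scenario."""
    return (p_idle
            + per_port_w * active_ports              # cost of each active port
            + per_kpps_w * (packet_rate_pps / 1000)) # cost of traffic load
```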

Due to the rapid development of wireless communications together with the inflexibility of the current spectrum allocation policy, radio spectrum is becoming more and more exhausted. One of the critical challenges of wireless communication systems is to efficiently utilize the limited frequency resources in order to support the growing demand for high data rate wireless services. As a promising solution, cognitive radios have been suggested to deal with the scarcity and under-utilization of radio spectrum. The basic idea behind cognitive radios is to allow unlicensed users, also called secondary users (SUs), to access the licensed spectrum of primary users (PUs), which improves spectrum utilization. In order not to degrade the performance of the primary networks, SUs have to deploy interference control, interference mitigation, or interference avoidance techniques to minimize the interference incurred at the PUs. Cognitive radio networks (CRNs) have stimulated a variety of studies on improving spectrum utilization. In this context, this thesis has two main objectives. Firstly, it investigates the performance of single hop CRNs with spectrum sharing and opportunistic spectrum access. Secondly, the thesis analyzes the performance improvements of two hop cognitive radio networks when incorporating advanced radio transmission techniques. The thesis is divided into three parts consisting of an introduction part and two research parts based on peer-reviewed publications. Fundamental background on radio propagation channels, cognitive radios, and advanced radio transmission techniques is discussed in the introduction. In the first research part, the performance of single hop CRNs is analyzed. Specifically, underlay spectrum access using M/G/1/K queueing approaches is presented in Part I-A while dynamic spectrum access with prioritized traffic is studied in Part I-B.
In the second research part, the performance benefits of integrating advanced radio transmission techniques into cognitive cooperative radio networks (CCRNs) are investigated. In particular, opportunistic spectrum access for amplify-and-forward CCRNs is presented in Part II-A where collaborative spectrum sensing is deployed among the SUs to enhance the accuracy of spectrum sensing. In Part II-B, the effect of channel estimation error and feedback delay on the outage probability and symbol error rate (SER) of multiple-input multiple-output CCRNs is investigated. In Part II-C, adaptive modulation and coding is employed for decode-and-forward CCRNs to improve the spectrum efficiency and to avoid buffer overflow at the relay. Finally, a hybrid interweave-underlay spectrum access scheme for a CCRN is proposed in Part II-D. In this work, the dynamic spectrum access of the PUs and SUs is modeled as a Markov chain which then is utilized to evaluate the outage probability, SER, and outage capacity of the CCRN.

This paper studies the performance of adaptive modulation and coding in a cognitive incremental decode-and-forward relaying network where a secondary source can communicate with a secondary destination either directly or via an intermediate relay. To maximize transmission efficiency, a policy which flexibly switches between relaying and direct transmission is proposed. In particular, the transmission which gives the higher average transmission efficiency is selected for communication. Specifically, the direct transmission is chosen if its instantaneous signal-to-noise ratio (SNR) is higher than one half of that of the relaying transmission. In this case, the appropriate modulation and coding scheme (MCS) of the direct transmission is selected based only on its instantaneous SNR. In the relaying transmission, since the MCSs of the transmissions from the source to the relay and from the relay to the destination are selected independently of each other, buffering of packets at the relay is necessary. To avoid buffer overflow at the relay, the MCS for the relaying transmission is selected by considering both the queue state and the respective instantaneous SNR. Finally, a finite-state Markov chain is modeled to analyze key performance indicators such as the outage probability and average transmission efficiency of the cognitive relay network.
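The switching rule and the SNR-based MCS selection described above can be sketched as follows (a minimal illustration: only the one-half SNR rule comes from the text, while the MCS table and its threshold values are invented for the example):

```python
# Sketch of the described mode-switching rule: choose direct transmission
# when its instantaneous SNR exceeds half of the relaying link's SNR
# (the factor 1/2 reflects that relaying occupies two time slots).
def choose_mode(snr_direct, snr_relay):
    return "direct" if snr_direct > 0.5 * snr_relay else "relay"

# Hypothetical MCS thresholds in dB (illustrative values only): pick the
# highest-rate scheme whose SNR requirement is met.
MCS_TABLE = [
    (18.0, "64QAM-3/4"),
    (12.0, "16QAM-1/2"),
    (5.0, "QPSK-1/2"),
    (0.0, "BPSK-1/2"),
]

def select_mcs(snr_db):
    for threshold, scheme in MCS_TABLE:
        if snr_db >= threshold:
            return scheme
    return None  # SNR too low for any scheme: declare outage
```

In the paper's relaying mode, the selection would additionally take the relay's queue state into account, which this sketch omits.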

This paper investigates the system performance of a cognitive relay network with underlay spectrum sharing wherein the relay is exploited to assist both the primary and secondary transmitters in forwarding their signals to the respective destinations. To exploit spatial diversity, beamforming transmission is implemented at the transceivers of the primary and secondary networks. Particularly, exact expressions for the outage probability and symbol error rate (SER) of the primary transmission and tight bounded expressions for the outage probability and SER of the secondary transmission are derived. Furthermore, an asymptotic analysis for the primary network, which is utilized to investigate the diversity and coding gain of the network, is developed. Finally, numerical results are presented to show the benefits of the proposed system.

In this study, the authors analyse the average end-to-end packet delay for a cognitive ad hoc network where multiple secondary nodes randomly contend for access to the licensed bands of primary users in non-slotted time mode. Before accessing the licensed bands, each node must perform spectrum sensing and collaboratively exchange the sensing results with the other nodes of the corresponding communication as a means of improving the accuracy of spectrum sensing. Furthermore, the medium access control with collision avoidance mechanism, based on the distributed coordination function specified by IEEE 802.11, is applied to coordinate spectrum access for this cognitive ad hoc network. To evaluate the system performance, the authors model the considered network as an open G/G/1 queuing network and utilise the method of diffusion approximation to analyse the end-to-end packet delay. The authors' analysis takes into account not only the number of secondary nodes, the arrival rate of primary users, and the arrival rate of secondary users but also the effect of the number of licensed bands when assessing the average end-to-end packet delay of the network.

We develop a dynamic spectrum access (DSA) strategy for cognitive radio networks where prioritized traffic is considered. We assume three classes of traffic: one traffic class of the primary user and two traffic classes of the secondary users, namely Class 1 and Class 2. The traffic of the primary user has the highest priority, i.e., the primary users can access the spectrum at any time with the largest bandwidth demand. Furthermore, Class 1 has higher access and handoff priority as well as a larger bandwidth demand compared to Class 2. To evaluate the performance of the proposed DSA, we model the state transitions of the DSA as a multi-dimensional Markov chain with three state variables which represent the number of packets in the system from the primary users, the secondary Class 1, and the secondary Class 2. In particular, the blocking probability and dropping probability of the two secondary traffic classes are assessed.

In this paper, we study a hybrid interweave-underlay spectrum access system that integrates amplify-and-forward relaying. In hybrid spectrum access, the secondary users flexibly switch between interweave and underlay schemes based on the state of the primary users. A continuous-time Markov chain is proposed to model and analyze the spectrum access mechanism of this hybrid cognitive cooperative radio network (CCRN). Utilizing the proposed Markov model, steady-state probabilities of spectrum access for the hybrid CCRN are derived. Furthermore, we assess performance in terms of outage probability, symbol error rate (SER), and outage capacity of this CCRN for Nakagami-m fading with integer values of fading severity parameter m. Numerical results are provided showing the effect of network parameters on the secondary network performance such as the primary arrival rate, the distances from the secondary transmitters to the primary receiver, the interference power threshold of the primary receiver in underlay mode, and the average transmit signal-to-noise ratio of the secondary network in interweave mode. To show the performance improvement of the CCRN, comparisons for outage probability, SER, and capacity between the conventional underlay scheme and the hybrid scheme are presented. The numerical results show that the hybrid approach outperforms the conventional underlay spectrum access.
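As a minimal sketch of the Markov-chain idea (a deliberate two-state simplification of the paper's model, ignoring imperfect sensing): if the primary user alternates between idle and active with exponential holding times, the steady-state probabilities of each state, and hence of the secondary network operating in interweave or underlay mode, follow directly from the balance equation of the chain:

```python
# Two-state continuous-time Markov chain for the primary user (PU):
# idle --lam--> active, active --mu--> idle. Solving pi * Q = 0 with
# pi_idle + pi_active = 1 gives the closed-form steady state below.
def pu_steady_state(lam, mu):
    p_active = lam / (lam + mu)  # PU busy: secondary network uses underlay
    p_idle = mu / (lam + mu)     # PU idle: secondary network uses interweave
    return p_idle, p_active
```

The paper's actual chain has more states (and accounts for sensing errors), but the same steady-state machinery applies.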

In this paper, we study the performance of multiple-input multiple-output cognitive amplify-and-forward relay networks using orthogonal space–time block coding over independent Nakagami-m fading. It is assumed that both the direct transmission and the relaying transmission from the secondary transmitter to the secondary receiver are applicable. In order to process the received signals from these links, selection combining is adopted at the secondary receiver. To evaluate the system performance, an expression for the outage probability valid for an arbitrary number of transceiver antennas is presented. We also derive a tight approximation for the symbol error rate to quantify the error probability. In addition, the asymptotic performance in the high signal-to-noise ratio regime is investigated to render insights into the diversity behavior of the considered networks. To reveal the effect of network parameters on the system performance in terms of outage probability and symbol error rate, selected numerical results are presented. In particular, these results show that the performance of the system is enhanced when increasing the number of antennas at the transceivers of the secondary network. However, increasing the number of antennas at the primary receiver leads to a degradation in the secondary system performance.

In this paper, we propose a strategy to coordinate the dynamic spectrum access (DSA) of different types of traffic. It is assumed that the DSA assigns spectrum bands to three kinds of prioritized traffic: the traffic of the primary network, and the Class 1 and Class 2 traffic of the secondary network. Possessing the licensed spectrum, the primary traffic has the highest access priority and can access the spectrum bands at any time. The secondary Class 1 traffic has higher priority compared to the secondary Class 2 traffic. In this system, a channel reservation scheme is deployed to control spectrum access of the traffic. Specifically, the optimal number of reservation channels is applied to minimize the forced termination probability of the secondary traffic while satisfying a predefined blocking probability of the primary network. To investigate the system performance, we model the state transitions of the DSA as a multi-dimensional Markov chain with three state variables representing the number of primary, Class 1, and Class 2 packets in the system. Based on this chain, important performance measures, i.e., the blocking probability and forced termination probability, are derived for the Class 1 and Class 2 secondary traffic.

This paper proposes a novel hybrid interweave-underlay spectrum access for a cognitive amplify-and-forward relay network where the relay forwards the signals of both the primary and secondary networks. In particular, the secondary network (SN) opportunistically operates in interweave spectrum access mode when the primary network (PN) is sensed to be inactive and switches to underlay spectrum access mode if the SN detects that the PN is active. A continuous-time Markov chain approach is utilized to model the state transitions of the system. This enables us to obtain the probability of each state in the Markov chain. Based on these probabilities and taking into account the impact of imperfect spectrum sensing of the SN, the probability of each operation mode of the hybrid scheme is obtained. To assess the performance of the PN and SN, we derive analytical expressions for the outage probability, outage capacity, and symbol error rate over Nakagami-m fading channels. Furthermore, we present comparisons between the performance of underlay cognitive cooperative radio networks (CCRNs) and the performance of the considered hybrid interweave-underlay CCRN in order to reveal the advantages of the proposed hybrid spectrum access scheme. Eventually, with the assistance of the secondary relay, performance improvements for the PN are illustrated by means of selected numerical results.

In this paper, we study the secrecy capacity of an underlay cooperative cognitive radio network (CCRN) where multiple relays are deployed to assist the secondary transmission. An optimal power allocation algorithm is proposed for the secondary transmitter and secondary relays to obtain the maximum secrecy capacity while satisfying the interference power constraint at the primary receiver and the transmit power budget of the CCRN. Since the optimization problem for the secrecy capacity is non-convex, we utilize an approximation and fitting method to convert it into a geometric programming problem which is then solved by applying the logarithmic barrier function. Numerical results are provided to study the effect of network parameters on the secrecy capacity. Through the numerical results, the advantage of the proposed power allocation algorithm over equal power allocation can also be observed.
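For orientation, the underlying notion of secrecy capacity (the standard textbook definition for a wiretap setting, not the paper's optimization problem) can be computed as:

```python
import math

# Secrecy capacity of a single link: the rate advantage of the
# legitimate channel over the eavesdropper's channel, floored at zero.
def secrecy_capacity(snr_main, snr_eve):
    """Cs = max(0, log2(1 + snr_main) - log2(1 + snr_eve)) in bit/s/Hz."""
    return max(0.0, math.log2(1 + snr_main) - math.log2(1 + snr_eve))
```

The paper's power allocation then distributes transmit power across the relays to maximize this quantity subject to the interference and budget constraints.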

In this paper, we investigate the performance of cognitive multiple decode-and-forward relay networks under the interference power constraint of the primary receiver, wherein the cognitive downlink channel is shared among multiple secondary relays and secondary receivers. In particular, only the relay and secondary receiver pair which offers the highest instantaneous signal-to-noise ratio is scheduled to transmit. Accordingly, only the transmission route that offers the best end-to-end quality is selected for communication at a particular time instant. To quantify the system performance, we derive expressions for the outage probability and symbol error rate over Nakagami-m fading with integer values of the fading severity parameter m. Finally, numerical examples are provided to illustrate the effect of system parameters, such as the fading conditions and the number of secondary relays and secondary receivers, on the secondary system performance.

The ever-increasing demand for high-quality services requires a good quantification of performance parameters such as delay and jitter. Consider one such parameter, jitter: the difference between the inter-arrival time of two subsequent packets and the average inter-arrival time. The arrival or departure time of a packet is termed its timestamp. The accuracy of the timestamp influences any performance metric based on the arrival/departure time of a packet. Hence, awareness of time-stamping accuracy is important for performance evaluation. This study investigates how the time-stamping process is affected by virtualization.
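The jitter definition above can be sketched directly (a minimal illustration over a list of packet timestamps):

```python
from statistics import mean

# Jitter per packet pair: deviation of each inter-arrival time
# from the average inter-arrival time of the trace.
def jitter(timestamps):
    iats = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(iats)
    return [iat - avg for iat in iats]
```

Any error in the timestamps propagates straight into these inter-arrival differences, which is why time-stamping accuracy matters for such metrics.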

In recent years, there has been a significant increase in the demand for streaming of high-quality videos on smart mobile phones. To meet user quality requirements, it is important to maintain end-user quality while taking resource consumption into consideration. This demand has caught the attention of research communities and network providers, who now prioritize Quality of Experience (QoE) in addition to Quality of Service (QoS). To meet users' expectations, QoE studies have gained utmost importance, creating the challenge of evaluating QoE in a way that takes quality, cost, and energy consumption into account. This gave rise to the concept of QoE-aware sustainable throughput, which denotes the maximal throughput at which QoE problems can still be kept at a desired level.

The aim of the thesis is to determine the sustainable throughput values from the QoE perspective. The values are observed for different delay and packet loss values in wireless and mobile scenarios. The evaluation is done using the subjective video quality assessment method.

In the subjective assessment method, the evaluation is done using the ITU-T recommended Absolute Category Rating (ACR). Video quality ratings are collected from the users and averaged to obtain the Mean Opinion Score (MOS). The obtained scores are then analyzed to determine the sustainable throughput values from the users' perspective.
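The MOS computation described above is a plain average over ACR ratings; a minimal sketch:

```python
from statistics import mean

# MOS from ACR ratings: each subject rates a clip on the ITU-T
# 5-point scale (1 = bad ... 5 = excellent); MOS is the arithmetic mean.
def mos(ratings):
    assert all(1 <= r <= 5 for r in ratings), "ACR ratings are 1..5"
    return mean(ratings)
```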

From the results it is determined that, for all the video test cases, the videos are rated as having better quality at low packet loss and low delay values. Video quality in the presence of delay is rated higher than in the case of packet loss. It was also observed that high-resolution videos are fragile in the presence of larger disturbances, i.e., high packet loss and larger delays. Considering all cases, the QoE disturbance due to delivery issues is at an acceptable minimum for the 360p video. Hence, the 480x360 video is the threshold for sustaining video quality.

This work focuses on improving video transmission quality over a mobile link. More specifically, the impact of buffering and link outages on the freeze probability of transmitted videos is studied. It introduces a new fluid flow model that provides an approximation of the freeze probability in the presence of play-out hysteresis. The proposed model is used to study the impact of two streaming buffer sizes over different possible combinations of outage parameters (data channel on/off times). The outcome of this thesis shows that outage parameters play a dominant role in freezing of streaming video content, and that an increase in these parameters cannot be easily compensated for by an increase in the size of the receiving buffer. Generally, in most cases when there is a variation in outage parameters, an increased buffer size has a negative impact on the freeze probability. To lower the probability of freeze during video playback over a weak mobile link, it is better to sacrifice resolution just to keep the video content playing. Similarly, shifting focus from off to on times brings better results than increasing buffer size.
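A toy discrete-time version of such a setting (assumptions: deterministic periodic on/off link and constant fill and play-out rates; this is not the thesis's fluid flow model) can illustrate how the outage parameters dominate the freeze probability:

```python
# Toy simulation: estimate the fraction of time the player is frozen.
# The link alternates on/off periodically; playback resumes only after
# the buffer refills past a play-out hysteresis threshold.
def freeze_fraction(on_s, off_s, fill_rate, play_rate, hysteresis,
                    horizon_s=2000.0, dt=0.01):
    buf, t, frozen_time = 0.0, 0.0, 0.0
    playing = False
    period = on_s + off_s
    while t < horizon_s:
        if (t % period) < on_s:                # link is in an on-phase
            buf += fill_rate * dt
        if playing:
            buf -= play_rate * dt
            if buf <= 0.0:
                buf, playing = 0.0, False      # buffer ran dry: freeze
        elif buf >= hysteresis:                # resume past the threshold
            playing = True
        if not playing:
            frozen_time += dt
        t += dt
    return frozen_time / horizon_s
```

With a fill rate of 2 during 2 s on-phases and a play-out rate of 1, lengthening the off-phase from 3 s to 5 s lowers the long-run supply below demand by a larger margin, and the simulated freeze fraction rises accordingly.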

In this paper, we investigate the asymmetric property of multiple-input multiple-output (MIMO) dual-hop amplify-and-forward (AF) relay networks. We consider the difference of the two hops in terms of both fading channels and scattering environment. In particular, we analyze the symbol error probability (SEP) of a MIMO orthogonal space-time block code (OSTBC) AF relay network in which the first and second hop undergo Rayleigh fading with a rich-scattering environment and Nakagami-m fading with a poor-scattering environment, respectively. Moreover, an asymptotic SEP expression yielding insights on the diversity gain is also obtained.

Triangulation is one of the most commonly used techniques in three-dimensional measurement. Depth reconstruction accuracy is directly impacted by the quantization process, which is related to the number of pixels of the sensor. The dithering technique (DT) benefits from relative fine movement between object and sensor to reduce the quantization error. This paper describes the theory of DT and establishes a mathematical model. To evaluate whether DT can improve accuracy in the real world, a control rig was designed and built, after which experiments were performed. The results show that considerable improvement in measurement accuracy can be achieved.
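The core dithering idea can be illustrated with a toy simulation (an assumption-laden sketch, not the paper's mathematical model): averaging measurements taken at known offsets smaller than one quantization step recovers sub-step resolution and reduces the quantization error.

```python
import random

# Plain quantizer with step size `step` (one "pixel" of resolution).
def quantize(x, step=1.0):
    return round(x / step) * step

# Dithered estimate: measure at n known sub-step shifts, undo each
# shift, and average. The residual error shrinks roughly as step/n.
def dithered_estimate(x, n=16, step=1.0):
    shifts = [k * step / n for k in range(n)]
    return sum(quantize(x + d, step) - d for d in shifts) / n

random.seed(0)
samples = [random.uniform(0, 10) for _ in range(1000)]
rmse = lambda errs: (sum(e * e for e in errs) / len(errs)) ** 0.5
plain_rmse = rmse([quantize(x) - x for x in samples])
dither_rmse = rmse([dithered_estimate(x) - x for x in samples])
```

In the paper's setting the shifts come from the controlled relative movement between object and sensor rather than from added offsets, but the error-averaging mechanism is the same.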

To enhance the quality of healthcare in the management of chronic disease, telecare medical information systems have increasingly been used. Very recently, Zhang and Qi (J. Med. Syst. 38(5):47, 32) and Zhao (J. Med. Syst. 38(5):46, 33) separately proposed two authentication schemes for telecare medical information systems using radio frequency identification (RFID) technology. They claimed that their protocols achieve all security requirements, including forward secrecy. However, this paper demonstrates that neither Zhang and Qi's scheme nor Zhao's scheme provides forward secrecy. To augment the security, we propose an efficient RFID authentication scheme using elliptic curves for healthcare environments. The proposed RFID scheme is secure under the common random oracle model.

This paper investigates the potential of improving the Quality of Experience (QoE) of pervasive video delivery by buffering. It reveals the importance of delivering video without freezes to the end user. It then introduces a fluid flow model that leads to a closed formula for the freeze probability, and matches measured distributions of on- and off-times to this model's exponential distributions in order to underline its feasibility. The closed formula for the freeze probability is then used to investigate the impact of two buffer size adaptation policies, one additive and one multiplicative, with regard to the deviation of the average off-time from its nominal value. It is proven that, apart from some specific conditions in the additive case, the freeze probability grows with the disturbance, which means that even an increased buffer size implies worse performance and thus QoE. It also points out a way out of this dilemma: reducing off-times by reducing the utilization of the mobile link.

This work motivates and details the concept of QoE-aware sustainable throughput in the area of video streaming. Sustainable throughput serves as a means to compare video streaming solutions in terms of Quality of Experience (QoE) and energy efficiency (EE). It builds upon the QoE Provisioning-Delivery Hysteresis (PDH) and denotes the maximal throughput at which QoE deteriorations can be kept below a quantifiable level, which in turn allows comparing the EE of different video streaming solutions on QoE-fair grounds. In this work, we particularly focus on delivery problems stemming from outage-prone links, as they are typical for mobile systems. Well adapted to the nature of the video-associated data streams and disturbances, a stochastic fluid flow model is used that allows for straightforward calculation of sustainable throughput values. We also discuss the application of sustainable throughput for comparisons among different streaming solutions and their offered QoE and EE, respectively.

We present two strategies to balance the load in a system with multiple virtual machines (VMs) through automated live migration. When the push strategy is used, overloaded hosts try to migrate workload to less loaded nodes. On the other hand, when the pull strategy is employed, the light-loaded hosts take the initiative to offload overloaded nodes. The performance of the proposed strategies was evaluated through simulations. We have discovered that the strategies complement each other, in the sense that each strategy comes out as “best” under different types of workload. For example, the pull strategy is able to quickly re-distribute the load of the system when the load is in the range low-to-medium, while the push strategy is faster when the load is medium-to-high. Our evaluation shows that when adding or removing a large number of virtual machines in the system, the “best” strategy can re-balance the system in 4–15 minutes.
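The two strategies can be sketched as follows (a deliberate simplification: "load" is modeled as a list of per-VM loads per host, one migration per step, and the thresholds are illustrative; the actual strategies were evaluated through simulation):

```python
# Push: an overloaded host initiates migration of one VM to the
# least-loaded host. Returns True if a migration happened.
def push_step(hosts, high=0.8):
    loads = [sum(h) for h in hosts]
    src = max(range(len(hosts)), key=lambda i: loads[i])
    if loads[src] <= high or not hosts[src]:
        return False
    dst = min(range(len(hosts)), key=lambda i: loads[i])
    vm = min(hosts[src])              # migrate the smallest VM
    hosts[src].remove(vm)
    hosts[dst].append(vm)
    return True

# Pull: a light-loaded host takes the initiative and offloads the
# most-loaded host by pulling one VM from it.
def pull_step(hosts, low=0.2):
    loads = [sum(h) for h in hosts]
    dst = min(range(len(hosts)), key=lambda i: loads[i])
    if loads[dst] >= low:
        return False
    src = max(range(len(hosts)), key=lambda i: loads[i])
    if not hosts[src]:
        return False
    vm = min(hosts[src])
    hosts[src].remove(vm)
    hosts[dst].append(vm)
    return True
```

The complementary behavior reported above corresponds to which side triggers first: at low-to-medium load the `low` threshold fires (pull), while at medium-to-high load the `high` threshold fires (push).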

Quality requirements, an important class of non-functional requirements, are inherently difficult to elicit. Particularly challenging is the definition of good-enough quality. The problem cannot be avoided, though, because hitting the right quality level is critical. Too little quality leads to churn for the software product. Excessive quality generates unnecessary cost and drains the resources of the operating platform. To address this problem, we propose to elicit the specific relationships between software quality levels and their impacts for given quality attributes and stakeholders. An understanding of each such relationship can then be used to specify the right level of quality by deciding about acceptable impacts. The quality-impact relationships can be used to design and dimension a software system appropriately and, in a second step, to develop service level agreements that allow re-use of the obtained knowledge of good-enough quality. This paper describes an approach to elicit such quality-impact relationships and to use them for specifying quality requirements. The approach has been applied with user representatives in requirements workshops and used for determining Quality of Service (QoS) requirements based on the involved users' Quality of Experience (QoE). The paper describes the approach in detail and reports early experiences from applying it. Index Terms: requirements elicitation, quality attributes, non-functional requirements, quality of experience (QoE), quality of service (QoS).

To create value with a software ecosystem (SECO), a platform owner has to ensure that the SECO is healthy and sustainable. Key Performance Indicators (KPI) are used to assess whether and how well such objectives are met and what the platform owner can do to improve. This paper gives an overview of existing research on KPI-based SECO assessment using a systematic mapping of research publications. The study identified 34 relevant publications for which KPI research and KPI practice were extracted and mapped. It describes the strengths and gaps of the research published so far, and describes which KPI are measured, analyzed, and used for decision-making from the researcher's point of view. For the researcher, the maps thus capture the state of knowledge and can be used to plan further research. For practitioners, the generated map points to studies that describe how to use KPI for managing a SECO.

Shared understanding of requirements between stakeholders and the development team is a critical success factor for requirements engineering. Workshops are an effective means for achieving such shared understanding, since stakeholders and team representatives can meet and discuss what a planned software system should be and how it should support achieving stakeholder goals. However, some important intended recipients of the requirements are often not present in such workshops: the developers. Thus, they cannot benefit from the in-depth understanding of the requirements and of the rationales for these requirements that develops during the workshops. The simple handover of a requirements specification hardly compensates for the rich requirements understanding that is needed for the development of an acceptable system. To compensate for the lack of presence in a requirements workshop, we propose to record the requirements workshop on video. If workshop participants agree to be recorded, a video is relatively simple to create and is able to capture many more aspects of requirements and rationales than a specification document. This paper presents the workshop video technique and a phenomenological evaluation of its use for requirements communication from the perspective of software developers. The results show how the technique was appreciated by observers of the video, present positive and negative feedback from the observers, and lead to recommendations for implementing the technique in practice.

With an upsurge in the number of available smartphones, tablet PCs, etc., most users find it easy to access Internet services using mobile applications. It has been a challenging task for mobile application developers to choose suitable security types (types of authentication, authorization, security protocols, cryptographic algorithms, etc.) for mobile applications. Choosing an inappropriate security type for a mobile application may lead to performance degradation and vulnerabilities in the application. The choice of the security type can be made by decision making, which is a challenging task for humans: when choosing a single alternative among a set of alternatives with multiple criteria, it is hard to know which one is the better decision. Mobile application developers need to incorporate Multi-Criteria Decision Making (MCDM) models to choose a suitable security type for a mobile application. A decision model for application security enhances decision making for mobile application developers when deciding on and setting the required security types for the application. In this thesis, we discuss different types of MCDM models that have been applied in the IT security area and the scope of applying MCDM models in the application security area. A literature review and evaluation of the selected decision models give a detailed overview of how to use them to provide application security.