The technical contents of this paper fall within the field of statistical disclosure control (SDC), which concerns the postprocessing of the demographic portion of the statistical results of surveys containing sensitive personal information, in order to effectively safeguard the anonymity of the participating respondents. The concrete purpose of this study is to improve the efficiency of a widely used algorithm for k-anonymous microaggregation, known as maximum distance to average vector (MDAV), to vastly accelerate its execution without affecting its excellent functional performance with respect to competing methods. The improvements put forth in this paper encompass algebraic modifications and the use of the basic linear algebra subprograms (BLAS) library, for the efficient parallel computation of MDAV on CPU.
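As an illustration of the kind of BLAS-friendly restructuring described above, the distance computations at the core of MDAV can be rewritten so that the dominant cost is a single matrix-vector product. The sketch below is a minimal example of this algebraic idea, not the paper's actual implementation: it expands the squared Euclidean distance ||x - c||^2 into ||x||^2 + ||c||^2 - 2 x.c, which NumPy dispatches to BLAS:

```python
import numpy as np

def sq_distances_to_centroid(X, c):
    """Squared Euclidean distances from every row of X to the vector c,
    computed as ||x||^2 + ||c||^2 - 2 x.c so that the bulk of the work
    is a single BLAS matrix-vector product (X.dot(c))."""
    return (X * X).sum(axis=1) + np.dot(c, c) - 2.0 * X.dot(c)

# Toy check against the naive per-row computation.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 8))       # 1000 records, 8 quasi-identifiers
c = X.mean(axis=0)                       # centroid (average vector)
fast = sq_distances_to_centroid(X, c)
naive = ((X - c) ** 2).sum(axis=1)
assert np.allclose(fast, naive)
```

In MDAV this computation is repeated for every new centroid while forming groups of k records, so replacing the per-record loop by one BLAS call is where most of the speedup comes from.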

Websites and applications use personalisation services to profile their users, collect their patterns and activities, and eventually use these data to provide tailored suggestions. User preferences and social interactions are therefore aggregated and analysed. Every time a user publishes a new post or creates a link with another entity, either another user or some online resource, new information is added to the user profile. Exposing private data not only reveals information about individual users’ preferences, increasing their privacy risk, but can also expose more about their network than single actors intended. This mechanism is self-evident in social networks, where users receive suggestions based on their friends’ activities. We propose an information-theoretic approach to measure the differential update of the anonymity risk of time-varying user profiles. This expresses how privacy is affected when new content is posted and how much third-party services get to know about users when a new activity is shared. We use actual Facebook data to show how our model can be applied to a real-world scenario.
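One simple information-theoretic proxy for such a differential update (an illustrative sketch, not necessarily the exact metric of the paper) measures how far the user's activity profile drifts from the population's after each new post, using the Kullback-Leibler divergence:

```python
import math

def kl(p, q):
    """KL divergence D(p||q) in bits between two discrete distributions."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def updated_profile(counts, category):
    """Profile (relative frequencies) after one more post in `category`."""
    counts = counts.copy()
    counts[category] += 1
    total = sum(counts)
    return [c / total for c in counts]

# User activity counts over 3 interest categories; q is the population
# (here uniform) profile the attacker compares against.
counts = [6, 3, 1]
q = [1 / 3, 1 / 3, 1 / 3]
p_before = [c / sum(counts) for c in counts]
p_after = updated_profile(counts, 0)        # user posts in category 0
delta = kl(p_after, q) - kl(p_before, q)    # differential anonymity risk
```

Posting in the already-dominant category moves the profile further from the population baseline, so `delta` is positive: the new activity makes the user more distinguishable.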

On today’s Web, users trade access to their private data for content and services. App and service providers want to know everything they can about their users in order to improve their product experience. In addition, advertising sustains the business model of many websites and applications. Efficient and successful advertising relies on predicting users’ actions and tastes to suggest a range of products to buy. Both service providers and advertisers try to track users’ behaviour across their product network. For application providers, this means tracking users’ actions within their platform. For third-party services, following users means being able to track them across different websites and applications. It is well known that, while surfing the Web, users leave traces regarding their identity in the form of activity patterns and unstructured data. These data constitute what is called the user’s online footprint. We analyse how advertising networks build and collect users’ footprints and how the suggested advertising reacts to changes in user behaviour.

This is an Accepted Manuscript of an article published by Taylor & Francis in International Journal of Parallel, Emergent and Distributed Systems on 19/02/2017, available online: http://www.tandfonline.com/doi/abs/10.1080/17445760.2017.1282480

The Internet of Things (IoT) is on the rise. Today, various IoT platforms are already available, giving access to myriads of things. Initiatives such as BIG IoT are bringing those IoT platforms together in order to form ecosystems. BIG IoT aims to facilitate cross-platform and cross-domain application development and to establish centralized marketplaces that allow resource monetization. This combination of multi-platform applications, the heterogeneity of the IoT, and the marketing and accounting of resources results in crucial challenges for security and privacy. Hence, this article analyses the requirements for security in IoT ecosystems and outlines the solutions followed in the BIG IoT project to tackle those challenges. A concrete analysis of an IoT use case covering public and private transportation and smart parking is also presented.

Online advertising, the pillar of the “free” content on the Web, has revolutionized the marketing business in recent years by creating a myriad of new opportunities for advertisers to reach potential customers. The current advertising model builds upon an intricate infrastructure composed of a variety of intermediary entities and technologies whose main aim is to deliver personalized ads. For this purpose, a wealth of user data is collected, aggregated, processed and traded behind the scenes at an unprecedented rate. Despite the enormous value of online advertising, however, the intrusiveness and ubiquity of these practices prompt serious privacy concerns. This article surveys the online advertising infrastructure and its supporting technologies, and presents a thorough overview of the underlying privacy risks and the solutions that may mitigate them. We first analyze the threats and potential privacy attackers in this scenario of online advertising. In particular, we examine the main components of the advertising infrastructure in terms of tracking capabilities, data collection, aggregation level and privacy risk, and overview the tracking and data-sharing technologies employed by these components. Then, we conduct a comprehensive survey of the most relevant privacy mechanisms, and classify and compare them on the basis of their privacy guarantees and impact on the Web.

We develop a probabilistic variant of k-anonymous microaggregation, which we term p-probabilistic, resorting to a statistical model of respondent participation in order to aggregate quasi-identifiers in such a manner that k-anonymity is concordantly enforced with a parametric probabilistic guarantee. Succinctly, owing to the possibility that some respondents may not finally participate, sufficiently larger cells are created, striving to satisfy k-anonymity with probability at least p. The microaggregation function is designed before the respondents submit their confidential data. More precisely, a specification of the function is sent to them, which they may verify and apply to their quasi-identifying demographic variables prior to submitting the microaggregated data, along with the confidential attributes, to an authorized repository.
We propose a number of metrics to assess the performance of our probabilistic approach in terms of anonymity and distortion, which we proceed to investigate theoretically in depth and empirically with synthetic and standardized data. We stress that, in addition to constituting a functional extension of traditional microaggregation, thereby broadening its applicability to the anonymization of statistical databases in a wide variety of contexts, the relaxation of trust assumptions is arguably expected to have a considerable impact on user acceptance and, ultimately, on data utility through mere availability.
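Under a simple model in which each of the n respondents assigned to a cell participates independently with probability q, the smallest cell size guaranteeing k-anonymity with probability at least p follows from the binomial tail. The sketch below illustrates this sizing rule (the function name and the independence assumption are ours, not the paper's):

```python
from math import comb

def min_cell_size(k, q, p):
    """Smallest cell size n such that, when each of the n respondents
    participates independently with probability q, at least k of them
    participate with probability >= p, so k-anonymity holds w.p. >= p."""
    n = k
    while True:
        # P(Binomial(n, q) >= k)
        tail = sum(comb(n, i) * q**i * (1 - q)**(n - i)
                   for i in range(k, n + 1))
        if tail >= p:
            return n
        n += 1

# With 80% expected participation, guaranteeing 5-anonymity with
# probability 0.95 requires cells noticeably larger than 5 records.
n = min_cell_size(k=5, q=0.8, p=0.95)
assert n > 5
```

The gap between n and k quantifies the distortion price of the probabilistic guarantee: larger cells mean coarser aggregation of the quasi-identifiers.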

Coding schemes for multimedia fingerprinting in the presence of noise and colluders are investigated. We prove that the best such codes have nonvanishing rate, i.e., have exponentially many codewords (users), and can trace the entire coalition of pirates, doing so either with zero error probability or with high probability, depending on the corresponding model of errors.

Despite the several advantages commonly attributed to social networks, such as the ease and immediacy of communicating with acquaintances and friends, significant privacy threats caused by inexperienced or even irresponsible users recklessly publishing sensitive material are also noticeable. Yet a different, but equally significant, privacy risk might arise from social networks profiling the online activity of their users based on the timestamps of the interactions between the two. In order to thwart this last type of commonly neglected attack, this paper proposes an optimized deferral mechanism for messages in online social networks. This solution intelligently delays certain messages posted by end users so that the online activity profile observed by the attacker does not reveal any time-based sensitive information, while preserving the usability of the system. Experimental results, as well as a proposed architecture implementing this approach, demonstrate the suitability and feasibility of our mechanism.

The final publication is available at Springer via http://dx.doi.org/10.1007/s10115-016-1010-4

Vehicular ad hoc networks (VANETs) constitute the communication substrate of Intelligent Transportation Systems (ITS). Recently, Delay-Tolerant Network (DTN) routing protocols have gained popularity in the research community for use in non-safety VANET applications and services such as traffic reporting. Vehicular DTN protocols use geographical and local information to make forwarding decisions. However, current proposals only consider the selection of the best candidate based on a local search. In this paper, we propose a generic Geographical Heuristic Routing (GHR) protocol that can be applied to any DTN geographical routing protocol that makes forwarding decisions hop by hop. GHR incorporates adaptations of the simulated annealing and Tabu-search meta-heuristics, which have long been used to improve local-search results in discrete optimization. We include a complete performance evaluation of GHR in a multi-hop VANET simulation scenario for a reporting service. Our study analyzes all of the meaningful configurations of GHR and offers a statistical analysis of our findings by means of MANOVA tests. Our results indicate that the use of a Tabu list improves the packet delivery ratio by around 5% to 10%. Moreover, if Tabu is used, the simulated annealing routing strategy performs better than the selection of the best node with carry and forward (the default operation).
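To make the role of the two meta-heuristics concrete, the following sketch shows a next-hop selection rule in the spirit of GHR (function and variable names are illustrative, not the protocol's specification): nodes on the Tabu list are excluded, and a non-best candidate may be accepted with a simulated-annealing probability instead of always greedily taking the node with the most geographic progress:

```python
import math
import random

def pick_next_hop(candidates, progress, tabu, temperature,
                  rng=random.Random(0)):
    """Illustrative GHR-style next hop: skip tabu nodes, then walk the
    candidates from best to worst geographic progress, accepting each
    with simulated-annealing probability exp(-loss / temperature),
    where loss is the progress given up w.r.t. the best candidate."""
    allowed = [c for c in candidates if c not in tabu]
    if not allowed:
        return None                      # no neighbour: carry and forward
    best = max(allowed, key=lambda c: progress[c])
    for c in sorted(allowed, key=lambda c: -progress[c]):
        loss = progress[best] - progress[c]
        if rng.random() < math.exp(-loss / temperature):
            return c
    return best

# Progress (metres towards destination) offered by each neighbour.
progress = {"a": 120.0, "b": 90.0, "c": 40.0}
hop = pick_next_hop(["a", "b", "c"], progress, tabu={"a"}, temperature=30.0)
assert hop in ("b", "c")                 # "a" is tabu, so never chosen
```

A high temperature makes the choice nearly random (more exploration); as the temperature drops the rule converges to the greedy best-candidate selection that plain geographical routing uses.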

The understanding of certain data often requires the collection of similar data from different places to be analysed and interpreted. Multi-agent systems (MAS), interoperability standards (DICOM, HL7 or EN13606) and clinical ontologies are facilitating data interchange among clinical centres around the world. However, as more data becomes available, and the more heterogeneous this data gets, the task of accessing and exploiting the large number of distributed repositories to extract useful knowledge becomes increasingly complex. Beyond the existing networks and advances in data transfer, data sharing protocols that support multilateral agreements are useful to exploit the knowledge of distributed Data Warehouses. Access to a certain data set in a federated Data Warehouse may be constrained by the requirement to deliver another specific data set. When bilateral agreements between two nodes of a network are not enough to resolve the constraints on accessing a certain data set, multilateral agreements for data exchange can be a solution.
The research carried out in this PhD Thesis comprises the design and implementation of a Multi-Agent System for multilateral exchange agreements of clinical data, and the evaluation of how those multilateral agreements increase the percentage of data collected by a single node out of the total amount of data available in the network. Different strategies to reduce the number of messages needed to reach an agreement are also considered.
The results show that, in this collaborative sharing scenario, the percentage of data collected improves dramatically when moving from bilateral to multilateral agreements, reaching almost all the data available in the network.

We consider collusion-resistant fingerprinting codes for multimedia content. We show that the corresponding IPP codes may trace all guilty users and at the same time have exponentially many codewords. We also establish an equivalence between signature codes for the A-channel and multimedia fingerprinting codes, and prove that the rate of the best t-signature codes for the A-channel is at least Ω(t^-2). Finally, we construct a family of t-signature codes for the A-channel with polynomial decoding complexity and rate Ω(t^-3).

On today's Web, users trade access to their private data for content and services. Advertising sustains the business model of many websites and applications. Efficient and successful advertising relies on predicting users'
actions and tastes to suggest a range of products to buy. While surfing the Web, users leave traces regarding their identity in the form of activity patterns and unstructured data. We analyse how advertising networks build user footprints and how the suggested advertising reacts to changes in user behaviour.

k-Anonymous microaggregation emerges as an essential building block in statistical disclosure control, a field concerning the postprocessing of the demographic portion of surveys containing sensitive information, in order to safeguard the anonymity of the respondents. Traditionally, this form of microaggregation has been formulated to characterize both the privacy attained and the inherent information loss due to the aggregation of quasi-identifiers, which may otherwise be exploited to reidentify the individuals to which a record in a published database refers.
Because the ulterior purposes of such databases involve the analysis of the statistical dependence between demographic attributes and sensitive data, we must articulate mechanisms to enable the preservation of the statistical dependence between quasi-identifiers and confidential attributes, beyond the mere degradation of the quasi-identifiers alone.
This work addresses the problem of k-anonymous microaggregation with preservation of statistical dependence in a formal, systematic manner, modeling statistical dependence as predictability of the confidential attributes from the perturbed quasi-identifiers. We proceed by introducing a second mean squared error term in a combined Lagrangian cost that enables us to regulate the trade-off between quasi-identifier distortion and confidential-attribute predictability. A Lagrangian multiplier enables us to gracefully weigh the importance of each of the two competing objectives.
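In notation suggested by the description above (the symbols are our assumption, not the paper's), the combined Lagrangian cost can be written as

```latex
\mathcal{C}_\lambda
  \;=\; \underbrace{\mathbb{E}\,\lVert X - \hat{X} \rVert^2}_{\text{quasi-identifier distortion}}
  \;+\; \lambda\,\underbrace{\mathbb{E}\,\lVert Y - \hat{Y}(\hat{X}) \rVert^2}_{\text{confidential-attribute prediction error}},
```

where X denotes the quasi-identifiers, X̂ their microaggregated version, Y the confidential attributes, Ŷ(X̂) their best estimate from the perturbed quasi-identifiers, and λ ≥ 0 the Lagrangian multiplier weighing the two objectives; λ = 0 recovers traditional distortion-optimized microaggregation.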


Ad hoc networks have attracted much attention from the research community over the last years, and important technical advances have arisen as a consequence. These networks are foreseen as an important kind of next-generation access network, where multimedia services will be demanded by end users from their wireless devices everywhere. In this thesis, we focus our research work especially on mobile ad hoc networks (MANETs) and vehicular ad hoc networks (VANETs), two kinds of ad hoc networks over which interesting multimedia services can be provided. The special characteristics of MANETs/VANETs, such as mobility, dynamic network topology (especially in VANETs), energy constraints (in the case of MANETs), lack of infrastructure and variable link capacity, make the provision of QoS (Quality of Service) over these networks an important challenge for the research community. Hence, there is a need to develop new routing protocols, specifically designed for MANETs and VANETs, able to support multimedia services.
The main objective of this thesis is to contribute to the development of the communication framework for MANETs and VANETs, improving the decisions used to select paths or next hops when forwarding video-reporting messages. In this way, it would be possible to respond quickly to daily problems in the city and help the emergency units (e.g., police, ambulances, health care units) in case of incidents (e.g., traffic accidents). Furthermore, in the case of VANETs, a realistic scenario must be created, so we have analysed the presence of obstacles in real maps. If an obstacle is found between the current forwarding node and the candidate next forwarding node, the packet is stored in a buffer, for a maximum time, until a forwarding neighbour node is found; otherwise, the packet is dropped. To improve the communication framework for MANETs, we propose a new routing protocol based on a game-theoretical scheme for N users, specifically designed to transmit video-reporting messages. Our proposal makes the network more efficient and provides a higher degree of user satisfaction: many more packets are received, with lower average end-to-end delay, lower jitter and higher PSNR (Peak Signal-to-Noise Ratio).
In addition, we propose a geographical routing protocol for VANETs that considers multiple metrics, named 3MRP (Multimedia Multimetric Map-Aware Routing Protocol) [1]. 3MRP is a geographical protocol based on hop-by-hop forwarding. The metrics considered in 3MRP are the distance, the density of vehicles in transmission range, the available bandwidth, the future trajectory of the neighbouring nodes and the MAC-layer losses. These metrics are weighted to obtain a multimetric score. Thus, a node selects among its neighbours the best forwarding node, so as to increase the percentage of successful packet deliveries, minimize the average packet delay and offer a certain level of quality of service. Furthermore, a new algorithm named DSW (Dynamic Self-configured Weights) computes the corresponding weight of each metric depending on the current network conditions. As a consequence, nodes are classified in a better way.
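A minimal sketch of the idea behind dynamically self-configured weights follows; the exact DSW rule belongs to the thesis, and the normalization used here is only an illustrative assumption. Each metric's weight grows with how far its current value deviates from a reference average, and the weights are then combined into the multimetric score used to rank neighbours:

```python
def dsw_weights(deviations):
    """Illustrative dynamic weighting: give each metric a weight
    proportional to how far its current value deviates from a reference
    average, then normalise so the weights sum to 1.
    `deviations` maps metric name -> |current - reference| / reference."""
    total = sum(deviations.values())
    if total == 0:
        # No metric deviates: fall back to equal weights.
        return {m: 1.0 / len(deviations) for m in deviations}
    return {m: d / total for m, d in deviations.items()}

def multimetric_score(values, weights):
    """Weighted score used to rank candidate forwarding nodes."""
    return sum(weights[m] * values[m] for m in values)

w = dsw_weights({"distance": 0.5, "density": 0.25, "bandwidth": 0.25})
assert abs(sum(w.values()) - 1.0) < 1e-9
assert w["distance"] == 0.5   # the most-deviating metric dominates
```

The point of adapting the weights is that a metric currently far from its usual operating point (e.g. a sudden drop in available bandwidth) should dominate the forwarding decision, instead of being averaged away by equal static weights.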

One of the main concerns of city administrations is mobility management. In Intelligent Transportation Systems (ITS), pedestrians, vehicles and public transportation systems can share information and react to any situation in the city. The information sensed by vehicles can be useful for other vehicles and for the mobility authorities. Vehicular Ad hoc Networks (VANETs) make possible the communication between vehicles (V2V) and between vehicles and the fixed infrastructure (V2I) managed by the city's authorities. In addition, VANET routing protocols minimize the use of fixed infrastructure, since they employ multi-hop V2V communication to reach the city's reporting access points.
This thesis aims to contribute to the design of VANET routing protocols that enable reporting services (e.g., vehicular traffic notifications) in urban environments. The first step towards this global objective has been the study of components and tools to mimic a realistic VANET scenario. Moreover, we have analyzed the impact of the realism of each of those components on the simulation results.
Then, we have improved the Address Resolution procedure in VANETs by including it in the routing signaling messages. Our approach simplifies the VANET operation and, as a consequence, increases the packet delivery ratio. Afterwards, we have tackled the issue of duplicate packets in unicast communications and proposed routing filters to lower their presence. In this way, we have been able to increase the available bandwidth and reduce the average packet delay, with a slight increase in packet losses.
Besides, we have proposed a Multi-Metric Map-aware routing protocol (MMMR) that incorporates four routing metrics (distance, trajectory, vehicle density and available bandwidth) to make its forwarding decisions. With the aim of increasing the number of delivered packets in MMMR, we have developed a Geographical Heuristic Routing (GHR) algorithm. GHR integrates the Tabu and Simulated Annealing heuristic optimization techniques to adapt its behavior to the specific characteristics of the scenario. GHR is generic in that it can use any geographical routing protocol to make its forwarding decisions. Additionally, we have designed an easy-to-implement forwarding strategy based on an extended topology information area of two hops, called the 2-hops Geographical Anycast Routing (2hGAR) protocol. Results show that the controlled randomness introduced by GHR improves the default operation of MMMR. On the other hand, 2hGAR presents lower delays and a higher packet delivery ratio than GHR, especially in high-density scenarios.
Finally, we have proposed two mixed (integer and linear) optimization models to detect the best positions in the city to locate the Road Side Units (RSUs) which are in charge of gathering all the reporting information generated by vehicles.

The focus of this research is the definition of programmable expert Personal Health Systems (PHS) to monitor patients affected by chronic diseases, using agent-oriented programming and mobile computing to represent the interactions among the components of the system. The paper also discusses issues of knowledge representation within the medical domain when dealing with temporal patterns in the physiological values of the patient. In the presented agent-based PHS, doctors can personalize for each patient monitoring rules that can be defined graphically. Furthermore, to achieve better scalability, the computations for monitoring the patients are distributed among their devices rather than being performed in a centralized server. The system is evaluated using data from 21 diabetic patients to detect temporal patterns according to a set of defined monitoring rules, and its scalability is evaluated by comparison with a centralized approach. The evaluation of the detection of temporal patterns highlights the system's ability to monitor chronic patients affected by diabetes. Regarding scalability, the results show that an approach exploiting mobile computing is more scalable than a centralized approach, and therefore more likely to satisfy the needs of next-generation PHSs. PHSs are becoming an adopted technology to deal with the surge of patients affected by chronic illnesses. This paper discusses architectural choices to make an agent-based PHS more scalable by using a distributed mobile computing approach, and how to model the medical knowledge in the PHS so that it is modifiable at run time. The evaluation highlights the necessity of distributing the reasoning to the mobile part of the system, and shows that modifiable rules are able to deal with changes in the lifestyle of patients affected by chronic illnesses.
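A temporal monitoring rule of the kind described can be sketched as follows; the rule shape, names and the glucose example are illustrative assumptions, not the paper's actual rule language:

```python
from datetime import datetime, timedelta

def violates_rule(readings, threshold, duration):
    """Illustrative temporal rule a doctor could configure: fire when
    the monitored value stays above `threshold` for at least `duration`.
    `readings` is a chronological list of (timestamp, value) pairs."""
    start = None
    for t, v in readings:
        if v > threshold:
            if start is None:
                start = t                 # above-threshold episode begins
            if t - start >= duration:
                return True               # sustained long enough: alert
        else:
            start = None                  # episode broken, reset
    return False

t0 = datetime(2024, 1, 1, 8, 0)
readings = [(t0 + timedelta(minutes=30 * i), v)
            for i, v in enumerate([150, 190, 195, 200, 185])]
# Glucose above 180 mg/dL sustained for at least one hour.
assert violates_rule(readings, threshold=180, duration=timedelta(hours=1))
```

Because the rule is plain data (threshold, duration), it can be changed at run time per patient, which is exactly the kind of modifiability the abstract argues for; evaluating it on the patient's own device keeps the reasoning out of the central server.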

Structured Peer-to-Peer (P2P) networks were proposed to solve the routing problems of big distributed infrastructures, but the research community has been questioning their security for years. Most prior work on security services focused on secure routing, reputation systems, anonymity, etc. However, the proper management of identities is an important prerequisite to providing most of these security services.
The existence of anonymous nodes and the lack of a centralized authority capable of monitoring (and/or punishing) nodes make these systems more vulnerable to selfish or malicious behaviors. Moreover, such improper usage cannot be prevented merely with data confidentiality, node authentication, non-repudiation, etc. In particular, structured P2P networks should provide the following secure routing primitives: (1) secure maintenance of routing tables, (2) secure routing of messages, and (3) secure identity assignment to nodes. The first two problems, however, depend in some way on the third: if node identifiers can be chosen by users without any control, these networks can suffer security and operational problems. Therefore, like any other network or service, structured P2P networks require robust access control to prevent potential attackers from joining the network, and a robust identity assignment system to guarantee their proper operation.
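A common building block for the third primitive (a standard technique in the DHT literature, not necessarily the scheme proposed later in this thesis) is to bind the node ID to a hash of the node's public key, so that identifiers cannot be freely chosen and any peer can verify the key-to-ID binding:

```python
import hashlib

def node_id(public_key_bytes):
    """Derive a 160-bit overlay identifier from a hash of the node's
    public key: the node cannot pick an arbitrary position in the
    overlay, and any peer can recompute and check the binding."""
    return hashlib.sha256(public_key_bytes).hexdigest()[:40]

def verify_id(claimed_id, public_key_bytes):
    """Check that a claimed node ID really derives from the given key."""
    return claimed_id == node_id(public_key_bytes)

pk = b"-----BEGIN PUBLIC KEY----- (example key material)"
nid = node_id(pk)
assert verify_id(nid, pk)
assert not verify_id(nid, b"some other key")
```

By itself this only prevents ID forgery for a given key; an attacker can still generate many key pairs to mount a Sybil attack, which is why the certified, traceable identifier assignment discussed in this thesis is needed on top.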
In this thesis, we first analyze how current structured P2P networks manage identities, in order to identify the security problems related to node identifiers within the overlay, and we propose a series of requirements to be fulfilled by any generated node ID so as to provide more security to a DHT-based structured P2P network.
Secondly, we propose the use of implicit certificates to provide more security and to exploit the improvements in bandwidth, storage and performance that these certificates present compared to explicit certificates, and we design three protocols to assign node identifiers that avoid the identified problems while maintaining user anonymity and allowing user traceability.
Finally, we analyze the operation of the mechanisms most commonly used to distribute revocation data on the Internet, with special focus on the systems proposed for P2P networks, and we design a new mechanism to distribute revocation data more efficiently in a structured P2P network.

Vehicular ad hoc networks (VANETs) are considered a milestone in improving safety and efficiency in transportation. Nevertheless, when information from vehicular communications is combined with data from the cloud, it also introduces privacy risks by making it easier to track the physical location of vehicles. For this reason, to guarantee the proper performance of a VANET, it is essential to protect the service against malicious users aiming at disrupting the proper operation of the network. Current research usually defines a traditional identity-based authentication for nodes, which are loaded with individual credentials. However, the use of these credentials in VANETs without any security mechanism enables vehicle tracking and therefore violates users' privacy, a risk that may be overcome by means of appropriate anonymity schemes. This, however, comes at a cost: either VANET central authorities are prevented from identifying malicious users and revoking them from the network, or nodes must renounce complete anonymity in front of the CA so as to allow their revocation. In this paper, we describe a novel revocation scheme that is able to track and revoke specific malicious users, but only after a number of complaints have been received, while otherwise guaranteeing the nodes' k-anonymity. The performance of these mechanisms has been extensively evaluated with the NS-2 simulator and an analytical model validated with scripts. The results show that the presented scheme is a promising approach to increasing privacy protection while allowing revocation at little extra cost.

Spanish universities have successfully transitioned from a classical, eminently teaching-oriented model to a modern one in which research plays an essential role. Much of this success rests on the introduction of evaluation systems. Nonetheless, despite these undeniable achievements, there is still a long way to go to place Spanish universities where they belong in international rankings, given the country's standing among its peers. The Universidad Politécnica de Cataluña has an evaluation model of the teaching performance of its staff, as well as abundant information on their research activity, which has allowed a global evaluation model to be implemented; we present that model in this article. In addition, using a sample of 4,996 individual evaluations of some 1,700 academic staff members, we analyse the evolution of the evaluations of the years 2011, 2012 and 2013, study whether there are differences among academic fields, and examine whether there is a relation between the academic productivity thus measured and the position of the university in university rankings. The data corroborate that the model has been successful, having allowed the university to progress from position 350 to 337 in the Quacquarelli Symonds university ranking.

In the early age of the Internet, users enjoyed a large degree of anonymity. At the time, web pages were just hypertext documents and almost no personalisation of the user experience was offered. The Web has since evolved into a worldwide distributed system following specific architectural paradigms. Today, an enormous quantity of user-generated data is shared and consumed by a network of applications and services that reason upon users' expressed preferences and their social and physical connections. Advertising networks follow users' browsing habits while they surf the Web, continuously collecting their traces and surfing patterns. We analyse how user tracking happens on the Web by measuring users' online footprints and estimating how quickly advertising networks are able to profile users by their browsing habits.

The Identifiable Parent Property (IPP) guarantees, with probability 1, the identification of at least one of the traitors by the corresponding traitor-tracing schemes, or IPP codes. Unfortunately, for binary codes the IPP property does not hold even in the case of only two traitors. A recent work considered a natural generalization of IPP codes for the binary case, where the identifiable parent property should hold with probability almost 1, and showed that almost t-IPP codes of nonvanishing rate exist for the case t = 2. Surprisingly enough, collusion-secure digital fingerprinting codes do not automatically possess this almost-IPP property. In practice, this means that for a given forged fingerprint, say z, a user identified as guilty by the tracing algorithm can deny this claim, since he will be able to present a coalition of users that can create the same z but to which he does not belong. In this paper, we study the case of t-almost-IPP codes for t > 2.

We present the deficiencies of traditional identity-based authorization models in structured Peer-to-Peer (P2P) networks, where users' Public Key Certificates (PKCs) play two roles, authentication and authorization, and access to network resources is controlled by Access Control Lists (ACLs). With these deficiencies in mind, we propose a completely new framework for authorization in structured P2P networks based on Attribute Certificates (ACs) and a fully distributed certificate revocation system. We argue that the proposed framework yields a more flexible and secure authorization scheme for structured P2P networks while improving the efficiency of privilege assignment.

Road safety applications envisaged for VANETs depend largely on the exchange of messages to deliver information to the concerned vehicles. In recent years, several dissemination protocols have been presented in the literature. In this paper we study and evaluate the dissemination of emergency messages in realistic vehicular scenarios. In the evaluation, the protocols are examined from two perspectives, qualitative and quantitative. We identify the principal dissemination mechanisms and examine the factors that most affect the simulation results. In addition, we investigate the shadowing effects of buildings and other vehicles on the performance of the dissemination protocols. The simulation results suggest the need to include scenarios with fixed and mobile obstacles to increase the credibility of the performance evaluation of such protocols.

In this paper we present simtools, a software tool for creating virtual networks of User-Mode Linux (UML) machines, which is based on UML and VNUML. Simtools allows virtual scenarios to be built easily, and it is a simple and straightforward tool for teaching networking. In fact, we are already using it in several networking-related subjects at the Network Engineering Department of the Universitat Politecnica de Catalunya (UPC).

Mobile ad hoc networks (MANETs) are infrastructureless networks formed by wireless mobile devices. Routing in MANETs is the process of selecting the best path along which packets will be forwarded. When a MANET is heavily loaded, single-path routing begins to suffer congestion problems. To cope with this issue, multipath routing protocols were proposed, improving on single-path routing in terms of packet delivery ratio and quality of service (QoS). Normally, QoS metrics are used to classify paths in order to choose those (i.e., more than two paths) that satisfy the user's QoS requirements. However, paths are usually classified with equal metric weights, which we do not consider optimal. In this article, we design a new way of distributing weight values among metrics so that paths are classified more accurately. Each time paths are to be classified, our proposal assigns a proper value to each metric weight depending on the metric's current value with respect to a previously defined average: the greater the variation of a metric's value, the larger the corresponding weight. To achieve this, a simple algorithm has been designed, and paths are then classified using the weights it computes. Simulations have been carried out to show the benefits of our proposal in the presence of interfering traffic and node mobility.
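The weighting idea above can be sketched as follows. This is an illustrative assumption of how such a scheme might look, not the authors' exact algorithm: each metric's weight grows with the relative deviation of its current value from the predefined reference average, so the metrics that vary most dominate the path classification. The metric names, the normalization, and the sign convention for delay are all hypothetical.

```python
# Hedged sketch: weights proportional to each metric's relative
# deviation from a predefined reference average (assumed formula).

def dynamic_weights(current, reference):
    """Assign each metric a weight proportional to its relative
    deviation from the reference value; weights sum to 1."""
    deviations = {m: abs(current[m] - reference[m]) / reference[m]
                  for m in current}
    total = sum(deviations.values()) or 1.0
    return {m: d / total for m, d in deviations.items()}

def path_score(metrics, weights):
    """Weighted sum used to rank candidate paths; delay is negated
    so that lower delay contributes a higher score."""
    return sum(weights[m] * (v if m != "delay_ms" else -v)
               for m, v in metrics.items())

reference = {"delay_ms": 50.0, "bandwidth_mbps": 10.0}
paths = [
    {"delay_ms": 80.0, "bandwidth_mbps": 9.0},
    {"delay_ms": 45.0, "bandwidth_mbps": 12.0},
]
ranked = sorted(paths,
                key=lambda p: path_score(p, dynamic_weights(p, reference)),
                reverse=True)
# the low-delay, high-bandwidth path ranks first
```

Under equal weights both metrics would count the same regardless of how anomalous their current values are; here a path whose delay has drifted far from the reference is penalized more strongly.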

In this paper we describe a method for transmitting H.264/AVC-encoded video. The data transfer adapts to network performance in order to ensure continuous transmission, and is performed between server and client terminals over TCP sockets. The original video is encoded in H.264/AVC at different bitrate levels using the GStreamer library, and each encoded video is then segmented at the GOP level. The purpose of segmenting the video is to facilitate switching between different video qualities, adapting the bitrate to the variable network capacity by means of a control mechanism on the server side. Segmenting the encoded video has the advantages of scaling the digital video service and making maximum use of network resources.
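The server-side control described above can be sketched as a simple per-segment quality selector. This is a hedged illustration under assumed parameters (the bitrate ladder and safety margin are not from the paper): before sending each GOP-sized segment, the server picks the highest pre-encoded bitrate that the currently measured throughput can sustain.

```python
# Illustrative sketch of per-GOP quality switching; the bitrate
# ladder and the 80% safety margin are assumptions for the example.

BITRATE_LADDER_KBPS = [400, 800, 1600, 3200]  # available encodings
SAFETY_MARGIN = 0.8  # use only part of the measured capacity

def select_quality(measured_throughput_kbps):
    """Highest bitrate not exceeding the safe capacity; fall back
    to the lowest quality when the network is very slow."""
    safe = measured_throughput_kbps * SAFETY_MARGIN
    feasible = [b for b in BITRATE_LADDER_KBPS if b <= safe]
    return feasible[-1] if feasible else BITRATE_LADDER_KBPS[0]

def stream(gop_count, throughput_samples):
    """Send one GOP at a time, re-selecting the quality for each
    segment from the latest throughput measurement."""
    sent = []
    for i in range(gop_count):
        q = select_quality(throughput_samples[i])
        sent.append(q)  # in the real system: send segment i at quality q
    return sent
```

Because every segment boundary is a GOP boundary, the client can decode each segment independently of the quality of the previous one, which is what makes this kind of switching seamless.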

MobilitApp is a platform designed to provide smart mobility services in urban areas. It is designed to help citizens and transport authorities alike. Citizens will be able to access the MobilitApp mobile application and decide their optimal transportation strategy by visualising their usual routes and their carbon footprint, and by receiving tips, analytics and general mobility information, such as traffic and incident alerts. Transport authorities and service providers will be able to access information about the mobility patterns of citizens in order to offer better services, reduce costs and improve planning. The MobilitApp client runs on Android devices and, while running in the background, periodically records location updates from its users. The information obtained is processed and analysed to understand the mobility patterns of our users in the city of Barcelona, Spain.

In this paper, we propose a simple and scalable optimization model for the deployment of road side units (RSUs). The model takes advantage of the inherent stochasticity of vehicle movements by using mobility traces to determine the best positions at which to place RSUs, so as to maximize connectivity in a multi-hop VANET scenario while keeping the number of RSUs as low as possible. Our simulation results validate the accuracy of the solutions offered by our model.
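To make the coverage trade-off concrete, here is a minimal greedy sketch of the underlying idea: from mobility traces we know which candidate locations each vehicle passes within radio range, and we repeatedly place an RSU at the location covering the most still-uncovered vehicles. The paper formulates this as an optimization model; greedy set cover is only an illustrative stand-in, and all names and data here are hypothetical.

```python
# Hedged sketch: greedy maximum-coverage placement of RSUs from
# trace-derived coverage sets (a stand-in for the paper's model).

def greedy_rsu_placement(coverage, target_fraction=1.0):
    """coverage: {location: set of vehicle ids reachable from it}.
    Returns the ordered list of chosen RSU locations."""
    all_vehicles = set().union(*coverage.values())
    covered, chosen = set(), []
    while len(covered) < target_fraction * len(all_vehicles):
        best = max(coverage, key=lambda loc: len(coverage[loc] - covered))
        gain = coverage[best] - covered
        if not gain:          # remaining vehicles are unreachable
            break
        chosen.append(best)
        covered |= gain
    return chosen

traces = {
    "A": {1, 2, 3},   # location A covers vehicles 1, 2, 3
    "B": {3, 4},
    "C": {5},
}
# picks "A" first (3 new vehicles), then "B", then "C"
```

Lowering `target_fraction` trades coverage for fewer RSUs, which mirrors the paper's goal of maximizing connectivity while keeping the RSU count low.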

In a question-driven survey, the answers to one question may decide which question is presented next. In this case, encrypting the answers of the participants is not enough to protect their privacy, since the system can learn them by inspecting the next question the participants request. In this article, we explore the technologies involved in surveys performed through a mobile phone. Participants receive the questions using VoIP technologies and, since their answers affect which questions are presented next, they must protect the selection of the relevant questions. In addition, this paper considers the performance of the proposed encryption technologies on mobile phones. Finally, the answers to the poll must be sent to the server, and this paper proposes an eVoting framework to preserve the privacy of the users while the answers are sent to the system. Such a scenario involves many different communication channels and technologies. As we show, the decisions taken in some of the modules constrain the technologies and decisions in the others.

Recommendation systems and content-filtering approaches based on annotations and ratings essentially rely on users expressing their preferences and interests through their actions in order to provide personalised content. This collective activity has been named social tagging, and it is one of the most popular activities users engage in online. Although it has opened new possibilities for application interoperability on the semantic Web, it also poses new privacy threats: it consists of describing online or offline resources with free-text labels (i.e. tags), thereby exposing the user's profile and activity to privacy attacks. Users may therefore wish to adopt a privacy-enhancing strategy in order not to reveal their interests completely. Tag forgery is a privacy-enhancing technology that consists of generating tags for categories or resources that do not reflect the user's actual preferences. Because it modifies the user profile, tag forgery may have a negative impact on the quality of the recommendation system, protecting user privacy to a certain extent but at the expense of a loss in utility. We therefore investigate the impact of tag forgery on content-based recommendation in a real-world application scenario, evaluating different forgery strategies and measuring and comparing the consequent loss in utility.
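A small worked example may help fix the privacy/utility trade-off in mind. It is a hedged illustration, not the paper's exact formulation: we model the user profile as a normalized histogram of tags per category, measure privacy risk as the Kullback-Leibler divergence between that profile and the population's average profile, and apply tag forgery by adding bogus tags to the categories where the user is under-represented. The category counts and the population distribution are invented.

```python
# Hedged sketch: KL divergence from the average profile as a
# privacy-risk proxy, reduced by forging tags (assumed model).

import math

def kl_divergence(p, q):
    """KL divergence in bits between distributions p and q."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def normalize(counts):
    total = sum(counts)
    return [c / total for c in counts]

population = [0.25, 0.25, 0.25, 0.25]   # average profile (assumed)
user_counts = [8, 1, 1, 0]              # user's genuine tags per category

before = kl_divergence(normalize(user_counts), population)

# Forge tags toward the least-represented categories.
forged_counts = [c + f for c, f in zip(user_counts, [0, 3, 3, 4])]
after = kl_divergence(normalize(forged_counts), population)

# The divergence drops: the profile looks more "average". The cost
# is that recommendations now operate on a distorted profile.
```

The drop in divergence is the privacy gain; the distortion of the profile is exactly the utility loss the paper measures against real recommendation quality.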

Proximity-based social applications let users interact with people who are currently close to them by revealing some information about their preferences and whereabouts. This information is acquired through passive geo-localisation and used to build a sense of serendipitous discovery of people, places and interests. Unfortunately, while this class of applications opens up new interaction possibilities for people in urban settings, obtaining access to certain identity information could allow a privacy attacker to identify a user and follow their movements over a specific period of time. The same information shared through the platform could also help an attacker link the victim's online profiles to a physical identity. We analyse a set of popular dating applications that share users' relative distances within a certain radius and show how, using the information shared on these platforms, it is possible to formalise a multilateration attack able to identify a user's actual position. The same attack can also be used to follow a user in all their movements within a certain period of time, thereby identifying their habits and points of interest across the city. Furthermore, we introduce a social attack that uses common Facebook likes to profile a person and ultimately identify their real identity.
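The geometric core of such a multilateration attack can be sketched in a few lines. This is a simplified illustration under assumptions the paper does not make: plane coordinates, exact (noise-free) distances, and three attacker vantage points (e.g. spoofed locations), whereas real apps report bucketed, noisy distances on map coordinates. Subtracting the first circle equation from the other two yields two linear equations in the victim's coordinates.

```python
# Hedged sketch: exact 2D trilateration from three vantage points
# (idealized version of the attack; real distances are noisy).

import math

def trilaterate(anchors, distances):
    """Solve the linear system obtained by subtracting the first
    circle equation (x-xi)^2+(y-yi)^2=di^2 from the other two."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 + y2**2 - x1**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 + y3**2 - x1**2 - y1**2
    det = a1 * b2 - a2 * b1            # anchors must not be collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
victim = (3.0, 4.0)
distances = [math.dist(a, victim) for a in anchors]
estimate = trilaterate(anchors, distances)  # recovers (3.0, 4.0)
```

With bucketed distances the attacker instead obtains a feasible region per query and intersects several of them, but the principle, three distance constraints pinning down two unknowns, is the same.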

Security is vital for the reliable operation of vehicular ad hoc networks (VANETs). One of the critical security issues is the revocation of misbehaving vehicles. While essential, revocation checking can leak private information. In particular, repositories receiving the certificate status queries could infer the identity of the vehicles posing the query and the target of the query. An important loss of privacy results from this ability to tie the checking vehicle with the query's target, due to their likely willingness to communicate. In this paper, we propose an Efficient and Privacy-Aware revocation Mechanism (EPA) based on the use of Merkle Hash Trees (MHT) and a Crowds-based anonymous protocol, which replaces the time-consuming certificate revocation lists checking process. EPA provides explicit, concise, authenticated and unforgeable information about the revocation status of each certificate while preserving the users' privacy. Moreover, EPA reduces the security overhead for certificate status checking, and enhances the availability and usability of the revocation data. By conducting a detailed performance evaluation, EPA is demonstrated to be reliable, efficient, and scalable.
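The Merkle Hash Tree mechanism underlying EPA can be sketched as follows. This is a generic, hedged illustration of MHT-based status proofs, not EPA's exact construction: the authority builds a tree over the revoked certificate serials and signs only the root, a repository answers a status query with the leaf's sibling path, and anyone can recompute the root to authenticate the answer. Padding, leaf ordering, and non-revocation proofs are omitted simplifications.

```python
# Hedged sketch of Merkle-tree membership proofs for revocation
# status (generic MHT, not EPA's precise scheme).

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Return the list of levels, hashed leaves first, root last."""
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:                 # duplicate last node if odd
            cur = cur + [cur[-1]]
        levels.append([h(cur[i] + cur[i + 1])
                       for i in range(0, len(cur), 2)])
    return levels

def proof(levels, index):
    """Sibling hashes from leaf to root for the leaf at `index`."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[index ^ 1], index % 2))  # (hash, right child?)
        index //= 2
    return path

def verify(leaf, path, root):
    node = h(leaf)
    for sibling, is_right in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

revoked = [b"serial-17", b"serial-23", b"serial-42", b"serial-99"]
levels = build_tree(revoked)
root = levels[-1][0]                     # only this value is signed
path = proof(levels, 2)                  # prove "serial-42" is revoked
```

The efficiency gain over CRL checking comes from the proof size: log2(n) hashes per query instead of transferring and scanning the full revocation list.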

Intelligent Transportation Systems (ITS) have given a strong impulse to vehicular communications. The vehicular communications field is a hot research topic attracting great interest from both the automotive and telecommunications industries. There are essentially two main lines of work: (1) communication services related to road safety and traffic information; and (2) information and entertainment services, also called infotainment services. The latter include both multimedia services (voice over IP, streaming, online gaming, etc.) and classic data services (e-mail, access to private networks, web browsing, file sharing, etc.). In this thesis we focus on infotainment services, because further research in this immature field is necessary and, until now, the main effort of the research community on vehicular communication has been devoted to road safety and traffic information.
Vehicular nodes need to be reachable from the Internet, and vice versa, in order to access infotainment services. As vehicles move along the road infrastructure, they change their wireless point of attachment to the network. During this process, connectivity breaks down until the vehicle connects to a new road side unit in its area, causing a disruption in communications. Fast handoffs are therefore a crucial requirement for vehicular networks to avoid long disruption times, since the high speed of vehicular nodes means they undergo many handoffs during an Internet connection.
This thesis is focused on Vehicular-to-Infrastructure (V2I) real-time infotainment services. The main contributions of this thesis are: i) a new testing framework for V2I communications to be able to test infotainment services in an easy way; ii) the analysis of the deployability of infotainment video services in vehicular networks using mobility protocols; and iii) the development of a new TCP architecture that will provide a better performance for all TCP-based infotainment services in a vehicular scenario with handoffs.
In this thesis, firstly, we propose a new testing framework for vehicular infotainment applications. This framework is a vehicular emulation platform that allows testing real applications installed on Linux virtual machines. Using emulation, we are able to evaluate the performance of real applications with real-time requirements, so we can test multimedia applications used to offer infotainment services in vehicular scenarios in a straightforward way.
Secondly, using the testing framework implemented in the first part of the thesis, we have carried out a performance evaluation of an infotainment service. Among these services, we believe that video-on-demand on highways will be interesting for users and will generate revenue for network operators, so we evaluated how network-layer handoffs can limit the deployment of a video streaming service. According to the results obtained, driving at high speed is an issue for correct playback of video content, even when fast handoff techniques are used.
Finally, we developed a new TCP architecture to enhance performance during handoffs. Most non-safety ITS services rely on the Transmission Control Protocol (TCP), one of the core protocols of the Internet protocol suite. However, several issues related to TCP and mobility can degrade TCP performance, and these issues are particularly important in vehicular networks due to their high mobility. Using the new IEEE 802.21 MIH services, we propose a TCP architecture that is able to anticipate handoffs, allowing communication to resume promptly after a handoff, avoiding the long delays caused by TCP's mobility issues, and adapting the TCP parameters to the characteristics of the new network. With the proposed architecture, TCP performance is enhanced, achieving a higher overall throughput and avoiding TCP fairness issues between users.

Future networks will be formed by millions of devices, many of them mobile, sharing information and running applications. Android is currently the most widely used operating system on smartphones, and it is becoming more and more popular on other devices. Providing security for these mobile devices and applications is a must for the proper deployment of future networks. For this reason, this paper studies the cryptographic structure and built-in tools of Android and shows that the operating system has been specifically designed for plugging in external cryptographic modules. We conclude that the best option for providing cryptographic capabilities is to use these external modules. We present the existing options and compare features such as licensing, source code availability and price. We define a set of requirements, evaluate each module against them, and provide guidelines for developers who want to use security primitives properly.

Location-based services (LBSs) flood mobile phones nowadays, but their use poses an evident privacy risk. The locations accompanying the LBS queries can be exploited by the LBS provider to build a profile of the locations a user visits, which might disclose sensitive data, such as their work or home location. The classic concept of entropy is widely used to evaluate privacy in these scenarios, where the information is represented as a sequence of independent samples of categorized data. However, since LBS queries may be sent very frequently, location profiles can be enriched with temporal dependencies, thus becoming mobility profiles, in which location samples are no longer independent and might disclose the user's mobility patterns. Once the time dimension is factored in, the classic entropy concept falls short of evaluating the real privacy level, which also depends on the time component. We therefore propose to extend the entropy-based privacy metric to the entropy rate in order to evaluate mobility profiles. Two perturbative mechanisms are then considered to protect location and mobility profiles under gradual utility constraints. We further use the proposed privacy metric, comparing it to classic ones, to evaluate both synthetic and real mobility profiles when the proposed perturbative methods are applied. The results prove the usefulness of the proposed metric for mobility profiles and the need to tailor the perturbative methods to the features of mobility profiles in order to improve privacy without completely losing utility.
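The gap between entropy and entropy rate can be illustrated with a toy example, sketched under the assumption of a first-order stationary Markov mobility model (the paper's model may differ in detail): two profiles with identical location histograms, and hence identical classic entropy, can differ sharply in temporal predictability.

```python
# Hedged illustration: same location histogram, same classic
# entropy, different entropy rate under a Markov mobility model.

import math

def entropy(p):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def entropy_rate(transition, stationary):
    """H = sum_i pi_i * H(row_i) for a stationary Markov chain."""
    return sum(pi * entropy(row) for pi, row in zip(stationary, transition))

stationary = [0.5, 0.5]          # same visiting frequencies in both cases

erratic = [[0.5, 0.5],           # next location independent of current
           [0.5, 0.5]]
routine = [[0.9, 0.1],           # strong home/work routine
           [0.1, 0.9]]

# Classic entropy says both profiles leak equally (1 bit per sample),
# but the entropy rate shows the routine profile is far more
# predictable, i.e. its real privacy level is lower.
print(entropy(stationary))                     # 1 bit in both cases
print(entropy_rate(erratic, stationary))       # 1 bit
print(entropy_rate(routine, stationary))       # ~0.47 bits
```

This is precisely why a time-aware metric is needed: any perturbative mechanism tuned only to the histogram would leave the routine user's mobility pattern exposed.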

The large-scale deployment of Internet resources has facilitated the accelerated development of Intelligent Transport Systems (ITS), including a great diversity of distributed applications such as non-safety infotainment vehicular applications. In vehicular networks, nodes can connect to the Internet through Road Side Units (RSUs), but a complete deployment of RSUs along the whole transport network seems unfeasible. We can therefore take advantage of multihomed devices using complementary access network technologies, such as satellite networks, in order to maintain connectivity. Several solutions have been implemented to exploit the multihoming and multipath capabilities of mobile nodes. However, in dynamic network scenarios involving multipath communication channels, the QoS requirements of these applications are not always globally managed and guaranteed. Moreover, the specific multimedia semantics of the transmitted data is not usually considered by the available transmission mechanisms and protocols. In this work, we present a Multipath TCP communication architecture that takes full advantage of the intrinsic multimedia QoS semantics, based on Deep Packet Inspection, in order to self-manage the available resources and provide a more compliant end-to-end transport service.