Centrality and influence spread are two of the most studied concepts in social network analysis. In recent years, centrality measures have attracted the attention of many researchers, generating a large and varied number of new studies about social network analysis and its applications. However, as far as we know, traditional models of influence spread have not yet been exhaustively used to define centrality measures according to influence criteria. Most of the existing work on this topic is based on the independent cascade model. In this paper we explore the possibilities of the linear threshold model for the definition of centrality measures to be used on weighted and labeled social networks. We propose a new centrality measure to rank the users of the network, the Linear Threshold Rank (LTR), and a centralization measure to determine to what extent the entire network has a centralized structure, the Linear Threshold Centralization (LTC). We appraise the viability of the approach through several case studies. We consider four different social networks to compare our new measures with two centrality measures based on relevance criteria and another centrality measure based on the independent cascade model. Our results show that our measures are useful for ranking actors and networks in a distinguishable way.
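
To make the influence criterion concrete, the following is a minimal sketch of spread under the linear threshold model, on top of which a rank like LTR can be built by scoring each node through the activation it triggers. The graph encoding, the uniform-threshold setup and the single-node seeding are illustrative assumptions, not the paper's exact definitions.

```python
# Sketch of linear threshold (LT) spread. Assumed encoding: a directed
# weighted graph given as {node: [(in_neighbor, weight), ...]} and a
# threshold in [0, 1] per node.
def lt_spread(in_edges, thresholds, seeds):
    """Return the set of nodes eventually activated from `seeds`."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v, incoming in in_edges.items():
            if v in active:
                continue
            weight_in = sum(w for u, w in incoming if u in active)
            if weight_in >= thresholds[v]:  # v's threshold is reached
                active.add(v)
                changed = True
    return active

# An influence-based score for node v could then be, for example:
# score(v) = len(lt_spread(in_edges, thresholds, {v}))
```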

In this study, we describe a new coordination mechanism for non-atomic congestion games that leads to a (selfish) social cost arbitrarily close to the non-selfish optimum. This mechanism incurs no additional cost, in contrast to tolls, which typically differ from the social cost as expressed in terms of delays.

We consider Web services defined by orchestrations in the Orc language and two natural quality of service measures: the number of outputs and a discrete version of the first response time. We first analyse those subfamilies of finite orchestrations in which the measures are well defined and consider their evaluation in both reliable and probabilistic unreliable environments. On those subfamilies in which the QoS measures are well defined, we consider a set of natural related problems and analyse their computational complexity. In general, our results draw a clear picture of the difficulty of computing the proposed QoS measures with respect to the expressiveness of the subfamilies of Orc. Only in a few cases are the problems solvable in polynomial time, which points out the computational difficulty of evaluating QoS measures even in simplified models.

Jutge.org is an open educational online programming judge designed for students and instructors, featuring a repository of problems that is well organized by courses, topics and difficulty. Internally, Jutge.org uses a secure and efficient architecture and integrates modern verification techniques, formal methods, static code analysis and data mining. Jutge.org has been used extensively during the last decade at the Universitat Politècnica de Catalunya to strengthen the learn-by-doing approach in several courses. This paper presents the main characteristics of Jutge.org and shows its use and impact in a wide range of courses covering basic programming, data structures, algorithms, artificial intelligence, functional programming and circuit design.

We propose the use of the angel-daemon framework to assess Coleman's power of a collectivity to act under uncertainty in weighted voting games. In this framework, uncertainty profiles describe the potential changes in the weights of a weighted game and fix the spread of the weights' change. For each uncertainty profile, a strategic angel-daemon game can be considered. This game has two selfish players, the angel and the daemon: the angel selects its action so as to maximize the effect on the measure under consideration, while the daemon acts oppositely. Together, the angel and the daemon give a balance between the best and the worst case. The angel-daemon games associated with Coleman's power are zero-sum games, and therefore the expected utilities of all the Nash equilibria are the same. In this way we can assess Coleman's power under uncertainty. Besides introducing the framework for this particular setting, we analyse basic properties and make some computational complexity considerations. We provide several examples based on the evolution of the voting rules of the EU Council of Ministers.
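
For reference, under standard notation (not necessarily the paper's), the measure being perturbed is Coleman's power of a collectivity to act: for a simple game v on the player set N with |N| = n,

```latex
A(v) \;=\; \frac{|W|}{2^{n}}, \qquad W \;=\; \{\, S \subseteq N \;:\; v(S) = 1 \,\},
```

i.e. the fraction of coalitions that are winning; the angel and the daemon then bracket this quantity over the admissible weight perturbations.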

The metric dimension of a graph G is the size of a smallest subset L ⊆ V(G) such that for any x, y ∈ V(G) with x ≠ y there is a z ∈ L such that the graph distance between x and z differs from the graph distance between y and z. Even though this notion has been part of the literature for almost 40 years, prior to our work the computational complexity of determining the metric dimension of a graph was still very unclear. In this paper, we show tight complexity boundaries for the Metric Dimension problem. We achieve this by giving two complementary results. First, we show that the Metric Dimension problem on planar graphs of maximum degree 6 is NP-complete. Then, we give a polynomial-time algorithm for determining the metric dimension of outerplanar graphs.
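
To illustrate the definition, a brute-force search for a smallest resolving set is easy to write; this sketch is exponential-time and meant only for tiny connected graphs, and the use of networkx is a convenience assumption.

```python
from itertools import combinations
import networkx as nx  # assumed available; any BFS routine would do

def metric_dimension(G):
    """Smallest set L whose distance vectors distinguish all vertices."""
    nodes = list(G)
    dist = dict(nx.all_pairs_shortest_path_length(G))  # connected G assumed
    for k in range(1, len(nodes) + 1):
        for L in combinations(nodes, k):
            signatures = {tuple(dist[v][z] for z in L) for v in nodes}
            if len(signatures) == len(nodes):  # every vertex resolved
                return set(L)

print(metric_dimension(nx.path_graph(5)))   # a path is resolved by one endpoint
print(metric_dimension(nx.cycle_graph(6)))  # a cycle needs two landmarks
```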

In this paper we propose a method to quantitatively analyze the effectiveness of the different measures taken to improve a course's evaluation process. Our proposal uses simple tools from economics and the social sciences. First, we analyze the effect of these measures on the different categories of grades obtained by the students, in relation to the increment of the faculty's workload, by means of a marginal cost-benefit approach. Second, we analyze inequality using different types of data disaggregation in several subpopulations related to the students' grades. Finally, we study the evolution of statistical indicators such as the mean and satisfaction. This method is applied to the study of the CS1 course at the Facultat d'Informàtica de Barcelona of the Universitat Politècnica de Catalunya over the first 5 years after the introduction of the new degree in Computer Engineering. The proposed methodology aims to introduce new techniques that allow an objective analysis of the impact of educational measures in any university course.

Machine clients are increasingly making use of the Web to perform tasks. While Web services traditionally mimic remote procedure call interfaces, a new generation of so-called hypermedia APIs works through hyperlinks and forms, in a way similar to how people browse the Web. This means that existing composition techniques, which determine a procedural plan upfront, are not sufficient to consume hypermedia APIs, which need to be navigated at runtime. Clients instead need a more dynamic plan that allows them to follow hyperlinks and use forms with a preset goal. Therefore, in this paper, we show how compositions of hypermedia APIs can be created by generic Semantic Web reasoners. This is achieved through the generation of a proof based on semantic descriptions of the APIs' functionality. To pragmatically verify the applicability of compositions, we introduce the notion of pre-execution and post-execution proofs. The runtime interaction between a client and a server is guided by proofs but driven by hypermedia, allowing the client to react to the application's actual state indicated by the server's response. We describe how to generate compositions from descriptions, discuss a computer-assisted process to generate descriptions, and verify reasoner performance on various composition tasks using a benchmark suite. The experimental results lead to the conclusion that proof-based consumption of hypermedia APIs is a feasible strategy at Web scale.

We propose the use of an angel-daemon framework to perform an uncertainty analysis of short-term macroeconomic models. The angel-daemon framework defines a strategic game where two agents, the angel and the daemon, act selfishly. These games are defined over an uncertainty profile, which gives a short and macroscopic description of a perturbed situation. The Nash equilibria of these games provide stable strategies in perturbed situations, giving a natural estimation of uncertainty. We apply the framework to the uncertainty analysis of linear versions of the IS-LM and the IS-MP models.
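
For orientation, a textbook linear IS-LM specification of the kind such an analysis targets looks as follows; the notation is generic, not the paper's:

```latex
\text{IS:}\quad Y = c_0 + c_1\,(Y - T) + I_0 - b\,r + G,
\qquad
\text{LM:}\quad \frac{M}{P} = k\,Y - h\,r,
```

where the angel and the daemon would control perturbations of selected exogenous components (for instance G or M).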

This paper studies the complexity of computing a representation of a simple game as the intersection (union) of weighted majority games, as well as the dimension or the codimension. We also present some examples with linear dimension and exponential codimension with respect to the number of players.

We introduce Celebrity games, a new model of network creation games. In this model players have weights (W being the sum of all the players' weights) and there is a critical distance β as well as a link cost α. The cost incurred by a player depends on the cost of establishing links to other players and on the sum of the weights of those players that remain farther away than the critical distance. Intuitively, the aim of any player is to be relatively close (at a distance less than β) to the rest of the players, mainly to those having high weights. The main features of celebrity games are that: computing the best response of a player is NP-hard if β > 1 and polynomial-time solvable otherwise; they always have a pure Nash equilibrium; the family of celebrity games having a connected Nash equilibrium is characterized (the so-called star celebrity games) and bounds on the diameter of the resulting equilibrium graphs are given; a special case of star celebrity games shares its set of Nash equilibrium profiles with the MaxBD games with uniform bounded distance β introduced in Bilò et al. [6]. Moreover, we analyze the Price of Anarchy (PoA) and of Stability (PoS) of celebrity games and give several bounds. These are that: for non-star celebrity games PoA = PoS = max{1, W/α}; for star celebrity games PoS = 1 and PoA = O(min{n/β, W/α}), but if the Nash equilibrium is a tree then the PoA is O(1); finally, when β = 1 the PoA is at most 2. The upper bounds on the PoA are complemented with some lower bounds for β = 2.
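
Reading the cost description literally, the cost of player i in an outcome graph G can be sketched as follows (consistent with the abstract, though not necessarily the paper's exact notation):

```latex
c_i(G) \;=\; \alpha \cdot \bigl|\{\text{links bought by } i\}\bigr|
\;+\; \sum_{j \,:\, d_G(i,j) > \beta} w_j .
```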

We analyze the computational complexity of the power measure in models of collective decision: the generalized opinion leader-follower model and the oblivious and non-oblivious influence models. We show that computing the power measure is #P-hard in all these models, and provide two subfamilies in which the power measure can be computed in polynomial time.

We study a network formation game where players wish to send traffic to other players. Players can be seen as nodes of an undirected graph whose edges are defined by contracts between the corresponding players. Each player can contract bilaterally with others to form bidirectional links or unilaterally break contracts to eliminate the corresponding links. Our model is an extension of the traffic routing model considered in Arcaute, E., Johari, R., Mannor, S. (IEEE Trans. Automat. Contr. 54(8), 1765–1778, 2009), in which we do not require the traffic to be uniform and all-to-all. Player i specifies the amount of traffic t_ij ≥ 0 that it wants to send to player j. Our notion of stability is pairwise Nash stability: no node wishes to deviate unilaterally and no pair of nodes can obtain benefit from deviating bilaterally. We give a characterization of the topologies that are pairwise Nash stable for a given traffic matrix. We prove that the best response problem is NP-hard and devise a myopic dynamics in which the deviation of the active node can be computed in polynomial time. We show the convergence of the dynamics to pairwise Nash configurations when the contracting functions are anti-symmetric and affine, and that the expected convergence time is polynomial in the number of nodes when the node activation process is uniform.

We propose the use of an angel-daemon framework to perform an uncertainty analysis of short-term macroeconomic models with exogenous components. An uncertainty profile U is a short and macroscopic description of a potentially perturbed situation. The angel-daemon framework uses U to define a strategic game where two agents, the angel and the daemon, act selfishly, having different goals. The Nash equilibria of those games provide the stable strategies in perturbed situations, giving a natural estimation of uncertainty.

Given any two vertices u, v of a random geometric graph G(n, r), denote by dE(u, v) their Euclidean distance and by dG(u, v) their graph distance. The problem of finding upper bounds on dG(u, v) conditional on dE(u, v) that hold asymptotically almost surely has received quite a bit of attention in the literature. In this paper we improve the known upper bounds for values of r = ω(√(log n)) (that is, for r above the connectivity threshold). Our result also improves the best known estimates on the diameter of random geometric graphs. We also provide a lower bound on dG(u, v) conditional on dE(u, v).
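
An empirical look at the two distances is easy to set up; the parameters, the unit-square scaling and the use of networkx are illustrative assumptions rather than the paper's model.

```python
import math
import networkx as nx

n, r = 2000, 0.08  # illustrative values on the unit square
G = nx.random_geometric_graph(n, r)
pos = nx.get_node_attributes(G, "pos")
u, v = 0, 1
dE = math.dist(pos[u], pos[v])             # Euclidean distance
try:
    dG = nx.shortest_path_length(G, u, v)  # graph distance (hop count)
    print(f"dE = {dE:.3f}, dG = {dG}, dG*r/dE = {dG * r / dE:.2f}")
except nx.NetworkXNoPath:
    print("u and v lie in different components")
```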

This paper analyzes quantitatively the different measures taken in the continuous assessment of the CS1 course at the Facultat d’Informàtica de Barcelona of the Universitat Politècnica de Catalunya since 2010, when the new degree in Informatics Engineering began. The effect of those measures on the marks obtained by the students is analyzed with respect to the increment of the course load for the faculty members. These two values are compared along time through a classical marginal cost-benefit approach. Using tools commonly accepted in the social sciences, this study is complemented with a preliminary analysis of inequality. Both approaches aim at introducing new techniques for an objective analysis of the impact of teaching measures.

The Generalized Second Price (GSP) auction used typically to model sponsored search auctions does not include the notion of budget constraints, which is present in practice. Motivated by this, we introduce the different variants of GSP auctions that take budgets into account in natural ways. We examine their stability by focusing on the existence of Nash equilibria and envy-free assignments. We highlight the differences between these mechanisms and find that only some of them exhibit both notions of stability. This shows the importance of carefully picking the right mechanism to ensure stable outcomes in the presence of budgets.
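
For readers unfamiliar with the mechanism: in GSP, bidders are ranked by bid and the winner of each slot pays the next bid down, per click. A minimal budget-free sketch follows (names and numbers are made up); the paper's variants differ precisely in how budget constraints are layered on top of this payment rule.

```python
# Minimal GSP allocation without budgets (illustrative values only).
bids = {"a": 5.0, "b": 3.0, "c": 1.0}  # hypothetical per-click bids
ctrs = [0.5, 0.3]                      # click-through rates of two slots
ranked = sorted(bids, key=bids.get, reverse=True)
for slot, ctr in enumerate(ctrs):
    winner = ranked[slot]
    price = bids[ranked[slot + 1]]     # pays the next-highest bid per click
    print(f"slot {slot}: {winner} pays {price:.2f}/click, "
          f"expected spend {ctr * price:.2f}")
```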

We consider two collective decision-making models associated with influence games [2]: the oblivious and non-oblivious influence decision models. The difference is that in oblivious influence models the initial decision of the actors that are neither leaders nor independent is never taken into account, while in the non-oblivious models it is taken when the leaders cannot exert enough influence to deviate from it. Following [1], we consider the power measure for an actor in a decision-making model. Power is ascribed to an actor when, by changing its inclination, the collective decision can be changed. We show that computing the power measure is #P-hard in both oblivious and non-oblivious influence decision models. We present two subfamilies of (oblivious and non-oblivious) influence decision models in which the power measure can be computed in polynomial time.

Teaching computer science in degrees that are not computer science related presents an important challenge: to motivate the students and to achieve good average grades. Students' complaints typically stem from a lack of motivation: What is this subject useful for? This is especially relevant when the subject is not easy for the student to learn. In this paper we present the case of the computer courses in the Statistics degree taught at the Universitat de Barcelona (UB) and the Universitat Politècnica de Catalunya (UPC), two Catalan universities. We initially tried to reduce the complexity of their contents in order to obtain better average grades. Yet, it did not work out as expected. Therefore, we changed our strategy and, instead of making the contents easier (less complex), we changed the tools that were used to teach and tried to adapt them to the students' interests. In this particular case, we decided to use the R programming language, a language widely used by statisticians, to explain the basics of programming. We thus changed our strategy from less (simpler contents) to different (more elaborate and nontrivial contents adapted to meet the students' expectations).

An opinion leader-follower model (OLF) is a two-action collective decision-making model for societies, in which three kinds of actors are considered: "opinion leaders", "followers", and "independent actors". In OLF the initial decision of the followers can be modified by the influence of the leaders. Once the final decision is set, a collective decision is taken applying the simple majority rule. We consider a generalization of OLF, the gOLF models, which allow the collective decision to be taken by rules other than the simple majority rule. Inspired by this model, we define two new families of collective decision-making models associated with cooperative influence games: the "oblivious" and "non-oblivious" influence models. We show that gOLF models are non-oblivious influence models played on a two-layered bipartite influence graph. The satisfaction measure was introduced and studied together with OLF models. We analyze the computational complexity of the satisfaction measure for gOLF models and the other collective decision-making models introduced in the paper. We show that computing the satisfaction measure is #P-hard in all the considered models except for the basic OLF model, in which the complexity remains open. On the other hand, we provide two subfamilies of decision models in which the satisfaction measure can be computed in polynomial time. Exploiting the relationship with influence games, we can relate the satisfaction measure to the Rae index of an associated simple game. The Rae index is closely related to the Banzhaf value. Thus, our results also extend the families of simple games for which computing the Rae index and the Banzhaf value is computationally hard.

Consider the following geometric infection model: individuals are placed at random points according to a Poisson point process in some appropriate metric space. Each individual has two states, infected or healthy. Any infected individual passes the infection to any other individual at distance d according to a Poisson process whose rate is a function f(d) of d that decays as d increases. Any infected individual heals at rate 1. An epidemic is said to occur when, starting from one infected individual placed at some point, the infection has positive probability of lasting forever. Otherwise, we say extinction occurs. We investigate conditions on the exponent α under which the function f(d) = (d + 1)^{-α} leads to an epidemic.
Joint work with Xavier Perez and Nick Wormald.

We introduce the quad-kd tree: a general-purpose, hierarchical data structure for the storage of multidimensional points. Quad-kd trees include point quad trees and kd trees as particular cases, and therefore they could constitute a general framework for the study of fundamental properties of trees similar to them. Besides, quad-kd trees can be tuned by means of insertion heuristics and bucketing techniques to obtain trade-offs between their costs in time and space. We propose three such heuristics and we show analytically and experimentally their competitive performance. Our analytical results back the experimental outcomes and suggest that the quad-kd tree is a flexible data structure that can be tailored to the resource requirements of a given application.
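
A minimal sketch of the underlying idea, under the assumption that each node discriminates on a chosen subset of the K coordinates (one coordinate per node yields a kd tree, all K of them a point quadtree); the class and function names are ours, not the paper's.

```python
class QuadKdNode:
    def __init__(self, point, dims):
        self.point = point    # stored K-dimensional point
        self.dims = dims      # coordinates this node discriminates on
        self.children = {}    # comparison outcome -> child subtree

def insert(node, point, choose_dims):
    """Insert `point`; `choose_dims` is the insertion heuristic that picks
    which coordinates a newly created node discriminates on."""
    if node is None:
        return QuadKdNode(point, choose_dims(point))
    key = tuple(point[d] >= node.point[d] for d in node.dims)
    node.children[key] = insert(node.children.get(key), point, choose_dims)
    return node

# Example heuristic: always split on both coordinates (quadtree-like in 2D).
root = None
for p in [(5, 3), (2, 8), (7, 1)]:
    root = insert(root, p, lambda q: (0, 1))
```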

We analyze the computational complexity of the problem of deciding whether, for a given simple game, there exists the possibility of rearranging the participants of a set of j given losing coalitions into a set of j winning coalitions. We also look at the problem of turning winning coalitions into losing coalitions. We analyze the problem when the simple game is represented by a list of winning, losing, minimal winning or maximal losing coalitions.

The first programming course (Programming-1, CS1) in the Informatics Engineering Degree of the Facultat d'Informàtica de Barcelona was completely redesigned in 2006 in order to reinforce the learn-by-doing methodology. Over the following eight years several pedagogical measures, mostly related to continuous assessment, were introduced with the aim of increasing the pass rate of the course without lowering its high quality standards. This paper analyzes to what extent the added workload on faculty entailed by these measures affects the pass rate. We use a classical marginal cost-benefit approach from economics to compare these two values along time. This process allows us to relate the evolution of the pass rate of students to the workload of the faculty through a productivity curve, as well as to assess the impact of each pedagogical measure. We conclude that, for this course, continuous assessment is expensive. In fact, abstracting from short-term oscillations, the slope of the productivity curve is close to zero.
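
The marginal comparison behind that conclusion can be phrased as a discrete productivity ratio between consecutive editions of the course (our notation, for concreteness):

```latex
MP_t \;=\; \frac{P_t - P_{t-1}}{W_t - W_{t-1}},
```

where P_t is the pass rate and W_t the faculty workload in year t; a slope close to zero means that additional workload no longer buys additional pass rate.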

We have created and evaluated an algorithm capable of deduplicating and clustering exact- and near-duplicate photo and video media items that get shared on multiple social networks in the context of events. This algorithm works in an entirely ad hoc manner without requiring any pre-calculation. When people attend events, they increasingly share event-related media items publicly on social networks to let their contacts relive and witness the attended events. In the past, we have worked on methods to accumulate such public user-generated multimedia content in order to summarize events visually, for example, in the form of media galleries or slideshows. In this paper, first, we introduce social-network-specific reasons and challenges that cause near-duplicate media items. Second, we detail an algorithm for the task of deduplicating and clustering exact- and near-duplicate media items stemming from multiple social networks. Finally, we evaluate the algorithm's strengths and weaknesses, thoroughly compare its performance with the state-of-the-art feature detection algorithms SIFT, ASIFT and SURF, and show that for the given use case it performs almost equally well accuracy-wise but strongly outperforms them speed-wise.

Uncertainty profiles are used to study the effects of contention within cloud and service-based environments. An uncertainty profile provides a qualitative description of an environment whose quality of service (QoS) may fluctuate unpredictably.
For example, the performance of an application running on a virtual machine can be affected both by the way in which resources are allocated at run-time and by hardware contention issues. The aim of this paper is to model the influence that cloud (or service-based) environments can have on an application's performance. Uncertain environments are modelled by strategic games with two agents: a daemon is used to represent overload and high resource contention, while an angel is used to represent an idealised resource allocation situation with no underlying contention. Assessments of uncertainty profiles are useful in two ways: firstly, they provide a broad understanding of how environmental stress can affect an application's performance (and reliability); secondly, they allow the effects of introducing redundancy into a computation to be assessed.

A framework for assessing the robustness of long-duration repetitive orchestrations in uncertain evolving environments is proposed. The model assumes that service-based evaluation environments are stable over short time-frames only; over longer periods service-based environments evolve as demand fluctuates and contention for shared resources varies.
The behaviour of a short-duration orchestration E in a stable environment is assessed by an uncertainty profile U and a corresponding zero-sum angel-daemon game Γ(U).
Here the angel-daemon approach is extended to assess evolving environments by means of a subfamily of stochastic games. These games are called strategy oblivious because their transition probabilities are strategy independent. It is shown that the value of a strategy oblivious stochastic game is well defined and that it can be computed by solving a linear system. Finally, the proposed stochastic framework is used to assess the evolution of the Gabrmn IT system.

This work is a follow-up of results given in [1]. Here we present some computational complexity results for specific problems and simple games. For instance, we consider the complexity of determining trade robustness for a given simple game in the four natural explicit representations: winning, losing, minimal winning, and maximal losing. Our results show that the problem is solvable in polynomial time in some cases but NP-hard in others, depending on the representation of the input and the output. We also define the j-trade application for a given simple game and analyze how to find such a j-trade application in those natural forms of representation. We conclude by stating some conjectures and open problems. For instance, given a simple game, we consider how to compute the dimension and the codimension [2,3], and how to represent such a game as a union or an intersection of weighted games.

An opinion leader-follower (OLF) model is a two-action collective decision-making model for societies, including opinion leaders, who exert a certain influence over the decisions of other actors; followers, who might be convinced to modify their decisions; and independent actors [1]. We extended the OLF model by introducing two collective decision-making models associated with influence games [2], the oblivious and non-oblivious influence decision models. The difference is that in oblivious influence models the initial decision of the actors that are neither leaders nor independent is never taken into account, while in the non-oblivious models it is taken when the leaders cannot exert enough influence to deviate from it. The satisfaction measure, for an actor in a decision-making model, is the number of society's initial decisions for which the collective decision coincides with the initial decision of the actor. We show that computing the satisfaction measure is #P-hard in both oblivious and non-oblivious influence decision models. We present two subfamilies, for both types of influence decision models, in which the satisfaction measure can be computed in polynomial time.
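
Formally, taking the measure exactly as described (our notation): for a society of n actors with collective decision function d and initial decision vectors x ∈ {0,1}^n,

```latex
\mathrm{sat}(i) \;=\; \bigl|\{\, x \in \{0,1\}^{n} \;:\; d(x) = x_i \,\}\bigr| .
```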

Simple games are cooperative games in which the benefit that a coalition may obtain is always binary, i.e., a coalition may either win or lose. This paper surveys different forms of representation of simple games, and of some of their subfamilies such as regular games and weighted games. We analyze the forms of representation that have been proposed in the literature based on different data structures for sets of sets. We provide bounds on the computational resources needed to transform a game from one form of representation to another. This includes the study of the problem of enumerating the fundamental families of coalitions of a simple game. In particular, we prove that several changes of representation that require exponential time can be solved with polynomial delay, and we highlight some open problems.
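
As a small example of the gap between representations: a weighted game [q; w_1, ..., w_n] declares a coalition S winning exactly when its total weight reaches the quota,

```latex
S \text{ wins} \iff \sum_{i \in S} w_i \ge q,
\qquad \text{e.g. } [3;\,2,1,1] \text{ has minimal winning coalitions } \{1,2\} \text{ and } \{1,3\},
```

so a few weights can stand in for explicit coalition lists that may be exponentially longer.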

Influence games constitute a new approach to the study of simple games based on a model of spread of influence in a social network, where influence spreads according to the linear threshold model. Influence games capture the whole class of simple games. We consider, for influence games, the complexity of the problems related to parameters, properties and solution concepts considered for simple games. We also study extremal cases with respect to the demand of influence, and we show that, for these subfamilies, several problems become polynomial. Some applications inspired by influence games are also considered. For instance, we introduce collective choice models, we establish a connection with social network analysis, we apply power indices of cooperative games as centrality measures of social networks and we define new centrality measures, among other applications.

The first course on programming is fundamental in the Facultat d’Informàtica de Barcelona. After a major redesign of the Programming-1 course in 2006 to give it a more practical flavor, an increasing number of measures have been undertaken over the years to try to increase its pass rate while maintaining a fixed quality level. These measures, which can be roughly summarized as an important increase in assessment, imply an increase in the workload of both students and instructors that does not always correspond to the increase in pass rate they provide. In this paper, and within the context of this course, we quantitatively analyze the amount of work required from faculty to implement the series of measures, and we conclude that, within this course, continuous assessment is expensive and has reached its limit.

The 2015 International Conference on Risk Analysis (ICRA6) is an international forum for disseminating recent advances in the field of Risk Analysis, with applications for risk assessment and risk management in a variety of fields: Life, Biology, Environmental Sciences, Public Health, Economics, Insurance, Finance, Reliability of Engineering and Technical, Biological & Biomedical Systems. The conference also includes the 6th Workshop on Risk Management and Insurance (RISK2015), which is held every two years in Spain. ICRA6 and RISK2015 were held in Barcelona and the sessions took place at the Universitat Pompeu Fabra on May 26-29, 2015.

We introduce a model for uncertainty in the IS-LM linear macroeconomic model with exogenous parameters. An uncertainty profile u is a short and macroscopic description of a stressed situation. We use u to define a strategic game where two agents, the angel and the daemon, act selfishly and have different goals. The Nash equilibria of this game provide the stable strategies in stressed situations, giving a natural estimation of risk. We apply this analysis to a linear version of the IS-LM model and analyse the structure of the Nash equilibria in some particular games.

We consider a simple and altruistic multiagent system in which the agents are eager to perform a collective task but where their real engagement depends on the willingness to perform the task of other influential agents. We model this scenario by an influence game, a cooperative simple game in which a team (or coalition) of players succeeds if it is able to convince enough agents to participate in the task (to vote in favor of a decision). We take the linear threshold model as the influence model. We first show the expressiveness of influence games, proving that they capture the whole class of simple games. Then we characterize the computational complexity of various problems on influence games, including measures (length and width), values (Shapley-Shubik and Banzhaf) and properties (of teams and players). Finally, we analyze those problems for some particular extremal cases, with respect to the propagation of influence, obtaining tighter complexity characterizations.
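
The reduction from influence to a simple game can be phrased as follows (our notation): with F(X) the set of agents eventually convinced by coalition X under the linear threshold model, and q the number of participating agents required for the task,

```latex
v(X) \;=\;
\begin{cases}
1 & \text{if } |F(X)| \ge q,\\
0 & \text{otherwise.}
\end{cases}
```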

Celebrity games, a new model of network creation games, are introduced. The specific features of this model are that players have different celebrity weights and that a critical distance is taken into consideration. The aim of any player is to be close (at distance less than the critical one) to the others, mainly to those with high celebrity weights. The cost of each player depends on the cost of establishing direct links to other players and on the sum of the weights of those players at distance greater than the critical distance. We show that celebrity games always have pure Nash equilibria and we characterize the family of subgames having connected Nash equilibria, the so-called star celebrity games. Exact bounds for the PoA of non-star celebrity games and a bound of O(n/β + β) for star celebrity games are provided. The upper bound on the PoA can be tightened when restricted to particular classes of Nash equilibria graphs. We show that the upper bound is O(n/β) in the case of 2-edge-connected graphs and 2 in the case of trees.

In an often-retweeted Twitter post, entrepreneur and software architect Inge Henriksen described the relation of Web 1.0 to Web 3.0 as: “Web 1.0 connected humans with machines. Web 2.0 connected humans with humans. Web 3.0 connects machines with machines.” On the one hand, an incredible amount of valuable data is described by billions of triples, machine-accessible and interconnected thanks to the promises of Linked Data. On the other hand, REST is a scalable, resource-oriented architectural style that, like the Linked Data vision, recognizes the importance of links between resources. Hypermedia APIs are resources, too, albeit dynamic ones, and unfortunately, neither Linked Data principles nor the REST-implied self-descriptiveness of hypermedia APIs sufficiently describe them to allow for long-envisioned realizations like automatic service discovery and composition. We argue that describing inter-resource links, similarly to what the Linked Data movement has done for data, is the key to machine-driven consumption of APIs. In this paper, we explain how the description format RESTdesc captures the functionality of APIs by explaining the effect of dynamic interactions, effectively complementing the Linked Data vision.