My research interests lie at the intersection of data privacy and machine learning. I am currently investigating privacy risks stemming from biomedical data, web data, and machine learning as a service (MLaaS). My research expertise includes genomic privacy, privacy in online social networks, location privacy, and probabilistic graphical models.

In my Ph.D. thesis (available here), I analyzed the variety of interdependencies in individuals' data and their impact on privacy. I showed that privacy cannot be protected individually and independently of others (owing to our lack of control over correlated data), and that we need new definitions of privacy, in its social context and as a common good.

Internet users can download software for their computers from app stores (e.g., Mac App Store and Windows Store) or from other sources, such as the developers' websites. According to our representative study, most Internet users in the US rely on the latter, which makes them directly responsible for the content they download. To enable users to detect whether downloaded files have been corrupted, developers can publish a checksum together with the link to the program file; users can then manually verify that the checksum matches the one they obtain from the downloaded file. In this paper, we assess the prevalence of such behavior among the general Internet population in the US (N=2,000), and we develop easy-to-use tools for users and developers that automate both checksum generation and verification. Specifically, we propose an extension to the recent W3C specification for sub-resource integrity in order to provide integrity protection for download links. We also develop an extension for the popular Chrome browser that automatically computes and verifies checksums of downloaded files, and an extension for the WordPress CMS that developers can use to easily attach checksums to their remote content. Our in situ experiments with 40 participants demonstrate the usability and effectiveness issues of manual checksum verification, and show that users find our extension desirable.
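The core check that such a browser extension automates is straightforward. Below is a minimal sketch, assuming the developer publishes a SHA-256 digest next to the download link; the function names are illustrative, not the extension's actual API:

```python
import hashlib
import hmac

def file_checksum(data: bytes, algo: str = "sha256") -> str:
    """Hex digest of the downloaded file's contents."""
    h = hashlib.new(algo)
    h.update(data)
    return h.hexdigest()

def verify_download(data: bytes, published_digest: str, algo: str = "sha256") -> bool:
    """Compare the locally computed digest against the developer's published one.
    hmac.compare_digest avoids timing side channels; for a public digest a plain
    string comparison would also do."""
    return hmac.compare_digest(file_checksum(data, algo),
                               published_digest.strip().lower())
```

The manual workflow the paper studies is exactly this computation done by hand in a terminal, which is where the usability issues arise.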

Hashtags have emerged as a widely used concept of popular culture and campaigns, but their implications for people’s privacy have not been investigated so far. In this paper, we present the first systematic analysis of privacy issues induced by hashtags. We concentrate in particular on location, which is recognized as one of the key privacy concerns in the Internet era. By relying on a random forest model, we show that we can infer a user’s precise location from hashtags with an accuracy of 70% to 76%, depending on the city. To remedy this situation, we introduce a system called Tagvisor that systematically suggests alternative hashtags if the user-selected ones constitute a threat to location privacy. Tagvisor realizes this by means of three conceptually different obfuscation techniques and a semantics-based metric for measuring the consequent utility loss. Our findings show that obfuscating as little as two hashtags already provides a near-optimal trade-off between privacy and utility in our dataset. This in particular renders Tagvisor highly time-efficient, and thus practical in real-world settings.

The decreasing costs of molecular profiling have provided the biomedical research community with a plethora of new types of biomedical data, enabling a breakthrough towards more precise and personalized medicine. However, the release of these intrinsically highly sensitive data poses severe new privacy threats. While biomedical data are largely associated with our health, there also exist various correlations between different types of biomedical data, along the temporal dimension, and between family members. So far, however, the security community has focused on privacy risks stemming from genomic data, largely overlooking the manifold interdependencies involving other biomedical data.

In this paper, we present a generic framework for quantifying the privacy risks in biomedical data taking into account the various interdependencies between data (i) of different types, (ii) from different individuals, and (iii) at different times. To this end, we rely on a Bayesian network model that allows us to take all aforementioned dependencies into account and run exact probabilistic inference attacks very efficiently. Furthermore, we introduce a generic algorithm for building the Bayesian network, which encompasses expert knowledge for known dependencies, such as genetic inheritance laws, and learns previously unknown dependencies from the data. Then, we conduct a thorough inference risk evaluation with a very rich dataset containing genomic and epigenomic data of mothers and children over multiple years. Besides effective probabilistic inference, we further demonstrate that our Bayesian network model can also serve as a building block for other attacks. We show that, with our framework, an adversary can efficiently identify the parent-child relationships based on methylation data with a success rate of 95%.
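The exact inference at the heart of such a framework can be illustrated on the smallest possible network, with one hidden node and one observed node; this toy sketch enumerates the joint distribution, whereas the actual framework chains many interdependent nodes. All numbers are made up for illustration:

```python
def enumerate_posterior(prior, likelihood, observed):
    """Exact inference by enumeration in a two-node Bayesian network
    hidden -> observed: P(h | o) is proportional to P(h) * P(o | h)."""
    joint = {h: prior[h] * likelihood[h][observed] for h in prior}
    z = sum(joint.values())
    return {h: p / z for h, p in joint.items()}

# Toy instance: a genomic variant (hidden) influencing a methylation
# level, observed as "low" or "high".
prior = {"AA": 0.49, "Aa": 0.42, "aa": 0.09}
likelihood = {
    "AA": {"low": 0.9, "high": 0.1},
    "Aa": {"low": 0.5, "high": 0.5},
    "aa": {"low": 0.1, "high": 0.9},
}
posterior = enumerate_posterior(prior, likelihood, "high")
```

Observing a "high" methylation level shifts probability mass towards the genotypes that make it likely, which is exactly how correlated biomedical data leaks information about unreleased data.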

The development of positioning technologies has resulted in an increasing amount of mobility data being available. While bringing a lot of convenience to people’s lives, such availability also raises serious concerns about privacy. In this paper, we concentrate on one of the most sensitive pieces of information that can be inferred from mobility data, namely social relationships. We propose a novel social relation inference attack that relies on an advanced feature learning technique to automatically summarize users’ mobility features. Compared to existing approaches, our attack is able to predict the social relation between any two individuals, and it does not require the adversary to have any prior knowledge of existing social relations. These advantages significantly increase the applicability of our attack and the scope of the privacy assessment. Extensive experiments conducted on a large dataset demonstrate that our inference attack is effective, and achieves a 13% to 20% improvement over the best state-of-the-art scheme. We propose three defense mechanisms – hiding, replacement, and generalization – and evaluate their effectiveness for mitigating the social link privacy risks stemming from mobility data sharing. Our experimental results show that both the hiding and replacement mechanisms outperform generalization. Moreover, hiding and replacement achieve a comparable trade-off between utility and privacy, the former preserving better utility and the latter providing better privacy.

Since the first whole-genome sequencing, the biomedical research community has made significant steps towards a more precise, predictive, and personalized medicine. Genomic data is nowadays widely considered privacy-sensitive and is consequently protected by strict regulations and released only after careful consideration. Various additional types of biomedical data, however, are not shielded by any dedicated legal means and are consequently disseminated much less thoughtfully. This holds true in particular for DNA methylation data, one of the most important and well-understood epigenetic elements influencing human health.

In this paper, we show that, contrary to the prevailing perception, releasing one's DNA methylation data causes privacy issues akin to releasing one's actual genome. We show that a small subset of methylation regions influenced by genomic variants is already sufficient to infer parts of someone's genome, and to further map this DNA methylation profile to the corresponding genome. Notably, we show that such re-identification is possible with 97.5% accuracy, relying on a dataset of more than 2500 genomes, and that we can reject all wrongly matched genomes using an appropriate statistical test. We provide means for countering this threat by proposing a novel cryptographic scheme for privately classifying tumors that enables a privacy-respecting medical diagnosis in a common clinical setting. The scheme relies on a combination of random forests and homomorphic encryption, and it is proven secure in the honest-but-curious model. We evaluate this scheme on real DNA methylation data, and show that we can keep the computational overhead at acceptable values for our application scenario.
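The matching step of such a re-identification attack can be sketched as a simple scoring procedure. This is only a stand-in: the paper's attack infers genotypes from methylation regions and rejects weak matches with a proper statistical test, whereas the fixed threshold below is a hypothetical simplification:

```python
def match_score(inferred, candidate):
    """Fraction of inferred genotype positions agreeing with a candidate genome."""
    agree = sum(1 for a, b in zip(inferred, candidate) if a == b)
    return agree / len(inferred)

def best_match(inferred, genomes, threshold=0.8):
    """Return the best-scoring candidate genome, or None if even the best score
    is too low to be trusted (a crude proxy for a statistical rejection test)."""
    score, name = max((match_score(inferred, g), n) for n, g in genomes.items())
    return name if score >= threshold else None
```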

In a world where traditional notions of privacy are increasingly challenged by the myriad companies that collect and analyze our data, it is important that decision-making entities are held accountable for unfair treatments arising from irresponsible data usage. Unfortunately, a lack of appropriate methodologies and tools means that even identifying unfair or discriminatory effects can be a challenge in practice.
We introduce the unwarranted associations (UA) framework, a principled methodology for the discovery of unfair, discriminatory, or offensive user treatment in data-driven applications. The UA framework unifies and rationalizes a number of prior attempts at formalizing algorithmic fairness. It uniquely combines multiple investigative primitives and fairness metrics with broad applicability, granular exploration of unfair treatment in user subgroups, and incorporation of natural notions of utility that may account for observed disparities.
We instantiate the UA framework in FairTest, the first comprehensive tool that helps developers check data-driven applications for unfair user treatment. It enables scalable and statistically rigorous investigation of associations between application outcomes (such as prices or premiums) and sensitive user attributes (such as race or gender). Furthermore, FairTest provides debugging capabilities that let programmers rule out potential confounders for observed unfair effects.
We report on the use of FairTest to investigate and in some cases address disparate impact, offensive labeling, and uneven rates of algorithmic error in four data-driven applications. As examples, our results reveal subtle biases against older populations in the distribution of error in a predictive health application and offensive racial labeling in an image tagger.
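One of the simplest association metrics such an investigation can start from is the ratio of positive-outcome rates across groups. FairTest itself combines several metrics with statistical significance testing, subgroup exploration, and confounder debugging; this sketch computes only the headline number:

```python
from collections import defaultdict

def outcome_rates(records):
    """records: iterable of (group, outcome) pairs with outcome in {0, 1}."""
    total, positive = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        total[group] += 1
        positive[group] += outcome
    return {g: positive[g] / total[g] for g in total}

def disparate_impact_ratio(records):
    """Min-over-max ratio of positive-outcome rates across groups: 1.0 means
    parity, values near 0 flag a strong disparity worth investigating."""
    rates = outcome_rates(records)
    return min(rates.values()) / max(rates.values())
```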

The rapid progress in human-genome sequencing is leading to a high availability of genomic data. These data are notoriously very sensitive and stable in time, and highly correlated among relatives. In this paper, we study the implications of these familial correlations on kin genomic privacy. We formalize the problem and detail efficient reconstruction attacks based on graphical models and belief propagation. With our approach, an attacker can infer the genomes of the relatives of an individual whose genome or phenotype is observed, relying notably on Mendel’s Laws, on the statistical relationships between genomic variants, and on those between the genome and the phenotype. We evaluate the effect of these dependencies on privacy with respect to the amount of observed variants and the relatives sharing them. We also study how the algorithmic performance evolves when we take these various relationships into account. Furthermore, to quantify the level of genomic privacy as a result of the proposed inference attack, we discuss possible definitions of genomic privacy metrics, and compare their values and evolution. Genomic data reveals Mendelian disorders and the likelihood of developing severe diseases such as Alzheimer’s. We also introduce the quantification of health privacy, specifically the measure of how well the predisposition to a disease is concealed from an attacker. We evaluate our approach on actual genomic data from a pedigree and show the threat extent by combining data gathered from a genome-sharing website and from an online social network.
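The Mendelian building block of such a reconstruction attack is easy to sketch: genotypes are minor-allele counts in {0, 1, 2}, each parent transmits the minor allele with probability 0, 1/2, or 1, and Bayes' rule inverts the direction of inference. The paper's belief propagation chains such factors across a whole pedigree and adds inter-variant correlations; this is only the single-variant, single-trio case:

```python
def transmit(g):
    """Probability that a parent with minor-allele count g transmits the minor allele."""
    return {0: 0.0, 1: 0.5, 2: 1.0}[g]

def child_dist(g_mother, g_father):
    """Distribution of the child's genotype given both parents (Mendel's laws)."""
    pm, pf = transmit(g_mother), transmit(g_father)
    return {0: (1 - pm) * (1 - pf),
            1: pm * (1 - pf) + (1 - pm) * pf,
            2: pm * pf}

def father_posterior(g_child, g_mother, maf):
    """Infer the unobserved father's genotype from the child and the mother,
    using a Hardy-Weinberg population prior for minor-allele frequency maf."""
    prior = {0: (1 - maf) ** 2, 1: 2 * maf * (1 - maf), 2: maf ** 2}
    joint = {gf: prior[gf] * child_dist(g_mother, gf)[g_child] for gf in (0, 1, 2)}
    z = sum(joint.values())
    return {gf: p / z for gf, p in joint.items()}
```

For example, a child with genotype 2 and a heterozygous mother rules out a father with genotype 0 entirely, which is precisely the kind of hard constraint that makes kin inference so effective.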

Co-location information about users is increasingly available online. For instance, mobile users more and more frequently report their co-locations with other users in the messages and in the pictures they post on social networking websites by tagging the names of the friends they are with. The users’ IP addresses also constitute a source of co-location information. Combined with (possibly obfuscated) location information, such co-locations can be used to improve the inference of the users’ locations, thus further threatening their location privacy: As co-location information is taken into account, not only a user’s reported locations and mobility patterns can be used to localize her, but also those of her friends (and the friends of their friends and so on). In this paper, we study this problem by quantifying the effect of co-location information on location privacy, considering an adversary such as a social network operator that has access to such information. We formalize the problem and derive an optimal inference algorithm that incorporates such co-location information, yet at the cost of high complexity. We propose some approximate inference algorithms, including a solution that relies on the belief propagation algorithm executed on a general Bayesian network model, and we extensively evaluate their performance. Our experimental results show that, even in the case where the adversary considers co-locations of the targeted user with a single friend, the median location privacy of the user is decreased by up to 62% in a typical setting. We also study the effect of the different parameters (e.g., the settings of the location-privacy protection mechanisms) in different scenarios.
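How co-location information tightens the adversary's estimate can be seen in a minimal sketch: if two users are known to have been together, the target's location posterior is proportional to the product of the two users' individual location distributions. The paper's optimal and approximate algorithms handle full mobility traces and obfuscation mechanisms; this is only the single-time-instant case with made-up numbers:

```python
def fuse_colocation(user_dist, friend_dist):
    """Posterior over the user's location given a co-location with a friend:
    proportional to the product of the two marginal distributions."""
    joint = {loc: p * friend_dist.get(loc, 0.0) for loc, p in user_dist.items()}
    z = sum(joint.values())
    return {loc: p / z for loc, p in joint.items()} if z else dict(user_dist)

# The user obfuscates her location (uniform over two cells), but her friend's
# check-in is concentrated: the fused posterior almost pins the user down.
user = {"cell1": 0.5, "cell2": 0.5}
friend = {"cell1": 0.9, "cell2": 0.1}
fused = fuse_colocation(user, friend)
```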

The dramatically decreasing costs of DNA sequencing have led more than a million people, to date, to have their genotypes sequenced. Moreover, these individuals increasingly make their genomic data publicly available, and thereby create unique privacy threats not only for themselves, but also for their relatives because of their DNA similarities. More generally, an entity that gains access to a significant fraction of sequenced genotypes from a given population might be able to infer even the genomes of unsequenced individuals by relying on available data.

In this paper, we propose a simulation-based model for quantifying the impact of continuously sequencing and publicizing personal genomic data on a population’s genomic privacy. Our simulation probabilistically models data sharing by individuals and additionally takes into account the influence on genomic privacy of geopolitical events such as migration, and sociological trends such as interracial marriage. As an example, we instantiate our simulation with a sample population of 1,000 individuals, and evaluate the evolution of privacy under different settings over either thousands of genomic variants or a subset of variants influencing the phenotype. Our findings notably demonstrate that an increasing sharing rate of genomic data in the future entails a substantial negative effect on the privacy of all older generations. Moreover, we find that mixed populations, due to their large genomic diversity, face a less severe erosion of genomic privacy over time than more homogeneous populations. However, even when no data is shared, the genomic privacy averaged over a large number of variants is already very low since mere population allele frequencies already reveal a lot of information about the values of the genomic variants. By focusing on a subset of sensitive variants, we observe a higher genetic diversity in the population. Thus, genomic-data sharing can be much more detrimental for the privacy of the most sensitive variants.

The continuous decrease in cost of molecular profiling tests is revolutionizing medical research and practice, but it also raises new privacy concerns. One of the first attacks against privacy of biological data, proposed by Homer et al. in 2008, showed that, by knowing parts of the genome of a given individual and summary statistics of a genome-based study, it is possible to detect if this individual participated in the study. Since then, a lot of work has been carried out to further study the theoretical limits and to counter the genome-based membership inference attack. However, genomic data are by no means the only or the most influential biological data threatening personal privacy. For instance, whereas the genome informs us about the risk of developing some diseases in the future, epigenetic biomarkers, such as microRNAs, are directly and deterministically affected by our health condition including most common severe diseases.

In this paper, we show that the membership inference attack also threatens the privacy of individuals contributing their microRNA expressions to scientific studies. Our results on real and public microRNA expression data demonstrate that disease-specific datasets are especially prone to membership detection, offering a true-positive rate of up to 77% at a false-positive rate of less than 1%. We present two attacks: one relying on the L1 distance and the other based on the likelihood-ratio test. We show that the likelihood-ratio test provides the highest adversarial success and we derive a theoretical limit on this success. In order to mitigate the membership inference, we propose and evaluate both a differentially private mechanism and a hiding mechanism. We also consider two types of adversarial prior knowledge for the differentially private mechanism and show that, for relatively large datasets, this mechanism can protect the privacy of participants in miRNA-based studies against strong adversaries without degrading the data utility too much. Based on our findings and given the current number of miRNAs, we recommend releasing only summary statistics of datasets containing at least a couple of hundred individuals.
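The simpler of the two attacks, based on the L1 distance, boils down to a few lines: the adversary checks whether the target's profile is closer to the study's released mean statistics than to a population reference. This is only a sketch of the decision rule; the likelihood-ratio variant with calibrated thresholds is what the paper actually shows to be strongest:

```python
def l1_distance(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

def in_study(target_profile, study_mean, population_mean):
    """Declare membership when the target's miRNA expression profile is closer
    (in L1 distance) to the study's summary statistics than to the general
    population's reference statistics."""
    return (l1_distance(target_profile, study_mean)
            < l1_distance(target_profile, population_mean))
```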

The decreasing cost of molecular profiling tests, such as DNA sequencing, and the consequent increasing availability of biological data are revolutionizing medicine, but at the same time create novel privacy risks. The research community has already proposed a plethora of methods for protecting genomic data against these risks.
However, the privacy risks stemming from epigenetics, which bridges the gap between the genome and our health characteristics, have been largely overlooked so far, even though epigenetic data such as microRNAs (miRNAs) are no less privacy sensitive.
This lack of investigation is attributed to the common belief that the inherent temporal variability of miRNAs shields them from being tracked and linked over time.

In this paper, we show that, contrary to this belief, miRNA expression profiles can be successfully tracked over time, despite their variability.
Specifically, we show that two blood-based miRNA expression profiles taken with a time difference of one week from the same person can be matched with a success rate of 90%. We furthermore observe that this success rate stays almost constant when the time difference is increased from one week to one year.
In order to mitigate the linkability threat, we propose and thoroughly evaluate two countermeasures: (i) hiding a subset of disease-irrelevant miRNA expressions, and (ii) probabilistically sanitizing the miRNA expression profiles. Our experiments show that the second mechanism provides a better trade-off between privacy and disease-prediction accuracy.
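Both countermeasures can be sketched together in a few lines. This is illustrative only: the paper selects which expressions to hide based on disease relevance and tunes the perturbation empirically against the linkability attack:

```python
import random

def sanitize(profile, hidden_indices=frozenset(), noise_sd=0.0, seed=0):
    """(i) Hide a chosen subset of miRNA expressions, and (ii) probabilistically
    perturb the remaining values with Gaussian noise."""
    rng = random.Random(seed)
    out = []
    for i, value in enumerate(profile):
        if i in hidden_indices:
            out.append(None)  # expression suppressed entirely
        else:
            out.append(value + rng.gauss(0, noise_sd))
    return out
```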

People increasingly have their genomes sequenced and some of them share their genomic data online. They do so for various purposes, including to find relatives and to help advance genomic research. An individual's genome carries very sensitive, private information such as its owner's susceptibility to diseases, which could be used for discrimination. Therefore, genomic databases are often anonymized. However, an individual's genotype is also linked to visible phenotypic traits, such as eye or hair color, which can be used to re-identify users in anonymized public genomic databases, thus raising severe privacy issues. For instance, an adversary can identify a target's genome using her known phenotypic traits and subsequently infer her susceptibility to Alzheimer's disease.

In this paper, we quantify, based on various phenotypic traits, the extent of this threat in several scenarios by implementing de-anonymization attacks on a genomic database of OpenSNP users sequenced by 23andMe. Our experimental results show that the proportion of correct matches reaches 23% with a supervised approach in a database of 50 participants. Our approach outperforms the baseline by a factor of four, in terms of the proportion of correct matches, in most scenarios. We also evaluate the adversary's ability to predict individuals' predisposition to Alzheimer's disease, and we observe that the inference error can be halved compared to the baseline. We also analyze the effect of the number of known phenotypic traits on the success rate of the attack. As progress is made in genomic research, especially for genotype-phenotype associations, the threat presented in this paper will become more serious.

Privacy is defined as the right to control, edit, manage, and delete information about oneself and decide when, how, and to what extent this information is communicated to others. Therefore, every person should ideally be empowered to manage and protect his own data, individually and independently of others. This assumption, however, barely holds in practice, because people are by nature biologically and socially interconnected. An individual's identity is essentially determined at the biological and social levels. First, a person is biologically determined by his DNA, his genes, that fully encode his physical characteristics. Second, human beings are social animals, with a strong need to create ties and interact with their peers. Interdependence is present at both levels. At the biological level, interdependence stems from genetic inheritance. At the social level, interdependence emerges from social ties. In this thesis, we investigate whether, in today's highly connected world, individual privacy is in fact achievable, or if it is almost impossible due to the inherent interdependence between people.

First, we study interdependent privacy risks at the social level, focusing on online social networks (OSNs), the digital counterpart of our social lives. We show that, even if an OSN user carefully tunes his privacy settings in order not to be present in any search directory, it is possible for an adversary to find him by using publicly visible attributes of other OSN users. We demonstrate that, in OSNs where privacy settings are not aligned between users and where some users reveal even a limited set of attributes, it is almost impossible for a specific user to hide in the crowd. Our navigation attack complements existing work on inference attacks in OSNs by showing how we can efficiently find targeted profiles in OSNs, which is a necessary precondition for any targeted attack. Our attack also demonstrates the threat to OSN-membership privacy.

Second, we investigate upcoming interdependent privacy risks at the biological level. More precisely, due to the recent drop in costs of genome sequencing, an increasing number of people are having their genomes sequenced and share them online and/or with third parties for various purposes. However, familial genetic dependencies induce indirect genomic privacy risks for the relatives of the individuals who share their genomes. We propose a probabilistic framework that relies upon graphical models and Bayesian inference in order to formally quantify genomic privacy risks. Then, we study the interplay between rational family members with potentially conflicting interests regarding the storage security and disclosure of their genomic data. We consider both purely selfish and altruistic behaviors, and we make use of multi-agent influence diagrams to efficiently derive equilibria in the general case where more than two relatives interact with each other. We also propose an obfuscation mechanism in order to reconcile utility with privacy in genomics, in the context where all family members are cooperative and care about each other's privacy.

Third, we study privacy-enhancing systems, such as anonymity networks, where users do not damage other users' privacy but are actually needed in order to protect privacy. In this context, we show how incentives based on virtual currency can be used, and their amount optimized, in order to foster cooperation between users and eventually improve everyone's privacy. We derive our analytical findings by relying upon Markov chains, game theory, and Markov decision processes. This last part demonstrates that other people can also play a beneficial role in privacy.

We conclude that the quest for online privacy is chimerical because of the lack of individual control over data. As a consequence, unless cooperation between people quickly expands, we should consider that online privacy is steadily vanishing, and start designing novel mechanisms for the upcoming post-privacy era. We should finally redefine privacy, which is, beyond an individual right, now part of the commons.

Over the last few years, the vast progress in genome sequencing has highly increased the availability of genomic data. Today, individuals can obtain their digital genomic sequences at reasonable prices from many online service providers. Individuals can store their data on personal devices, reveal it on public online databases, or share it with third parties. Yet, it has been shown that genomic data is very privacy-sensitive and highly correlated between relatives. Therefore, individuals' decisions about how to manage and secure their genomic data are crucial. People of the same family might have very different opinions about (i) how to protect and (ii) whether or not to reveal their genome. We study this tension by using a game-theoretic approach. First, we model the interplay between two purely-selfish family members. We also analyze how the game evolves when relatives behave altruistically. We derive closed-form Nash equilibria in different settings. We then extend the game to N players by means of multi-agent influence diagrams that enable us to efficiently compute Nash equilibria. Our results notably demonstrate that altruism does not always lead to a more efficient outcome in genomic-privacy games. They also show that, if the discrepancy between the genome-sharing benefits that players perceive is too high, they will follow opposite sharing strategies, which has a negative impact on the familial utility.

@inproceedings{humbert2014reconciling,
title={Reconciling {U}tility with {P}rivacy in {G}enomics},
author={Humbert, Mathias and Ayday, Erman and Hubaux, Jean-Pierre and Telenti, Amalio},
booktitle={Proceedings of the 13th Workshop on Privacy in the Electronic Society},
pages={11--20},
year={2014},
organization={ACM}
}

Direct-to-consumer genetic testing makes it possible for everyone to learn their genome sequences. In order to contribute to medical research, a growing number of people publish their genomic data on the Web, sometimes under their real identities. However, this is at odds not only with their own privacy but also with the privacy of their relatives. Because the genomes of relatives are highly correlated, some family members might oppose revealing any of the family's genomic data. In this paper, we study the trade-off between utility and privacy in genomics. We focus on the most relevant kind of variants, namely single nucleotide polymorphisms (SNPs). We take into account the fact that the SNPs of an individual contain information about the SNPs of his family members and that SNPs are correlated with each other. Furthermore, we assume that SNPs can have different utilities in medical research and different levels of sensitivity for individuals. We propose an obfuscation mechanism that enables the genomic data to be publicly available for research, while protecting the genomic privacy of the individuals in a family. Our genomic-privacy preserving mechanism relies upon combinatorial optimization and graphical models to optimize utility and meet privacy requirements. We also present an extension of the optimization algorithm to cope with the non-linear constraints induced by the correlations between SNPs. Our results on real data show that our proposed technique maximizes the utility for genomic research and satisfies family members' privacy constraints.
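A greatly simplified version of the obfuscation idea can be sketched as a greedy budgeted selection. The paper solves a combinatorial optimization with graphical-model constraints that capture SNP correlations; ignoring those correlations, and with made-up utility and sensitivity scores, a greedy variant looks like this:

```python
def obfuscate(snps, privacy_budget):
    """snps: list of (snp_id, research_utility, sensitivity) tuples.
    Greedily hide the SNPs with the lowest utility-per-sensitivity ratio
    until the total sensitivity of what remains revealed fits the family's
    privacy budget. Returns (revealed, hidden)."""
    revealed = sorted(snps, key=lambda s: s[1] / s[2], reverse=True)
    hidden = []
    while revealed and sum(s[2] for s in revealed) > privacy_budget:
        hidden.append(revealed.pop())  # drop the least valuable remaining SNP
    return revealed, hidden
```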

The rapid progress in human-genome sequencing is leading to a high availability of genomic data. This data is notoriously very sensitive and stable in time. It is also highly correlated among relatives. A growing number of genomes are becoming accessible online (e.g., because of leakage, or after their posting on genome-sharing websites). What, then, are the implications for kin genomic privacy? We formalize the problem and detail an efficient reconstruction attack based on graphical models and belief propagation. With this approach, an attacker can infer the genomes of the relatives of an individual whose genome is observed, relying notably on Mendel's Laws and statistical relationships between the nucleotides (on the DNA sequence). Then, to quantify the level of genomic privacy as a result of the proposed inference attack, we discuss possible definitions of genomic privacy metrics. Genomic data reveals Mendelian diseases and the likelihood of developing degenerative diseases such as Alzheimer's. We also introduce the quantification of health privacy, specifically the measure of how well the predisposition to a disease is concealed from an attacker. We evaluate our approach on actual genomic data from a pedigree and show the threat extent by combining data gathered from a genome-sharing website and from an online social network.

Mathias Humbert, Théophile Studer, Matthias Grossglauser, Jean-Pierre Hubaux.
Nowhere to Hide: Navigating around Privacy in Online Social Networks.
In Proceedings of the 18th European Symposium on Research in Computer Security (ESORICS), 2013.

In this paper, we introduce a navigation privacy attack, where an external adversary attempts to find a target user by exploiting publicly visible attributes of intermediate users. If such an attack is successful, it implies that a user cannot hide simply by excluding himself from a central directory or search function. The attack exploits the fact that most attributes (such as place of residence, age, or alma mater) tend to correlate with social proximity, which can be exploited as navigational cues while crawling the network. The problem is exacerbated by privacy policies where a user who keeps his profile private remains nevertheless visible in his friends' "friend lists"; such a user is still vulnerable to our navigation attack. Experiments with Facebook and Google+ show that the majority of users can be found efficiently using our attack, if a small set of attributes is known about the target as side information. Our results suggest that, in an online social network where many users reveal even a limited set of attributes, it is nearly impossible for a specific user to "hide in the crowd".
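The core of the navigation attack is a greedy walk that uses public attributes as navigational cues. A toy sketch on a hand-made graph with invented users and attributes (the real attack crawls a live OSN and handles dead ends more carefully than this):

```python
def navigate(graph, attrs, start, target_attrs, max_steps=20):
    """Greedy navigation: repeatedly move to the unvisited neighbor whose
    publicly visible attributes overlap most with the target's known traits."""
    current, visited = start, {start}
    for _ in range(max_steps):
        if target_attrs <= attrs[current]:  # all known traits match: found
            return current
        candidates = [n for n in graph[current] if n not in visited]
        if not candidates:
            return None  # dead end (the real attack would backtrack)
        current = max(candidates, key=lambda n: len(attrs[n] & target_attrs))
        visited.add(current)
    return None

graph = {"a": ["b", "c"], "b": ["a", "t"], "c": ["a"], "t": ["b"]}
attrs = {"a": {"geneva"}, "b": {"lausanne"}, "c": {"paris"},
         "t": {"lausanne", "epfl"}}
```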

Recent smartphones incorporate embedded GPS devices that enable users to obtain geographic information about their surroundings by providing a location-based service (LBS) with their current coordinates. However, LBS providers collect a significant amount of data from mobile users and could be tempted to misuse it, by compromising a customer's location privacy (her ability to control the information about her past and present location). Many solutions to mitigate this privacy threat focus on changing both the architecture of location-based systems and the business models of LBS providers. MobiCrowd does not introduce changes to the existing business practices of LBS providers; rather, it requires mobile devices to communicate wirelessly in a peer-to-peer fashion. To lessen the privacy loss, users seeking geographic information try to obtain this data by querying neighboring nodes, instead of connecting to the LBS. However, such a solution will only function if users are willing to share regional data obtained from the LBS provider. We model this collaborative location-data sharing problem with rational agents following threshold strategies. We first study agent cooperation using pure game theory, and then combine game theory with an epidemic model, enhanced to support threshold strategies, in order to address a complex multi-agent scenario. From our game-theoretic analysis, we derive cooperative and non-cooperative Nash equilibria and the optimal threshold that maximizes agents' expected utility.

Scrip is a generic term for any substitute for real currency; it can be converted into goods or services sold by the issuer. In the classic scrip system model, one agent is helped by another in return for one unit of scrip. In this paper, we present an upgraded model, the one-to-n scrip system, where users need to find n agents to accomplish a single task. We provide a detailed analytical evaluation of this system based on a game-theoretic approach. We establish that a nontrivial Nash equilibrium exists in such systems under certain conditions. We study the effect of n on the equilibrium, on the distribution of scrip in the system and on its performance. Among other results, we show that the system designer should increase the average amount of scrip in the system when n increases in order to optimize its efficiency. We also explain how our new one-to-n scrip system can be applied to foster cooperation in two privacy-enhancing applications.
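The dynamics of a one-to-n scrip system are easy to simulate. Below is a toy sketch with illustrative parameters (the paper derives its equilibria analytically rather than by simulation); note that the total amount of scrip is conserved, since a requester pays exactly one unit to each of its n helpers:

```python
import random

def simulate_scrip(n_agents=50, n_helpers=2, steps=2000, init_scrip=4, seed=1):
    """Toy one-to-n scrip exchange: at each step a random agent requests a task
    and, if it can afford it, pays 1 unit of scrip to each of n_helpers other
    agents. Returns the final scrip distribution."""
    rng = random.Random(seed)
    scrip = [init_scrip] * n_agents
    for _ in range(steps):
        requester = rng.randrange(n_agents)
        if scrip[requester] < n_helpers:
            continue  # cannot afford the task; in equilibrium, saves up instead
        helpers = rng.sample([a for a in range(n_agents) if a != requester],
                             n_helpers)
        scrip[requester] -= n_helpers
        for h in helpers:
            scrip[h] += 1
    return scrip
```

Running this for larger n shows why the designer should inject more scrip as n grows: with too little money in the system, more requesters are broke and fewer tasks get done.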

Mathias Humbert, Mohammad Hossein Manshaei, Julien Freudiger, Jean-Pierre Hubaux
Tracking Games in Mobile Networks.
In Proceedings of the 1st Conference on Decision and Game Theory for Security (GameSec), 2010.

Users of mobile networks can change their identifiers in regions called mix zones in order to defeat the attempt of third parties to track their location. Mix zones must be deployed carefully in the network to reduce the cost they induce on mobile users and to provide high location privacy. Unlike most previous works that assume a global adversary, we consider a local adversary equipped with multiple eavesdropping stations. We study the interaction between the local adversary deploying eavesdropping stations to track mobile users and mobile users deploying mix zones to protect their location privacy. We use a game-theoretic model to predict the strategies of both players. We derive the strategies at equilibrium in complete and incomplete information scenarios and propose an algorithm to compute the equilibrium in a large network. Finally, based on real road-traffic information, we numerically quantify the effect of complete and incomplete information on the strategy selection of mobile users and of the adversary. Our results enable system designers to predict the best response of mobile users with respect to a local adversary strategy, and thus to select the best deployment of countermeasures.