PETS 2019

Christiane Kuhn started PETS with a talk on Privacy Notions in Anonymous Communication. She’s interested in how the indistinguishability games used in modern crypto proofs can be extended to anonymity games with multiple participants, some of them collusive. Sender message unobservability foils an adversary trying to distinguish Alice sending Bob a message from Charlie sending Bob a message; for sender unlinkability, you have either Alice or Charlie sending a further message to Dave, and this is strictly stronger. She counts a total of 51 separate privacy notions, depending on whether messages can be leaked and whether receivers, senders or both are unobservable or unlinkable, and has worked out a hierarchy of which properties imply others.

Next was Hans Hanley talking on using differential privacy for guard relay selection in Tor. His problem is how to get the advantages of location-based path selection while minimising the information leaked about your location. His proposal, DPselect, uses differential privacy and is designed to improve on Counter Raptor which defends against attacks based on BGP hijacking (as when Indosat leaked 320k routes in 2014, affecting 44 Tor relays and 38 guard relays). Hans measured the privacy loss over time and found a max-divergence worst-case bound for a set of guards. He then crafted a DP defence that bounds the max-divergence.

Sajin Sasy has been working on Scaling Anonymous Communications Networks with Trusted Execution Environments. Before a Tor client can build a circuit, it needs a copy of the network consensus, which tells it where the relays are. For R relays and C clients this costs bandwidth of RC. People have tried to build peer-to-peer models, but they’ve all been attacked; PIR-Tor tried using private information retrieval but the variants were weak or impractical. Sajin has been trying to solve the problem via SGX, which he uses to create oblivious RAM to store network descriptors with which a client may build circuits using a bandwidth-weighted sampling mechanism but without opening up to an epistemic attack. His mechanism, ConsenSGX, can work well with about 2% of Tor relays using it.
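The bandwidth-weighted sampling step is easy to picture in isolation. Here is a toy Python sketch of choosing a relay with probability proportional to its bandwidth; the relay names and weights are invented, and none of the SGX/ORAM machinery is shown:

```python
import random

# Toy descriptors: (relay name, bandwidth weight). Both are invented for
# illustration; a real consensus entry carries far more state.
relays = [("relayA", 9000), ("relayB", 3000), ("relayC", 500), ("relayD", 100)]

def sample_relay(relays, rng=random):
    """Pick one relay with probability proportional to its bandwidth
    weight, as in Tor's bandwidth-weighted path selection."""
    names = [r[0] for r in relays]
    weights = [r[1] for r in relays]
    return rng.choices(names, weights=weights, k=1)[0]
```

In the real system this sampling would happen over oblivious storage inside the enclave, so the server learns nothing about which descriptors a client touched.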

Gerry Wan talked last, about Guard Placement Attacks on Path Selection Algorithms for Tor. Currently, an adversary who observes the Tor client-guard link can do website fingerprinting attacks. If you counter this with location-aware path selection, the adversary can insert malicious nodes close to you. Gerry has demonstrated attacks on Counter-Raptor and two other algorithms, showing they don’t provide very good protection. He found 10-20 guards were needed to get a probability of about one percent of being picked across 368 ASes with significant numbers of Tor nodes. He proposes a defence that flattens the path selection probability among reasonable guard candidates.

The keynote was given by Simson Garfinkel, explaining how the US Census Bureau plans to use differential privacy in the 2020 census of people and housing. (He has a Science article on this here.) The census is not allowed to publish anything that identifies the data of any individual or establishment; collected data must be kept confidential for 72 years and used only for statistical purposes until then. Disclosure avoidance has evolved with technology. In 2010, the aggregated “census edited file” (CEF) with 44 bits (1.7Gb) of confidential data on each resident was preprocessed into a “hundred-percent detail file” (HDF) that’s still confidential but used to produce pre-specified tabular summaries; people could also pay for special tabulations. The problem was too much public microdata: you get billions of simultaneous equations and can in theory solve for the private data. In 2003 Kobbi Nissim and Irit Dinur had explained this; the practical solution was differential privacy, worked out in 2006 by Cynthia Dwork, Frank McSherry, Kobbi Nissim and Adam Smith, where you add noise. Simson and colleagues have actually done database reconstruction on the published 2010 data, and found it takes four servers a month to reconstruct the 2010 HDF from the published tables; they get all variables right about 38% of the time, covering a bit under 20% of the population. So the 2010 approach just didn’t really work. It did retain some protection, because it swapped very identifiable households with households in other blocks, so not everyone was compromised. If they’d swapped all the households it would have been OK, but the users wouldn’t have put up with that; the fact that they gave exact population counts for a block was a real vulnerability. Dealing with database reconstruction piecemeal is hard; that’s the value of differential privacy.

It’s not a magic bullet though. How do you set epsilon? Where on the scale between no accuracy and no privacy do you want to sit? Given that, you can add the smallest amount of noise necessary for a given privacy outcome, and — more to the point — you can structure the noise so as to protect the statistics you value. Also, the noise affects small blocks more, which is what you need. In 2018 they did an end-to-end test reporting four tables. In 2020 they will roll out a full system where the CEF will be processed into a “microdata details file” (MDF) from which the tabulations will be derived. Foreseeable issues include that numbers won’t add up: the number of members of the separate Native American tribes won’t sum to the total of Native Americans, and that will have to be explained to the public. This will protect everyone, while the old system only protected people who were swapped, and it has to be done all at once. Every record may be modified subject to an overall privacy budget, so there’s no exact mapping between the CEF and the MDF. The first effort was block-by-block, an analogue of the local model of differential privacy; at district, county, state and national level the trade-offs are the same, and with an epsilon of 1 you get 0.997 accuracy (the nation isn’t more accurate as the errors add up).
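The basic mechanism underneath all of this is the textbook Laplace mechanism: release each count plus noise of scale sensitivity/epsilon. A minimal sketch (this is the standard mechanism, not the Census Bureau's actual code) shows why the same absolute noise matters proportionally more for small counts:

```python
import math
import random

def laplace_count(true_count, epsilon, sensitivity=1.0, rng=random):
    """Release a count plus Laplace noise of scale sensitivity/epsilon,
    the basic differentially private mechanism. Noise is drawn by
    inverse-CDF sampling from the Laplace distribution."""
    u = rng.random() - 0.5          # uniform on [-0.5, 0.5)
    if u == -0.5:                   # avoid log(0) on the measure-zero edge
        u = 0.0
    scale = sensitivity / epsilon
    sign = 1.0 if u >= 0 else -1.0
    return true_count - scale * sign * math.log(1.0 - 2.0 * abs(u))
```

With epsilon = 1 the noise is typically a unit or two: negligible on a county of 100,000, but substantial on a block of a dozen people, which is exactly where the protection is needed.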

The new top-down algorithm generates a national histogram without geographic identifiers, then sets out to build a geographic histogram top-down, such that the state figures add up to the national figures (this is needed for Congressional redistricting). The construction is then done recursively down through state, county, tract, block group and block, after which they generate the microdata. This can be done in parallel and enables sparsity discovery (e.g. there are very few people over 100 belonging to 5 or more races). The top-down approach turns out to be much more accurate, in that county data have less error than blocks, and national data have essentially no error. Simson has built a simulator you can play with; this leads to a suggested epsilon value between four and six, when you trade off the marginal social benefit of better stats with the marginal social costs of identity theft.
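The add-up constraint at the heart of the top-down construction can be illustrated with a toy reconciliation step. The real TopDown algorithm solves a constrained optimisation (with non-negativity and integrality constraints); this sketch just spreads the parent-child discrepancy evenly:

```python
def reconcile(parent_total, noisy_children):
    """Shift each child's noisy count equally so that the children sum
    exactly to the (already fixed) parent total. A toy stand-in for the
    constrained-optimisation step of the real TopDown algorithm."""
    gap = parent_total - sum(noisy_children)
    share = gap / len(noisy_children)
    return [c + share for c in noisy_children]
```

Applied recursively (nation, then state, county, tract, block group, block), this is why county figures end up less noisy than blocks while national figures carry essentially no error.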

In questions, Simson discussed the techniques used to minimise error by imputing returns for households that didn’t file a return, whether from tax returns or “hot deck imputation”, whereby you just pick a random card from the local deck and copy it. There’s now pushback against differential privacy, with criticism from scholars such as Steven Ruggles, who argue that quality will suffer, the law isn’t being satisfied, or the attacker model is too strong; he had a debate with one opponent who was aghast at the idea that a dozen people all aged 50 would have their ages perturbed, rather than just be left aged 50 (which opponents claim preserves privacy). There are limits; among group quarters, a prison won’t be turned into a college dorm, but if there are five dorms you might report four or six. Person-household joins are also hard; you can do the number of men on a block, or the number of households, but the number of children in households headed by a single man is more sensitive. An open problem is quality metrics; do you go with L1 accuracy, or with application-driven limits such as one-person-one-vote, the need for majority-minority districts under certain conditions, and equitable welfare distribution? And then there are the policy implications of using detailed race, ethnicity and citizenship data. There has also been a lot of work over the decades on other noise sources such as imputation, substitution, and citizen entry errors; they contribute more noise than differential privacy, but not in as useful ways. Restrictions include that all desired queries on the MDF (and their required accuracy) must be known in advance; they published a call for submissions but got none that made any sense. Also, you can’t test the system by re-running queries on raw data. Many users want highly accurate data on small areas, like a few blocks, and may feel they got better results with the 2010 data; they’re wrong – because of the swapping – but they don’t know the detail of that.
Minority statistics are less accurate (but it’s a state secret whether they are better or worse than before). So can we tune epsilon given all the other noise? We don’t know; epsilon hasn’t been chosen yet for 2020. But there will be published error metrics, which we haven’t had before, and lots of things that previously had to be suppressed now don’t have to be. We also don’t know whether there might be any leakage between 2020 and 2030, but historically people are not consistent even about race, and they move on average every seven years. As for side information, one of the advantages of moving to differential privacy is that they don’t have to enumerate available side information. And despite all the online stuff, 20-40% of people will ignore the request to go to the website; so they will hire 500,000 people, go door to door, knock and ask twice, then ask the neighbours, then look for data elsewhere. By comparison the differential privacy effort is 5 people and should be 15; Simson does programming as well as outreach.

Thank you for this excellent summary of my talk! There are a few minor corrections that I would like to make. The errors are all mine, and not from the summary.

First, I need to reiterate that the views and opinions expressed in this presentation are those of the author (me), and not the official views and opinions of the US Census Bureau. Although I did appear at PETS in my official capacity, I am not empowered to speak on behalf of the US Census Bureau; that is done at official Census functions like the Census Scientific Advisory Committee, and there is a public record of what was said there. In addition, there are now official decision memos. Two, dated July 1, 2019, contain the only official statements on the design and application of DP to the 2020 Census that have been made to date. All talks should include pointers to these now:

* With respect to the reconstruction of the 2010 HDF from the SF1 publication, I did not make it clear that we only reconstructed a subset of the variables — specifically geocode to the block, age, race, sex, and ethnicity. Specifically, we did not reconstruct the relationship to the householder, we did not reconstruct households, and we did not reconstruct the “detailed race” or “detailed tribe” codes.

* Although I did state that we would have been okay if we had swapped all of the households, subsequent discussion with the team at Census has led me to revise this statement. The problem with swapping all of the households is that the sampling zeros would remain, and a lot depends on how pairs of households are chosen for swapping. It is possible to devise a swapping which is reversible. For this reason, swapping alone is demonstrably not sufficient without additional conditions on the swapping algorithm.

* The statement “the noise affects small blocks more, which is what you need,” was sloppy on my part. The noise affects small *counts* proportionally more, because the noise is based on the scaled contribution of a single person. Therefore large blocks with small counts in specific categories (for example, a lone person of a specific age or race) are protected just as if they were the only person on the block.

* Although we are considering using some mechanisms where some “numbers won’t add up,” many numbers will still add up. Inconsistency will be the exception, not the rule.

* The epsilon values of “between four and six” were for a specific study, and not for the decennial census. The final global privacy-loss budget, as well as its allocation to major data products, will be determined by the Data Stewardship Executive Policy Committee, which consists of career senior executive staff at the Census Bureau. No decisions have been made to date, and none are hard-wired into the TopDown algorithm.

* One of my collaborators took issue with the statement that we should have 15 people on the differential privacy effort. That collaborator said we should have 30.

The afternoon session on stylometry was kicked off by Edwin Dauber, talking about Stylistic Authorship Attribution of Small, Incomplete Source Code Fragments. He collected a set of 104 GitHub programmers, randomly selecting files until he got 15-50 samples of their code. He trained a classifier with single-sample accuracy just under 50%, compared with the 1% he’d have got from random chance. If we filter predictions by confidence, we can do a lot better for some authors or samples.

Next was Asad Mahmood, who’s interested in how to avoid being detected by such techniques; his talk was entitled A Girl Has No Name. How can an author of a piece like this escape identification and reprisal? Asad has developed an automated authorship obfuscation tool called Mutant-X. It generates many versions of the input document by replacing words with synonyms in such a way as to preserve sentiment. The problem is transferability: it is not effective against unseen attribution systems.

Kassem Fawaz is working on audio privacy against multi-microphone sensors. There have been lots of papers on using phones as sonar, including gesture sensing via Doppler shifts and chest motion sensing via round-trip delay. Defences include jamming / obfuscation systems like PhyCloak, and to get past such measures the attacker can use multiple microphones to beamform. Kassem has already implemented this, using eight commodity microphones behind a wall, and found that even in the presence of an obfuscator he could get gestures and breathing. His latest proposal is to improve the obfuscation, by transmitting uncorrelated jamming signals in different directions. He evaluated the scheme with 4-6 distributed obfuscation speakers and 8-16 adversary mikes connected to Pixel phones; for details see the paper.

Nisarg Raval also does obfuscation for sensor privacy; as he wants to stop apps on devices inferring private information, he has to obfuscate data before it reaches an app. Unlike previous researchers, he wants to take account of the app: since many apps use ML, and neural networks are universal approximators, his insight is to use a neural network as the obfuscator too; in fact he trains it iteratively against attacker networks. He evaluated the system for utility (against the CIFAR-10 dataset), privacy (preventing writer identification via stylometry while still allowing handwritten digits to be read) and how well it works with real-world apps. In questions, there was an argument about whether such systems assume some white-box knowledge of the app’s classifiers.

Jonathan Rusert started the social-media session discussing Inadvertent Location Privacy Leaks on Twitter. Location leaks in all sorts of ways, such as textual references to events, or hashtags found at only one facility. Jonathan’s built a Bayesian classification tool called Jasoos that creates both spatial and temporal models of features that leak location. Jasoos can also be used to warn users of revealing tweets or delete them; it’s hard to do anything more systematic as you can’t tell in advance which words will become revealing when.

Janith Weerasinghe has been working on Linguistic Indicators of Mental Health Status on Twitter. The Samaritans had a “radar” app looking for suicide risks on Twitter, but pulled it after protest; people want agency over who has their mental health information. Janith looked at previous research on using language use to predict depression and PTSD and tried to figure out how they worked. He started from a Johns Hopkins dataset from 2015 that scraped messages reporting diagnoses, and then looked for previous posts by the same user; the early classifiers got 75-85% accuracy. Was there any expectation of privacy then? In many cases, no; many users talked openly about their issues. He analysed this in more detail and found that there are indeed linguistic features that suggest depression, but they have a high false-positive rate, so we must be careful of the base rate fallacy. It’s less clear how you field a mitigation strategy.

Bo Luo would like a way to score private information about to be posted on social networks, to stop people disclosing things that they later regret. He used a crawler to collect 7 million candidate tweets, filtered them, and got mTurkers to rate 29,000 of them for sensitivity. He then collected a second dataset and had 566 tweets analysed by grad students as ground truth. He found that the extremely sensitive tweets were reliably identified by most annotators. His goal was high recall rather than high precision, as the idea is simply to give people an “Are you sure?” prompt before releasing the message.

The talk on Lethe has been moved to the statistics session tomorrow, as it has to be given by video for visa reasons.

Paul Schmitt has been working on Oblivious DNS. Recursive DNS servers are a privacy weak point; hence the controversy about DNS over TLS. Paul has built a prototype, ODNS, that decouples the client’s identity from its queries by encrypting them for transmission to the authoritative server. In effect, he’s tunnelling DNS over DNS, and uses various tricks to minimise latency. He loaded a whole lot of pages to test page load time; the handful of slower sites were Facebook, Instagram and Craigslist. Live.com was actually a lot faster as it hit a different CDN. Such glitches suggest widespread anycast deployment; but some users might favour policy-based routing to get to resolvers that are less likely to record their traffic.

Georgia Fragkouli has been Morphing Packet Reports for Internet Transparency. What’s the best metric for Tor anonymity facing a less capable attacker than the global passive adversary? She proposes measuring leakage via cross-correlation, which corresponds to anonymity set size and the adversary’s ability to select the correct traffic flow for inspection. CAIDA data from 2018 suggest that 62% of flows are no more than 5-anonymous when observed for 10 minutes. Adding noise, as with differential privacy, would destroy transparency; we couldn’t verify service level agreements. Some things can be done with coarsening the time granularity, which generally works well, and perhaps adaptive binning; coordination among ISPs could also help.
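The flow-matching step behind a cross-correlation metric can be sketched in a few lines. This toy version uses plain Pearson correlation over packet-count time series with invented data, as a stand-in for the paper's actual estimator:

```python
import math

def cross_correlation(x, y):
    """Pearson correlation between two equal-length packet-count time
    series; values near 1 suggest the same flow observed at two points."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = math.sqrt(sum((a - mx) ** 2 for a in x))
    vy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (vx * vy)

def best_match(target, candidates):
    """The adversary's selection step: pick the candidate flow whose
    packet-count series correlates best with the observed target."""
    return max(range(len(candidates)),
               key=lambda i: cross_correlation(target, candidates[i]))
```

Coarsening the time granularity amounts to binning these series more coarsely before correlating, which is why it degrades the adversary's selection without destroying aggregate transparency.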

Geoffrey Alexander has been Detecting TCP/IP Connections via IPID Hash Collisions. Can an off-path attacker detect a TCP connection between a target machine and a Tor node? Yes – using a side-channel based on hash collisions in Linux IPIDs. It detects when Linux switches from one of 2048 global counters to a per-connection TCP counter; which counter is used depends on the destination IP address. You send a probe packet, then N packets from a spoofed address, then another probe, and if the difference between IPIDs is N+1 you have a collision. The new attack detects connections with a true positive rate of almost 85% and a false negative rate of almost 11%; it incorrectly reports a connection 5% of the time. With a toy Tor network where connections lasted 10 minutes, the true positive rate jumped to 96%. This vulnerability was patched out of the Linux kernel after responsible disclosure last year. Geoffrey’s coauthor Jed Crandall had found an earlier side-channel attack using the IP fragmentation cache in 2014.
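The probe arithmetic is simple enough to sketch. This toy helper shows only the IPID comparison, with 16-bit wraparound; all the raw-socket machinery for actually sending probes and spoofed packets is omitted:

```python
def shares_counter(ipid_before, ipid_after, n_spoofed):
    """After one probe, N packets spoofed from the victim's address, and a
    second probe, an IPID increment of exactly N+1 means the spoofed
    traffic hit the same global counter as our probes -- a hash collision.
    IPIDs are 16-bit, so compute the increment modulo 2**16."""
    delta = (ipid_after - ipid_before) % 65536
    return delta == n_spoofed + 1
```

In practice the test is repeated to filter out background traffic that also increments the shared counter.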

Ghada Arfaoui has investigated The privacy of the TLS 1.3 protocol. Its goals are to authenticate the server, and optionally the client; content confidentiality; and data integrity. But what about privacy? Issues include linking two sessions and learning a party’s session partner; she formalises this in terms of the circumstances in which virtual principals can be identified with previous ones. She presents a model for a full TLS handshake and an extended one for TLS with session resumption. TLS achieves privacy in this model with the exception of some inherent limitations.

Johannes Becker started Wednesday’s sessions with a talk on tracking anonymous bluetooth devices. Bluetooth 4 introduced LE and claims to support privacy; this means that the MAC address changes from time to time. Johannes used a software radio to record the Bluetooth advertising channel and studied it for tokens that are sufficiently persistent and unique for tracking. As tokens change about every 15 minutes, you need about 40 bits. He found various issues, starting with address carryover: devices update tokens asynchronously, so you can often track them across changes. Windows 10 makes this easy, as it changes address only every 60 minutes but payloads persist longer than that, so joining up addresses is trivial; the Microsoft Surface Pen leaks its permanent MAC address on the public advertising channel; and iOS and macOS devices leak timing information about device activity in advertising messages. In conclusion, while most devices use randomised addresses to stop tracking, the implementation is usually lousy. On Android and iOS you can get a properly fresh address by switching Bluetooth off and on again; with Microsoft kit it’s harder, as you have to go into Device Manager (and not use their pen at all).

Erik Sy was next with A QUIC Look at Web Tracking. Google’s QUIC protocol is designed to replace TLS over TCP in HTTP/3, and as it’s in Chrome it already accounts for 7% of web traffic. TLS over TCP suffers from handshake delay, head-of-line blocking and protocol entrenchment: changes to the protocol, or even an implementation, break all sorts of legacy systems and middleboxes. Fast client identification is important in the context of real-time bidding. QUIC promises to fix these problems and support mobility too, enabling sessions to persist as people roam between access points; this involves caching a server hello token which contains the last-seen IP address of the client, encrypted by the server. There’s also a server config that contains a public key, which can also be personalised to track clients. The downsides include making third-party tracking feasible; it takes a browser restart to end a tracking period. Feasible countermeasures might include aligning QUIC tracking with cookie policies; not using user-unique public keys (perhaps using OCSP or certificate transparency to spot servers that issue too many public keys); bounding cache lifetimes to much less than the one week in Blink and in TLS session resumption; and disabling third-party tracking by limiting the reuse of third-party QUIC state to revisits to the same first party. He concluded that we should not trust advertising companies to build privacy-friendly transport protocols.

Giridhari Venkatadri has been Investigating sources of PII used in Facebook’s targeted advertising. Facebook advertisers can choose up to 15 attributes for the ad engine to match; it returns size estimates for each audience. Can these mechanisms leak personal information, or be used to discriminate? Well, transparency is limited, so Giridhari set out to identify Facebook’s sources. He uploaded all 2,800 landline numbers from Northeastern University and found that 58% matched Facebook users. Some subtlety is needed as the minimum size estimate Facebook will give out is 20; he circumvents rounding by finding the rounding threshold empirically, deleting one number and then adding the number he’s investigating. He uses other techniques to test for and circumvent noise addition. They disclosed this responsibly to Facebook, who responded that the ability to check whether a single phone number was targetable was not considered to be a vulnerability. One author provided a mobile number to Facebook Messenger for two-factor authentication, and found that this was used for ad targeting (though when it was supplied as a WhatsApp identifier, it wasn’t). They also deduced that Facebook matches PII, e.g. when others upload it as contact information, and they didn’t deny it; they said that the phone numbers were owned by whoever had uploaded them.
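The differencing trick can be illustrated with a toy model of the size estimates. The minimum of 20 is from the talk; the rounding step of 10 is an invented illustration value, and the real attack also has to cope with noise addition:

```python
def rounded_estimate(exact_size, minimum=20, step=10):
    """Toy model of an audience size estimate: nothing is reported below a
    minimum (20, per the talk), otherwise the size rounds down to a step
    (10 here is an invented illustration value)."""
    if exact_size < minimum:
        return 0
    return exact_size - exact_size % step

def probe(threshold_audience_size, candidate_matches):
    """Differencing attack: pad the audience so it sits one user below a
    rounding threshold; the estimate then jumps exactly when the candidate
    phone number matches a Facebook account."""
    exact = threshold_audience_size + (1 if candidate_matches else 0)
    return rounded_estimate(exact)
```

With a padding audience of 29 matched numbers, the estimate stays at 20 if the candidate doesn't match and jumps to 30 if it does, so a single targetable phone number is exposed.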

Martino Trevisan surveyed 4 Years of EU Cookie Law: Results and Lessons Learned. He built a tool from an instrumented Chrome that checks URLs for trackers that violate the directive, and made 179k website visits, collecting almost 200Gb of data. He checked whether cookie bars were displayed, the impact of varying browser and location, and how things changed over four years. He tabulated violations by sector: the great majority of news sites break the law, while government websites mostly comply, as do most sites in science and education. There’s not much variation within the EU, but outside it things are much worse. The most pervasive trackers belong to the Internet giants: googleads.g.doubleclick.net is already present in 22% of pages even before the user gives consent, and the top 10 sites account for 40% of violations. Consent is mostly cosmetic; nothing happens if you click on it. Finally, about 60% of websites have been violating the law steadily for the last four years, and this is stable across countries and devices too – an enforcement failure, as nobody’s in charge of chasing violators.

Emiliano de Cristofaro talked on LOGAN: Membership Inference Attacks Against Generative Models. Was data from a specific target used to train a classifier? More generally, given f(data), is x in data? Emiliano’s approach is to detect overfitting and exploit it to build a model that infers membership. This carries across to generative models, which can randomly generate realistic samples from a model. In a white-box attack, the adversary inputs all the datapoints from a suspected input set into the model’s discriminator, sorts them, takes the top scores and assumes these were used for training. In a black-box attack, he trains a model from the GAN’s output and proceeds as before. The white-box attack works almost all the time, and the black-box one 40% of the time (60% with side knowledge of 20% of the records used in training).
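The white-box attack essentially reduces to a sort. A sketch (the discriminator is passed in as a plain scoring function; the real attack uses the trained GAN's discriminator):

```python
def infer_members(score, candidates, n_train):
    """White-box LOGAN-style membership inference as described: score
    every candidate record with the GAN's discriminator (here, any
    scoring function), sort, and flag the top n_train as suspected
    training members. Overfitted discriminators rate records they were
    trained on highest."""
    ranked = sorted(candidates, key=score, reverse=True)
    return set(ranked[:n_train])
```

The black-box variant first trains a shadow model on samples drawn from the target GAN and then runs the same procedure against that shadow discriminator.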

Balazs Pejo was next on Together or Alone: The Price of Privacy in Collaborative Learning. Two rational participants with lots of data might increase the accuracy of their models if they shared data; but at what cost in privacy? Balazs has a model where you can set privacy as a real number between 0 and 1, and search for Nash equilibria in between. There’s a trivial Nash equilibrium at (1,1) and there may be another too depending on the parameters; a player with more data benefits only a little from sharing, so collaboration is more likely when the players have about the same amount of data. He demonstrates how the approach could be used in recommender systems.

Martin Haerterich reported work inspired by Emiliano’s paper above, on Monte Carlo and Reconstruction Membership Inference Attacks against Generative Models. Are there other ways of identifying the samples used to train GANs? You can think of GAN training as a minimax game between two players, like a forger and the police. He follows Hayes’ model, in which an attack consists of discriminating between members of the training set and the test set with accuracy greater than 50%; he’s interested not just in single-member inference but also in the m-member case of interest to regulators. He has several attacks: if the model learned the data well enough, it will output points that are close to training records, and this can be exploited with Monte Carlo integration, especially if the model was overfitted. He attacks variational autoencoders by rewarding reconstructions close to the current training-data record; inputs with a small reconstruction error were typically training-set members. Also, if the learning task is easy, membership inference is not much of a challenge anyway, so it’s only an issue with fairly hard data. The moral is to evaluate overfitting with generative models and to quantify model leakage.

The first usability talk was on Skip, Skip, Skip, Accept!!! A Study on the Usability of Smartphone Manufacturer Provided Default Features and User Privacy; it was given by Barney Craggs as the authors had visa issues. Smartphone defaults such as location tracking have privacy implications, and can be hard to change; are users even aware? A study got 27 student respondents to set up and configure a smartphone so as to disable location features, restrict an app default feature, limit ad tracking, restrict diagnostic reports for Android or disable analytics data sharing for iOS. Four users were both aware of privacy and technically proficient; some were totally hopeless, and do what apps tell them; a third group try to make their lives easy by not caring too much; and a fourth were privacy aware but resigned, and bumbled their way through things. Digging deeper, there’s a diversity of cognitive styles: head-in-the-sand, trial-and-error, help seekers who see problem solving as social, survivalists, people who try to work things through from basics, and people who try once and give up. Users want unambiguous information, clear user policies, a standard privacy interface and ethical sincerity from developers. iOS users trusted their phones more than Android users (even before the recent publicity) and were more likely to believe that sharing data with Apple would improve the platform.

Nina Gerber is Investigating People’s Privacy Risk Perception, and asked 938 German subjects to evaluate new technologies (smart home and smart health) against existing ones (social networks) across nine comparable risk scenarios. Specific risk scenarios were considered to be more severe than abstract characteristics; the worst were stalking and burglary, while data analysis was seen as least severe. A warning of “possible harm” had the same effect as risk specificity. The risks were broadly comparable across scenarios, although the less likely but more severe risks formed a separate cluster.

Nathan Malkin has been studying Privacy Attitudes of Smart Speaker Users. What do users know and feel about the data smart speakers collect? Would they delete it? How should it be used? He designed a browser extension that would download all their devices’ interactions, asking questions about five of them in a survey. 116 US mTurkers were recruited, 70% Alexa and 30% Google, who had owned the device for 14 months on average, with about 2,000 transactions. Users had made half the transactions themselves, family members a third, and accidental activations about 10% (the remainder were by guests and others). Accidental activations appear to be the main privacy hazard (on one occasion, a conversation transcript was mailed to a contact). Most people accepted a week’s retention but most didn’t accept a year, let alone forever (the current policy). The median desired retention period was 28 days; over 75% would enable auto-deletion if offered by the vendor (only just over 50% if offered by a third-party browser extension). Most participants didn’t know they could view and delete old recordings; most wouldn’t do so, claiming it’s not worth it (although they wanted a quarter of the recordings they heard to be deleted). People were also much less comfortable with recordings of their children being stored. Most would approve use cases such as improving device functionality, but not ads. A third of participants don’t approve of Google hiring people to annotate their recordings to improve functionality, which happens. Overall, trust in assistants appears to be all or nothing.

Nata Barbosa talked on Predicting Individual Users’ Smart Home Privacy Preferences and Their Changes; this was given remotely for visa reasons. Nata set out to predict allow/deny preferences for smart home devices, identify preference-changing circumstances, and put a dollar value on privacy. He got survey participants to evaluate a number of scenarios, and to bid for privacy as an extra fee, or for its absence in return for a discount or cashback. Most people resisted paying more; one said he wouldn’t buy a product if he had to pay more for privacy. Secondary users are generally not OK, but some exceptions apply. Consumers are loss averse in that they value privacy more when they have it. More details of the models are in the paper; it seems that models trained on attitudes rather than on actual behaviour are tricky, perhaps because of the privacy paradox. In questions, he remarked that privacy attitudes vary across devices: surveillance cameras are the most sensitive and lightbulbs the least.

Alexandros Mittos presented a paper on Systematizing Genome Privacy Research which tries to figure out where the genomic privacy community is heading, the limitations and technical challenges, and the field’s relevance to real concerns. He uses nine criteria: the type of data input, genomic assumptions, storage location, use of third parties, security assumptions, privacy mechanisms, metrics for privacy-preserving and vanilla versions, utility loss, and whether they support long-term security. He started with the 197 papers linked from genomeprivacy.org in 2017, excluded attacks, and grouped the rest into six thematic groups. These were personal genomic testing, relatedness, access and storage control, genomic data sharing, papers about outsourcing to the cloud, and statistics. He reduced the library to a set of representative papers, and worked through to a set of open problems. These are the lack of long-term security; assumptions of semi-honest adversaries and no entity collusion; assumptions about data representations; lack of attention to utility loss; inability to scale to thousands of researchers; and irrelevance to the private market. He also surveyed the 262 paper authors, finding 92 with a technical background and more than one publication, and asked them about these open problems. 21 answered anonymously; they saw the hardest as long-term security but none had really plausible answers. Utility loss was a more divisive problem, as was relevance to consumer genetic testing. On the plus side, Genome PETS can alleviate some legal / policy restrictions.

Alexandra-Mihaela Olteanu is working on The (Co-)Location Sharing Game. Users and their devices leak each others’ location via co-location information, and people have different privacy preferences. Only half of users (from an n=250 mTurk study) thought their posts could affect their friends’ privacy, but 57.6% were aware that their friends’ posts could affect them; overall a quarter were very concerned about location privacy and a half were somewhat concerned. She designed a game between social-network friends sharing and tagging with some myopic players; they can share location, co-location, both or nothing, with parameters from real preference factors. There’s a vicious-circle effect: a user can be forced into sharing if they know their friend likes to share. This behaviour is also seen in the real world, leading to over-sharing of both locations and co-locations.

Anja Lehmann‘s talk was on Oblivious Pseudonymization-as-a-Service. The idea is to provide consistent and secure pseudonyms where data get pushed by diverse entities. Rather than giving everyone the same secret key or an instance of the same HSM, she wants everyone to use the same service; but how can that be trusted? The idea is an oblivious service, so the service operator doesn’t learn inputs or outputs. However a nym that’s stable across apps creates a growing linkability risk. Hence the chameleon pseudonym: unlinkable when stored, and controllable linkability in use. There are separate pseudonyms for each attribute of the subject, a shuffled table of attribute pairs, and a cryptographic mechanism for transforming pseudonyms for joins. In questions, the scheme could double the size of a data lake, as each uploaded attribute would have a corresponding pseudonym.
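The core building block, an oblivious pseudorandom function, can be sketched in a few lines. This is a toy illustration under my own assumptions (tiny made-up group parameters, a naive hash-to-group, invented function names), not Lehmann’s actual construction: the client blinds its input, the service applies its secret key without ever seeing the input, and the client unblinds to obtain a stable pseudonym.

```python
import hashlib
import secrets

# Toy safe-prime group: p = 2q + 1, with g generating the order-q subgroup.
# These parameters are far too small for real use -- illustration only.
p, q, g = 23, 11, 2

def hash_to_group(m: bytes) -> int:
    # Naive hash-to-group (g ** H(m)); real OPRFs need a proper hash-to-group.
    e = int.from_bytes(hashlib.sha256(m).digest(), "big") % q
    return pow(g, e, p)

def pseudonym_direct(k: int, m: bytes) -> int:
    # What the PRF computes end-to-end: H(m) ** k.
    return pow(hash_to_group(m), k, p)

def client_blind(m: bytes):
    # Client picks a random blinding exponent r and sends H(m) ** r.
    r = secrets.randbelow(q - 1) + 1
    return pow(hash_to_group(m), r, p), r

def server_evaluate(k: int, blinded: int) -> int:
    # Service raises the blinded element to its secret key; it never
    # sees the identifier m or the final pseudonym.
    return pow(blinded, k, p)

def client_unblind(evaluated: int, r: int) -> int:
    # Strip the blinding: exponentiate by r^-1 mod q (group order).
    return pow(evaluated, pow(r, -1, q), p)

k = 7  # the service's secret PRF key
blinded, r = client_blind(b"alice@example.com")
nym = client_unblind(server_evaluate(k, blinded), r)
assert nym == pseudonym_direct(k, b"alice@example.com")
```

Because the service only ever handles blinded values, it learns neither the identifier nor the resulting pseudonym, yet the same input always maps to the same nym.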

Benjamin Kuykendall is working on Cryptography for #MeToo. How do you match complaints of sexual harassment that name the same perpetrator, in a world where principals are only partially trusted? In #MeToo, accusers worked with an anonymously-editable Google spreadsheet, which protected accusers to some extent but gave alleged perpetrators no privacy. Can we do better? Benjamin’s idea is a private-forum testing algorithm that detects when a user has been accused five times, logs accusers in case of abuse, and computes the quorum in a distributed way.
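The matching logic can be illustrated with a deliberately naive sketch. Unlike the paper’s scheme, which computes the quorum in a distributed way, this toy relies on a single trusted counter, and the names (`Matcher`, `QUORUM`) are my own:

```python
import hashlib
from collections import defaultdict

QUORUM = 5  # reveal a match only once five independent accusations agree

class Matcher:
    def __init__(self, salt: bytes):
        self.salt = salt
        self.complaints = defaultdict(list)

    def file(self, accuser_id: str, perpetrator: str) -> bool:
        # Store only a salted hash of the accused name, plus the accuser
        # (logged in case of abuse); report True once the quorum is met.
        key = hashlib.sha256(self.salt + perpetrator.lower().encode()).hexdigest()
        self.complaints[key].append(accuser_id)
        return len(self.complaints[key]) >= QUORUM

m = Matcher(salt=b"per-deployment-secret")
outcomes = [m.file(f"accuser-{i}", "J. Doe") for i in range(5)]
# only the fifth complaint triggers a match
```

The cryptographic point of the real scheme is precisely to avoid the trusted counter this sketch depends on.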

Sami Zhioua has been Finding a Needle in a Haystack. Website fingerprinting research makes assumptions that often don’t hold in the field, such as that page loads can be split. His approach is to use a rank table of statistically improbable features to generate a similarity score, and a Bloom filter for efficiency. Looking at VPN traffic, he gets a true positive rate of 80-90% and a false positive rate of 10-20% on various tests on web pages. He got very poor results for malware samples though, as they tend to be rather similar to one another.
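The paper’s exact feature encoding isn’t given here, but the Bloom-filter side is standard. A minimal membership filter, with made-up packet-feature strings, might look like:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions over a fixed bit array."""

    def __init__(self, size: int = 1024, hashes: int = 3):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size)

    def _positions(self, item: str):
        # Derive k positions by hashing the item with different prefixes.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item: str) -> bool:
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
for feature in ["len:1500,dir:out", "len:52,dir:in"]:  # invented features
    bf.add(feature)
assert "len:1500,dir:out" in bf
```

Bloom filters can return false positives but never false negatives, which is what makes them suitable as a fast pre-filter ahead of the similarity scoring.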

Noah Apthorpe was next, Keeping the Smart Home Private with Smart(er) Traffic Shaping. Many smart home devices from toys to medical devices don’t even use TLS; but once they do, what can an eavesdropper infer? Lots – even at the gross level, traffic spikes highlight occupant activity, while tools exist to look for particular devices. Home device fingerprinting and website fingerprinting aren’t the same; an attacker can learn a lot from activity bursts within a single trace, such as waking up or the arrival of visitors. Noah’s idea is stochastic traffic padding, which inserts extra periods of mimic activity at random. Constant-rate padding should not be necessary as activity is pretty sparse, so he trains a generative Markov model. His implementation uses a Raspberry Pi as a router; he proposes a security metric of the ratio of correct to attempted activity inferences.
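Sampling a padding schedule from a generative Markov model could look roughly like this; the two-state chain and its transition probabilities below are invented for illustration, not Noah’s trained model:

```python
import random

# Hypothetical two-state Markov model of device activity (idle/active);
# the transition probabilities are made up for illustration.
TRANSITIONS = {
    "idle":   {"idle": 0.95, "active": 0.05},
    "active": {"idle": 0.30, "active": 0.70},
}

def sample_padding_schedule(steps: int, seed: int = 0) -> list[bool]:
    """Return one boolean per time interval: True = inject mimic traffic."""
    rng = random.Random(seed)
    state, schedule = "idle", []
    for _ in range(steps):
        # Walk the chain: move to "active" with the current state's probability.
        state = "active" if rng.random() < TRANSITIONS[state]["active"] else "idle"
        schedule.append(state == "active")
    return schedule

schedule = sample_padding_schedule(100)
```

Because the mimic bursts are drawn from the same kind of process as real activity, an eavesdropper cannot easily tell padding periods from genuine ones, while the overall overhead stays far below constant-rate padding.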

Se Eun Oh discussed Extraction, Classification, and Prediction of Website Fingerprints with Deep Learning. She’s been evaluating website fingerprinting techniques that use deep learning under various conditions, starting with an autoencoder followed by compression, feeding into k-NN, SVM and k-FP. This combination can give a training time of minutes rather than tens of minutes, with TPR around 90% and FPR around 10%. She’s learned that websites are getting more complex, and the hard ones to fingerprint are the simple ones and the dynamic ones.

Sanjit Bhat has developed A Data-Efficient Website Fingerprinting Attack Based on Deep Learning. He uses a number of low-power metadata features with a high-power dilated causal ResNet that combines direction information, timing information and human-designed features. He used the Rimmer et al dataset and evaluated his system against Deep Fingerprinting, getting a false-positive rate 2.5% lower and a true-positive rate 8% higher. In the closed-world model, he needs five times less data to get 95% accuracy.

Guevara Noubir started Thursday afternoon with a talk on Mitigating Location Privacy Attacks on Mobile Devices using Dynamic App Sandboxing. He’s been working on side-channel attacks by malicious apps since 2013, studying keylogging, tracking and exfiltration. He has a framework called Matrix that sandboxes apps; it’s extensible to all sensitive APIs and allows auditing, for which his service PrivoScope lets a user visualise an app’s past behaviour. The idea is to enable users to explore and set their location privacy preferences for different apps, which can be given synthetic location data consistent with phone movement and with user policy.
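Generating synthetic location data consistent with plausible phone movement might be sketched as follows; the function name, parameters, and the simple random-walk model are my own assumptions, not Matrix’s API:

```python
import math
import random

def synthetic_track(lat: float, lon: float, steps: int,
                    speed_mps: float = 1.4, interval_s: float = 10,
                    seed: int = 0) -> list[tuple[float, float]]:
    """Fake a walking-pace trajectory from a policy-approved start point."""
    rng = random.Random(seed)
    heading = rng.uniform(0, 2 * math.pi)
    track = [(lat, lon)]
    for _ in range(steps):
        heading += rng.gauss(0, 0.3)           # gentle random turns
        d = speed_mps * interval_s / 111_320   # metres -> degrees of latitude
        lat += d * math.cos(heading)
        lon += d * math.sin(heading) / math.cos(math.radians(lat))
        track.append((lat, lon))
    return track

# e.g. a fake stroll starting near central London (made-up coordinates)
track = synthetic_track(51.5, -0.13, steps=50)
```

The key property is that consecutive points are separated by realistic walking distances, so an app sees coherent movement rather than obviously fabricated jumps.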

Sebastian Zimmeck is working on Scaling Privacy Compliance Analysis to a Million Apps. Since the Snapchat disclosure, he has crawled the Play Store for about a million apps, starting with static analysis, extraction of text related to location, and feature engineering: the analysis tries to check that the code does what the text says (which they can do about 70% of the time). He uncovered a number of compliance issues; unrated apps are particularly bad in their practices, while many issues arise around policy silences and opaque third-party code.

Ravi Borgaonkar has found a New Privacy Threat on 3G, 4G, and Upcoming 5G AKA Protocols. IMSI-catchers work because of protocol flaws in the authentication and key agreement (AKA) protocols in 2G, 3G and 4G (Ravi found the flaw in 4G). These have been fixed in 5G, but new ones open up. In 5G (ETSI TS 133.102), there’s a leak if the sequence number is based on a counter rather than time; attackers can learn sequences of least significant bits of sequence numbers. This enables him to track users. It takes about a minute to learn ten bits of sequence number. There is now ongoing work in the standards body to fix this, but it’s complex as systems have to continue to support 3G and 4G.

Sam Teplov has been looking at Apple’s continuity protocol; his talk title is Handoff All Your Privacy. You can randomise the MAC address all you want, but if there’s still linkable content there, people can be tracked. Apple lets you continue a browsing session from your Macbook on your iPhone, and the protocol’s proprietary, so the only way to evaluate it was to reverse it. It runs over Bluetooth Low Energy (versions of which are rated to 100-400m). Nearby messages tell whether you’re in screen lock, wearing a watch, or using Facetime, and let you correlate two random MAC addresses to the same device. When you navigate to the wifi settings page, another set of Bluetooth messages is triggered to your other devices with battery status, signal strength and so on; these also defeat randomisation. The same happens when you join a closed wifi network or use handoff-enabled apps. In aggregate, all these little leaks can be used to track devices. There’s also a macOS bug that discloses the global MAC address; this has been reported for fixing. Sam has tested the tracking attacks for a week with real users, and they work. There’s a question about whether cleartext sequence numbers should be used at all, and this has also been reported as a bug.

Mohammad Naseri has been Investigating Privacy Leaks Exposed by the Android Accessibility Service. This service is for users with disabilities, but lots of apps use it to do haptics for everyone, such as LastPass. He’s discovered a vulnerability whereby a malicious accessibility service can be swapped in. He extended APIsense’s Bee app with a new experimental flavour and added a trojan accessibility service; he found that 36/50 finance apps and 40/50 social apps were vulnerable. He has developed a tool to scan apps and evaluate them. A more detailed study of the app zoo found 2,815 apps using accessibility, many of which had dangerous permissions set.

The rump session started with Rob Jansen of Navy Labs talking about Shadow, a new anonymous communication experiment simulator that emulates the Internet, allowing anonymous apps to hang off network sockets and interact. It will be available as a community tool, and anything you can run on Linux you’ll be able to run on Shadow. Micah Sherr and Roger Dingledine are also involved.

Sylvan Besencon (email: s.bes@pm.me) is an anthropologist interested in information security, studying how crypto protocols are built, standardised, implemented and maintained over time.

George from the Tor project is interested in DoS attacks against onion services. Many attacks are just low-grade spam which exploit the fact that Tor nodes do a lot of work on session establishment. What sort of things can be done? Perhaps anonymous credentials for rate limiting.

Other Tor open research topics, from Tor’s new research director Mike Perry, include better traffic analysis defences, and work on reproducibility. There’s a trilemma: anonymity, bandwidth or delay; a circuit padding framework will be out for comment in August, with an emphasis on onion services so people can get better performance by setting up such a service. Reproducibility will mean standardised simulators, traffic-analysis datasets and metrics, and side-channel evaluation. We need to know whether small-scale lab studies tell us enough about the interaction between classifiers and defences.

Chelsea Komlo discussed inappropriate uses of blockchains, including the US House minority leader suggesting the use of blockchains in lieu of government regulation (of privacy of all things). Some such arguments assume that stuff can be deleted from blockchains, and that people can “just leave” to deal with a data breach.

Ian Goldberg wants to hire another faculty member at Waterloo; the job ad should be up in September.

Natalia Bielova announced the CNIL-Inria privacy award for privacy papers published between 1/1/2017 and 30/6/2019. She’s also doing research into whether we can exercise our subject access rights in respect of tracking cookies; the results are not good.

Imane Fouad was next, with more discussion of the Inria tracking cookie project. She collected 20,218 third-party cookies from 10,000 websites and tried to get hold of the purposes and privacy policies.

David @david415 is working on the katzenpost mix network which builds on previous work such as sphinx, xolotl and loopix.

Susan Landau has been studying the disclosed NSA statistics of two-hop traffic analysis, working out what they’re probably collecting and under what legal authority. Why did they announce they were purging three years’ worth of call metadata? Susan thinks it’s because of the way SS7 switching deals with mobile roaming. Also, Al-Qaida controlled its operatives tightly, while Daesh doesn’t; and comms are moving from phones to IP. She concludes that CDR collection is no longer efficacious.

Cecilia, of the Tor project, announced a pluggable transport called Snowflake which will let clients go through domain-fronted proxies and then funnel traffic to bridges through WebRTC proxies. She’s looking for volunteers to run snowflakes.

Paul Syverson has done some work on self-authenticating traditional domain names: the idea is to embed a public onion key in your domain, so people who forge certs for a domain can’t get in there. He has a tool that highlights mismatches. This leads to “dirt simple trust”: if I give you a URL and you trust me, you’re done. So it scales down to individuals, as well as up to organisations. The paper will appear at SecDev.

Steven Murdoch is annoyed that the London Underground monitors our wifi MAC addresses to track mobility. Could they do the analysis better? Truncating the MACs to 16 bits naively loses accuracy; however we can re-use the work from a decade ago about using Bayesian probability to de-anonymise traffic in Mixes. Isn’t it curious that a surveillance trick can be used to improve privacy? Oh, and UCL is hiring.
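To see why naive 16-bit truncation loses accuracy, a birthday-style estimate of colliding device pairs is enough (the device count below is a made-up example):

```python
from math import comb

def expected_colliding_pairs(n_devices: int, bits: int = 16) -> float:
    """Expected number of device pairs sharing a truncated MAC value:
    C(n, 2) pairs, each colliding with probability 1 / 2**bits."""
    return comb(n_devices, 2) / 2 ** bits

# With, say, 10,000 devices seen at a busy station, 16-bit truncated
# MACs already produce hundreds of colliding pairs:
pairs = expected_colliding_pairs(10_000)  # roughly 763
```

Hundreds of ambiguous pairs is exactly the situation where the Bayesian machinery from the mix de-anonymisation work can recover the lost accuracy.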

Claudia Diaz is also hiring at Leuven, both PhD students and postdocs. She also has a startup working with mix-nets and would like feedback on their designs; contact her if interested.

Will Morland works on privacy at Facebook and wants to hire other people crazy enough to do that. He has a data portability project: an open-source coding project that also involves Google, Microsoft and others. At present you can download stuff but nobody’s written code to import it. The idea is to fix this with shared data models and a task management library that deals with authentication. Data export affects not just the user but other affected parties such as people they’ve talked to or tagged or whatever; they’re looking for ways to understand and deal with the implications. How can you reconstruct friendship graphs without facilitating attacks on people who’ve opted out, by creating shadow profiles? How can people revoke data that’s been transferred to other platforms, and provide evidence that it’s been done? There are many fascinating problems to tackle.