Privacy Threat Model for Mobile

Evaluating privacy vulnerabilities in the mobile space can be a difficult and ad hoc process for developers, publishers, regulators, and researchers. This is due, in significant part, to the absence of a well-developed and widely accepted privacy threat model. With 1 million UDIDs posted on the Internet this past week, there is an urgent need for such a model to identify privacy vulnerabilities, assess compliance, scope potential solutions, and drive disclosure. This is not to say that there aren't a number of excellent resources that provide lists of normative best practices for mobile app development. Several come readily to mind: the EFF's Mobile Bill of Rights, the Future of Privacy Forum's Best Practices for Mobile App Developers, and viaForensics' 42 Best Practices.
What seems to be lacking, however, is a logical and complete picture, that is, a model or models, of the privacy characteristics and vulnerabilities of the mobile ecosystem and, more specifically, of its component platforms. The observation that privacy threat models in general, not just for mobile, have received inadequate attention has also appeared in the literature. In 2010, a group of researchers from the Interdisciplinary Institute for Broadband Technology (IBBT) noted that the absence of such models contrasted vividly with the security space, where security threat models are widely used. M. Deng, K. Wuyts, R. Scandariato, B. Preneel and W. Joosen, "A Privacy Threat Analysis Framework: Supporting the Elicitation and Fulfillment of Privacy Requirements," IBBT, Belgium, 2010. That observation, with respect to mobile, is as true today as it was in 2010.

The IBBT researchers put forth a useful jumping-off point for modeling privacy threats. Specifically, the researchers depicted systems using aspects of Microsoft's STRIDE framework (such as data flow diagrams, threat trees, and trust boundaries) and then mapped onto that framework the privacy properties of unlinkability, anonymity, pseudonymity, undetectability, confidentiality, content awareness, and policy consent (drawing on the privacy taxonomies of Solove and Pfitzmann). They then used the resulting privacy threat model to help determine the nature of potential alternative solutions: disclosures, modification of features, and privacy-enhancing technologies. There has also been excellent work in various IETF Internet-Drafts, further developing and expanding Solove's privacy properties and tying them to a discussion of network protocols. See, e.g., "Privacy Considerations for Internet Protocols," Cooper et al., IETF Draft, June 2012. Separately, others have done privacy modeling specific to GPS (Ashkan Soltani, NYU-CITP working group, 2012, and at DEF CON 20). An excellent foundation is therefore in place to incorporate and expand these efforts to encompass mobile platforms holistically. The resulting models would create a common, practical baseline for design, internal auditing, compliance, and dialogue among stakeholders.
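To make the IBBT approach concrete, the mapping step can be thought of as a lookup table: each element type in a STRIDE-style data flow diagram (entity, process, data store, data flow) is checked against the privacy properties that could be at risk there. The sketch below is illustrative only; the particular property-to-element assignments are assumptions for the sake of example, not the exact table from the Deng et al. paper.

```python
# Illustrative sketch of a privacy threat mapping in the style of
# Deng et al.: privacy properties are checked against the element
# types of a STRIDE-style data flow diagram (DFD).
# The assignments below are example assumptions, not the paper's table.

# Which privacy properties should be examined for each DFD element type.
THREAT_MAP = {
    "entity":     {"unlinkability", "anonymity", "content_awareness"},
    "process":    {"unlinkability", "undetectability", "confidentiality", "policy_consent"},
    "data_store": {"unlinkability", "undetectability", "confidentiality", "policy_consent"},
    "data_flow":  {"unlinkability", "undetectability", "confidentiality", "policy_consent"},
}

def threats_for(diagram):
    """Given a DFD as a list of (name, element_type) pairs, enumerate
    the privacy properties to examine at each element."""
    findings = []
    for name, kind in diagram:
        for prop in sorted(THREAT_MAP[kind]):
            findings.append((name, prop))
    return findings

# Toy DFD for a mobile app that reports location to an analytics server.
app_dfd = [
    ("user", "entity"),
    ("location_upload", "data_flow"),
    ("analytics_db", "data_store"),
]

for element, prop in threats_for(app_dfd):
    print(f"examine {prop} at {element}")
```

The value of the table form is that it turns an open-ended privacy review into an enumerable checklist: once the system is drawn as a DFD, the set of questions to ask falls out mechanically.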

The logical place to start developing a privacy threat model for mobile is iOS, given its current dominance in the consumer space and its growing prevalence in the enterprise. The iOS privacy threat model should encompass, among other things, the Apple developer APIs, device storage and databases, the iOS pasteboard, platform analytics, push notification services, and geolocation functions.
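One way to begin scoping that work is to capture the surfaces above as an inventory that an audit walks through, pairing each surface with the privacy properties most plausibly at risk there. The pairings below are assumptions offered for illustration, not an authoritative taxonomy.

```python
# Illustrative inventory of iOS threat surfaces for a privacy audit.
# The property assignments are example assumptions, not a settled list.

IOS_SURFACES = {
    "developer_apis":     ["unlinkability", "policy_consent"],
    "device_storage":     ["confidentiality", "undetectability"],
    "pasteboard":         ["confidentiality", "unlinkability"],
    "platform_analytics": ["anonymity", "content_awareness"],
    "push_notifications": ["confidentiality", "unlinkability"],
    "geolocation":        ["anonymity", "undetectability", "policy_consent"],
}

def audit_checklist():
    """Flatten the inventory into (surface, property) audit items."""
    return [(surface, prop)
            for surface, props in sorted(IOS_SURFACES.items())
            for prop in props]

for surface, prop in audit_checklist():
    print(f"{surface}: check {prop}")
```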

Freedom to Tinker is hosted by Princeton's Center for Information Technology Policy, a research center that studies digital technologies in public life. Here you'll find comment and analysis from the digital frontier, written by the Center's faculty, students, and friends.