What makes a hot topic? Is it that researchers are inspired by some new idea or approach? Or is it driven by funding from external organizations? And what role does industry play? For example, at one point applying machine learning to IDSs was hot; now, while still researched, the topic does not inspire the same fervor it once did within the research community. Yet it is currently a hot topic within industry, which uses the phrase "security analytics" to describe the same underlying techniques. Another example: continuous authentication / mobile authentication is currently a hot topic. Why? And what role should funding play in developing or encouraging hot topics, versus supporting more basic research? For example, should funding go toward continuous authentication, or should more basic research (e.g., in passwords) be supported?

We will encourage discussion on what makes a topic in security "hot," and on whether having hot topics is good or whether it does a disservice to the security community by drawing support away from the not-hot, yet still unsolved, security research issues.

Break with Refreshments

Today, nearly every major policy debate has a technical aspect to it: healthcare, energy, consumer protection, national security, etc. Yet, relatively few technologists are engaged in informing policy outside of the major “science agencies” such as the National Science Foundation and the Department of Energy. This needs to change. In addition to technical knowledge, all that is required to participate productively in policy is a basic understanding of the law and policy, and the ability to communicate in a way policymakers can understand.

Come hear from Ed Felten, Deputy U.S. CTO at the White House Office of Science and Technology Policy, and Ashkan Soltani, Chief Technologist at the Federal Trade Commission, on how technologists can contribute to making policy that is more technically sound.

We’ll discuss opportunities across government, effective communication, and ultimately, the need for greater participation from the technical community.

Over the past several years a number of new cryptographic libraries and APIs have become available to developers. These libraries promise to greatly increase the use of cryptography on the web and in the cloud, but they often do so at a cost. In this workshop we will attempt to outline a new paradigm for cryptographic API development that treats normal developers, rather than cryptographers, as the primary consumers -- and that treats developer misuse as a critical failure mode, rather than a regrettable inevitability.

To begin this discussion, we will present several case studies of existing APIs that have seen widespread real-world misuse, and we will attempt to characterize the key failings that created these situations. We will also discuss the contributing factors that led to these conditions, including standards bodies, the lack of formal testing requirements, and the expressiveness/safety tradeoff. We will then consider the base requirements for a "developer safe" regime of library and API development that reduces the possibility of misuse. Toward this end we will also consider a number of APIs that have been successful in this regard, and work to distill their lessons into formal recommendations. Finally, we will discuss whether we as a community need to adopt techniques from the HCI community when designing cryptographic APIs and libraries.
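As a toy illustration of the design principle at stake (not an example drawn from the workshop itself), consider message authentication with Python's standard-library `hmac` module. A misuse-prone API shape hands the caller a raw tag and leaves verification to them, inviting a timing-unsafe `==` comparison or no verification at all; a misuse-resistant shape bakes correct verification in. The function names here are hypothetical.

```python
import hmac
import hashlib

def tag_message(key: bytes, msg: bytes) -> bytes:
    """Misuse-prone shape: returns a raw tag and leaves verification
    entirely to the caller, who may compare tags with a timing-unsafe
    `==` -- or forget to verify at all."""
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify_message(key: bytes, msg: bytes, tag: bytes) -> bool:
    """Misuse-resistant shape: constant-time comparison is baked in,
    so the caller cannot accidentally substitute an unsafe check."""
    expected = hmac.new(key, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key = b"demo-key"
tag = tag_message(key, b"hello")
assert verify_message(key, b"hello", tag)         # genuine tag accepted
assert not verify_message(key, b"tampered", tag)  # forgery rejected
```

The safer shape removes a decision from the developer rather than documenting it, which is the essence of treating misuse as an API failure rather than a user failure.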

Janne Lindqvist, Rutgers University, and Robert Biddle, Carleton University

Smartphone authentication has received considerable attention from the research community for the past several years. Every year, at a diverse set of top conferences, such as CHI, MobiSys, MobiCom, UbiComp, USENIX Security, CCS, and NDSS, you can find new alternative authentication mechanism proposals. This is not surprising given how important smartphones have become in people's daily lives.

Smartphone authentication is also of particular interest because there have been industry deployments beyond the usual PINs and passwords. Two prominent examples are iOS's fingerprint authentication mechanism, Touch ID, and Android's 3x3 grid-based graphical password. De Luca and Lindqvist have recently summarized the related literature and issues for these authentication methods [1].

Although one could argue that we are making some progress in that new proposals keep being published, the unfortunate situation is that much of the work is not comparable: we either lack comparable metrics or do not require papers to use them. Proponents of so-called "behavioral biometrics," for example, have opted to use the Equal Error Rate (EER) as their metric, following the legacy of biometrics, despite many reviewers disliking and distrusting the approach. One obvious problem with EER is that researchers do not have access to the same datasets [2], so no direct comparisons are made. (We note that EER is not necessarily a bad metric, even though some in the community want to push this meme.) Clark and Lindqvist have used Bonneau et al.'s [3] comparative framework for web authentication as one approach to analyzing a particular subdomain: gesture recognizers. Sherman et al. [4] have proposed and implemented an information-theoretic metric based on mutual information to compute the complexity and memorability of gestures. Given the lack of deployment for most (if not all) proposals, we are a long way from applying statistical approaches such as α-guesswork [5] to smartphone authentication.
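For readers unfamiliar with the metric, the EER is the operating point at which the false accept rate (FAR) and false reject rate (FRR) of a recognizer are equal. A minimal sketch, using synthetic similarity scores rather than any real dataset:

```python
def error_rates(genuine, impostor, threshold):
    """FRR: fraction of genuine scores rejected (below threshold).
    FAR: fraction of impostor scores accepted (at/above threshold)."""
    frr = sum(1 for s in genuine if s < threshold) / len(genuine)
    far = sum(1 for s in impostor if s >= threshold) / len(impostor)
    return far, frr

def equal_error_rate(genuine, impostor):
    """Sweep observed scores as candidate thresholds and return the
    point where FAR and FRR are closest -- the usual EER estimate."""
    candidates = sorted(set(genuine) | set(impostor))
    def gap(t):
        far, frr = error_rates(genuine, impostor, t)
        return abs(far - frr)
    best = min(candidates, key=gap)
    far, frr = error_rates(genuine, impostor, best)
    return (far + frr) / 2, best

# Synthetic scores: higher means "more similar to the enrolled user".
genuine = [0.9, 0.8, 0.75, 0.7, 0.6, 0.55]   # same-user attempts
impostor = [0.5, 0.45, 0.4, 0.65, 0.3, 0.2]  # different-user attempts

eer, thr = equal_error_rate(genuine, impostor)
print(f"EER = {eer:.2f} at threshold {thr}")
```

The comparability problem described above is visible even here: the EER estimate depends entirely on the score distributions, so two papers reporting EERs on different datasets are not measuring the same thing.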

What can and should be done when a reasonable number of participants for a new authentication method is perhaps tens or hundreds of volunteers, and how should we evaluate new proposals?

Break with Refreshments

Tudor Dumitras and Michelle Mazurek, University of Maryland, College Park

In recent years, papers at top security conferences increasingly rely on non-public data, such as passwords, telemetry, or other confidential data from inside universities and corporations. This model has both important risks and important benefits, including:

access to real-world data that could not be obtained any other way,

larger-scale experiments than would be otherwise possible,

risk of disclosure of users' private data,

difficulty of reproduction,

limitations on who has access and connections to conduct this kind of work,

and many others.

Despite the risks, this kind of research is not going away anytime soon.

In this session, we will discuss (as case studies) several recent examples of research on proprietary data and how the data was obtained and protected. We will discuss when this model is or is not appropriate, how proprietary data can be properly protected, and whether and how we can promote as much reproducibility as possible in this situation. We will discuss what are (or should be) best practices for researchers considering a study of non-public data. Our hope is to spark a broader discussion in the community about sharing data in a responsible manner and utilizing non-public data sets in security research.

Note: Please fill out this short survey before the session so that we can incorporate your responses into the discussion in advance.
