Over the last six years, a range of research has transformed our understanding of automobiles. What we traditionally envisioned as mere mechanical conveyances are now more widely appreciated as complex distributed systems "with wheels". A car purchased today has virtually all aspects of its physical behavior mediated through dozens of microprocessors, themselves networked internally and connected to a range of external digital channels. As a result, software vulnerabilities in automotive firmware potentially allow an adversary to obtain arbitrary control over the vehicle. Indeed, multiple research groups have been able to demonstrate such remote control of unmodified automobiles from a variety of manufacturers. In this talk, I'll highlight how our understanding of automotive security vulnerabilities has changed over time, how unique challenges in the automotive sector give rise to these problems and create non-intuitive constraints on their solutions, and the key role played by the research community in driving industry and government response.

Bio:

Stefan Savage is a professor of Computer Science and Engineering at the University of California, San Diego. He received his Ph.D. in Computer Science and Engineering from the University of Washington and a B.S. in Applied History from Carnegie Mellon University. Savage is a full-time empiricist, whose research interests lie at the intersection of computer security, distributed systems and networking. He currently serves as co-director of UCSD's Center for Network Systems (CNS) and for the Center for Evidence-based Security Research (CESR). Savage is a MacArthur Fellow, a Sloan Fellow, an ACM Fellow, and is a recipient of the ACM Prize in Computing and the ACM SIGOPS Weiser Award. He currently holds the Irwin and Joan Jacobs Chair in Information and Computer Science, but is a fairly down-to-earth guy and only writes about himself in the third person when asked.

With an estimated gap of 3.5 million workers by the year 2021, global cybersecurity workforce needs are acute, broad and growing. To address this need, workforce developers must leverage scalable initiatives that can accelerate workforce development. Cybersecurity Curricula 2017, the first set of global cybersecurity curricular guidelines, is one such initiative. Developed with the assistance of more than 325 contributors across 35 countries, these recommendations are based on a comprehensive view of the cybersecurity field and specific disciplinary demands. In addition, they provide a structure for linking cybersecurity curricula to workforce frameworks. Diana Burley, co-chair of the task force that developed these guidelines, will discuss the recommendations and how they can accelerate cybersecurity workforce development.

Speaker

Diana L. Burley, Ph.D. is Executive Director and Chair of the Institute for Information Infrastructure Protection (I3P); Associate Dean for Research and External Relations (Interim) and Full Professor of Human & Organizational Learning at The George Washington University (GW) Graduate School of Education and Human Development. She is a widely sought-after cybersecurity thought leader, educator, researcher and strategist. Prior to GW, she managed a multi-million-dollar computer science education and research portfolio for the US National Science Foundation. She has written more than 80 publications on cybersecurity, information sharing, and IT-enabled change; testified before the US Congress; conducted international cybersecurity awareness training on behalf of the US State Department; and advised global corporations on cybersecurity strategy. Dr. Burley is a member of the US National Academies Board on Human-Systems Integration, and she recently co-chaired the effort to develop the first set of global cybersecurity curricular guidelines on behalf of the ACM, published in 2018. Her honors include: 2017 SC Magazine 8 Women in IT Security to Watch and 2017 SC Magazine ReBoot Award, Educational Leadership in IT Security; 2016 Woman of Influence by the Executive Women’s Forum in Information Security, Risk Management and Privacy; 2014 Cybersecurity Educator of the Year; and a 2014 Top Ten Influencer in information security careers. She is the sole recipient of both educator and government leader of the year awards from the Colloquium for Information Systems Security Education and has been honored by the US Federal CIO Council for her work in developing the federal cybersecurity workforce. She holds a BA in Economics from the Catholic University of America; M.S., Public Management and Policy, M.S., Organization Science, and Ph.D., Organization Science and Information Technology from Carnegie Mellon University, where she studied as a Woodrow Wilson Foundation Fellow.

More info
Most studies on human editing focus merely on syntactic revision operations, failing to capture the intentions behind revision changes, which are essential for facilitating the individual and collaborative writing process. In this work, we develop, in collaboration with Wikipedia editors, a 13-category taxonomy of the semantic intention behind edits in Wikipedia articles. Using labeled article edits, we build a computational classifier of intentions that achieved a micro-averaged F1 score of 0.621. We use this model to investigate edit intention effectiveness: how different types of edits predict the retention of newcomers and changes in the quality of articles, two key concerns for Wikipedia today. Our analysis shows that the types of edits that users make in their first session predict their subsequent survival as Wikipedia editors, and articles in different stages need different types of edits.

More info
Passwords are still a mainstay of various security systems, as well as the cause of many usability issues. For end-users, many of these issues have been studied extensively, highlighting problems and informing design decisions for better policies and motivating research into alternatives. However, end-users are not the only ones who have usability problems with passwords! Developers who are tasked with writing the code by which passwords are stored must do so securely. Yet history has shown that this complex task often fails due to human error, with catastrophic results. While an end-user who selects a bad password can face dire consequences, a developer who forgets to hash and salt a password database can cause far larger problems. In this paper we present a first qualitative usability study with 20 computer science students to discover how developers deal with password storage and to inform research into aiding developers in the creation of secure password systems.

More info
Despite security advice in the official documentation and an extensive body of security research about vulnerabilities and exploits, many developers still fail to write secure Android applications. Frequently, Android developers fail to adhere to security best practices, leaving applications vulnerable to a multitude of attacks. We point out the advantage of a low-time-cost tool both for teaching better secure coding and for improving app security. Using the FixDroid™ IDE plug-in, we show that professional and hobby app developers can work with and learn from an in-environment tool without it impacting their normal work; and by performing studies with both students and professional developers, we identify key UI requirements and demonstrate that code delivered with such a tool by developers previously inexperienced in security contains significantly fewer security problems. Perfecting and adding such tools to the Android development environment is an essential step in getting both security and privacy for the next generation of apps.

Overview

The Spring 2018 offering of CS 7936 will focus on reading and discussing recent papers in security and privacy research from conferences such as:

Credit

Students may enroll for one (1) credit. Although the University lists the course as “variable credit,” the two- and three-credit options are not currently available.

Students enrolled in the seminar are expected to read the papers prior to the seminar. Additionally, students are expected to sign up to lead the discussion at one or more seminar meetings. Leading the discussion means:

Tips on Reading Papers

Some tips that might help on reading, understanding, and analyzing papers:

It can be useful to look up the video of the presentation (for USENIX and some other conferences, the video was recorded and is available online) and/or the slides (which may be available on the presenting author's page).

The following questions (some of which are pulled from Writing for Computer Science) can be useful to keep in mind when reading a paper (although not all questions will apply to all papers):

What phenomena or properties are being investigated? Why are they of interest?

Has the aim of the research been articulated? What are the specific hypotheses and research questions? Are these elements convincingly connected to each other?

To what extent is the work innovative? How does it differ from past work?

What are the underlying assumptions? Are they sensible?

What forms of evidence are used?

How is the evidence measured? Are the chosen methods of measurement objective, appropriate, and reasonable?

What compromises or simplifications are inherent in the choice of measure?

To what extent do the results persuasively confirm the hypothesis?

What are the likely weaknesses of or limitations to the approach?

Which results are the most surprising?

What is the main contribution of the work?

Are appropriate conclusions drawn from the results, or are there other possible interpretations?

Could the results be verified?

Do the results have applicability to other problems or domains?

Do the title, abstract, and introduction appropriately set the context for the work?

Is there anything unusual about the organization of the write-up, and, if so, is there a reason for this organization?

Are the tables and figures clear and useful?

Are the results of practical applicability, or are they more theoretical in nature?

What are the main strengths of the paper? What are its weaknesses?

If you were to cite this paper, what kinds of things might you be citing it for?

Are there interesting future directions for work that the authors have not discussed?

Are there particular steps in the methodology or presentation that you would have done differently?

Are there any methodological decisions that seem to have been motivated by restrictions on time or resources, rather than absolute feasibility?

Are there any ethical issues associated with the paper, and if so, how were they (or how weren't they) dealt with?

Writing for Computer Science, by Justin Zobel, is available online at the library and contains a fair amount on other aspects of research such as how to review papers, how to select a research topic, etc. The table of contents:

Introduction

Getting Started

Reading and Reviewing

Hypotheses, Questions, and Evidence

Writing a Paper

Good Style

Style Specifics

Punctuation

Mathematics

Algorithms

Graphs, Figures, and Tables

Other Professional Writing

Editing

Experimentation

Statistical Principles

Presentations

Ethics

How to Access Papers

Some papers are free to access, while others are behind paywalls. The university has paid subscriptions to most of the digital libraries where those papers can be found. There are several ways to access those papers:

Access the paper listing from an on-campus network (wired or wireless), so the library subscription grants you access to the paper.

Search for the paper title (and authors) to find a PDF of the author's copy posted on their web site.