I will talk about what phishing, fake accounts, self-XSS, malware toolbars, .exe malware, and shared-secret stealing are, give some examples, and cover a limited number of Facebook's countermeasures against such attacks. These are attacks in which the hacker does not gain control of your website, but only of a user's account. Unfortunately, Facebook has to keep some of our protections secret, as they would lose effectiveness if they were known. I will discuss the threats in detail; the treatment of the solutions will be more lightweight.

Malware-containing emails can be sent to anyone. Single malware variants can be sent to tens of thousands of recipients without distinction. However, a small proportion of email malware is sent in low copy number to a small set of recipients that have apparently been specifically selected by the attacker. These targeted attacks are challenging to detect and if successful, may be particularly damaging for the recipient. The vast majority of Internet users will never be sent a targeted attack. The few users to which such attacks are sent, presumably possess features that have brought them to the attention of attackers, and have caused them to be selected for attack. Applying epidemiological techniques to calculate the odds ratio for features of malware recipients, both targeted and non-targeted, allows the identification of putative factors that are associated with targeted attack recipients. In this paper we show that it is possible to identify putative risk factors that are associated with individuals subjected to targeted attacks, by considering the threat akin to a public health issue. These risk factors may be used to identify those at risk of being subject to future targeted attacks, so that these individuals can take additional steps to secure their systems and data.

Since Needham and Schroeder introduced the idea of an active attacker, much research has been conducted in protocol design and analysis to verify protocols' claims against this type of attacker. Nowadays, the Dolev-Yao threat model is the most widely accepted attacker model in the analysis of security protocols. Consequently, several security protocols are considered secure against an attacker under Dolev-Yao's assumptions. With the introduction of the concept of ceremonies, which extends protocol design and analysis to include human peers, we can potentially find and solve security flaws that were previously not detectable. In this presentation, we argue that, even though Dolev-Yao's threat model can represent the most powerful attacker possible in a ceremony, the attacker in this model is not realistic in certain scenarios, especially those involving human peers. We propose a dynamic threat model that can be adjusted to each ceremony, and consequently adapt the model and the ceremony analysis to realistic scenarios without degrading security, while improving usability.

Recent years have seen an explosion in the industry adoption of reverse engineering for security purposes. Between the late 90's and today, a niche endeavor turned into industry practice - both for the analysis of malicious software and for the security review of closed-source software components. In 2011, Google acquired zynamics GmbH, a small company focused on developing software for (security-minded) reverse engineers. This talk will give an overview of the different areas in which zynamics worked prior to joining Google, and some of the directions in which we're moving now.

On the technical level, the talk will give an overview of our structural / graph-centric algorithms for executable comparison, how we used these algorithms for malware classification and byte-signature generation, and of our reverse-engineering IDE which permits fully collaborative disassembly analysis for teams of reverse engineers.

Application compartmentalisation decomposes software into sandboxed components in order to mitigate security vulnerabilities, and has proven effective in limiting the impact of compromise. However, experience has shown that adapting existing C-language software is difficult, often leading to problems with correctness, performance, complexity, and most critically, security. Security-Oriented Analysis of Application Programs (SOAAP) is an in-progress research project into new semi-automated techniques to support compartmentalisation. SOAAP employs a variety of static and dynamic approaches, driven by source code annotations termed compartmentalisation hypotheses, to help programmers evaluate strategies for compartmentalising existing software.

18 October 16:00 From geek-dream to mass-market: Will privacy-preserving technologies ever be adopted? / Hamed Haddadi (Queen Mary, University of London)

FW26, Computer Laboratory, William Gates Building

We have been working on privacy preserving profiling, advertising, data mining, and user monitoring systems for a decade now, but we are yet to see a real world deployment. In this talk I will discuss some of the players in this ecosystem, their strengths and strategies, and the shortcomings of computer science solutions in this space. The talk is based on a number of recent papers and studies.

With the launch of Mac OS X 10.7 (Lion), Apple has introduced a volume encryption mechanism known as FileVault 2. Apple only disclosed marketing aspects of the closed-source software, e.g. its use of the AES-XTS tweakable encryption, but a publicly available security evaluation and detailed description was unavailable until recently.

We have performed an extensive analysis of FileVault 2 and we have been able to find all the algorithms and parameters needed to successfully read an encrypted volume. This allows us to perform forensic investigations on encrypted volumes using our own tools.

In this presentation I will present the architecture of FileVault 2, giving details of the key derivation, encryption process and metadata structures needed to perform the volume decryption. I will also comment on the security of the system and the analysis we have performed.

Besides the analysis of the system, we have also built a library that can mount a volume encrypted with FileVault 2. As a contribution to the research and forensic communities we have made this library open source.

Knowledge-based security policies are those which specify a threshold on an adversary's knowledge about secret data. The data owner initially estimates what an adversary might know about his secret, and with each interaction, defined in terms of a query made by the adversary over his secret data, he updates his estimate. If a query response could lead the adversary's knowledge to exceed a given threshold, the query is denied.

In this talk I will discuss how we implement query analysis and belief tracking via abstract interpretation using a novel probabilistic polyhedral domain, whose design permits trading off precision with performance while ensuring estimates of a querier's knowledge are sound. I will present examples of our technique that might apply to personal data. I will also show how our technique can be generalized to reason about knowledge increase in secure multiparty computation (SMC), which is a protocol that allows a set of mutually distrusting parties to compute a function f of their private inputs while revealing nothing about their inputs beyond what is implied by the result. Our technique permits reasoning about what can be inferred by each participant from the result. Finally, I will sketch how we are working to apply our technique to securing sensor data streams.
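The threshold mechanism described above can be sketched as a small Bayesian-update loop. This is a toy illustration only, not the probabilistic-polyhedra implementation from the talk; the secret (a birth year), the queries, and the threshold are invented for the example. Note that a sound policy must check every possible answer, not just the true one, so that the denial itself leaks nothing:

```python
from fractions import Fraction

def knowledge_after(prior, query, answer):
    """Bayesian update: condition the adversary's belief on query(secret) == answer."""
    post = {s: p for s, p in prior.items() if query(s) == answer}
    total = sum(post.values())
    return {s: p / total for s, p in post.items()} if total else {}

def policy_allows(prior, query, threshold):
    """Deny if ANY possible answer would push the adversary's certainty about
    some secret value above the threshold; checking all answers (not just the
    true one) prevents the denial from leaking information."""
    for answer in {query(s) for s in prior}:
        post = knowledge_after(prior, query, answer)
        if post and max(post.values()) > threshold:
            return False
    return True

# Toy example: secret is a birth year, uniformly distributed over a decade.
prior = {year: Fraction(1, 10) for year in range(1980, 1990)}
coarse = lambda y: y < 1985          # "born before 1985?" only halves the space
exact  = lambda y: y == 1987         # "born in 1987?" could pin the secret down

assert policy_allows(prior, coarse, threshold=Fraction(1, 4))
assert not policy_allows(prior, exact, threshold=Fraction(1, 4))
```

The coarse query leaves the adversary with at most 1/5 certainty about any value, under the threshold; the exact query could yield certainty 1, so it is denied.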

This is joint work with Piotr Mardziel (Maryland), Jonathan Katz (Maryland), Stephen Magill (formerly at Maryland), and Mudhakar Srivatsa (IBM). For more details see our papers at CSF'11 and PLAS'12:

With the increasing popularity and growing market share of Google's mobile platform Android, it has become the top target of latest mobile malware. Previous work on Android security and privacy control produced solutions that require modification to the operating system itself. This requires the user to root his phone to install custom firmware due to software, hardware, and policy choices by Google, the phone manufacturers, and cellular providers. There is no guarantee that these solutions will ever make their way to consumers unless Google implements them in the main Android OS source code repository.

We developed a novel approach named Aurasium that bypasses the need to change the firmware. We automatically rewrite arbitrary apps by attaching interposition code to closely watch the application's behaviour for security and privacy violations, such as attempts to retrieve a user's sensitive information, send SMS covertly to premium numbers, or access malicious IP addresses. Aurasium can also detect and prevent cases of privilege escalation attacks. Experiments show that we can apply Aurasium to a large corpus of benign and malicious applications with over 99% success rate.
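The interposition idea can be shown in miniature. This is a hypothetical Python sketch of policy-checked API wrapping, not Aurasium itself (which rewrites Android app bytecode); the function name, prefixes, and policy are invented for the illustration:

```python
PREMIUM_PREFIXES = ("1900", "871")       # hypothetical premium-rate prefixes

def interpose(policy):
    """Attach interposition code to an API: every call is checked against
    a policy predicate before the real call is allowed to proceed."""
    def wrap(api):
        def monitored(*args, **kwargs):
            if not policy(*args, **kwargs):
                raise PermissionError(f"blocked call to {api.__name__}{args}")
            return api(*args, **kwargs)
        return monitored
    return wrap

@interpose(lambda number, text: not number.startswith(PREMIUM_PREFIXES))
def send_sms(number, text):              # stand-in for the platform's SMS API
    return f"sent to {number}"

assert send_sms("07700123456", "hi") == "sent to 07700123456"
blocked = False
try:
    send_sms("1900555000", "covert")     # a covert premium SMS is intercepted
except PermissionError:
    blocked = True
assert blocked
```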

The technology landscape is constantly shifting, evolving and advancing. Major technological disruptions through innovations and new discoveries come around every decade or so. This decade is no different. The current technological trends emerging are related to mobile banking and payments, cloud computing, big data analytics, core banking systems upgrades, migration to chip card, dynamic authentication and IT security. Trust and confidence, safety and soundness should be the cornerstone of these developments and the initiatives taken in the financial industry.

The proliferation of technology in banking is pervasive and far-reaching. Technology and customer demand are driving a huge transformation as to how banking is done. Bank senior management will have to navigate the murky waters of over-hyped and under-delivered performance of some new technologies. IT governance and technology risk management play a very important role here. Regulators will need to see that due diligence practices and safety and soundness requirements are not impeded, impaired or undermined when new technologies are deployed.

Speaker's Bio

Tony joined the Monetary Authority of Singapore in 1999 to head up the Technology Risk Supervision Division. His responsibilities included the development of strategies, programmes, standards and guidelines for the purpose of regulating and supervising financial institutions in respect of technology risk management requirements and information security processes. Tony has held the appointment of Director (Specialist Advisor) for information technology security and risk management since 1 May 2011. He has been actively engaged in conducting seminars and workshops on banking systems security and technology risk management in America, Asia, Australia, China and Europe.

A scientific perspective on cyber security (a “science of cyber security”) is growing as a sound and respected area of research. In this talk we discuss how an empirical perspective enhances our understanding of how to create efficiently secure cyber infrastructure. In particular we discuss four questions that reflect “delusions” that we at the CERT Program see as endemic in the practice of cyber security.

1. If code correctness is improving, why do exploits continue to rely on known avoidable programming mistakes?
2. If policies are effective, why do unimplemented or ineffective policies continue to be an enabling element of major incidents?
3. If monitoring provides useful situational awareness, why do so many significant intrusions remain undetected for weeks, months, or years?
4. If proficient response capabilities exist, why are even sophisticated victims challenged to quickly and effectively investigate, mitigate and recover?

We discuss our recent work in synthetic data generation and other work at CERT that strives to take sound scientific approaches to understanding and solving the challenges of creating and operating efficiently secure cyber infrastructure.

Some of the publicly available cyber security information and tools from the CERT Program include:

Secure Coding, http://www.cert.org/secure-coding

Resiliency, http://www.cert.org/resilience

Cyber Training, http://www.cert.org/work/training.html

Insider Threats, http://www.cert.org/insider_threat

Forensics, http://www.cert.org/forensics

Network Monitoring, http://tools.netsa.cert.org

Fuzz Testing, http://www.cert.org/download/bff

Additional information is available at www.cert.org and in the 2010 CERT Research Report, www.cert.org/research/2010research-report.pdf.

27 September 16:15Protecting Distributed Applications Through Software Diversity and Renewability / Christian Collberg, University of Arizona

Lecture Theatre 2, Computer Laboratory, William Gates Building

Remote Man-at-the-end (R-MATE) attacks occur in distributed applications where an adversary has physical access to an untrusted client device and can obtain an advantage from inspecting, reverse engineering, or tampering with the hardware itself or the software it contains.

In this talk we give an overview of R-MATE scenarios and present a system for protecting against attacks on untrusted clients. In our system the trusted server overwhelms the client's analytical abilities by continuously and automatically generating diverse variants of the client code and pushing them to the client. The diversity subsystem employs a set of primitive code transformations that provide temporal, spatial, and semantic diversity in order to generate an ever-changing attack target for the adversary, making tampering difficult without it being detected by the server.

Speaker's Bio

Christian Collberg received a BSc in Computer Science and Numerical Analysis and a Ph.D. in Computer Science from Lund University, Sweden. He is currently an Associate Professor in the Department of Computer Science at the University of Arizona and has also worked at the University of Auckland, New Zealand, and holds a position at the Chinese Academy of Sciences in Beijing, China.

Prof. Collberg is the author of the first comprehensive textbook on software protection, "Surreptitious Software: Obfuscation, Watermarking, and Tamperproofing for Software Protection," published in Addison-Wesley's computer security series.

Prof. Collberg is a leading researcher in the intellectual property protection of software, and also maintains an interest in compiler and programming language research. In his spare time he writes songs, sings, and plays guitar for The Undecidables and hopes one day to finish up his Great Swedish Novel.

Software running on modern client systems has become too large and complex to secure via conventional means, making it an easy target for malware. This talk discusses how hardware-assisted virtualization can be used to retrofit robust isolation and protection to client systems, resulting in a much more defensible platform with much greater resistance to malware and user error, while operating transparently to the end user.

The talk will examine the architectural progression which led from the development of XenClient XT (an MILS system designed for the US intelligence and defence communities) to the Bromium platform, which draws on much of the same technology but is designed for a far more mainstream use case.

About the speaker:

Ian Pratt leads the product team at Bromium, a startup focussed on making computer systems more trustworthy. He was formerly a member of faculty at the University of Cambridge Computer Laboratory, where he led the systems research group before leaving to found XenSource, which was acquired by Citrix in 2007. He co-founded Bromium early last year, which now employs over 40 researchers and developers across its offices in Cambridge UK and Cupertino, California.

Industrial Control Systems (ICS), often referred to as SCADA (Supervisory Control And Data Acquisition) systems, have gained increasing attention from IT-security researchers. This talk introduces the terminology and background of ICS and exposes the reasons why it is difficult to secure ICS. Moreover, the talk will present security analysis guidelines for ICS devices. These guidelines can be applied to many ICS devices and are mostly vendor-independent. Furthermore, based on Scapy, a Modbus/TCP interactive packet manipulation program was developed for assessing critical infrastructures and ICS devices.

In the second half of the talk, I will describe a security analysis performed on a real device - an ICS demo case containing current products in use in ICS. Beyond known security issues, the analysis shows, first, how the data visualized by the Human Machine Interface (HMI) can be altered without limit. Second, physical values read by sensors, such as temperatures, can be altered within the Programmable Logic Controller (PLC). Third, input validation also represents a critical security issue in the ICS world. Lastly, existing security solutions for securing current ICS are briefly presented.

NICTA has completed the machine-checked, code-level formal verification of the full functional correctness of the seL4 operating system microkernel. This outcome confirms that it is feasible to perform this kind of detailed formal verification in real software engineering projects. However, although seL4 is complex, it is not a very large system (8,700 lines of C code).

Our next broad challenge is to make it feasible to complete the code-level formal verification of key security and safety properties of very large highly-critical software-intensive systems. We expect that seL4 will provide a foundation for this. In this talk I will give an overview of three areas of recent ongoing research that I am involved with that help to address this broad challenge.

The first area is on better understanding of the software process and management for large-scale formal methods projects. The second area is on approaches to define and analyse software architectures for large trustworthy systems built using trusted and untrusted components. The final area is more methodological and philosophical: how should we establish the empirical validity of the formal models used in formal verification?

Bio: Mark Staples is a Principal Researcher in the Software Systems Research Group at NICTA, and a Conjoint Senior Lecturer at the University of New South Wales. He is conducting research at the borders between software engineering, formal methods, and systems.

Earlier at NICTA he was a member of, then led, NICTA's empirical software engineering group. He was the founding leader of the Fraunhofer Project Centre in Transport and Logistics at NICTA, a strategic collaboration between NICTA and Fraunhofer IESE. In conjunction with Fraunhofer IESE and SAP Research, he led the creation of the Future Logistics Living Lab facility and industry network.

Prior to joining NICTA, he worked in the software industry for several years, first on a safety-critical SCADA system, and then on a business-critical web payments infrastructure product. He completed undergraduate degrees in computer science and cognitive science at the University of Queensland, and a PhD on theorem proving and formal methods at the University of Cambridge.

Embedded systems are increasingly used in circumstances where people's lives or valuable assets are at stake, hence they should be trustworthy - safe, secure, reliable. True trustworthiness can only be achieved through mathematical proof of the relevant properties. Yet, real-world software systems are far too complex to make their formal verification tractable in the foreseeable future. The Trustworthy Systems project at NICTA has formally proved the functional correctness as well as other security-relevant properties of the seL4 microkernel. This talk will provide an overview of the principles underlying seL4, and the approach taken in its design, implementation and formal verification. It will also discuss on-going activities and our strategy for achieving the ultimate goal of system-wide security guarantees.

07 June 16:00Lock Inference in the Presence of Large Libraries / Khilan Gudka (University of Cambridge)

FW26, Computer Laboratory, William Gates Building

Atomic sections can be implemented using lock inference. For lock inference to be practically useful, it is crucial that large libraries be analysed. However, libraries are challenging for static analysis, due to their cyclomatic complexity.

Existing approaches either ignore libraries, require library implementers to annotate which locks to take, or only consider accesses performed up to one level deep in library call chains. Thus, some library accesses may go unprotected, leading to atomicity violations that atomic sections are supposed to eliminate.

As corporations, agencies, and individuals continue to invest in national infrastructure trusting it to withstand cyber-attacks, it is important to ensure that this trust is warranted. In this talk, I will present ISP-level countermeasures that localise bots based on the unique communication patterns arising from the overlay topologies used for command and control. I will also present schemes that allow ISPs to cooperatively detect botnet attacks and other network anomalies without leaking private traffic information. Experimental results on synthetic topologies embedded within Internet traffic traces from an ISP's backbone network indicate that our techniques (i) can localise the majority of bots with a low false positive rate, (ii) are resilient to the partial visibility arising from partial deployment of monitoring systems, and to measurement inaccuracies arising from partial visibility and the dynamics of background traffic, and (iii) are scalable enough to show good promise as a key element of a wider network anomaly detection framework.

Bio: Shishir Nagaraja is a researcher in network security and privacy. He holds the position of a Lecturer at the University of Birmingham, as well as concurrent appointments as Adjunct Professor at the University of Illinois at Urbana-Champaign, USA and Assistant Professor at IIITD, India. He holds a PhD in Computer Security from the University of Cambridge. He has worked in the software industry for several years as a Software Engineer at Novell Bangalore. He holds several patents in the area of trust and security.

TLS is the de facto protocol of choice for securing Internet communications, while DTLS is an increasingly important variant of TLS that was designed for use in lightweight applications. In this talk, I will provide an overview of some recent results - both positive and negative - about the security of the TLS and DTLS protocols.

17 May 16:00Facebook and Privacy: The Balancing Act of Personality, Gender, and Relationship Currency / Daniele Quercia (University of Cambridge)

FW26, Computer Laboratory, William Gates Building

Social media profiles are telling examples of the everyday need for disclosure and concealment. The balance between concealment and disclosure varies across individuals, and personality traits might partly explain this variability. Experimental findings on the relationship between information disclosure and personality have so far been inconsistent. We thus study this relationship anew with 1,313 Facebook users in the United States using two personality tests: the big five personality test and the self-monitoring test. We model the process of information disclosure in a principled way using Item Response Theory and correlate the resulting user disclosure scores with personality traits. We find a correlation with the trait of Openness and observe gender effects: men and women share equal amounts of private information, but men tend to make it more publicly available, well beyond their social circles. Interestingly, geographic (e.g., residence, hometown) and work-related information is used as relationship currency, in that it is selectively shared with social contacts and is rarely shared with the Facebook community at large.


08 May 16:15 Building Bankomat: Cash dispensers and the development of on-line, real-time networks in Britain and Sweden, c.1965-1985 / Bernardo Batiz-Lazo (Bangor University)

Lecture Theatre 2, Computer Laboratory, William Gates Building

This talk explores the technological choices made at the dawn of the massification of retail finance, and specifically how ideas that computers could enable a cash-free society appeared concurrently with cash dispenser technology. To describe and analyse the development of electronic banking and its entanglement with wider historical processes, we document how the deployment of cash dispenser networks, and later of a fleet of automated teller machines (ATMs), interwove with the adoption of on-line real-time (OLRT) computing in Sweden and the UK. British savings banks started their computerisation rather ‘late’ and benefited from adopting ‘tried and tested’ technology. Meanwhile, Swedish savings banks spearheaded technological change in Europe. In documenting the sequence of events in the networking of Swedish and British banking, we depart from the predominant view that OLRT developed in a single move. Instead, we propose that there are specific conditions inside banking organisations that require considering on-line (OL), or asynchronous, and on-line real-time (OLRT), or synchronous, communication as two distinct stages of development in the adoption of computer technology. As a result, we show how delivering on a cashless society proved more difficult than anticipated.

How do the differences in data collection and processing between competing online retailers influence consumers’ purchasing decisions? Are online shoppers willing to pay extra for better privacy and can companies monetise good privacy practices? I will report on evidence from the largest experiment to date into behavioural privacy economics.

Reducing the security of a complex construction to that of a simpler primitive is one of the central methods of cryptography. Rather recently, in the domain of cryptographic hashing, constructions such as Merkle-Damgård and sponge, based on a fixed-length random oracle (a compression function or permutation), have been proven indifferentiable from a finite-length random oracle. Moreover, Feistel based on a fixed-length random oracle has been shown indifferentiable from a wider random oracle. In this talk we address the fundamental question of constructing an ideal cipher (consisting of exponentially many random oracles) from a small number of fixed-length random oracles.

In this talk, we show that the multiple Even-Mansour construction with 4 rounds, randomly drawn fixed underlying permutations, and a bijective key schedule, is indifferentiable from an ideal cipher. Our proof is accompanied by an efficient differentiability attack on multiple Even-Mansour with 3 rounds.

Practically speaking, we provide a construction of an ideal cipher, as a set of exponentially many permutations, from as few as 4 permutations. On the theoretical side, this result confirms the equivalence between the ideal cipher and random oracle models.
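As a toy illustration of the construction being analysed, here is a 4-round multiple Even-Mansour cipher over single bytes, built from four fixed, randomly drawn public permutations with key whitening between rounds. The domain size and the affine key schedule are invented for the sketch (the schedule is bijective in the master key, but carries no security claim):

```python
import random

random.seed(1)
N = 256                                    # toy domain: single bytes

# Four fixed, independent, randomly drawn public permutations p1..p4
perms = []
for _ in range(4):
    p = list(range(N))
    random.shuffle(p)
    perms.append(p)
inv_perms = []
for p in perms:
    inv = [0] * N
    for x, y in enumerate(p):
        inv[y] = x
    inv_perms.append(inv)

def round_keys(k):
    """Toy bijective key schedule: each round key is a bijection of the
    master key k (an affine tweak, purely illustrative)."""
    return [(k + 17 * i) % N for i in range(5)]

def encrypt(k, x):
    ks = round_keys(k)
    y = x ^ ks[0]
    for i in range(4):                     # whiten with a key, apply a public permutation
        y = perms[i][y] ^ ks[i + 1]
    return y

def decrypt(k, y):
    ks = round_keys(k)
    for i in reversed(range(4)):
        y = inv_perms[i][y ^ ks[i + 1]]
    return y ^ ks[0]

# Each key selects a permutation of the domain, invertible with the same key
for k in (0, 5, 200):
    assert all(decrypt(k, encrypt(k, x)) == x for x in range(N))
```

Each key thus yields one permutation of the domain; the indifferentiability result concerns how closely the full family of such permutations resembles an ideal cipher.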

18 April 10:30Confining the Ghost in the Machine: Using Types to Secure JavaScript Sandboxing / Shriram Krishnamurthi, Brown University

The commercial Web depends on combining content, especially advertisements, from sites that do not trust one another. Because this content can contain malicious code, several corporations and researchers have designed JavaScript sandboxing techniques (e.g., ADsafe, Caja, and Facebook JavaScript). These sandboxes depend on static restrictions, transformations, and libraries that perform dynamic checks. How can we be sure that they work?

We tackle the problem of proving the security of these sandboxes. Our technique depends on creating specialized types to characterize the properties of the sandboxes, exploiting the structure of the checks contained in the libraries. The resulting checkers work on actual JavaScript code that is effectively unaltered; I will focus on our application to Yahoo!'s ADsafe. We establish soundness using our semantics for JavaScript, which has been tested for conformity against real implementations.

Joint work with Arjun Guha and Joe Politz.

17 April 16:00Efficient Cryptography for the Next Generation Secure Cloud / Alptekin Küpçü, Koç University

Peer-to-peer (P2P) systems, and client-server type storage and computation outsourcing constitute some of the major applications that the next generation cloud schemes will address. Since these applications are just emerging, it is the perfect time to design them with security and privacy in mind. Furthermore, considering the high-churn characteristics of such systems, the cryptographic protocols employed must be efficient and scalable.

In this talk, I will focus on an efficient and scalable fair exchange protocol that can be used for exchanging files between participants of a P2P file sharing system. It has been shown that fair exchange cannot be done without a trusted third party (called the Arbiter). Yet, even with a trusted Arbiter, it is still non-trivial to come up with an efficient solution, especially one that can be used in a P2P file sharing system with a high volume of data exchanged. Our protocol is optimistic, removing the need for the Arbiter's involvement unless a dispute occurs. While the previous solutions employ costly cryptographic primitives for every file or block exchanged, our protocol employs them only once per peer, therefore achieving O(n) efficiency improvement when n blocks are exchanged between two peers. In practice, this corresponds to one-two orders of magnitude improvement in terms of both computation and communication (42 minutes vs. 40 seconds, 225 MB vs. 1.8 MB). Thus, for the first time, a provably secure (and privacy respecting when payments are made using e-cash) fair exchange protocol is being used in real bartering applications (e.g., BitTorrent) without sacrificing performance.

Finally, if time permits, I will briefly mention some of our other results on cloud security including ways to securely outsource computation and storage to untrusted entities, official arbitration in the cloud, impossibility results on distributing the Arbiter, keeping the user passwords safe, and the Brownie Cashlib cryptographic library including ZKPDL zero-knowledge proof description language we have developed. I will also be available to talk on these other projects after the presentation.

Despite decades of efforts to improve authentication, the world still relies heavily on secrets chosen (and memorized) by humans: passwords, PINs, personal knowledge questions and the occasional graphical password scheme. While everybody knows these are possible for attackers to guess, our understanding of just how difficult guessing is remains vague. Are passwords or PINs harder to guess, and by how much? How can we accurately compare the difficulty of guessing passwords chosen by older users to those chosen by younger users, or those chosen by English speakers to those chosen by Spanish speakers? This talk will address these questions, presenting the speaker's dissertation research and upcoming IEEE Security & Privacy Symposium publication. To do so, the talk will introduce the right statistical metrics for measuring guessing resistance, discuss how to collect large password datasets in a privacy-friendly and secure manner, and discuss some findings from analyzing 70 million passwords from Yahoo! users, perhaps the largest corpus ever studied.
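Two of the simplest metrics of this kind can be sketched for a fully known distribution: the attacker's success rate within a fixed number of guesses, and the expected number of guesses for an optimal attacker. This is a toy illustration with an invented distribution, not the talk's exact definitions:

```python
def success_rate(probs, beta):
    """Probability that an attacker who knows the distribution succeeds
    within beta guesses, guessing the most common values first."""
    return sum(sorted(probs, reverse=True)[:beta])

def guesswork(probs):
    """Expected number of guesses for an attacker who guesses values in
    decreasing order of probability."""
    ordered = sorted(probs, reverse=True)
    return sum((i + 1) * p for i, p in enumerate(ordered))

# Toy distribution: one very common password and a tail of rarer ones
dist = [0.5, 0.2, 0.1, 0.1, 0.05, 0.05]
assert success_rate(dist, 1) == 0.5     # a single guess breaks half the accounts
assert abs(guesswork(dist) - 2.15) < 1e-9
```

Skewed distributions are why a single summary number misleads: the first guess already succeeds half the time here, even though the average attacker needs more than two guesses.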

Today the method du jour for statistical analysis of user behavior is to gather lots of user data, anonymize it (more or less), and then analyze that data. The need for statistical analysis drives many companies to gather large amounts of user data, often without the users' awareness. My research group at MPI-SWS has been exploring approaches for doing statistical analysis without gathering user data. Rather, user data is kept on user devices, and queries are pushed to these devices. The resulting answers are anonymized and fuzzed such that 1) no single party can associate data with individual users, and 2) the aggregate answers are differentially private. In this talk, I will describe a general approach that we will present at NSDI this year. I will outline the shortcomings of this approach, and follow with some enhancements that scale better in specific application domains, namely web analytics and behavioral advertising.
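The "fuzzing" step can be illustrated by the standard Laplace mechanism for differentially private counts. This is a generic sketch with assumed parameters, not the NSDI system itself:

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling of a Laplace(0, scale) random variate.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count, epsilon):
    """Release a count with epsilon-differential privacy.
    A counting query has sensitivity 1, so the noise scale is 1/epsilon."""
    return true_count + laplace_noise(1.0 / epsilon)

# An aggregate answer collected from many devices, fuzzed before release.
print(private_count(1234, epsilon=0.5))
```

Smaller epsilon means more noise and stronger privacy; the aggregator only ever sees the noisy sum, never individual device answers.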

Bio: Paul Francis is a tenured faculty member at the Max Planck Institute for Software Systems in Germany. Paul has held research positions at Cornell University, ACIRI, NTT Software Labs, Bellcore, and MITRE, and was Chief Scientist at two Silicon Valley startups. Paul's research centers around routing and addressing problems in the Internet and P2P networks. Paul's innovations include NAT, shared-tree multicast, the first P2P multicast system, the first DHT (as part of landmark routing), and Virtual Aggregation. Recently Paul has become interested in designing advertising systems that protect user privacy while allowing for effective targeting.

The growth of ``cloud computing'' and the proliferation of mobile devices contribute to a desire to outsource computing from a client device to an online service. However, in these applications, how can the client verify that the result returned is correct, without redoing the computation herself? We formalize this setting by introducing the notion of verifiable computation, and we provide a protocol that achieves asymptotically optimal performance (amortized over multiple inputs). We then extend the definition of verifiable computation in two important directions: public delegation and public verifiability, which have important applications in many practical delegation scenarios. To achieve these new properties, we establish an important (and somewhat surprising) connection between verifiable computation and attribute-based encryption. Finally, we introduce a new characterization of NP that lends itself to very efficient cryptographic applications, including verifiable computation, succinct non-interactive arguments, and non-interactive zero knowledge proofs.

Bryan Parno is a researcher in the Security and Privacy Group within Microsoft Research, Redmond. His interests span a broad range of security topics, including network and system security, applied cryptography, usable security, and data privacy. Currently, he is investigating next-generation application models, privacy-preserving online services, and cryptographic techniques for securely outsourcing computation. He completed his PhD at Carnegie Mellon University, where he was advised by Adrian Perrig. His dissertation received the 2010 ACM Doctoral Dissertation Award, and he recently co-authored the book “Bootstrapping Trust in Modern Computers”.

The U.S. Institute of Medicine commissioned my 2011 report on the role of trustworthy software in the context of U.S. medical device regulation. This talk will provide a glimpse into the risks, benefits, and regulatory issues for innovation of trustworthy medical device software.

Today, it would be difficult to find medical device technology that does not critically depend on computer software. The technology enables patients to lead more normal and healthy lives. However, medical devices that rely on software (e.g., drug infusion pumps, linear accelerators) continue to injure or kill patients in preventable ways--despite the lessons learned from the tragic radiation incidents of the Therac-25 era. The lack of trustworthy medical device software leads to shortfalls in properties such as safety, effectiveness, dependability, reliability, usability, security, and privacy.

Come learn a bit about the science, technology, and policy that shapes medical device software.

Bio:

Kevin Fu is an Associate Professor of Computer Science and adjunct Associate Professor of Electrical & Computer Engineering at the University of Massachusetts Amherst. Prof. Fu makes embedded computer systems smarter: better security and safety, reduced energy consumption, faster performance. His most recent contributions on trustworthy medical devices and computational RFIDs appear in computer science and medical conferences and journals. The research is featured in critical articles by the NYT, WSJ, and NPR.

Prof. Fu served as a visiting scientist at the Food & Drug Administration, the Beth Israel Deaconess Medical Center of Harvard Medical School, and MIT CSAIL. He is a member of the NIST Information Security and Privacy Advisory Board. Prof. Fu received a Sloan Research Fellowship, NSF CAREER award, and best paper awards from various academic silos of computing. He was named MIT Technology Review TR35 Innovator of the Year. Prof. Fu received his Ph.D. in EECS from MIT when his research pertained to secure storage and web authentication. He also holds a certificate of achievement in artisanal bread making from the French Culinary Institute. He has a doppelganger who works on energy-aware embedded systems.

20 March 14:00 Insecurity Engineering in Locks / Marc Weber Tobias

Lecture Theatre 2, Computer Laboratory, William Gates Building

Insecure designs in physical security locks, safes, and other products have consequences in terms of security, liability, and even loss of life. Marc Weber Tobias and his colleague, Tobias Bluzmanis will discuss a number of cases involving design issues that allow locks and safes to be opened in seconds. In one instance, the insecurity of a gun safe led to the death of a three year old child in the United States. Marc will demonstrate different products that appear secure but in fact are not. A case example will also be presented that involved a lock from Finland that is a perfect example of insecurity engineering. This patented and award winning design appears quite secure, utilizing electronic credentials and yet is seriously flawed.

Speaker's Bio: Marc is a physical security expert in the United States who is an investigative attorney and leads a team of specialists who analyze locks and security hardware for many of the largest lock manufacturers in the world.

14 March 10:00 Malleability in Modern Cryptography / Markulf Kohlweiss, MSRC

In recent years, malleable cryptographic primitives have advanced from being seen as a weakness allowing for attacks, to being considered a potentially useful feature. Malleable primitives are cryptographic objects that allow for meaningful computations, most notably in the example of fully homomorphic encryption. Malleability is, however, a notion that is difficult to capture both in the hand-written and the formal security analysis of protocols.

In my work, I look at malleability from both angles. On one hand, it is a source of worrying attacks that have, e.g., to be mitigated in a verified implementation of the transport layer security (TLS) standard used for securing the Internet. On the other hand, malleability is a feature that helps to build efficient protocols, such as delegatable anonymous credentials and fast, resource-friendly proofs of computations for smart metering. We are building a zero-knowledge compiler for a high-level relational language (ZQL) that systematically optimizes and verifies the use of such cryptographic evidence.

We recently discovered that malleability is also applicable to verifiable shuffles, an important building block for universally verifiable, multi-authority election schemes. We construct a publicly verifiable shuffle that for the first time uses one compact proof to prove the correctness of an entire multi-step shuffle. In our work, we examine notions of malleability for non-interactive zero-knowledge (NIZK) proofs. We start by defining a malleable proof system, and then consider ways to meaningfully ‘control’ the malleability of the proof system. In our shuffle application controlled-malleable proofs allow each mixing authority to take as input a set of encrypted votes and a controlled-malleable NIZK proof that these are a shuffle of the original encrypted votes submitted by the voters; it then permutes and re-randomizes these votes and updates the proof by exploiting its controlled malleability.
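To illustrate the mixing step itself (independently of the controlled-malleable proofs that certify it), here is a toy re-randomizable ElGamal shuffle. The group parameters are deliberately tiny and offer no real security:

```python
import random

# Toy ElGamal over a small safe-prime group (p = 2q + 1).
p, q, g = 467, 233, 4          # g = 2^2 generates the order-q subgroup

def keygen():
    x = random.randrange(1, q)
    return x, pow(g, x, p)     # (secret key, public key)

def encrypt(pk, m):
    r = random.randrange(1, q)
    return (pow(g, r, p), m * pow(pk, r, p) % p)

def rerandomize(pk, ct):
    # Multiply in a fresh encryption of 1: same plaintext,
    # unlinkable-looking ciphertext -- the core of a mix step.
    c1, c2 = ct
    s = random.randrange(1, q)
    return (c1 * pow(g, s, p) % p, c2 * pow(pk, s, p) % p)

def decrypt(sk, ct):
    c1, c2 = ct
    return c2 * pow(c1, -sk, p) % p   # modular inverse via pow (Python 3.8+)

def mix(pk, cts):
    # One mixing authority: re-randomize every ballot, then permute.
    mixed = [rerandomize(pk, ct) for ct in cts]
    random.shuffle(mixed)
    return mixed

sk, pk = keygen()
ballots = [encrypt(pk, m) for m in (5, 7, 11)]
shuffled = mix(pk, ballots)
print(sorted(decrypt(sk, ct) for ct in shuffled))  # [5, 7, 11]
```

The controlled-malleable NIZK in the talk replaces trust in each authority: instead of believing the mix was honest, anyone can check an updated proof that the output is a permutation and re-randomization of the input.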

The concept of ceremony as an extension to network and security protocols was introduced by Ellison. No methods or tools to check correctness or other properties of such ceremonies are currently available. The applications for security ceremonies are vast and fill gaps left by strong assumptions in security protocols, like provisioning of cryptographic keys or correct human interaction. Moreover, no tools are available to check how knowledge is distributed among human peers and in their interaction with other humans and computers in these scenarios. The key component in this paper is the formalisation of human knowledge distribution in security ceremonies. By properly enlisting human expectations and interactions in security protocols, we can minimise the ill-described assumptions we usually see failing. Taking such issues into account when designing or verifying protocols can help us better understand where protocols are more prone to break due to human constraints.

Application markets have revolutionized the software download model of mobile phones: third-party application developers offer software on the market that users can effortlessly install on their phones. This great step forward, however, also imposes some threats to user privacy: applications often ask for permissions that reveal private information such as the user's location, contacts and messages. While some mechanisms to prevent leaks of user privacy to applications have been proposed by the research community, these solutions fail to consider that application markets are primarily driven by advertisements that rely on accurately profiling the user. In this paper we take into account that there are two parties with conflicting interests: the user, interested in maintaining their privacy, and the developer, who would like to maximize their advertisement revenue through user profiling. We have conducted an extensive analysis of more than 250,000 applications in the Android market. Our results indicate that the current privacy protection mechanisms are not effective as developers and advert companies are not deterred. Therefore, we designed and implemented a market-aware privacy protection framework that aims to achieve an equilibrium between the developer's revenue and the user's privacy. The proposed framework is based on the establishment of a feedback control loop that adjusts the level of privacy protection on mobile phones, in response to advertisement-generated revenue.
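In outline, such a feedback loop might resemble the following proportional-control sketch; the gain, ranges, and revenue signal are all hypothetical assumptions for illustration, not the paper's actual design:

```python
def update_privacy_level(level, revenue, target_revenue,
                         gain=0.1, lo=0.0, hi=1.0):
    """level in [0, 1]: 0 = no protection, 1 = full protection.
    When revenue exceeds the target, protection can be tightened;
    when it falls short, protection is relaxed to restore revenue."""
    error = revenue - target_revenue
    new_level = level + gain * error
    return max(lo, min(hi, new_level))  # clamp to the valid range

# Revenue observed over three periods, relative to a target of 1.0.
level = 0.5
for revenue in [1.2, 0.8, 1.0]:
    level = update_privacy_level(level, revenue, 1.0)
print(round(level, 2))  # 0.5
```

The equilibrium idea is that the loop settles at the highest protection level at which the developer's revenue target is still met.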

Through the prevalence of interconnected embedded systems, the vision of pervasive computing has become reality over the last few years. As part of this development, embedded security has become an increasingly important issue in a multitude of applications. Examples include the Stuxnet virus, which has allegedly delayed the Iranian nuclear program; killer applications in the consumer area like iTunes or Amazon's Kindle, whose business models rely heavily on IP protection; and even medical implants like pacemakers and insulin pumps that allow remote configuration. These examples show the destructive and constructive aspects of modern embedded security. For us embedded security researchers, the following definition of yin and yang can be useful for resolving this seeming conflict: "The concept of yin yang is used to describe how polar opposites or seemingly contrary forces are interconnected and interdependent in the natural world, and how they give rise to each other in turn." (OK, the "natural world" part is not a 100% fit here.) In this presentation I will talk about some of our research projects over the last few years which dealt with both the yin and yang aspects of embedded security.

In 1-2 generations of automobiles, car2car and car2infrastructure communication will be available for driver-assistance and comfort applications. The emerging car2x standards call for strong security features. The large volume of data, with up to several thousand incoming messages per second, the strict cost constraints, and the embedded environment make this a challenging task. We show how an extremely high-performance digital signature engine was realized using low-cost FPGAs. Our signature engine is currently widely used in field trials in the USA. The next case study addresses the other end of the performance spectrum, namely lightweight cryptography. PRESENT is one of the smallest known ciphers and can be realized with as few as 1000 gates. The cipher was designed for extremely cost- and power-constrained applications such as RFID tags, which can be used, e.g., as a tool for anti-counterfeiting of spare parts, or for other low-power applications. PRESENT is currently being standardized by ISO.

As "yang examples" of our research we will show how two devices with very large real-world deployment can be broken using physical attacks. First, we show a recent attack against a modern contactless smart card equipped with 3DES. The card is widely used in authentication and payment systems. The second attack breaks the bitstream encryption of current FPGAs. These are reconfigurable hardware devices which are popular in many digital systems. We were able to extract AES and 3DES keys from a single power-up of the reconfiguration process. Once the key has been recovered, an attacker can clone, reverse engineer and alter a presumably secure hardware design.

At Eurocrypt'11, we presented an attack framework on RC4 with applications to the analysis of WEP and WPA. We obtained an efficient distinguisher for WPA and the best theoretical key recovery attack on WPA so far. In this presentation we revisit this work and give new results. We identify several flaws in the analysis and correct them. This is joint work with Pouyan Sepehrdad and Martin Vuagnoux.

In addition to its usual complexity assumptions, cryptography silently assumes that information can be physically protected in a single location. As we now know, real-life devices are not ideal and confidential information leaks through different physical channels. Whilst most aspects of side channel leakage (cryptophthora) are now well understood, no attacks on totally unknown algorithms are known to date. This paper describes such an attack. By _totally unknown_ we mean that no information on the algorithm's mathematical description (including the plaintext size), the microprocessor or the chip's power consumption model is available to the attacker.

This talk will highlight work from two upcoming papers at Financial Cryptography and USEC which includes the first empirical data on how humans choose numerical PINs or multi-word passphrases. Combined with the increasing amount of data on password choice, we can introduce new statistical metrics to evaluate the security provided by human-chosen distributions.

The prospect of outsourcing an increasing amount of data storage and management to cloud services raises many new privacy concerns that can be satisfactorily addressed if users encrypt the data they send to the cloud. If the encryption scheme is homomorphic, the cloud can still perform meaningful computations on the data, even though it is encrypted. In fact, we now know a number of constructions of fully homomorphic encryption schemes that allow arbitrary computation on encrypted data. In the last two years, solutions for fully homomorphic encryption have been proposed and improved upon, but all currently available options seem to be too inefficient to be used in practice. However, for many applications it is sufficient to implement somewhat homomorphic encryption schemes, which support a limited number of homomorphic operations. They can be much faster and more compact than fully homomorphic schemes.

This talk will focus on describing the recent somewhat homomorphic encryption scheme of Brakerski and Vaikuntanathan, whose security relies on the ring learning with errors (RLWE) problem.
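As a concrete (if much simpler) taste of computing on encrypted data, here is textbook Paillier encryption, which is additively homomorphic. This is not the RLWE-based scheme discussed in the talk, and the primes are demo-sized and insecure:

```python
import math
import random

# Textbook Paillier: multiplying ciphertexts adds plaintexts, so a
# cloud can sum encrypted values without ever decrypting them.
p, q = 293, 433                # tiny demo primes -- no real security
n = p * q
n2 = n * n
g = n + 1                      # standard choice of generator
lam = math.lcm(p - 1, q - 1)   # Carmichael function of n

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    return L(pow(c, lam, n2)) * mu % n

def add_encrypted(c1, c2):
    """Homomorphic addition: the product of two ciphertexts
    decrypts to the sum of the two plaintexts (mod n)."""
    return c1 * c2 % n2

c = add_encrypted(encrypt(17), encrypt(25))
print(decrypt(c))  # 42
```

Somewhat homomorphic schemes like Brakerski-Vaikuntanathan's go further, supporting a limited number of multiplications as well as additions, which is what makes non-trivial computation on encrypted data possible.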