Courses

This is a tentative list of topics to be covered in the school; they are subject to change without notice. Note that in addition to
the main courses we will also have guest lecturers, many of whom will share valuable industry practices.

Below is a tentative list of guest lecturers for the school. Unlike the lecturers for the main courses listed above, guest lecturers will
usually give only a few lectures each. However, we feel that having lecturers, many of whom are practitioners in the fields of verification,
program analysis, and related areas, gives students a unique breadth of knowledge.
Unlike the rest of the School, guest lectures are open to the general public.

Language-based Security

Language-based security is gaining popularity because it provides natural means for specifying and enforcing application-level and language-level security policies. Popular highlights include Java stack inspection, which enforces stack-based access control; Java bytecode verification, which verifies the type safety of bytecode; and web mechanisms such as Caja, ADsafe, and FBJS, which enforce sandboxing and separation through program transformation and language subsets.

In these lectures, we survey the state of the art in language-based information-flow security, focusing particularly on information-release (declassification) policies and on the trade-offs between static and dynamic techniques for enforcing information-flow policies. We pay particular attention to the fundamental challenges and opportunities in web application security.
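As a toy illustration of the dynamic enforcement side (an assumed example, not material from the lectures), the sketch below tracks secrecy labels at run time: labels propagate through operations, a public sink rejects secret data, and an explicit declassification point is the only way to release it.

```python
# Toy dynamic information-flow tracking: values carry a secrecy label,
# taint propagates through operations, and a public sink refuses to
# output secret data unless it was explicitly declassified.

class Tainted:
    """Wraps a value together with a secrecy label."""
    def __init__(self, value, secret=True):
        self.value = value
        self.secret = secret

    def __add__(self, other):
        # Taint propagates: the result is secret if either operand is.
        ov = other.value if isinstance(other, Tainted) else other
        os = other.secret if isinstance(other, Tainted) else False
        return Tainted(self.value + ov, self.secret or os)

def declassify(t):
    # Explicit information-release point: drops the secrecy label.
    return Tainted(t.value, secret=False)

def public_output(t):
    # Public sink: refuses to leak secret data.
    if isinstance(t, Tainted) and t.secret:
        raise RuntimeError("information-flow violation: secret reaches public sink")
    return t.value if isinstance(t, Tainted) else t

salary = Tainted(5000)              # secret input
total = salary + 100                # taint propagates through the addition
# public_output(total) would raise; declassification makes the release legal:
print(public_output(declassify(total)))   # 5100
```

Static enforcement would reject the leaking program at compile time instead; the lectures discuss when each approach is preferable.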

Realization of Solver-Based Techniques for Dynamic Software Verification

One approach to software testing is based on checking (explicit and/or implicit) assertions in program code in order to detect bug conditions. This implies numerous attempts to execute a program under varying input stimuli, trying to cover the 'bad' regions of code. Combining random and directed approaches is key to a high bug-detection rate on real code. We will describe the underlying techniques of a working system that implements instrumentation and execution of a program under test for dynamic verification driven by symbolic analysis.
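The directed part of that combination can be sketched as follows (an assumed toy example, not the lecturers' system): run the program on a concrete input, record which branches were taken, then "solve" the path condition for the untaken branches; here a brute-force search over a small domain stands in for a real constraint solver.

```python
# Toy solver-driven test generation: direct the search branch by branch
# toward a hard-to-hit "bad" region of the program under test.

def program(x):
    # Program under test with a rarely executed buggy region.
    if x % 17 == 3:
        if x > 100:
            return "bug"
    return "ok"

def solve(constraints, domain=range(0, 1000)):
    # Stand-in for an SMT solver: find any input satisfying all constraints.
    for x in domain:
        if all(c(x) for c in constraints):
            return x
    return None

# Path predicates corresponding to the two branches in `program`.
b1 = lambda x: x % 17 == 3
b2 = lambda x: x > 100

inputs_tried = [0]                 # seed input: takes the all-false path
x1 = solve([b1])                   # flip the first branch
inputs_tried.append(x1)
x2 = solve([b1, b2])               # flip the second branch too
inputs_tried.append(x2)

print(inputs_tried, program(x2))   # [0, 3, 105] bug
```

Random testing alone would need ~1,700 trials on average to hit this region; the directed search reaches it in three executions.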

Practical instrumentation of C/C++ code is the key stage. Even a simple optimization of code structure can save much time in test running. This mostly means producing code that is as linear as possible, is free from side effects, and uses clear and, where possible, symbolically articulated branching conditions and assignments. In addition, tracing metadata should be attached.
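The transformation can be illustrated in miniature (a hand-written example in Python rather than actual C/C++ tool output): a compound, side-effecting condition is flattened into simple assignments over named temporaries, and every branch decision is traced.

```python
# Hand-written illustration of the instrumentation style: linearized,
# side-effect-free branch conditions plus tracing metadata.

trace = []   # metadata collected during execution

def original(x, xs):
    # Compound condition with a side effect (xs.pop()) buried inside it.
    if x > 0 and xs.pop() != x:
        return 1
    return 0

def instrumented(x, xs):
    # Linearized version: each subcondition is a separate, side-effect-free
    # test over a named temporary, and every branch outcome is traced.
    t0 = x > 0
    trace.append(("branch", 0, t0))
    if not t0:
        return 0
    t1 = xs.pop()              # side effect hoisted into its own statement
    t2 = t1 != x
    trace.append(("branch", 1, t2))
    return 1 if t2 else 0

assert original(5, [3]) == instrumented(5, [3])
print(trace)   # [('branch', 0, True), ('branch', 1, True)]
```

The named temporaries (`t0`, `t1`, `t2`) are exactly what the symbolic engine reasons about when choosing which branch to flip next.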

We will also consider various optimization approaches, such as using data dependencies for parallelized path enumeration, branch-count reduction, and taking advantage of hierarchical code structure.

Proving Absence of Deadlocks in Hardware

A modern digital circuit is a network of multiple processing elements, in which the computational workload is shared by processor cores, hardware accelerators, decoders, etc. Distributed computation relies upon efficient communication between system components. Tricky algorithms for ordering, resource sharing, and arbitration are now widely used in chip interconnects and often contain subtle bugs. Deadlocks, if found late in the design cycle, can be particularly devastating.

We propose a framework and a language called xMAS for high-level modeling of communication microarchitectures. The language is graphical (a model is a diagram) and permits capturing design ideas in a natural way. Models have synchronous semantics similar to Verilog but are much more amenable to formal analysis, e.g., automatic generation of invariants and quick proofs of liveness. In this talk we introduce the xMAS modeling language and describe several formal verification techniques that have been successfully applied to liveness verification of industrial hardware blocks at Intel Corp.
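xMAS models are graphical and cannot be reproduced in a snippet; as a generic illustration of the underlying intuition (an assumption, not part of xMAS itself), a deadlock is a cycle of components each blocked waiting on another, which can be found by cycle detection in a wait-for graph.

```python
# Generic deadlock check: a deadlock exists iff the wait-for graph
# between components contains a cycle (found here by DFS coloring).

def has_deadlock(wait_for):
    """wait_for maps each component to the components it is blocked on."""
    WHITE, GRAY, BLACK = 0, 1, 2
    nodes = set(wait_for) | {m for ms in wait_for.values() for m in ms}
    color = {n: WHITE for n in nodes}

    def dfs(n):
        color[n] = GRAY
        for m in wait_for.get(n, ()):
            if color[m] == GRAY:          # back edge: a cycle of waits
                return True
            if color[m] == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in nodes)

# Two agents each holding a buffer the other needs: a classic deadlock.
print(has_deadlock({"core": ["dma"], "dma": ["core"]}))   # True
print(has_deadlock({"core": ["dma"], "dma": []}))         # False
```

The xMAS techniques discussed in the talk go further: they generate the invariants that rule such cycles out for all reachable states, not just one configuration.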

Introduction to Metacomputation

Metacomputation is a field of computer science comprising partial evaluation, supercompilation, and related theories, methods, and tools. The prefix 'meta' indicates that metacomputation regards programs as objects of analysis and/or transformation. A gentle introduction to metacomputation and its methods (including supercompilation) will be given in the first part of the talk.
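A standard textbook illustration of the first technique named above, partial evaluation (not taken from the talk itself), is specializing a two-argument program with respect to a known static argument, yielding a residual program over the remaining dynamic one.

```python
# Partial evaluation in miniature: specialize power(x, n) for a known n,
# unfolding the recursion on the static argument at specialization time.

def power(x, n):
    return 1 if n == 0 else x * power(x, n - 1)

def specialize_power(n):
    # The "mix" step: the recursion on n disappears, leaving a residual
    # expression over the dynamic argument x only.
    expr = "1" if n == 0 else " * ".join(["x"] * n)
    code = f"lambda x: {expr}"
    return eval(code), code

power5, residual = specialize_power(5)
print(residual)                  # lambda x: x * x * x * x * x
print(power5(2), power(2, 5))    # 32 32
```

The residual program contains no test on `n` and no recursion, which is exactly the speedup partial evaluation promises; supercompilation generalizes this idea to far deeper transformations.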

In the second part of the talk we discuss ways to apply a particular metacomputation technique, neighborhood analysis, to program testing. Testing based on neighborhood analysis is sound and complete testing of a given program against a specification (preconditions and postconditions): when such testing over a finite set of tests is completed, partial correctness of the program is guaranteed.
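Neighborhood analysis itself is beyond a short snippet; as a simplified stand-in (an assumption for illustration only), the sketch below shows the contract-checking side of the idea: a program tested against its precondition and postcondition exhaustively over a finite input set, which makes the verdict both sound and complete on that set.

```python
# Contract-based testing over a finite domain: check every input that
# satisfies the precondition against the postcondition.

def integer_sqrt(n):
    # Program under test: floor of the square root, by linear search.
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

pre  = lambda n: n >= 0                              # precondition
post = lambda n, r: r * r <= n < (r + 1) * (r + 1)   # postcondition

def check(program, pre, post, tests):
    """Return all inputs in `tests` that violate the contract."""
    return [n for n in tests if pre(n) and not post(n, program(n))]

print(check(integer_sqrt, pre, post, range(0, 200)))   # [] : no violations
```

The point of neighborhood analysis is to extend such a verdict beyond the tested inputs: each test certifies a whole neighborhood of inputs on which the program's behavior is provably the same.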

Further Topics in Metacomputation

In this talk we discuss applications of one of the most sophisticated metacomputation techniques, supercompilation, to program analysis. Supercompilation is a source-to-source program transformation technique, primarily considered a tool for program optimization. In the first part of the talk we discuss how an optimizing supercompiler may be applied to program verification and review some results.

In the second part we consider applications of supercompilation to program analysis by transformation. We review the following advanced topics: proving program equivalence and inferring lemmas about programs by transformation.

Time Abstraction in Simulation-Based Hardware Verification

The main distinction of hardware designs from most software systems is the explicit presence of time. A system clock drives all hardware activities and synchronizes concurrent multi-cycle operations. Obviously, the presence of time should be taken into account when developing reference models for simulation-based verification. However, creating time-accurate models, which enable cycle-by-cycle behavior checking, is very labor-intensive and, even more importantly, such models are fragile under code modifications. To reduce verification expenses and increase test reusability, one has to abstract away from some timing properties of the design. The lecture considers the issues and challenges of developing time-approximate reference models and applying them to simulation-based hardware verification.
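A minimal sketch of the idea (an assumed example, not taken from the lecture): instead of matching the design cycle by cycle, a time-approximate checker only requires that outputs appear in the reference order and within a latency bound, abstracting away exact timing.

```python
# Time-approximate checking: compare a timed DUT trace against an
# untimed reference sequence, tolerating any latency up to a bound.

def check_time_approximate(dut_trace, ref_values, max_latency):
    """dut_trace: list of (cycle, value); ref_values: expected order."""
    if [v for _, v in dut_trace] != list(ref_values):
        return False                    # data or ordering mismatch
    prev_cycle = 0
    for cycle, _ in dut_trace:
        if cycle - prev_cycle > max_latency:
            return False                # too late even for the abstraction
        prev_cycle = cycle
    return True

dut = [(2, "a"), (5, "b"), (7, "c")]    # pipelined, variable latency
print(check_time_approximate(dut, ["a", "b", "c"], max_latency=4))   # True
print(check_time_approximate(dut, ["a", "c", "b"], max_latency=4))   # False
```

A cycle-accurate model would have to be rewritten whenever the pipeline depth changes; this checker survives such modifications as long as ordering and the latency bound hold.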

Detecting Races and Memory Corruption Bugs

Data races and memory corruption bugs are among the hardest issues to debug.
Many static and dynamic tools are able to detect these bugs, but there is no silver bullet yet: most of the tools are slow, inaccurate, or both.
In this talk you will learn about two dynamic testing tools we implemented and deployed at Google.

ThreadSanitizer, a detector of data races, has been in active use at Google for the last three years.
You will learn about the algorithms and the different instrumentation techniques used in this tool.
We will share our experience in race hunting: what races were found, what problems users observed, and how we deployed a regular testing process.
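ThreadSanitizer's actual algorithm is substantially more involved, but the happens-before core can be illustrated in miniature (an assumed teaching example): each access to a memory location carries its thread's vector clock, and two accesses race if neither clock is ordered before the other.

```python
# Minimal happens-before race check using vector clocks: two accesses
# to the same location race iff their clocks are incomparable.

def leq(c1, c2):
    """Happens-before: every component of c1 is <= the same one in c2."""
    return all(c1.get(t, 0) <= c2.get(t, 0) for t in set(c1) | set(c2))

def races(c1, c2):
    # Concurrent = neither access happens-before the other.
    return not leq(c1, c2) and not leq(c2, c1)

# Thread T1 writes x, then releases a lock; T2 writes x after acquiring it.
w1 = {"T1": 1}
w2_synced = {"T1": 1, "T2": 1}   # clocks joined at the acquire: ordered
w2_unsynced = {"T2": 1}          # no synchronization: concurrent

print(races(w1, w2_synced))      # False: a happens-before edge exists
print(races(w1, w2_unsynced))    # True: a data race
```

The real tool additionally has to intercept every memory access and synchronization event cheaply, which is where the instrumentation techniques covered in the talk come in.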
AddressSanitizer, which finds use-after-free and buffer-overflow bugs, is a very young tool (3 months old).
It replicates the functionality of existing (commercial and open-source) tools, but it has a huge advantage in speed: the average slowdown is less than 2x.
We will describe the algorithm behind AddressSanitizer and show how its speed creates new opportunities in testing.

Verification As a Matter Of Course

Verification is only one of the concerns of software development. Most projects are not willing to renounce, for the sake of producing verified software, the other advances of programming technology, or to adopt a radically altered software process. I will present an ongoing development, the Eiffel Verification Environment (EVE), which makes verification a "matter of course" by leveraging Eiffel's contracts and integrating various verification technologies, from tests to proofs, into a modern IDE and a modern software development process.

The first lecture will describe the overall principles and structure of the EVE effort, particularly its techniques for integrating complementary verification techniques. The second lecture will present a specific aspect of the research: the alias calculus, which addresses a core issue in the verification of object-oriented programs, namely handling pointer aliasing.
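A small illustration of the problem the alias calculus targets (an assumed example, shown in Python rather than Eiffel): a contract that looks locally true is silently broken when two references denote the same object.

```python
# Why aliasing matters for verification: the postcondition of `transfer`
# holds when src and dst are distinct, and fails when they alias.

class Account:
    def __init__(self, balance):
        self.balance = balance

def transfer(src, dst, amount):
    # Intended postcondition: dst.balance increases by `amount`.
    dst.balance += amount
    src.balance -= amount

a, b = Account(100), Account(100)
transfer(a, b, 10)
print(b.balance)     # 110: as intended when src and dst are distinct

c = Account(100)
transfer(c, c, 10)   # src and dst alias the same object
print(c.balance)     # 100: the postcondition on dst is silently violated
```

A verifier cannot prove the postcondition without knowing whether `src` and `dst` may alias; the alias calculus computes exactly that information.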

Equivalence Checking of Sequential Programs

Equivalence checking for various models of computation has always been regarded as one of the most fundamental problems in computer science. It may be viewed as a variant of program verification in the case when programs are checked against specifications that are programs themselves. Equivalence checking algorithms have found applications in many branches of hardware and software engineering, including software verification, code optimization, VLSI design, malware detection, code refactoring, and program (de)obfuscation.

In this lecture we study propositional and first-order models of sequential programs, consider the equivalence checking problem in these models, find the border between tractable and intractable cases of this problem, and finally look at some techniques that yield polynomial-time equivalence checking procedures.
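A deliberately small instance of the problem (an assumed example): two straight-line boolean programs are checked for equivalence by exhaustive evaluation over all inputs, which is the exponential brute-force baseline that the polynomial-time procedures discussed in the lecture improve upon.

```python
# Brute-force equivalence check for boolean programs: evaluate both
# programs on every input vector and compare the outputs.

from itertools import product

def prog1(a, b, c):
    return (a and b) or (a and c)

def prog2(a, b, c):
    return a and (b or c)          # distributivity: should be equivalent

def prog3(a, b, c):
    return a or (b and c)          # a genuinely different function

def equivalent(p, q, arity=3):
    return all(p(*xs) == q(*xs)
               for xs in product([False, True], repeat=arity))

print(equivalent(prog1, prog2))    # True
print(equivalent(prog1, prog3))    # False
```

With n inputs this takes 2^n evaluations; the tractable program models in the lecture are exactly those where equivalence can be decided without enumerating the input space.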

Analyzing TDL4, One of the Most Sophisticated Malware Families

The latest version of TDSS, also known as TDL4, is installed on more than 4.5 million infected PCs around the world.
It is one of the most sophisticated pieces of malware, using high-end technologies to hide its activity. Some of its techniques are already well known, such as MBR infection or the x64 rootkit.
But there are also features that are new for the whole malware landscape: P2P botnet communication through the public KAD network, double malware authentication over HTTPS, and more. Cybercriminals use TDSS to sell black-hat SEO, anonymization services, and malware installs.
In this lecture we will take a closer look at this malware and its C&C servers, while that is still possible.