Registration

Objectives

It has been understood since the late 1960s that tools and structures arising in mathematical logic and proof theory can usefully be applied to the design of high level programming languages, and to the development of reasoning principles for such languages. Yet low level languages, such as machine code, and the compilation of high level languages into low level ones have traditionally been seen as having little or no essential connection to logic.

However, a fundamental discovery of this past decade has been that low level languages are also governed by logical principles. From this key observation has emerged an active and fascinating new research area at the frontier of logic and computer science. The practically-motivated design of logics reflecting the structure of low level languages (such as heaps, registers and code pointers) and low level properties of programs (such as resource usage) goes hand in hand with some of the most advanced contemporary research in semantics and proof theory, including classical realizability and forcing, double orthogonality, parametricity, linear logic, game semantics, uniformity, categorical semantics, explicit substitutions, abstract machines, implicit complexity and sublinear programming.

Topics

The LOLA workshop, affiliated with LICS, will bring together researchers interested in many aspects of the relationship between logic and low level languages and programs. Topics of interest include, but are not limited to:

Typed assembly languages,

Certified assembly programming,

Certified and certifying compilation,

Proof-carrying code,

Program optimization,

Modal logic and realizability in machine code,

Realizability and double orthogonality in assembly code,

Parametricity, modules and existential types,

General references, Kripke models and recursive types,

Continuations and concurrency,

Implicit complexity, sublinear programming and Turing machines,

Closures and explicit substitutions,

Linear logic and separation logic,

Game semantics, abstract machines and hardware synthesis,

Monoidal and premonoidal categories, traces and effects.

Invited Talks

Vellvm: Verifying Transformations of the LLVM IR

Steve Zdancewic (University of Pennsylvania)

The Low-Level Virtual Machine (LLVM) compiler provides a modern, industrial-strength SSA-based intermediate representation (IR) along with infrastructure support for many source languages and target platforms. Much of the LLVM compiler is structured as IR to IR translation passes that apply various optimizations and analyses.

In this talk, I will describe the Vellvm project, which seeks to provide a formal framework for developing machine-checkable proofs about LLVM IR programs and translation passes. I'll discuss some of the subtleties of modeling the LLVM IR semantics, including nondeterminism and its use of SSA representation. I'll also describe some of the proof techniques that we have used for reasoning about LLVM IR transformations and give two example applications: (1) formal verification of the SoftBound pass, which hardens C programs against memory safety errors, and (2) the mem2reg transformation, which promotes stack-allocated temporaries to registers and converts "trivial" SSA programs to "minimal, pruned" SSA programs.
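To give a flavour of the kind of transformation involved, here is a small illustrative fragment (my own example, not drawn from the talk): mem2reg replaces a stack slot introduced by `alloca` with SSA values, inserting a `phi` node where control-flow paths join.

```llvm
; Before mem2reg: %x lives in a stack slot (illustrative fragment)
define i32 @f(i1 %c) {
entry:
  %x = alloca i32
  store i32 1, i32* %x
  br i1 %c, label %then, label %done
then:
  store i32 2, i32* %x
  br label %done
done:
  %v = load i32, i32* %x        ; read back the stack slot
  ret i32 %v
}

; After mem2reg: the slot is promoted to SSA values joined by a phi node
define i32 @f(i1 %c) {
entry:
  br i1 %c, label %then, label %done
then:
  br label %done
done:
  %v = phi i32 [ 1, %entry ], [ 2, %then ]   ; value depends on the path taken
  ret i32 %v
}
```

Proving that such a rewrite preserves behaviour for arbitrary control-flow graphs is exactly the sort of obligation a framework like Vellvm is designed to discharge mechanically.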

Vellvm, which is implemented in the Coq theorem prover, provides facilities for extracting LLVM IR transformation passes and plugging them into the LLVM compiler, thus enabling us to create verified optimization passes for LLVM and evaluate them against their unverified counterparts. Our experimental results show that fully verified and automatically extracted implementations can yield competitive performance.

This is joint work with Jianzhou Zhao and Milo Martin (both at Penn) and Santosh Nagarakatte (at Rutgers University).

Machine code, formal verification and functional programming

Magnus Myreen (University of Cambridge)

All programs run in the form of machine code. Ultimately, all formal software verification ought to reach down to this concrete form of execution and prove theorems about the actual ARM, x86 or PowerPC machine instructions that pass through the processor.

This talk presents techniques and tools that aid both the verification of existing machine code (e.g. as produced by gcc) and the synthesis of correct-by-construction machine code from high-level specifications (functions in logic). All of these tools have been implemented and used within the HOL4 theorem prover.

For post hoc verification, I describe a proof-producing decompiler that, given concrete machine code, returns a clean logic function describing the effect of executing that machine code. The decompiler is proof producing, i.e. for each run it proves that the generated function is correct w.r.t. the specification of the underlying machine language (instruction set architecture). I will briefly comment on a significant case study, namely, decompilation of the ARM output from gcc for the seL4 microkernel (10k lines of C developed at NICTA).
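As a rough sketch of what such a decompiler produces (a simplified example of my own, not taken from the talk), an ARM-style counting loop might be decompiled into a tail-recursive function over the registers it touches, together with a machine-checked theorem relating the function to the instruction set model:

```
; ARM-style loop adding r0 down to zero into r1 (illustrative)
L: ADD  r1, r1, r0    ; r1 := r1 + r0
   SUBS r0, r0, #1    ; r0 := r0 - 1, setting flags
   BNE  L             ; branch back while r0 <> 0

(* decompiled function in logic: one recursive call per loop iteration *)
loop (r0, r1) =
  let r1 = r1 + r0 in
  let r0 = r0 - 1  in
  if r0 = 0 then (r0, r1) else loop (r0, r1)
```

The point of the proof-producing style is that the user never trusts the decompiler itself: each run yields a theorem certifying that the generated function matches the behaviour of the concrete code.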

For synthesis, I turn the decompiler around into a tool that generates correct machine code from functions in logic. This technique has been successfully applied in projects that construct verified implementations of Lisp (and soon ML). The most sophisticated verified Lisp implementation has decent efficiency due to its ability to perform verified dynamic compilation at runtime. Support for dynamic compilation requires reasoning about self-modifying x86 code.

The talk includes a number of short demos and is meant to be accessible to people unfamiliar with machine code or theorem proving.