This volume contains the papers accepted at the 7th Systems Software
Verification Conference (SSV 2012), held in Sydney, November 28-30,
2012.

Industrial-strength software analysis and verification has
advanced in recent years through the introduction of model checking,
automated and interactive theorem proving, and static analysis
techniques, as well as correctness by design, correctness by contract,
and model-driven development. However, many techniques work
under restrictive assumptions that are invalidated by complex embedded
systems software such as operating system kernels, low-level device
drivers, or micro-controller code.

The aim of the SSV workshop and
conference series is to bring together researchers and developers from
both academia and industry who are facing real software and real
problems, with the goal of finding real, applicable solutions.

This year we received 25 submissions, and the Program Committee
selected 13 of them for presentation at the conference. Each
submission was reviewed by three Program Committee members. We wish to
thank the Program Committee members and their sub-reviewers for their
competent and timely reviews within the short allotted time frame. We
also thank the two invited speakers, Andreas Podelski (University
of Freiburg, Germany) and Lee Pike (Galois, Inc., USA), for accepting
our invitation to give presentations at SSV 2012.

SSV 2012 used the
EasyChair conference system to manage the reviewing process. We are
indebted to the EPTCS staff who provided flawless support in the
preparation of this EPTCS volume. Finally, the SSV program chairs
and organizers gratefully acknowledge the sponsorship of National ICT
Australia Ltd (NICTA), Australia's Information and Communications
Technology Research Centre of Excellence, and Red Lizard Software
(www.redlizards.com).

The "do-it-yourself" (DIY) culture encourages individuals to design and craft objects on their own, without relying on outside experts. DIY construction should be inexpensive, with easy-to-access materials. Ranging from hobbyist electronics to urban farming to fashion, DIY is enjoying something of a resurgence across the United States. We see no reason why DIY culture should not also extend to compilers, and in particular, to high-assurance compilers.

From 2009 to 2011, NASA contracted Galois, Inc. to research the possibility of augmenting complex aerospace software systems with runtime verification (RV). Our answer to the contract goals was Copilot, an embedded domain-specific language (EDSL) for generating embedded monitors. The Copilot language itself, focusing on its RV uses for NASA, has been described previously [1, 2, 3].

Our assurance challenge in the project was phrased as, "Who watches the watchmen?", meaning that if the RV monitor is the last line of defense, then it must not fail or, worse, introduce unintended faults itself. However, because the primary goal of the project was to implement an RV system and to field-test it, few resources were available for assuring the correctness of the Copilot compiler. Our approach was born out of necessity.
Specifically, we employ three not-so-secret weapons from the functional-languages and formal-methods communities: building EDSLs, designing sub-Turing-complete languages, and taking a verifying-compiler approach to assurance. This talk summarizes our experiences with this work and provides recommendations for lightweight assurance methods for domain-specific compiler design.
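To give a flavor of the EDSL approach, the sketch below is a toy, sub-Turing-complete expression language embedded in Python that compiles to C expression strings. All names are hypothetical and the design is far simpler than Copilot's actual language; it only illustrates why such a language cannot fail to terminate.

```python
# A toy, sub-Turing-complete monitor EDSL in the spirit of (but not
# identical to) Copilot: expressions are finite syntax trees with no
# recursion or unbounded loops, so every generated monitor is a single,
# obviously terminating C expression. Names here are illustrative only.

from dataclasses import dataclass

class Expr:
    """Base class for monitor expressions."""

@dataclass
class Var(Expr):
    name: str          # external sensor value sampled at runtime

@dataclass
class Const(Expr):
    value: float

@dataclass
class BinOp(Expr):
    op: str            # one of "+", "-", "*", "<", ">"
    lhs: Expr
    rhs: Expr

def compile_to_c(e: Expr) -> str:
    """Generate a C expression string from an EDSL term."""
    if isinstance(e, Var):
        return e.name
    if isinstance(e, Const):
        return repr(e.value)
    if isinstance(e, BinOp):
        return f"({compile_to_c(e.lhs)} {e.op} {compile_to_c(e.rhs)})"
    raise TypeError(f"unknown expression: {e!r}")

# A monitor asserting that altitude stays above a floor of 100.0:
monitor = BinOp(">", Var("altitude"), Const(100.0))
print(compile_to_c(monitor))   # -> (altitude > 100.0)
```

Because the embedded language has no loops or recursion, code generation is a bounded tree traversal, which is one ingredient of the lightweight assurance story.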

A recent approach to the verification of programs constructs a
correctness proof in the form of a finite automaton. The automaton
recognizes a set of traces. Here, a trace is any sequence of
statements (not necessarily feasible and not necessarily on a path
in the control flow graph of the program).

A trace can be formalized as a word over the alphabet of
statements. A trace can also be viewed as a special case of a
program. Applying static analysis or a symbolic method (e.g., SMT
solving with interpolant generation) to a single trace, a
correctness proof for the trace can be obtained in the form of a
sequence of consecutive Hoare triples (or, phrased differently, an
inductive sequence of assertions).
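As a concrete (hypothetical) illustration, the sketch below represents one trace and its proof as an inductive sequence of assertions, and checks each consecutive Hoare triple by exhaustive enumeration over a small finite domain, standing in for the SMT-based check described above:

```python
# A correctness proof for the single trace  x := 0; x := x + 1
# against the postcondition x > 0, given as an inductive sequence of
# assertions: true, x == 0, x > 0. Statements are modeled as state
# transformers; each Hoare triple {phi_i} st {phi_{i+1}} is checked by
# enumerating a small finite range of x (a stand-in for an SMT solver).

trace = [
    ("x := 0",     lambda s: {**s, "x": 0}),
    ("x := x + 1", lambda s: {**s, "x": s["x"] + 1}),
]

assertions = [
    lambda s: True,          # precondition
    lambda s: s["x"] == 0,   # intermediate assertion
    lambda s: s["x"] > 0,    # postcondition
]

def triples_hold(trace, assertions, domain=range(-5, 6)):
    """Check every consecutive Hoare triple on states drawn from a
    finite domain (sound only for that domain, unlike a real prover)."""
    for i, (_, stmt) in enumerate(trace):
        for x in domain:
            s = {"x": x}
            if assertions[i](s) and not assertions[i + 1](stmt(s)):
                return False
    return True

print(triples_hold(trace, assertions))  # -> True
```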

Given a program and n traces of the program, we can
construct an automaton from the n different correctness proofs for
the traces. The automaton recognizes a
set of correct traces. We still need to check whether this set
includes all the traces on a path in the control flow graph of the
program. The check is an automata-theoretic operation (which is
reducible to non-reachability in a finite graph). That is, the two
steps of constructing and checking a proof neatly separate the two
concerns of data and control in program verification. The
construction of a proof in the form of an automaton accounts for the
interpretation of statements in data domains. The automaton,
however, has a meaning that is oblivious to the interpretation of
statements: a set of words over a particular alphabet. The check of
the proof uses this meaning of the automaton and accounts for the
control flow of the program. The implementation of the check of the
proof as an automata-theoretic inclusion check is reminiscent of
model checking (the finite control flow graph defines the model, the
automaton defines the property).
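The inclusion check above can be sketched as follows: a minimal Python model in which the proof automaton is a DFA over the alphabet of statements and the check is reachability in the product of the control flow graph with the DFA's complement. The program and automaton are illustrative, not drawn from the talk.

```python
# Sketch of the automata-theoretic proof check: the program is correct
# iff every trace reaching the error location of the control flow graph
# is accepted by the proof automaton. Equivalently, no pair
# (error location, rejecting DFA state) is reachable in the product.

from collections import deque

# Control flow graph of  x := 0; x := x + 1; assert x > 0,
# where the "assume !(x>0)" edge leads to the error location.
cfg_edges = {
    ("l0", "x:=0"): "l1",
    ("l1", "x:=x+1"): "l2",
    ("l2", "assume !(x>0)"): "err",
}

# Proof DFA built from the per-trace Hoare-triple proofs; it accepts
# exactly the traces proved infeasible (hence correct).
dfa_trans = {
    ("q0", "x:=0"): "q_x_eq_0",
    ("q_x_eq_0", "x:=x+1"): "q_x_pos",
    ("q_x_pos", "assume !(x>0)"): "q_accept",  # contradiction: infeasible
}
dfa_accepting = {"q_accept"}

def program_correct(cfg_edges, cfg_init, cfg_error,
                    dfa_trans, dfa_init, accepting):
    """Non-reachability check in the product of the CFG with the
    complement of the proof DFA (a finite-graph search)."""
    seen, work = set(), deque([(cfg_init, dfa_init)])
    while work:
        loc, q = work.popleft()
        if (loc, q) in seen:
            continue
        seen.add((loc, q))
        # Error location paired with a rejecting DFA state means the
        # proof automaton misses this error trace: the check fails.
        if loc == cfg_error and q not in accepting:
            return False
        for (src, stmt), dst in cfg_edges.items():
            if src == loc:
                # Missing DFA edges go to a rejecting sink state.
                work.append((dst, dfa_trans.get((q, stmt), "q_sink")))
    return True

print(program_correct(cfg_edges, "l0", "err",
                      dfa_trans, "q0", dfa_accepting))  # -> True
```

Note how the search never interprets the statements themselves; the data reasoning is frozen inside the DFA, and the check is pure graph reachability, mirroring the separation of data and control described above.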

The resulting verification method
is not compositional in the syntax of the program; it is
compositional in a new, semantics-directed sense where modules are
sets of traces; the sets of traces are constructed from mutually
independent correctness proofs and intuitively correspond to
different cases of program executions. Depending on the
verification problem (the correctness property being safety or
termination for sequential, recursive, or concurrent programs), the
approach uses non-deterministic automata, nested-word automata,
Büchi automata, or alternating automata as proofs.