
▼ Designing the software architecture of a system is an important step in creating a system
that will meet both its functional and non-functional requirements. Bass, Clements, and
Kazman, in "Software Architecture in Practice", propose a method that uses design tactics to
create the software architecture of a system. For each quality attribute, a set of design
tactics was developed. Security is one such quality attribute.
The security requirements of a system must be taken into account from the start. Adding
security adversely affects the other quality attributes, so it is difficult to add security to a
product after it has been designed. When defining the security tactics of a system, there
is currently no way to formally prove the implementation of the tactics. There is a
semantic gap between the software architecture, high level design and implementation of
a system. One way to bridge this gap is to use formal specifications. The formal
specifications can be used as a template when designing new systems and to analyze the
architecture more rigorously.
This project provides a Z specification for the Software Architectural Tactics for the
Security Quality Attribute. A model of a system is created and each tactic is defined with
respect to the model. Each tactic is independent; however, the system encompasses all
the required functionality for all the tactics.
Advisors/Committee Members: Zhang, Cui.


▼

Improving the performance and functionality of contemporary debugging tools is essential to alleviate the debugging task. This dissertation aims at narrowing the gap between current capabilities of debugging tools and industry requirements by improving two important debugging techniques: error trace compaction and automated debugging. Error trace compaction leverages incremental SAT and heuristics to reduce the number of clock cycles required to observe a failure in an error trace.
The technique presented reduces the length of the error trace to a minimum while
improving performance by 8× compared to a previous technique. The second contribution uses maximum satisfiability to enhance the
functionality and performance of automated debuggers. The method proposed can identify where in the design the bug is located and when in the error trace the bug is excited.
Compared to a competitive SAT-based approach, our formulation produces problems that are 80% smaller and that can be solved 4.5× faster.
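The idea of shrinking an error trace to the fewest clock cycles needed to expose a failure can be sketched in a few lines. This is a toy binary-search reduction that assumes failure is monotone in trace length; `fails` stands in for a simulator, and none of this reflects the incremental-SAT-plus-heuristics formulation the dissertation actually uses:

```python
def shortest_failing_prefix(trace, fails):
    """Return the shortest prefix of `trace` on which `fails` still
    reports the bug. Assumes failure is monotone in trace length,
    so binary search over prefix lengths applies."""
    lo, hi = 1, len(trace)
    while lo < hi:
        mid = (lo + hi) // 2
        if fails(trace[:mid]):
            hi = mid          # bug still observable: try shorter
        else:
            lo = mid + 1      # too short to excite the bug
    return trace[:lo]

# Toy design: a counter "fails" once it has seen three 1-inputs.
def fails(prefix):
    return sum(prefix) >= 3

print(shortest_failing_prefix([1, 0, 1, 0, 1, 1, 1], fails))  # [1, 0, 1, 0, 1]
```

The reduced trace keeps only the cycles up to the earliest point at which the failure is observable, which is exactly the notion of minimal length used above.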


▼ As robots become increasingly capable and general-purpose, it is desirable for them to be easily controllable by a wide variety of users. The near future will likely see robots in homes and offices, performing everyday tasks such as fetching coffee and tidying rooms. There is therefore a growing need for non-expert users to be able to easily program robots performing complex high-level tasks. Such high-level tasks include behaviors comprising non-trivial sequences of actions, reacting to external events, and achieving repeated goals. Recent advances in the application of formal methods to robot control have enabled automated synthesis of correct-by-construction hybrid controllers for complex high-level tasks. These approaches use a discrete abstraction of the robot workspace and a temporal logic specification of the environment assumptions and desired robot behavior, and yield controllers that are provably correct with respect to this abstraction and specification. However, there are many remaining challenges in ensuring that a user-defined specification yields a robot controller that achieves the desired high-level behavior. This dissertation addresses several causes of failure resulting from logical implications of the specification itself, as well as those arising because of inconsistencies between the discrete abstraction and the continuous execution domain. Work on three main challenges is described. The first is an algorithm for the analysis of logic specifications, which provides a high-level cause of failure for specifications that have no implementation, or unsynthesizable specifications. An interactive game is also introduced, allowing users to explore the cause of unsynthesizability. The second is the identification of a minimal explanation of failure: several techniques are presented to identify a core subset of the specification that causes unsynthesizability.
The third problem addressed is the definition of an appropriate timing semantics for the specification and execution of hybrid controllers synthesized from high-level specifications. Several controller-synthesis frameworks are compared, and their suitability to different problem domains is discussed, based on their underlying assumptions and properties of the resulting continuous behaviors.
Advisors/Committee Members: Selman, Bart (committeeMember), Easley, David Alan (committeeMember), Halpern, Joseph Yehuda (committeeMember).


▼ This dissertation presents a survey of the theoretical and practical techniques necessary to provably eliminate side-channel leakage through known mechanisms in component-based secure systems. We cover the state of the art in leakage measures, including both Shannon and min entropy, concluding that Shannon entropy models the observed behaviour of our example systems closely, and can be used to give a safe bound on vulnerability in practical scenarios. We comprehensively analyse several channel-mitigation strategies: cache colouring and instruction-based scheduling, showing that effectiveness and ease of implementation depend strongly on subtle hardware features. We also demonstrate that real-time scheduling can be employed to effectively mitigate remote channels at minimal cost. Finally, we demonstrate that we can reason formally (and mechanically) about probabilistic non-functional properties, by formalising the probabilistic language pGCL in the Isabelle/HOL theorem prover, and using it to verify an implementation of lattice scheduling, a well-known cache-channel countermeasure. We prove that a correspondence exists between standard vulnerability bounds, in a channel-centric view, and the refinement lattice on programs in pGCL, used to model a guessing attack on a vulnerable system – a process-centric view.
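The two leakage measures the abstract compares can be computed directly for a discrete channel. The sketch below implements the textbook Shannon (mutual-information) leakage and Smith-style min-entropy leakage for a channel matrix; it is illustrative only and is not code from the dissertation:

```python
import math

def shannon_leakage(prior, channel):
    """Mutual information I(X;Y) in bits for prior p(x) and channel
    matrix channel[x][y] = p(y|x): the Shannon-entropy leakage."""
    py = [sum(prior[x] * channel[x][y] for x in range(len(prior)))
          for y in range(len(channel[0]))]
    mi = 0.0
    for x, px in enumerate(prior):
        for y, pyx in enumerate(channel[x]):
            if px > 0 and pyx > 0:
                mi += px * pyx * math.log2(pyx / py[y])
    return mi

def min_entropy_leakage(prior, channel):
    """Min-entropy leakage: log2 of posterior over prior vulnerability,
    where vulnerability is the best single-guess success probability."""
    v_prior = max(prior)
    v_post = sum(max(prior[x] * channel[x][y] for x in range(len(prior)))
                 for y in range(len(channel[0])))
    return math.log2(v_post / v_prior)

# A noiseless 1-bit channel leaks exactly one bit under both measures.
ident = [[1.0, 0.0], [0.0, 1.0]]
print(shannon_leakage([0.5, 0.5], ident),
      min_entropy_leakage([0.5, 0.5], ident))  # 1.0 1.0
```

For noisy channels the two measures generally disagree, which is why choosing a measure that safely bounds vulnerability, as the dissertation argues for Shannon entropy in its settings, matters.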
Advisors/Committee Members: Heiser, Gernot, Computer Science & Engineering, Faculty of Engineering, UNSW.

▼ Answer set programming (ASP) is a declarative programming paradigm for the
design and implementation of knowledge-intensive applications, particularly
useful for modeling problems involving combinatorial search. The input
languages of the first ASP software systems had the attractive property of a
simple, fully specified declarative semantics, making it possible to use
formal methods to analyze ASP programs – to verify correctness, for example,
or to show that two programs were equivalent. Since that time, many useful new
constructs have been added to input languages. The increase in usability,
however, has come at the expense of a fully specified semantics, as the
semantics of newer constructs has not quite kept pace with the most general
syntax that solvers can handle.
In this thesis, we will describe one approach to bridging the gap between
mathematical formulations of the semantics of ASP languages and the current
state of the languages themselves. Our approach is to view ASP programs as
corresponding to infinitary formulas (formulas with infinitely long
conjunctions and disjunctions).
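To illustrate the idea (a standard example of this correspondence, not text from the thesis): a rule with a schematic variable stands for a conjunction of its ground instances over the Herbrand universe \(\mathcal{H}\), which is infinitary whenever \(\mathcal{H}\) is infinite:

```latex
p(X) \leftarrow q(X)
\quad\rightsquigarrow\quad
\bigwedge_{t \in \mathcal{H}} \bigl( q(t) \rightarrow p(t) \bigr)
```

Under this view, constructs whose semantics is awkward to state rule-by-rule can be given meaning uniformly as (possibly infinite) propositional formulas.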
Advisors/Committee Members: Lifschitz, Vladimir (advisor), Boyer, Robert S. (committee member), Dillig, Isil (committee member), Hunt, Warren A., Jr. (committee member), Schaub, Torsten (committee member).


▼ Formal methods have been shown to be beneficial in increasing the quality of, and
confidence in, software systems. Despite the advantages of using formal methods
in software development, uptake in the commercial industry has been limited, and
the use of informal and semi-formal notations is favoured. To bridge the gap
between the ease of use of semi-formal notation and the correctness of formal methods,
a number of approaches to the formalisation of informal and semi-formal notation
have been researched and documented. Two of these approaches are discussed in
this dissertation and demonstrated on a medium-sized case study.
It was shown that each approach offered results that differed in terms of levels of
abstraction, requisite knowledge of the formal target specification language and
potential for automation.
Advisors/Committee Members: Van der Poll, J A (advisor), Venter, Lucas (advisor).


▼ Formal languages use mathematical notations to capture the software specifications precisely. Design by Contract is a technique used during software implementation to ensure that the software conforms to the specifications. However, there is a semantic gap in the development process between the specifications written in a formal language and the design contracts written in an implementation language.
Automated conversion of formal specifications to the design contracts in object-oriented implementation languages can help bridge this gap. Prior work has been done by Sowmiya Ramkarthik and Sherri Sanders to develop such tools for the automated conversions from the Object-Z formal language to the Java and Object PERL implementation languages. However, each tool was custom-built to work with only one or two such OO languages. Moreover, there was significant redundant effort in the development of each of these tools.
FOZCIL (Framework for Object Z Conversion to Implementation Language) is a framework that captures and implements these redundant language-independent features as frozen spots (i.e., the fixed part of the framework) and the language-dependent properties as hot spots (i.e., the extensible part of the framework). When the framework accepts the language-dependent features for a target object-oriented language, it generates a FOZCIL tool instance which, in turn, is capable of accepting Object-Z specifications and converting the specifications to skeletal code with dynamically-checkable design contracts written in that target language.
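The frozen-spot/hot-spot split can be sketched as a tiny plugin architecture. This is illustrative only: the class and method names below are invented for the sketch and do not reflect FOZCIL's actual interfaces or its Object-Z parsing:

```python
class FrameworkSketch:
    """Frozen spot: the language-independent traversal of a parsed
    schema. The language-dependent syntax is supplied by an emitter
    object (the hot spot), so a new target language only needs a new
    emitter."""
    def __init__(self, emitter):
        self.emitter = emitter              # hot spot: per-language
    def generate(self, schema):             # frozen spot: shared logic
        lines = [self.emitter.class_header(schema["name"])]
        for inv in schema["invariants"]:
            lines.append(self.emitter.contract(inv))
        return "\n".join(lines)

class JavaEmitter:
    """One hypothetical hot spot, targeting Java-style skeletal code."""
    def class_header(self, name):
        return f"public class {name} {{  // skeletal code"
    def contract(self, inv):
        return f"  // @invariant {inv}  (dynamically checked)"

schema = {"name": "Account", "invariants": ["balance >= 0"]}
print(FrameworkSketch(JavaEmitter()).generate(schema))
```

Instantiating the framework with a different emitter object would yield a tool instance for a different target language, while the traversal logic stays fixed.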
Advisors/Committee Members: Zhang, Cui.

The multiagent systems approach has been increasingly used for the development of complex systems, which has spurred research interest in Agent-Oriented Software Engineering (AOSE) and organizational models. In this context, this thesis studies the applicability of some traditional formal methods of software engineering to the formal specification of multiagent system organizations, analyzing the use of the RSL formal specification language to represent the PopOrg organizational model. The RSL language was chosen because it is a formal specification language that covers a wide spectrum of formal specification methods (model-based and property-based, applicative and imperative, sequential and concurrent), and the PopOrg model was chosen because it is a minimal model of multiagent system organization, designed to represent the minimum set of structural and operational aspects that such organizations should have. The use of the RSL language was evaluated both for specifying the structural aspect of PopOrg systems and for the operational specification of these systems. A preliminary study carried out with the CSP language for the operational specification of the PopOrg model is also presented, as it was the basis for the specification in RSL. Finally, an extension of the RSL language is suggested to allow for its wider applicability to the specification of multiagent systems.


▼

Bodies or Books of Knowledge (BoKs) have only been transcribed in mature fields
where practices and rules have been well established (settled) and are gathered for
any prospective or current practitioner to refer to. As a precursor to creating a BoK,
it is first important to know whether the domain contains settled knowledge and how this
knowledge can be isolated. One approach, as described in this work, is to use Formal
Concept Analysis (FCA) to structure the knowledge (or parts of it) and construct
a pruned concept lattice to highlight patterns of use and filter out the common and
established practices that best suit the solving of a problem within the domain.
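The FCA step can be illustrated with a brute-force enumeration of formal concepts from a small binary context. This is an illustrative sketch: the objects and attributes are invented, and real FCA tooling uses faster algorithms such as NextClosure rather than closing every attribute subset:

```python
from itertools import combinations

def concepts(objects, attrs, incidence):
    """Enumerate all formal concepts (extent, intent) of a binary
    context by closing every attribute subset. Exponential in the
    number of attributes, but fine for a tiny context."""
    attrs = sorted(attrs)
    found = set()
    for r in range(len(attrs) + 1):
        for subset in combinations(attrs, r):
            # Extent: objects having every attribute in the subset.
            extent = frozenset(o for o in objects
                               if all((o, a) in incidence for a in subset))
            # Intent: closure, i.e. all attributes common to the extent.
            intent = frozenset(a for a in attrs
                               if all((o, a) in incidence for o in extent))
            found.add((extent, intent))
    return found

# Toy context: which verification approach each case study used.
ctx = {("case1", "SAT"), ("case1", "model-check"),
       ("case2", "model-check"), ("case3", "theorem-prove")}
objs = {"case1", "case2", "case3"}
atts = {"SAT", "model-check", "theorem-prove"}
print(len(concepts(objs, atts, ctx)), "concepts")  # prints: 5 concepts
```

Pruning this lattice, for example keeping only concepts whose extent exceeds a support threshold, is what surfaces the widely shared, settled practices.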
In the railway domain, formal methods have been applied for a number of years to
solve various modelling and verification problems. Their common use and straightforward
application (with some refinement) make them easy to identify and therefore a
prime candidate to test for settled knowledge within the railway domain. They also
provide other assurances of settled knowledge along the way.

Kumar, A. (2015). A Preparatory Study Towards a Body of Knowledge in the Field of Formal Methods for the Railway Domain. (Masters Thesis). McMaster University. Retrieved from http://hdl.handle.net/11375/18416



▼ Current tools used to verify software in languages
with reference semantics, such as Java, are hampered by the
possibility of aliases. Existing approaches to addressing this
long-standing verification problem try not to sacrifice modularity
because modular reasoning is what makes verification tractable. To
achieve this, these approaches treat the value of a reference
variable as a memory address in the heap. A serious problem with
this decision is that it severely limits the usefulness of generic
collections because they must be specified as collections of
references, and components of this kind are fundamentally flawed in
design and implementation. The limitations become clear when
attempting to verify clients of generic collections. The first step
in rectifying the situation is to redefine the "value" of a
reference variable in terms of the abstract value of the object it
references. A careful analysis of what the "value" of a reference
variable might mean leads inevitably to this conclusion, which is
consistent with the denotation of a variable in languages with
value semantics, such as RESOLVE. Verification in languages with
value semantics is straightforward compared to verification in
languages with reference semantics precisely because of the lack of
concern with heap properties. However, there is still a nagging
problem: aliasing can occur in legal programs in languages with
reference semantics, such as Java, and it must be handled when it
does occur. The crux of the issue is not in-your-face assignment
statements that copy references but rather aliasing arising within
(hidden) method bodies. The reason is that the effects of calls to
these methods in client code must be summarized in their
specifications in order to preserve modularity. So, the second step
is the introduction of a discipline restricting what a client can
do with a reference that is aliased within a method. The discipline
advertises the creation of such aliases in method specifications
and prevents a client from engaging in behavior that would break
the abstractions of the objects being referenced, as this would
also prevent modular verification. These restrictions allow code to
be specified in terms of the abstract values of objects instead of
treating the values of references as memory addresses in the heap.
Even though the discipline prevents some programming idioms, it
remains flexible enough to allow for most common programs,
including the use of iterators, without the need for workarounds. A
tool can verify that a program satisfies the provisions of this
discipline. Further, it can generate verification conditions that
rely only on abstract object values to demonstrate a program's
correctness. These verification conditions can be discharged by the
theorem-proving tools currently used to verify RESOLVE
programs.
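The hazard being described is easy to reproduce in any reference-semantics language. The toy Python analogue below (not RESOLVE code, and not from the dissertation) shows a hidden method body handing out an alias that lets a client silently break the object's abstraction:

```python
class SortedBag:
    """Maintains the abstraction that `items` is always sorted."""
    def __init__(self):
        self.items = []

    def add(self, x):
        self.items.append(x)
        self.items.sort()

    def view(self):
        # Hidden aliasing: returns a reference to internal state
        # rather than a copy, so callers can mutate it directly.
        return self.items

bag = SortedBag()
for x in (3, 1, 2):
    bag.add(x)

alias = bag.view()
alias.append(0)       # client mutation through the alias...
print(bag.items)      # ...breaks the sortedness invariant: [1, 2, 3, 0]
```

A discipline like the one the abstract proposes would require `view` to advertise the alias in its specification and would forbid the invariant-breaking mutation, so that reasoning in terms of abstract values remains sound.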
Advisors/Committee Members: Weide, Bruce W. (Advisor).


▼ This thesis presents the development of high-level guaranteed robot control for reactive behaviours by relaxing two assumptions: 1) the workspace is well-known in advance; 2) the temporal logic based formalisms encode the specification by expressing the properties of future paths. This thesis addresses the challenges of relaxing both of these assumptions by presenting an approach for automatically re-synthesizing a hybrid controller that guarantees user-defined high-level robot behaviour while exploring and updating a partially unknown workspace, and providing a concise grammar for specifying high-level tasks that require memory of past events. For the first challenge, this thesis introduces an approach that includes dynamically adding new regions into the workspace during execution, automatically rewriting the specification, and re-synthesizing the controller while preserving the robot state and its history of task completion. The approach is implemented within the LTLMoP toolkit and is demonstrated in an experiment. For the second challenge, this thesis introduces an innovative structured English grammar for specifying high-level behaviour that automatically performs memory operations without requiring explicit definition from the specification designer. This grammar admits intuitive, unambiguous specifications for tasks that implicitly use memory for purposes including non-repeated goals, strictly ordered requirements, etc. The approach is also implemented within the LTLMoP toolkit.
Advisors/Committee Members: Hencey, Brandon M. (committeeMember).


▼ Modern interactive systems can be incredibly complex, with a variety of screens, menus, widgets, etc. available to the user. Because of this, modelling these interactive systems is correspondingly complex, and while there are techniques to help manage this complexity, they can lead to radically different models depending on the modeller.
This thesis explores the use of two design patterns created in order to help simplify modelling interactive systems. In the process of doing this, we first explore µ-Charts and its semantics, which are defined in Z, in order to understand its capabilities. We then discover a feature we name the Return feature, which we break down into two parts, Return Home and Return Back, and create patterns in order to concisely model them. Finally, we test these patterns and evaluate the tools we used to create the µ-charts, translate them into their Z semantics and test them.
Advisors/Committee Members: Reeves, Steve (advisor).


▼ Developing a verifying compiler – a compiler that proves that components are correct with respect to their specifications – is a grand challenge for the computing community. The prevailing view of the software engineering community is that this problem is necessarily difficult because, to date, the resultant verification conditions necessary to prove software correctness have required powerful automated provers and deep mathematical insight to dispatch. This seems counter-intuitive, however, since human programmers only rarely resort to deep mathematics to assure themselves that their code works. In this work, we show that well-specified and well-engineered software supported by a flexible, extensible mathematical system can yield verification conditions that are sufficiently straightforward to be dispatched by a minimalist rewrite prover. We explore techniques for making such a system both powerful and user-friendly, and for tuning an automated prover to the verification task. In addition to influencing the design of our own system, RESOLVE, this exploration benefits the greater verification community by informing specification and theory design and encouraging greater integration between nuanced mathematical systems and powerful programming languages. This dissertation presents the design and an implementation of a minimalist rewrite prover tuned for the software verification task. It supports the prover with the design and implementation of a flexible, extensible mathematical system suitable for use with an industrial-strength programming language. This system is employed within a well-integrated specification framework. The dissertation validates the central thesis that verification conditions for well-engineered software can be dispatched easily by applying these tools and methods to a number of benchmark components and collecting and analyzing the resulting data.
Advisors/Committee Members: Sitaraman, Murali, Dean, Brian C., Hallstrom, Jason O., Pargas, Roy P.


▼

The model-checking problem for software product lines is harder than for single systems. Indeed, one has to verify all the software variants of a product line, whose number grows exponentially in the number of their differences. Techniques intended for single systems are inefficient in this case, for these can only be applied to all products separately. Variability-aware modelling formalisms and algorithms were designed as a more appropriate answer to that problem. Their strength lies in their ability to check a behaviour common to several products only once, which leads to substantial performance gains. Although they constitute a major step toward the efficient verification of product lines, these techniques are not yet mature enough to truly achieve this objective. They still lack optimisations to verify large models, the expressiveness to represent complex forms of variable behaviour, as well as usable languages and tools required for industrial transfer. This thesis aims at improving product-line model checking in order to provide formalisms, algorithms, and tools that together hold the potential to verify real-world software product lines. We first study the state-of-the-art model-checking approaches for software product lines. We compare them in terms of available formalisms and algorithms, and we determine that the approach based on Featured Transition Systems (FTS) is the most suitable to act as a basis for the design of novel techniques. We then extend its theory along two axes: efficiency and expressiveness. For the former, we propose abstraction methods that can reduce the size of the model to check while maintaining correctness and completeness; they consist of extensions of single-system abstraction techniques applied to FTS, and of new techniques that abstract away from their variability. As for expressiveness, on the one hand we propose featured timed automata as a combination of FTS and real-time behaviour. 
On the other hand, we analyse how to support in FTS complex forms of variability such as numeric features and multi-features. We implemented our theoretical results in a new model-checking tool named ProVeLines. As all our extensions are based on FTS, they share many commonalities. ProVeLines was therefore designed as a product line in which each variant implements a distinct combination of verification formalisms and algorithms. Finally, we show that the principles we designed for product-line model checking can also be applied to other formal methods, namely the verification of adaptive systems and the synthesis of controllers for product lines.
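The core efficiency idea of FTS-style verification, exploring a behaviour once for a whole set of products at a time, can be sketched as a small reachability routine. This is a simplified illustration with invented state names and features, not the ProVeLines algorithm:

```python
def reachable_products(transitions, init, target, products):
    """Reachability on a featured transition system (FTS): each
    transition (src, guard, dst) carries a guard, the set of features
    it requires. Instead of exploring each product separately, every
    state is mapped to the whole set of products that can reach it."""
    reach = {init: frozenset(products)}   # state -> products reaching it
    worklist = [init]
    while worklist:
        s = worklist.pop()
        for src, guard, dst in transitions:
            if src != s:
                continue
            # Products that reach s AND satisfy the transition's guard.
            prods = frozenset(p for p in reach[s] if guard <= p)
            if prods - reach.get(dst, frozenset()):
                reach[dst] = reach.get(dst, frozenset()) | prods
                worklist.append(dst)
    return reach.get(target, frozenset())

# Toy vending machine: only products with the 'free' feature can
# reach 'freeserve' (serving without payment).
fts = [("idle", frozenset(), "paid"),
       ("paid", frozenset(), "serve"),
       ("idle", frozenset({"free"}), "freeserve")]
products = {frozenset(), frozenset({"free"})}
print(reachable_products(fts, "idle", "freeserve", products))
```

A single pass answers the question for every product at once; techniques like the FTS algorithms in the thesis do the same symbolically, with feature expressions instead of explicit product sets.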


▼ Software spends a significant portion of its life-cycle in the maintenance phase and over 20% of the maintenance effort is fixing defects. Formal methods, including verification, can reduce the number of defects in software and lower corrective maintenance, but their industrial adoption has been underwhelming. A significant barrier to adoption is the overhead of converting imperative programming languages, which are common in industry, into the declarative programming languages that are used by formal methods tools. In comparison, the verification of software written in declarative programming languages is much easier because the conversion into a formal methods tool is easier. The growing popularity of declarative programming – evident from the rise of multi-paradigm languages such as JavaScript, Ruby, and Scala – affords us the opportunity to verify the correctness of software more easily.
Clojure is a declarative language released in 2007 that compiles to bytecode that executes on the Java Virtual Machine (JVM). Despite being a newer, declarative programming language, several companies have already used it to develop commercial products. Clojure shares a Lisp syntax with ACL2, an interactive theorem prover that is used to verify the correctness of software. Since both languages are based on Lisp, code written in either Clojure or ACL2 is easily converted to the other. Therefore, Clojure can conceivably be verified by ACL2 with limited overhead assuming that the underlying behavior of Clojure code matches that of ACL2. ACL2 has been previously used to reason about Java programs through the use of formal models of the JVM. Since Clojure compiles to JVM bytecode, a similar approach is taken in this dissertation to verify the underlying implementation of Clojure.
The research presented in this dissertation advances techniques to verify Clojure code in ACL2. Clojure and ACL2 are declarative, but they are specifically functional programming languages so the research focuses on two important concepts in functional programming and verification: arbitrary-precision numbers ("bignums") and lists. For bignums, the correctness of a model of addition is verified that addresses issues that arise from the unique representation of bignums in Clojure. Lists, in Clojure, are implemented as a type of sequence. This dissertation demonstrates an abstraction that equates Clojure sequences to ACL2 lists. In support of the research, an existing ACL2 model of the JVM is modified to address specific aspects of compiled Clojure code and the new model is used to verify the correctness of core Clojure functions with respect to corresponding ACL2 functions. The results support the ideas that ACL2 can be used to reason about Clojure code and that formal methods can be integrated more easily in industrial software development when the implementation corresponds semantically to the verification model.
Advisors/Committee Members: Page, Rex (advisor), Hougen, Dean (advisor), Miller, David (committee member), Jensen, Matthew (committee member), Weaver, Christopher (committee member).


▼ Network management will benefit from automated tools based upon formal methods. In these tools, the algorithm for computing reachability is the core algorithm for verifying network properties in the data plane. This dissertation presents efficient algorithms for computing reachability and verifying network properties for a single network with both packet filters and transformers, and for interconnected networks.
For computing port to port reachability in a network, we present a new formal method for a new tool, Atomic Predicates (AP) Verifier, which is much more time and space efficient than existing tools. Given a set of predicates representing packet filters, AP Verifier computes a set of atomic predicates, which is minimum and unique. The use of atomic predicates dramatically speeds up computation of network reachability. AP Verifier also includes algorithms to process network update events and check compliance with network policies and properties in real time.
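The atomic-predicates idea can be illustrated over a toy packet universe: refine a partition of the universe by each filter predicate, so that every filter becomes a union of atoms and can then be represented by a small set of atom indices. This is a set-based sketch only; AP Verifier represents predicates as BDDs and guarantees that the computed set of atomic predicates is minimum and unique:

```python
def atomic_predicates(universe, predicates):
    """Partition `universe` so that each predicate (a set of packets)
    is a union of blocks: the blocks are the atomic predicates
    induced by the predicate set."""
    blocks = [set(universe)]
    for pred in predicates:
        refined = []
        for b in blocks:
            inside, outside = b & pred, b - pred
            refined += [blk for blk in (inside, outside) if blk]
        blocks = refined
    return blocks

# Toy 4-packet universe and two overlapping ACL predicates.
universe = {0, 1, 2, 3}
acl1 = {0, 1}        # e.g. "source in subnet A"
acl2 = {1, 2}        # e.g. "destination port 80"
atoms = atomic_predicates(universe, [acl1, acl2])
print(sorted(sorted(a) for a in atoms))   # [[0], [1], [2], [3]]
```

Once filters are expressed as sets of atoms, reachability computations reduce to cheap set operations on atom indices, which is the source of the speed-up described above.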
Packet transformers are widely used in Internet service provider networks, datacenter infrastructures, and layer-2 networks. Existing network verification tools do not scale to such networks with large numbers of different transformers. We present a new tool, AP+ Verifier, based upon a new algorithm for computing atomic predicates for networks with both packet filters and transformers. For performance evaluation, we use network datasets with different types of transformers (i.e., MPLS tunnels, IP-in-IP tunnels, and NATs). We found that AP+ Verifier is more time and space efficient than prior tools by orders of magnitude.
The Internet consists of a large collection of networks. To debug reachability problems, a network operator often asks operators of other networks for help by telephone or email. We present a new protocol, COVE, and an efficient data structure for automating the exchange of data plane reachability information between networks in a business relationship. COVE is designed to improve a network's views of forward and reverse reachability with partial deployment in mind. COVE is scalable to very large networks in the Internet. We illustrate applications of COVE to perform useful network management tasks, which cannot be done effectively using existing methods and tools.
Advisors/Committee Members: Lam, Simon S., 1947- (advisor), Emerson, Ernest A. (committee member), Garg, Vijay K. (committee member), Gouda, Mohamed G. (committee member), Mok, Aloysius K. (committee member).


▼ High-integrity applications are safety- and
security-critical applications developed for a variety of critical
tasks. The correctness of these applications must be thoroughly
tested or formally verified to ensure their reliability and
robustness. The major properties to be verified for the correctness
of applications include: (1) functional properties, capturing the
expected behaviors of the software, (2) dataflow properties, tracking
data dependencies and preventing secret data from leaking to the
public, and (3) robustness properties, the ability of a program to
deal with errors during execution.
This dissertation presents and
explores formal verification and proof techniques, a promising
approach that uses rigorous mathematical methods, to verify critical
applications from the above three aspects. Our research is carried
out in the context of SPARK, a programming language designed for
development of safety- and security-critical applications.
First,
we have formalized in the Coq proof assistant the dynamic semantics
for a significant subset of the SPARK 2014 language, which includes
run-time checks as an integral part of the language, as formal methods for program specification and verification depend on the
unambiguous semantics of the language.
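The flavor of a semantics with integrated run-time checks can be sketched in a few lines (an assumed illustration, far simpler than the Coq formalization; names are invented): evaluation of an operation is defined to either yield an in-range value or signal a check failure.

```python
# Illustrative sketch (assumed; not the SPARK/Coq formalization): an
# arithmetic operation whose semantics includes an overflow check as an
# integral part of evaluation, not as an afterthought.
INT_MIN, INT_MAX = -2**31, 2**31 - 1

class RunTimeCheckError(Exception):
    """Raised when an integrated run-time check fails."""

def checked_add(x, y):
    # Evaluation either produces an in-range value or signals a failure;
    # there is no third outcome in the defined semantics.
    result = x + y
    if not (INT_MIN <= result <= INT_MAX):
        raise RunTimeCheckError("overflow check failed")
    return result
```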
Second, we have formally
defined and proved the correctness of run-time checks generation
and optimization based on SPARK reference semantics, and have built
the certifying tools within the mechanized proof infrastructure to
certify the run-time checks inserted by the GNAT compiler frontend
to guarantee the absence of run-time errors.
Third, we have
proposed a language-based information security policy framework and
the associated enforcement algorithm, which is proved to be sound
with respect to the formalized program semantics. We have shown how
the policy framework can be integrated into SPARK 2014 for more
advanced information security analysis.
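A minimal sketch of the core idea of such a language-based policy (names and shapes invented here, not the SPARK framework's API): variables carry security levels from a lattice, and an enforcement check rejects any assignment that would move information downward.

```python
# Hypothetical sketch (invented names): a two-level security lattice and a
# check that assignments never flow information from high to low.
LEVEL = {"low": 0, "high": 1}

def may_flow(src, dst):
    """Information may only flow upward (or sideways) in the lattice."""
    return LEVEL[src] <= LEVEL[dst]

def check_assignments(assignments, labels):
    """assignments: list of (target, [sources]); labels: var -> level."""
    return all(
        may_flow(labels[s], labels[t])
        for t, sources in assignments
        for s in sources
    )

labels = {"secret": "high", "pub": "low", "log": "high"}
ok = check_assignments([("log", ["secret", "pub"])], labels)   # allowed
bad = check_assignments([("pub", ["secret"])], labels)         # a leak
```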
Advisors/Committee Members: John M. Hatcliff.

► The field of chip design is characterized by contradictory pressures to reduce time-to-market and maintain a high level of reliability. As a result, module reuse…
(more)

▼ The field of chip design is characterized by contradictory pressures to reduce time-to-market and maintain a high level of reliability. As a result, module reuse has become common practice in chip design. To save time on both design and verification, Systems-on-Chips (SoCs) are composed using pre-designed and pre-verified modules. The integrated modules are often designed by different groups and for different purposes, and are later integrated into a single chip. In the absence of a single interface standard for such modules, "plug-n-play" style integration is not likely, as the subject modules are often designed to comply with different interface protocols. For such modules to communicate correctly there is a need for some glue logic, also called a protocol converter, that mediates between them.
Though much research has been dedicated to the protocol converter synthesis problem of SoC communication, converter synthesis is still performed manually, consuming development and verification time and risking human error. Current approaches to automatic synthesis of protocol converters mostly lack formal foundations and either employ abstractions far removed from the Hardware Description Language (HDL) implementation level or grossly simplify the structure of the protocols considered.
This thesis develops and presents techniques for automatic synthesis of provably correct on-chip protocol converters. Basing the solution on a formal approach, a novel state-machine-based formalism is presented for modelling bus-based protocols and formalizing the notions of protocol compatibility and correct protocol conversion. Algorithms for automatic compatibility checking and provably correct converter synthesis are derived from the formalism, including a systematic exploration of the design space of the protocol converter, the first in the field, which enables generation of various alternative deterministic converters.
The work presented is unique in its combination of a completely formal approach and the use of a low abstraction level that enables precise modelling of protocol characteristics and automatic translation of the constructed converter to HDL.
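The compatibility-checking idea can be sketched as follows (an assumed illustration, not the thesis's own formalism): explore the synchronous product of two protocol state machines; a reachable product state with no joint action signals a mismatch that a converter would have to mediate.

```python
# Illustrative sketch (assumed formalism): compatibility checking of two
# protocol state machines via their synchronous product. For simplicity we
# treat every reachable state with no joint step as a mismatch; a real
# formalism would distinguish legitimate final states.
def incompatible_states(p, q, start):
    """p, q map state -> {label: next_state}; machines synchronize on
    shared labels. Returns reachable product states with no joint step."""
    seen, frontier, stuck = {start}, [start], set()
    while frontier:
        sp, sq = frontier.pop()
        joint = set(p.get(sp, {})) & set(q.get(sq, {}))
        if not joint:
            stuck.add((sp, sq))
        for label in joint:
            nxt = (p[sp][label], q[sq][label])
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return stuck

# A master that issues "req" then expects "ack", against a matching slave:
master = {"m0": {"req": "m1"}, "m1": {"ack": "m0"}}
slave = {"s0": {"req": "s1"}, "s1": {"ack": "s0"}}
mismatch = incompatible_states(master, slave, ("m0", "s0"))  # empty set
```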
Advisors/Committee Members: Sowmya, Arcot, Computer Science & Engineering, Faculty of Engineering, UNSW, Parameswaran, Sri, Computer Science & Engineering, Faculty of Engineering, UNSW.

► Information flow security concerns how to protect sensitive data in computer systems by avoiding undesirable flow of information between the users of the systems. This…
(more)

▼ Information flow security concerns how to protect sensitive data in computer systems by avoiding undesirable flows of information between the users of the systems. This thesis studies information flow security properties in state-based systems, dealing in particular with modelling and verification methods for asynchronous systems and synchronous systems with schedulers. The aim of this study is to provide a foundational guide to ensuring confidentiality in system design and verification.
The thesis begins with a study of definitions of security properties in asynchronous models. Two classes of security notions are of particular interest. Trace-based properties disallow deductions of high security level secrets from low level observation traces. Bisimulation-based properties express security as a low-level observational equivalence relation on states. In the literature, several distinct schools have developed frameworks for information flow security properties based on different semantic domains. One of the major contributions of the thesis is a systematic study that compares security notions, using semantic mappings between two state-based models and a particular process algebraic model.
An advantage of state-based models is the availability of well-developed verification methods and tools for functional properties in finite state systems. The thesis investigates the application of these methods to the algorithmic verification of the information flow security properties in the asynchronous setting. The complexity bounds for verifying these security properties are given as polynomial time for the bisimulation-based properties and polynomial space complete for the trace-based properties. Two heuristics are presented to aid verification of the properties in practice. Timing channels are one of the major concerns in the computer security community, but are not captured in asynchronous models.
In the final part of the thesis, a new system model is defined that deals with timing and scheduling. A group of novel security notions, including both trace-based and bisimulation-based properties, are proposed in this new model. It is further investigated whether these security properties are preserved by refinement of schedulers and scheduler implementations. A case study of a multi-level secure file server is described, which applies a number of access control rules to enforce a particular bisimulation-based property in the synchronous setting.
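A classic trace-based notion of the kind studied above can be sketched concretely (an assumed Goguen-Meseguer-style purge check on a deterministic machine; the thesis's notions are richer): the system is secure for low observers if the low-visible output after any trace equals the output after the same trace with high actions purged.

```python
# Illustrative sketch (assumed): a purge-based noninterference check on a
# deterministic state machine with a low-observable view of states.
def run(state, trace, step):
    for action in trace:
        state = step(state, action)
    return state

def purge(trace, high):
    return tuple(a for a in trace if a not in high)

def noninterferent(traces, high, step, init, low_view):
    return all(
        low_view(run(init, t, step)) == low_view(run(init, purge(t, high), step))
        for t in traces
    )

# Leaky system: the high action "h" flips a bit that low can observe.
leaky = lambda s, a: (1 - s) if a == "h" else s
secure = lambda s, a: s  # high actions have no low-visible effect
traces = [(), ("h",), ("l", "h")]
low_view = lambda s: s
```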

► This dissertation perceives a similarity between two activities: that of coordinating the search for simulation traces toward reaching verification closure, and that of coordinating the…
(more)

▼ This dissertation perceives a similarity between two activities: coordinating the search for simulation traces toward reaching verification closure, and coordinating the search for a proof within a theorem prover. The programmatic coordination of simulation is difficult with existing tools for digital circuit verification because stimuli generation, simulation execution, and analysis of simulation results are all decoupled. To address this problem, a new programming language is defined, analogous to the mechanism for orchestrating proof search tactics within a theorem prover, in which device simulation is made a first-class notion. This meta-language for functional verification is first formalized in a parametric way over hardware description languages using rewriting logic; subsequently, a more richly featured software tool for Verilog designs, implemented as an embedded domain-specific language in Haskell, is described and used to demonstrate the novelty of the programming language and to conduct two case studies. Additionally, three hardware description languages are given formal semantics using rewriting logic, and we demonstrate the use of executable rewriting logic tools to formally analyze devices implemented in those languages.
Advisors/Committee Members: Meseguer, José (advisor), Arvind (committee member), Rosu, Grigore (committee member), Torrellas, Josep (committee member).

Specifying security-critical software calls for techniques that allow early detection and prevention of bugs. This need is aggravated by the fact that massive cost and time are spent during product validation and verification (V&V). There exists a multitude of formal and informal techniques striving to confront the challenge of specifying and validating specifications. Our approach mainly concerns…

► Formal specification has become increasingly important in software engineering, both as a design tool, and as a basis for verified software design. Formal methods have…
(more)

▼ Formal specification has become increasingly important in software engineering, both as a design tool and as a basis for verified software design. Formal methods have long been in use in the field of programming language design and implementation, and many formalisms, in both the syntactic and semantic domains, have evolved for this purpose.
In this thesis we examine the possibilities of integrating specifications written in different formalisms used in the description of programming languages within a single framework. We suggest that the theory of institutions provides a suitable background for such integration, and we develop descriptions of several formalisms within this framework. While we do not merge the formalisms themselves, we see that it is possible to relate modules from specifications in each of them, and this is demonstrated in a small example.
Advisors/Committee Members: Moynihan, Tony.

► Writing bug-free code is fraught with difficulty, and existing tools for the formal verification of programs do not scale well to large, complicated codebases such…
(more)

▼ Writing bug-free code is fraught with difficulty, and existing tools for the formal verification of programs do not scale well to large, complicated codebases such as that of systems software (OSes, compilers, and similar programs that have a high level of complexity but work on a lower level than typical user applications such as text editors, image viewers, and the like). This thesis presents USIMPL, a component of the Orca project for formal verification. USIMPL builds on an existing framework for computer-aided, deductive mathematical proofs (Foster's Isabelle/UTP) with features inspired by Simpl (Schirmer), a simple but featureful language used for verification, in order to achieve a modular, scalable framework for proofs of program correctness. The framework utilizes Hoare logic, a rule-based mathematical representation of program behavior, together with Hoare-style algebraic laws of programming, which provide a formal methodology for transforming programs into equivalent formulations.
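The central rule of Hoare logic mentioned above, the assignment axiom {Q[x := e]} x := e {Q}, can be sketched in miniature (an assumed illustration, unrelated to USIMPL's actual code; substitution here is naive textual replacement over a single variable, ignoring capture issues):

```python
# Illustrative sketch (assumed): Hoare's assignment axiom realized as
# textual substitution over postconditions written as Python-evaluable
# strings in a single variable.
def wp_assign(postcondition, var, expr):
    """Weakest precondition of `var := expr` w.r.t. postcondition Q:
    substitute expr (parenthesized) for every occurrence of var."""
    return postcondition.replace(var, f"({expr})")

# {(x + 1) > 0}  x := x + 1  {x > 0}
pre = wp_assign("x > 0", "x", "x + 1")

def holds(cond, x):
    return eval(cond)  # toy evaluation of a condition at a state
```

A quick soundness spot-check: in every state where the computed precondition holds, executing the assignment establishes the postcondition.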
Advisors/Committee Members: Ravindran, Binoy (committeechair), Lammich, Peter (committee member), Broadwater, Robert P. (committee member).

► [EN] The availability of new processors with more processing power for embedded systems has raised the development of applications that tackle problems of greater complexity.…
(more)

▼ [EN] The availability of new processors with more processing power for embedded systems has spurred
the development of applications that tackle problems of greater complexity. Currently, the
embedded applications have more features, and as a consequence, more complexity. For this
reason, there exists a growing interest in allowing the secure execution of multiple applications
that share a single processor and memory. In this context, partitioned system architectures based
on hypervisors have evolved as an adequate solution to build secure systems.
One of the main challenges in the construction of secure partitioned systems is the verification of
the correct operation of the hypervisor, since, the hypervisor is the critical component on which
rests the security of the partitioned system. Traditional approaches for Validation and Verification
(V&V), such as testing, inspection and analysis, present limitations for the exhaustive validation
and verification of the system operation, due to the fact that the input space to validate grows
exponentially with respect to the number of inputs to validate. Given these limitations, verification
techniques based on formal methods arise as an alternative to complement the traditional validation
techniques.
This dissertation focuses on the application of formal methods to validate the correctness of the
partitioned system, with a special focus on the XtratuM hypervisor. The proposed methodology
is evaluated through its application to the hypervisor validation. To this end, we propose a formal
model of the hypervisor based on Finite State Machines (FSMs); this model enables the definition
of the correctness properties that the hypervisor design must fulfill. In addition, this dissertation
studies how to ensure the functional correctness of the hypervisor implementation by means of
deductive code verification techniques.
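The shape of such an FSM-based correctness argument can be sketched as follows (an assumed toy fragment, not XtratuM's actual model): encode the design as transitions and check a property by reachability.

```python
# Illustrative sketch (assumed; not XtratuM's model): a fragment of a
# hypervisor as a finite state machine, with a reachability check for
# one correctness property of the design.
TRANSITIONS = {
    ("boot", "init_done"): "idle",
    ("idle", "schedule_p1"): "p1_running",
    ("p1_running", "tick"): "idle",
    ("idle", "halt_p1"): "p1_halted",
}

def reachable(start):
    """All states reachable from `start` under TRANSITIONS."""
    seen, frontier = {start}, [start]
    while frontier:
        state = frontier.pop()
        for (src, _event), dst in TRANSITIONS.items():
            if src == state and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

# Property: once a partition is halted, it can never run again.
halted_ok = "p1_running" not in reachable("p1_halted")
```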
Last, we study the vulnerabilities that result from the loss of confidentiality (CWE-200 [CWE08b]) of
the information managed by the partitioned system. In this context, the vulnerabilities (infoleaks)
are modeled, static code analysis techniques are applied to the detection of the vulnerabilities,
and finally the proposed techniques are validated by means of a practical case study on the Linux
kernel, which is a component of the partitioned system.
Advisors/Committee Members: Crespo Lorente, Alfons (advisor), Masmano Tello, Miguel Ángel (advisor), Simó Ten, José Enrique (advisor).

▼ Mechanized theorem proving is a promising means of formally
establishing facts about complex systems. However, in applying
theorem proving methodologies to industrial-scale hardware and
software systems, a large amount of user interaction is required in
order to prove useful properties. In practice, the human user tasked
with such a verification must gain a deep understanding of the system
to be verified, and prove numerous lemmas in order to allow the
theorem proving program to approach a proof of the desired fact.
Furthermore, proofs that fail during this process are a source of
confusion: the proof may either fail because the conjecture was false,
or because the prover required more help from the user in order to
reach the desired conclusion.
We have implemented a symbolic execution framework inside the ACL2
theorem prover in order to help address these issues on certain
problem domains. Our framework introduces a proof strategy that
applies bit-level symbolic execution using BDDs to finite-domain
problems. This proof strategy is a fully verified decision procedure
for such problems, and on many useful problem domains its capacity
vastly exceeds that of exhaustive testing. Our framework also
produces counterexamples for conjectures that it determines to be
false.
Our framework seeks to reduce the amount of necessary user interaction
in proving theorems about industrial-scale hardware and software
systems. By increasing the automation available in the prover, we
allow the user to complete useful proofs while understanding less of
the detailed implementation of the system. Furthermore, by producing
counterexamples for falsified conjectures, our framework reduces the
time spent by the user in trying to determine why a proof failed.
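The decision-procedure-with-counterexamples idea can be sketched in miniature (an assumed illustration; the ACL2 framework uses BDDs rather than brute force, and scales far beyond it): decide a finite-domain conjecture by evaluating it over all bit assignments, returning a falsifying assignment when one exists.

```python
# Illustrative sketch (assumed; the real framework uses BDD-based symbolic
# execution): deciding a finite-domain conjecture exhaustively and
# producing a counterexample when the conjecture is false.
from itertools import product

def decide(conjecture, nbits):
    """Return None if conjecture holds for every nbits-wide input,
    otherwise a falsifying bit assignment (a counterexample)."""
    for bits in product((0, 1), repeat=nbits):
        if not conjecture(bits):
            return bits
    return None

# True conjecture: XOR is its own inverse.
proof = decide(lambda b: (b[0] ^ b[1]) ^ b[1] == b[0], 2)
# False conjecture: OR coincides with AND; we get a counterexample back.
cex = decide(lambda b: (b[0] | b[1]) == (b[0] & b[1]), 2)
```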
Advisors/Committee Members: Hunt, Warren A., 1958- (advisor), Baumgartner, Jason R. (committee member), Boyer, Robert S. (committee member), Cook, William (committee member), Moore, J S. (committee member).


► Using formal methods, the developer can increase software's trustworthiness and correctness. Furthermore, the developer can concentrate on the functional requirements of the software. However,…
(more)

▼ Using formal methods, the developer can increase software's trustworthiness and correctness.
Furthermore, the developer can concentrate on the functional requirements of the software.
However, there is much resistance to adopting this software development approach. The main
reason is the scarcity of adequate, easy to use, and useful tools. Developers typically write
code and test it. These tests usually consist of executing the program and checking its output
against its requirements. This, however, is not always an exhaustive discipline. Using formal
methods, by contrast, one might be able to investigate the system's properties further.
Unfortunately, specification languages do not always have tools like animators or simulators,
and sometimes there are no friendly Graphical User Interfaces. On the other hand, specification
languages usually have a compiler that normally generates a Labeled Transition System (LTS).
This work proposes an application that provides graphical animation for formal specifications
using the LTS as input. The application initially supports the languages B, CSP, and Z. However,
using an LTS in a specified XML format, it is possible to animate further languages. Additionally,
the tool provides trace visualization, displaying the choices the user made in a graphical tree.
The intention is to improve the comprehension of a specification by providing information
about errors and animating it, as developers do for programming languages such as
Java and C++.
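The animation idea can be sketched concretely (assumed shapes, not the tool's real input format): represent the LTS as a dictionary and step through a user-chosen trace, rejecting labels the specification does not enable.

```python
# Hypothetical sketch (assumed input shape): an LTS as a dictionary, with
# step-by-step animation of a user-chosen trace, in the spirit of the
# animator described above.
LTS = {
    "s0": {"coin": "s1"},
    "s1": {"coffee": "s0", "tea": "s0"},
}

def enabled(state):
    """Labels the user may choose next (would populate the GUI)."""
    return sorted(LTS.get(state, {}))

def animate(state, trace):
    """Follow a trace of labels, recording visited states; reject a
    label the specification does not enable, as an animator would."""
    visited = [state]
    for label in trace:
        if label not in LTS.get(state, {}):
            raise ValueError(f"{label!r} not enabled in state {state}")
        state = LTS[state][label]
        visited.append(state)
    return visited
```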
Advisors/Committee Members: Oliveira, Marcel Vinicius Medeiros (advisor), CPF:02386943488 (advisor), http://lattes.cnpq.br/1756952696097255 (advisor).


► In this thesis, we develop and evaluate a formal model and contracting framework for data-centric Web services. The central component of our framework is a…
(more)

▼ In this thesis, we develop and evaluate a formal model and contracting framework for data-centric Web services. The central component of our framework is a formal specification of a common Create-Read-Update-Delete (CRUD) data store. We show how this model can be used in the formal specification and verification of both basic and transactional Web service compositions. We demonstrate through both formal proofs and empirical evaluations that our proposed framework significantly decreases ambiguity about a service, enhances its reuse, and facilitates detection of errors in service-based implementations.
Web Services are reusable software components that make use of standardized interfaces to enable loosely-coupled business-to-business and customer-to-business interactions over the Web. In such environments, service consumers depend heavily on the service interface specification to discover, invoke, and synthesize services over the Web. Data-centric Web services are services whose behavior is determined by their interactions with a repository of stored data. A major challenge in this domain is interpreting the data that must be marshaled between consumer and producer systems.
While the Web Services Description Language (WSDL) is currently the de facto standard for Web services, it only specifies a service operation in terms of its syntactic inputs and outputs; it does not provide a means for specifying the underlying data model, nor does it specify how a service invocation affects the data. The lack of a data specification potentially leads to erroneous use of the service by a consumer.
In this work, we propose a formal contract for data-centric Web services. The goal is to formally and unambiguously specify the service behavior in terms of its underlying data model and data interactions. We address the specification of a single service, a flow of services interacting with a single data store, and also the specification of distributed transactions involving multiple Web services interacting with different autonomous data stores. We use the proposed formal contract to decrease ambiguity about a service's behavior, to fully verify a composition of services, and to guarantee correctness and data integrity properties within a transactional composition of services.
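The CRUD data-store model at the heart of the framework can be sketched in executable form (invented names; the thesis gives a formal, not executable, specification): each operation carries contract-style pre- and postconditions over the stored data.

```python
# Hypothetical sketch (invented API): a CRUD data store whose operations
# carry contract-style precondition checks, echoing the formal model of a
# Create-Read-Update-Delete store described above.
class CrudStore:
    def __init__(self):
        self._rows = {}  # the abstract data repository

    def create(self, key, value):
        assert key not in self._rows, "pre: key must be fresh"
        self._rows[key] = value

    def read(self, key):
        assert key in self._rows, "pre: key must exist"
        return self._rows[key]

    def update(self, key, value):
        assert key in self._rows, "pre: key must exist"
        self._rows[key] = value

    def delete(self, key):
        assert key in self._rows, "pre: key must exist"
        del self._rows[key]

store = CrudStore()
store.create("order-1", {"qty": 2})
store.update("order-1", {"qty": 3})
```

Stating a service's behavior against such a model is what removes the ambiguity that a purely syntactic WSDL interface leaves open.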
Advisors/Committee Members: Kulczycki, Gregory W. (committeechair), Blake, M. Brian (committeecochair), Chen, Ing-Ray (committee member), Egyhazy, Csaba J. (committee member), Frakes, William B. (committee member).