In order to decide whether an implementation fulfills a software specification, or whether a detailed specification preserves the properties of a more abstract specification, we need an understanding of what it means for one specification to fulfill another. Focusing on UML interactions, STAIRS was designed to provide the UML community with this kind of understanding. To provide the required expressiveness, STAIRS distinguishes between potential and mandatory choice. Operators for choice have been studied for more than three decades within the field of formal methods in general, and within methods for action refinement in particular. A prominent example of such a method is Event-B. We are therefore particularly interested in the relationship between the operators for choice in Event-B and STAIRS.

Transistor scaling, and hence raw technology speed, has hit its limits. The processor industry is pursuing multi-core systems to provide continuing increases in system performance. To make use of this performance, it is necessary to build parallel software systems. Parallel programming has been around for a long time, but it is currently successful only for a limited range of applications. Even then, it is complex and potentially error-prone, and usually requires highly skilled programmers.

Since the 1980s there have been many proposals to use novel approaches based on declarative programming languages. Unfortunately, these have usually required major departures from current practice whilst failing to provide both the efficiency and the expressiveness of conventional programming approaches. One particular weakness has been the inability to express shared mutable state without resorting to mechanisms that destroy the simplicity of the approach.

This talk will explore compromises that retain the simplicity of pure functional composition, but in the context of more familiar programming languages. The ability to express shared mutable state is achieved by the use of transactions, resulting in a high-level, efficient and relatively simple way to write parallel programs.
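To give the flavour of the transactional approach, here is a toy optimistic-concurrency sketch in Python. All names (`TVar`, `atomically`) are invented for this illustration and do not describe the system presented in the talk: a transaction snapshots the shared variables, computes new values purely, and commits only if nothing changed in the meantime.

```python
import threading

class TVar:
    """A transactional variable: a value plus a version counter."""
    def __init__(self, value):
        self.value = value
        self.version = 0

_commit_lock = threading.Lock()

def atomically(tvars, compute):
    """Optimistic transaction: snapshot, compute new values purely,
    then commit only if no tvar changed in the meantime; else retry."""
    while True:
        snapshot = [(v.value, v.version) for v in tvars]
        new_values = compute([val for val, _ in snapshot])  # pure step
        with _commit_lock:
            if all(v.version == ver for v, (_, ver) in zip(tvars, snapshot)):
                for v, nv in zip(tvars, new_values):
                    v.value = nv
                    v.version += 1
                return new_values

# usage: transfer between two shared accounts, atomically
a, b = TVar(100), TVar(50)
atomically([a, b], lambda vals: [vals[0] - 30, vals[1] + 30])
```

The point of the design is that the `compute` step is pure functional code; all mutation is confined to the commit, which either happens entirely or is retried.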

This is a gathering for people interested in Chicken Scheme, a compiler from the Scheme language (a Lisp dialect) to C.
The gathering is hosted by BLDL and is open to anyone interested in the topic.
Of general interest are two presentations Friday 2014-05-02 at 1200:

I introduce a new paradigm for computer programming called DCI: Data, Context, Interaction. DCI brings programming to the level of everyday concepts and activities. The professional programmer can attack complex problems without undue additional complexity. The software maintainer can preserve system integrity by understanding and honoring the system architecture long after the originators have moved on to other projects. DCI can be embedded in different programming languages that are specialized for different purposes, so the DCI concepts can become a unifying foundation for programming. DCI specifies a program as seen in two orthogonal projections: the Data projection describes system state, and the Context projection describes system behavior.
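The classic DCI illustration is a money transfer; a minimal, informal sketch in Python (not canonical DCI tooling, and the class names are ours): Data objects hold only state, a Context casts them into Roles for the duration of a use case, and the interaction logic lives in the roles.

```python
# A sketch of DCI in Python: dumb Data objects, a Context that casts
# them into Roles, and Interaction logic living in the roles.

class Account:                      # Data: state only, no use-case logic
    def __init__(self, balance):
        self.balance = balance

class SourceRole:                   # Role: behaviour for one use case
    def withdraw(self, amount):
        self.balance -= amount

class SinkRole:
    def deposit(self, amount):
        self.balance += amount

class TransferContext:              # Context: binds data objects to roles
    def __init__(self, source, sink):
        # role methods are injected for the duration of the use case
        source.withdraw = SourceRole.withdraw.__get__(source)
        sink.deposit = SinkRole.deposit.__get__(sink)
        self.source, self.sink = source, sink

    def execute(self, amount):      # Interaction: the use-case algorithm
        self.source.withdraw(amount)
        self.sink.deposit(amount)

savings, checking = Account(1000), Account(0)
TransferContext(savings, checking).execute(250)
```

The transfer algorithm is readable in one place (the Context), while the Account data class stays free of use-case behaviour.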

High Integrity and Parallelism, February 10 to 12

Software underpins more and more of today's society: from our communication and transport systems, down to individual items like cars and medical devices, not to mention our entertainment systems such as tablets and smart TVs. High integrity software used to be the realm of avionics and the military. Now that more and more devices are becoming computerised and connected, high integrity is becoming a requirement for all deployed software. Software failure is not an option for offshore control systems. Software controlling a car must work robustly in unexpected situations like tire punctures, oil spills and collisions. Medical equipment will soon be connected online, and must be tamper-proof in addition to correct and robust. Even mundane applications, like internet banking and online travel agencies, face problems requiring high integrity software.

High integrity software must meet many criteria, among others:

reliable

robust

safe

secure

maintainable

Some of these demands require certification; others mean following best practice. High integrity software is achieved by a combination of:

High integrity languages for modelling, specification and code.
Such languages support clarity by, e.g., avoiding side effects, aliasing and global variables.

Support tools that, e.g.,
prove correctness,
automatically check preconditions,
point out questionable areas of the code.

Traditionally high integrity has meant high cost. As everyday software needs to become high integrity, the cost of developing and maintaining high integrity software must come down to an acceptable level.
The high integrity days at BLDL give a brief taste of high integrity issues, with a discussion on the need for a national high integrity forum on Tuesday.

Registration is by phone to Marta Lopez +47 55 58 42 00 or email her firstname.lastname@ii.uib.no

C++ is a challenging language for high-integrity software.
The complexity of the language and some of its brittle features
conspire against the programmer when he or she tries to verify that
programs behave as specified. The software industry has produced
guidelines, standards, coding conventions and program analyzers to
evade the pitfalls. This talk reviews some of these works,
and points out a few ``gotchas'' that are difficult to
guard against. On the other hand, C++ has powerful abstraction
mechanisms that, when used to one's benefit, can significantly simplify
reasoning about programs. One such example, Sean Parent's ``runtime-polymorphism''
technique, which can eliminate the unwanted sharing typical of object-oriented
programs, is reviewed.

Ada 2012 is the first ISO-standardized language to incorporate full contract-based programming features, including run-time checked preconditions, postconditions, and type invariants. SPARK 2005 is a subset of Ada 2005 augmented with a formal comment-based annotation syntax, designed to support mathematical proofs of safety and correctness. SPARK 2014 is the latest incarnation of SPARK, which builds upon the Ada 2012 syntax for annotations, thereby providing an avenue for verifying systems by a combination of static and dynamic techniques, all based on a uniform contract model. Overall this can reduce the cost of program verification, while increasing confidence in the result, by combining unit proof on significant portions of the program with unit testing focused on the parts where proof is not readily achievable, and with all contracts checked appropriately on the border between proved and tested code.
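The flavour of such run-time checked contracts can be approximated in other languages too. Below is a sketch in Python loosely mirroring Ada 2012's Pre and Post aspects; the decorator and all names are ours, and this carries none of SPARK's static proof guarantees:

```python
import functools

def contract(pre=None, post=None):
    """Run-time checked pre/postconditions, loosely mirroring
    Ada 2012's Pre and Post aspects (a sketch, not SPARK)."""
    def wrap(f):
        @functools.wraps(f)
        def checked(*args):
            if pre is not None:
                assert pre(*args), f"precondition of {f.__name__} failed"
            result = f(*args)
            if post is not None:
                assert post(result, *args), f"postcondition of {f.__name__} failed"
            return result
        return checked
    return wrap

@contract(pre=lambda x: x >= 0,
          post=lambda r, x: r * r <= x < (r + 1) * (r + 1))
def isqrt(x):
    """Integer square root by simple search."""
    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1
    return r
```

Calling `isqrt(-1)` fails the precondition check at run time; a proof-oriented tool like SPARK would instead aim to discharge such checks statically, before the program ever runs.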

As with program verification, producing optimal code for a given program is undecidable. However, unlike compiler optimization, where ad-hoc rules guide the optimization, program verification often relies on general solver routines such as SMT solvers, enabling the programmer to hypothesize about the program's behaviour.
We explore the possibility of incorporating well-known techniques from program verification into compiler optimization, permitting optimizations based not only on rigid program invariants, but also on assumptions (axioms) made by the programmer.
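As a toy illustration of the idea (not the speakers' system; the AST encoding and rule set are invented here), consider a miniature rewriter that applies a transformation only when a programmer-supplied assumption licenses it:

```python
# A toy axiom-guided optimizer: a rewrite fires only when a
# programmer-supplied assumption (axiom) makes it sound.

def optimize(expr, axioms):
    """expr is a nested tuple AST, e.g. ('abs', 'x'); axioms is a set
    of assumed facts as strings. Rewrite ('abs', x) -> x when 'x >= 0'
    is assumed, and ('div', x, x) -> 1 when 'x != 0' is assumed."""
    if not isinstance(expr, tuple):
        return expr
    op, *args = expr
    args = [optimize(a, axioms) for a in args]
    if op == 'abs' and f"{args[0]} >= 0" in axioms:
        return args[0]
    if op == 'div' and args[0] == args[1] and f"{args[0]} != 0" in axioms:
        return 1
    return (op, *args)
```

Neither rewrite is sound as a universal rule; each becomes valid only under the stated assumption, which is exactly the extra leverage programmer-supplied axioms give an optimizer. A real implementation would discharge such side conditions with an SMT solver rather than string matching.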

Dealing with failure and exceptional situations is an important but tricky part of programming.
Run-time errors in software are detected and reported in a multitude of ways:
hardware flags, libraries with return values and flags, throwing exceptions, and more.
The handling mechanism is bound by this choice, even though it may be inconvenient for a particular use.
Additionally, program code easily becomes cluttered with code dealing with exceptional circumstances.

Alerts is an abstraction over failure mechanisms.
It provides independence between the handling and reporting mechanisms,
allowing either side to select the most appropriate mechanism without confining the other.
The goal is to allow systematic handling of exceptional situations with minimal syntactic clutter.
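To make the decoupling concrete, here is a hypothetical sketch; all names (`alert`, `handling`, `safe_div`) are invented here and do not describe the actual Alerts design. Library code reports a condition abstractly, while the caller chooses the concrete mechanism (substitute value, exception, logging) without any change to the library:

```python
# Hypothetical sketch: reporting and handling of exceptional
# conditions are decoupled, so either side can pick its mechanism.

_handler_stack = []

def alert(condition, **info):
    """Report an exceptional condition; how it surfaces is decided
    by whichever handler is currently installed."""
    for handler in reversed(_handler_stack):
        outcome = handler(condition, info)
        if outcome is not None:          # handler chose to resolve it
            return outcome
    raise RuntimeError(condition)        # default mechanism: exception

class handling:
    """Install a handler for the duration of a with-block."""
    def __init__(self, handler):
        self.handler = handler
    def __enter__(self):
        _handler_stack.append(self.handler)
    def __exit__(self, *exc):
        _handler_stack.pop()
        return False

def safe_div(a, b):                      # library code reports abstractly
    if b == 0:
        return alert("division by zero", a=a, b=b)
    return a / b
```

One caller may install a handler that substitutes a default value, another may let the default exception propagate; `safe_div` itself is written once and stays free of handling clutter.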

Previously, software was used in-house, with extensive training of users, i.e., in a basically benevolent environment. Now much software is distributed across the internet, with untrained users, i.e., in an increasingly hostile environment. Even software not distributed across the web provides increasingly complex features for users with less specific training.

This puts a demand for high integrity (reliability, robustness, security) on the software.
Yet we still approach software development with mostly the same languages, tools and methodologies as we did before. The high integrity day, Tuesday February 11, gives a brief taste of high integrity issues, ending with a discussion on the need for a national high integrity forum.

Permission-based security models are common in smartphone operating systems. Such models implement access control for sensitive APIs, introducing an additional concern for application developers. It is important for the correct set of permissions to be declared for an application, as too small a set is likely to result in runtime errors, whereas too large a set may needlessly worry users. Unfortunately, not all platform vendors provide tool support to assist in determining the set of permissions that an application requires.

We present a language-based solution for permission management. It entails the specification of permission information within a collection of source code, and allows for the inference of permission requirements for a chosen program composition. Our implementation is based on Magnolia, a programming language demonstrating characteristics that are favorable for this use case. A language with a suitable component system supports permission management also in a cross-platform codebase, allowing abstraction over different platform-specific implementations and concrete permission requirements. When the language also requires any “wiring” of components to be known at compile time, and otherwise makes design tradeoffs that favor ease of static analysis, then accurate inference of permission requirements becomes possible.
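As a language-neutral illustration of the inference (a toy Python sketch, not Magnolia code; the component and permission names are invented), each component declares the permissions its own implementation needs, and the requirement set for a program follows from its wiring:

```python
# Toy permission inference: a program's required permission set is
# the union over its statically known component composition.

COMPONENTS = {
    "gps_location": {"uses": set(),
                     "perms": {"ACCESS_FINE_LOCATION"}},
    "net_upload":   {"uses": set(),
                     "perms": {"INTERNET"}},
    "tracker":      {"uses": {"gps_location", "net_upload"},
                     "perms": set()},
}

def required_permissions(component):
    """Union of the component's own permissions and, transitively,
    those of every component it is wired to."""
    info = COMPONENTS[component]
    perms = set(info["perms"])
    for dep in info["uses"]:
        perms |= required_permissions(dep)
    return perms
```

Because the wiring is known at compile time, the inferred set is neither too small (no runtime permission errors) nor too large (no needless declarations); swapping in a mock `gps_location` implementation with no permissions would shrink the inferred set accordingly.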

Trusted computing (TC) is a concept developed by the Trusted Computing Group (TCG) in which a hardware-based component takes care of measuring and reporting a platform's state of integrity in a trustworthy manner.
Software-only solutions for integrity checking and protection, like sandboxes, virtual machines and antivirus software, are ultimately dependent on the integrity of other software to function correctly, in particular the operating system. The problem is that the OS itself is rarely formally verified, often consists of millions of lines of code, and is bound to contain security holes that can be exploited. Microkernels, hypervisors and separation kernels in MILS systems are an attempt to mitigate this problem, but the majority of users will most likely have a monolithic kernel running on their machines.
The idea behind TC is to have a hardware module independent of the running software do the integrity checking: the Trusted Platform Module (TPM).
In this talk we present the TPM and its most common applications: Trusted boot, Remote Attestation, Sealing and the Trusted Execution Environment (TEE).
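The TPM's central measurement primitive can be sketched in a few lines. A Platform Configuration Register (PCR) is never written directly, only "extended" as PCR = H(PCR || measurement), so the final value commits to the whole boot sequence in order (a simplified illustration using SHA-1 and a 20-byte PCR as in TPM 1.2, not a TPM driver):

```python
import hashlib

def extend(pcr, measurement):
    """TPM-style extend: the new PCR value hashes the old value
    together with the new measurement."""
    return hashlib.sha1(pcr + measurement).digest()

pcr = b"\x00" * 20                       # PCRs start zeroed at reset
for stage in [b"bootloader", b"kernel", b"initrd"]:
    # each boot stage is measured (hashed) before it runs,
    # and the measurement is folded into the PCR
    pcr = extend(pcr, hashlib.sha1(stage).digest())
```

Because hashing is one-way and order-sensitive, malicious software cannot rewind the PCR to hide a modified boot stage; this is what trusted boot and remote attestation build on.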

Amid the rapid growth of mobile network technology and infrastructure throughout the world, especially in low- and middle-income countries, the potential of mobile to support the achievement of health priorities is an area of active exploration and engagement. According to a 2011 World Health Organization report, governments cite issues related to data privacy and security and the protection of individual health information as two of the top barriers to the expansion of Mobile Health (a.k.a. mHealth). Protecting personal health information that is collected and transmitted over mobile devices, and stored on remotely located servers, is essential to bringing mHealth to scale and providing a mature foundation for its continued growth.

In this presentation, we cover the security challenges of the mHealth model, in particular medical data gathering in remote locations, and discuss possible solutions. We focus on the Android platform on the client side, the secure transmission channel, and cloud-based server applications and storage.

As programming languages evolve, they almost inevitably grow more complex under the pressure to provide additional features and flexibility. Unfortunately, this added complexity can make program testing and verification significantly harder. At some point, it becomes inordinately expensive to achieve full confidence in the safety, security, and correctness of the software. Attempts to build more safety into the language can in fact add further to the complexity; they certainly make compilers more complex, potentially raising additional issues regarding confidence in the generated code.

An alternative approach to achieving more safety is to attempt to dramatically simplify the programming language, reverting to a simpler, more uniform, more tractable set of basic features, designed to provide a relatively small number of inherently safe, composable capabilities that can nevertheless efficiently meet the modern programmer's needs. We will discuss the implications of radical simplification, including the elimination of global variables, the global heap, reassignable pointers, parameter aliasing, exception handling, and explicit threads, locks, and signals, and how a notion of pointer-free expandable objects with pervasively parallel execution semantics can provide the combination of inherent safety and flexibility needed to build safe software-intensive systems.

User interfaces are costly to develop, correctly behaving user
interfaces more so. 30–60% of all application code is estimated to
be in user interfaces, and an even higher percentage of all defects is
found in this code. The sources of complexity of user interface code
are many. Programming models based on writing handler functions for
user events are known to lead to unstructured spaghetti code, and
orchestrating asynchronously executed computations using
event-handling code compounds the problem. Further, user interfaces
need to remain responsive and react to changes in the structure of the
data manipulated through the user interface. The complexities of user
interface programming commonly manifest as low quality: as defects,
missing features, inconsistent behavior, unhelpful or non-existent
error messages, and unresponsiveness. High-integrity UIs are
difficult to develop with today's programming models.

This talk discusses a declarative approach for user interface
programming, based on data-flow constraint systems, where
application-specific event-handling code is not necessary. The
programmer expresses declaratively what the dependencies amongst the
data manipulated by a user interface are, instead of how and when state
changes based on these dependencies occur; these latter concerns are
captured by reusable algorithms in a software library, and hence are no longer
a (likely) source of defects. The approach
shows promise for significantly reducing the cost of developing
high-integrity user interfaces.
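As a small illustration of the idea (with an invented toy API, not the system from the talk), a one-way dataflow constraint system can be sketched in a few lines: the programmer declares what depends on what, and a generic propagation routine decides when updates happen.

```python
# Toy one-way dataflow constraints: declare dependencies once,
# and a generic propagation loop keeps derived values consistent.

class Cell:
    def __init__(self, value=None, formula=None, inputs=()):
        self.formula, self.inputs = formula, inputs
        self.dependents = []
        for c in inputs:
            c.dependents.append(self)
        self.value = (value if formula is None
                      else formula(*[c.value for c in inputs]))

    def set(self, value):                # edit a source cell ...
        self.value = value
        self._propagate()

    def _propagate(self):                # ... and the library updates the rest
        for d in self.dependents:
            d.value = d.formula(*[c.value for c in d.inputs])
            d._propagate()

# e.g. a form where a displayed total must track two input fields
price, qty = Cell(10), Cell(3)
total = Cell(formula=lambda p, q: p * q, inputs=(price, qty))
```

No application-specific event handler is needed to keep `total` correct: the dependency declaration replaces the "on change, recompute" code that would otherwise be scattered through event handlers.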

Writing distributed code is a hassle, requiring the mastery of multi-threading, synchronisation, communication protocols, security, etc., in addition to normal programming skills. Yet the interaction of multiple processes easily leads to phenomena like deadlocks and livelocks, and opens the software to multiple attack techniques. Here we propose the idea of forging distributed systems from high quality sequential code, sketching that much of the technically difficult code for a distributed system, though case specific, is boilerplate code.

Web-scale solutions with tens of millions of customers require high availability, large scalability, and high performance. Because the hardware requirements are massive and the software is updated frequently, customer-facing web-scale solutions have frequent failures. Of special concern are local failures that cascade over critical parts of a system and cause systemic failures. Experience shows that traditional risk management is not sufficient to avoid serious failures leading to long unplanned downtime. In the talk, the presenter discusses how Netflix has implemented a cloud-based system for movie streaming that is robust to cascading failures. Netflix introduces redundancy to isolate the impact of local failures, and induces failures in its production system (!) to learn how to make it increasingly robust to cascading failures.

Iterative C programs that manipulate dynamic data structures are infinite-state systems over an unbounded input domain. Studying reachability in these systems is challenging, as it requires dealing not only with an infinite number of states but also with an infinite number of potential solutions. While most approaches deal with this problem by considering abstractions, we address it with a semi-correct procedure that captures iterative constructions and input shapes with the fundamental notion of relation or constraint. The talk will take a tour of the constraints and the constraint-based reasoning techniques we have proposed to deal with reachability in C programs. A strong focus will be put on the applicability of these methods to the automatic generation of test cases for C programs, and on the recent methods and tools that have been developed and commercialized for this purpose.

Unit testing is an important part of modern software development, where individual code units are tested in isolation. Such tests are typically case-based, checking a likely error scenario or an error that has previously been identified and fixed. Coming up with good test cases is challenging, particularly when testing generic code, and focusing on individual tests can distract from creating tests that cover the full functionality. Axioms provide a generic way of describing interfaces for (generic) code. With axioms, program behaviour can be specified algebraically in a formal or semi-formal way.
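One way such axioms can be exploited in testing is to treat each algebraic law as an executable property and check it over many generated inputs, in the style of property-based testing. A minimal Python sketch (the helper names are ours), stating two axioms for list reversal:

```python
import random

def check_axiom(axiom, gen, trials=200):
    """Check an executable axiom against many generated inputs."""
    return all(axiom(gen()) for _ in range(trials))

def rand_list():
    """Random small integer lists as test data."""
    return [random.randint(-9, 9) for _ in range(random.randint(0, 8))]

# axioms of 'reverse', stated once, checked on many inputs
involution  = lambda xs: list(reversed(list(reversed(xs)))) == xs
length_kept = lambda xs: len(list(reversed(xs))) == len(xs)
```

Unlike a hand-picked test case, each axiom covers a whole family of inputs at once, and the same laws can be reused for any other implementation of the reversal interface.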

This talk will explain the underpinnings of the major types of calendars:
solar (Gregorian, Julian, Icelandic, Persian, Hindu); lunar (Islamic);
lunisolar (Chinese, Hebrew, Hindu); cyclical (Mayan, Balinese).
It will concentrate on computational features of calendrical calculations,
but also dwell on historical and cultural aspects.
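As a taste of such calendrical computation, the Gregorian core consists of the standard leap-year rule plus a conversion from a date to a fixed day number, from which weekdays and date differences follow. The function names below are ours; the epoch convention is the common "rata die" style, with 1 January of year 1 as day 1 in the proleptic Gregorian calendar:

```python
def is_leap(year):
    """Gregorian rule: every 4th year, except centuries not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

MONTH_DAYS = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def fixed_day(year, month, day):
    """Days elapsed since the epoch 1 January 1 (= day 1),
    in the proleptic Gregorian calendar."""
    y = year - 1
    days = 365 * y + y // 4 - y // 100 + y // 400   # days in prior years
    days += sum(MONTH_DAYS[:month - 1])             # days in prior months
    if month > 2 and is_leap(year):
        days += 1
    return days + day

def weekday(year, month, day):
    """0 = Sunday, 1 = Monday, ..., 6 = Saturday."""
    return fixed_day(year, month, day) % 7
```

Most calendar algorithms reduce to conversions to and from such a fixed day number, which is why it serves as the common currency between the solar, lunar and lunisolar systems covered in the talk.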

1630 Finished

Participation is open and free. Lunch is included for registered participants, contact Marta as explained above.

ParaSail is a new language designed from scratch, starting in 2009, to provide inherently safe, pervasively parallel programming. In the past, parallel programming was the purview of the high-performance computing expert, but as we enter the era of multicore hardware, every programmer will need to be able to build systems that take advantage of parallel execution. The goal of ParaSail was to create a safe language where the default is parallel execution, and the programmer has to work harder to force computations to be performed sequentially. To ensure safety and to simplify automatic parallelization, the language was designed by eliminating troublesome features, removing impediments to parallelism, and getting rid of constructs that have a sequential bias. The features that were added back in were all focused on parallelism and safety, in the form of powerful parallel iterators and full contract-based annotations woven directly into the syntax.

Magnolia is an experimental language being developed at BLDL. Central features are the integration of algebraic-style specifications with code, provisions for guarding (preconditions/alerts), and an aggressive reuse mechanism. There are no built-in types or operators; every type and operation comes from libraries. A library API will contain formal specifications of library properties, and encapsulates both parallel and sequential types and operations in the same way.

Guarding provides an automated approach to handling preconditions and exceptional situations, a prerequisite for high integrity. The specifications support proving and testing program properties at every abstraction level, but also define semantics-preserving code rewriting rules. In particular, they allow high level optimisation of parallel types.

In an extension to Magnolia we are investigating data dependencies as first-class entities. User-defined mappings on data dependencies allow full control of parallel layout at a programmatic level. Simple properties of the mapping ensure deadlock-free parallel distribution.

One of the most pernicious impediments to safe parallelization of code is the heavy use of pointer-based data structures. Parallel computing is epitomized by divide-and-conquer tactics, where large problems are broken up into many small pieces, which can be handed out and solved relatively independently and then have the results efficiently combined. Unrestricted pointer-based data structures are difficult to divide safely in a way that ensures computations do not interfere with one another, without imposing complex and inefficient per-node synchronization protocols.
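The divide-and-conquer tactic over a value-semantic, pointer-free array can be sketched as follows (sequential Python for clarity; the point is that each recursive call touches a disjoint index range, so the calls could safely run as parallel tasks with no per-node synchronization):

```python
# Divide-and-conquer over an array with no shared pointers: the two
# halves are disjoint, so no locks are needed to process them.

def parallel_sum(xs, lo=0, hi=None, cutoff=4):
    hi = len(xs) if hi is None else hi
    if hi - lo <= cutoff:
        return sum(xs[lo:hi])            # small piece: solve directly
    mid = (lo + hi) // 2
    # the two recursive calls touch disjoint index ranges and share no
    # mutable state, so a parallel runtime could execute them concurrently
    left = parallel_sum(xs, lo, mid, cutoff)
    right = parallel_sum(xs, mid, hi, cutoff)
    return left + right                  # combine the independent results
```

With pointer-based structures, establishing that the two halves are truly independent is hard; with index ranges over a value-semantic container, it is immediate, which is exactly the property the rest of this abstract builds on.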

There is an alternative to pointer-based data structures, expandable objects with value semantics, which can support efficient synchronization-free division into subcomponents for fully independent computation, while still representing the kinds of mutable data structures needed in complex data processing. Furthermore, pointer-free expandable objects allow for fully automatic storage management, without any requirement for an asynchronous garbage collector. Instead, storage can be allocated automatically only as needed, and reclaimed immediately and synchronously when no longer needed, using an efficient region-based storage management approach. These techniques can be combined with classic object-oriented type hierarchies, to provide a fully parallel, safe, and time-and-space bounded run-time model suitable for time-critical and resource-constrained systems.

1415 Discussion:
High Integrity and Parallelism: two sides of the same language design coin?