Contract Programming is something that's been around for a long time but
is getting far more air play, and rigorous examination, in recent times.
Despite general agreement regarding the attraction of software contracts
to programmers (and users!), there remains equivocation on what to do when
contracts are broken. This is Part One of a series that takes a slightly
philosophical look at this important issue, considers the tradeoffs
between information and safety in reacting to contract violations, looks
at practical measures for shutting down errant processes, and introduces a new
technique for the implementation of unrecoverable exceptions in C++.

The article is in four parts, of which this is the first. The contents of
the parts are as follows:

Part 1

The first part contains a (re)fresher course on contract programming,
pointing out the difference between functional and operational contracts,
and detailing the three strict monkeys of contract programming for
functional contracts: pre-conditions, post-conditions and class
invariants. It also examines the issue of observation—who defines
and detects (in)correctness—and highlights The Principle of
Removability. Finally, it examines the often-misunderstood relationship
between exceptional conditions, invalid data and contract violations,
highlighting the fact that exceptions are a tool for use in the
implementation of programs that handle runtime conditions correctly, as
well as a mechanism for the reporting of contract violations.

Part 2

The second part takes a more detailed look at the separate phases of
contract enforcement: Detection, Reporting and Response. It then proceeds
to introduce the defining instrument of this article: The Principle of
Irrecoverability. The remainder of this part is concerned with the
refutation of objections to the principle, including, importantly, the
fallacy that precondition violations could be exempt from
irrecoverability.

Part 3

The third part takes one last swipe at objections to the Principle of
Irrecoverability, addressing the suggestion that the failure of plug-in
components may avoid irrecoverability. The remainder of this part takes a practical turn,
looking at the practical exceptions to the principle, and examining
techniques for maximising the likelihood of graceful shutdown when
violations are detected.

Part 4

The final part continues the practical bent of its predecessor. First, it
introduces an unrecoverable exception class for C++, which does exactly
what it says on the tin: once thrown, it can be caught to facilitate
graceful process shutdown, but its effect cannot be
quenched—shutdown is inevitable. The article series is then brought
to a close by an examination of a "methodology" for using
irrecoverability, known as Informed Zero Tolerance, with some success
stories from its application.

Introduction

Contract programming got its first, or at least most thorough and widely
recognized, treatment by Bertrand Meyer in his groundbreaking book
Object-Oriented Software Construction [1], where it was known as Design By
Contract, and it is a core element of the Eiffel language [2]. (Note: the term Design By
Contract was trademarked in 2003 by Dr. Meyer, so all the little
free software pixies are dropping the term like a hot coal. The latest
favoured term is Contract Programming, as suggested by Walter
Bright in 2004 and used in the recent proposal by Thorsten Ottosen and
Lawrence Crowl to the C++ standards body [3].)

The use of the contract metaphor in software engineering is a growing, but
not entirely well understood, phenomenon. A software contract itself is,
as in life, merely the agreement (explicit or otherwise) between the
parties involved. It "defines a set of expectations between the two
parties, that vary in strength, latitude and negotiability, and specified
penalties [for contract violation]" [4]. The software contract metaphor
encompasses not only functional behaviour—types, interfaces,
parameters and return values, and so on—but also operational
behaviour—complexity, speed, use of resources, and so on.

The issue of what action is to be taken in response to contract violation
is a separate matter, just as in life. In this four-part article I'm
focusing on the use of programmatic
constructs—enforcements—that police the functional contracts
codified in software: known as Contract Enforcement. Other aspects of the
software contract metaphor are outside the scope of this discussion.

Contract Programming 101

Contract programming is all about finding bugs in your software. Sounds
amazing? Well, let's put it another way. It's about finding design
flaws. Now it sounds even more amazing! How is a compiler—a nice
piece of kit to be sure, but still a very dumb thing compared to
a human being (marketing dept. notwithstanding)—supposed to be able
to understand your design, and to do so better than you can? After all,
it's likely that no other software engineer, not even the gurus, will
understand your design even as well as you, never mind better. So, of
course, the compiler cannot. You have to lay a trail for it.

In just the same way that human language contains redundancies and
error-checking mechanisms, so we must ensure that our code does the same.
You tell the compiler what your design is as you go and, each time it
picks up a crumb of that trail, it verifies the design.

Essentially, contract programming is about specifying the design, in terms
of the behaviour, of your components (functions and classes), and
asserting truths about the design of your code in the form of runtime
tests placed within it. These assertions of truth will be tested as the
thread of execution passes through the parts of your components, and will
"fire" if they don't hold. (Note: not all parts of contracts are amenable
to codification in current languages, and there is some debate as to
whether they may ever be [4]. This
does not detract from the worth of contract programming, but it does
define limits to its active realisation in code. In this article, I will
be focusing on the practical benefits of codifying contract programming
constructs.)

The behaviour is specified in terms of function/method preconditions,
function/method postconditions, and class invariants. (There are some
subtle variations on this, such as process invariants, but they all share
the same basic concepts with these three elements.) Preconditions state
what conditions must be true in order for the function/method to perform
according to its design. Satisfying preconditions is the responsibility
of the caller. Postconditions say what conditions will exist after the
function/method has performed according to its design. Satisfying
postconditions is the responsibility of the callee. Class invariants
state what conditions hold true for the class to be in a state in which it
can perform according to its design; an invariant is a "consistency
property that every instance of the class must satisfy whenever it's
observable from the outside" [4].
Class invariants should be verified after construction, before
destruction, and before and after the call of every public member
function.

Let's begin with a look at a simple function, strcpy(), which
is implemented along the lines of:
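A minimal sketch of such an implementation might look like the following. The two IsValid...() functions are the notional system calls described below; since real platforms offer no portable equivalent, they are stubbed out here (as mere non-null checks) so that the sketch is self-contained:

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Stand-ins for the notional system calls; a real implementation would
// probe the memory, but these merely check for non-null pointers.
static bool IsValidReadableString(char const* s) { return 0 != s; }
static bool IsValidWriteableMemory(void* p, std::size_t /* cb */) { return 0 != p; }

char* strcpy_(char* dest, char const* src)
{
    // Preconditions: tested inside the function, but before any of its operations
    assert(IsValidReadableString(src));
    assert(IsValidWriteableMemory(dest, 1 + std::strlen(src)));

    char* const r = dest;
    while ('\0' != (*dest++ = *src++))
    {}
    return r;
}
```

(The function is named strcpy_() simply to avoid colliding with the standard strcpy().)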

assert() is the standard C macro that calls
abort() if its expression evaluates false (0)

IsValidReadableString() is a notional system call that
tests to see if a pointer refers to a null-terminated string whose contents
span read-accessible memory

strlen() is the standard C function that returns the
number of characters in a null-terminated string

IsValidWriteableMemory() is a notional system call that
tests that a pointer refers to a certain size of writeable memory.

Note that in practice the precondition tests are not actually carried out
before the function is called; rather, they are inside the function
implementation, but before any of the operations the function carries out.
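Putting the three constructs together, here is a hedged sketch of a toy bounded stack. Its (hypothetical) is_valid() method expresses the class invariant, which is verified after construction, before destruction, and on entry to and exit from each mutating public member function; preconditions and postconditions bracket the operations of push() and pop():

```cpp
#include <cassert>
#include <cstddef>

class BoundedStack
{
public:
    BoundedStack()
        : m_size(0)
    {
        assert(is_valid());           // invariant: established by construction
    }
    ~BoundedStack()
    {
        assert(is_valid());           // invariant: holds before destruction
    }

    void push(int value)
    {
        assert(is_valid());           // invariant: on entry
        assert(m_size < capacity);    // precondition: stack is not full

        m_items[m_size++] = value;

        assert(0 != m_size);          // postcondition: stack is not empty
        assert(is_valid());           // invariant: on exit
    }
    int pop()
    {
        assert(is_valid());           // invariant: on entry
        assert(0 != m_size);          // precondition: stack is not empty

        int const value = m_items[--m_size];

        assert(is_valid());           // invariant: on exit
        return value;
    }
    std::size_t size() const
    {
        return m_size;
    }

private:
    bool is_valid() const             // the class invariant itself
    {
        return m_size <= capacity;
    }

private:
    enum : std::size_t { capacity = 10 };

    int         m_items[capacity];
    std::size_t m_size;
};
```

Note that, as with the precondition tests above, all of the enforcements live inside the member function implementations; a caller satisfying the contract never sees them fire.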