These questions, and others like them, motivate a current `hot topic' in computer science research: \emph{compiler verification} (for instance~\cite{leroy:formal:2009,chlipala:verified:2010}, and many others).

So far, the field has only been focused on the first and last questions.

Much attention has been placed on verifying compiler correctness with respect to extensional properties of programs, which are easily preserved during compilation; it is sufficient to completely preserve the denotational semantics of the input program.

However, if we consider intensional properties of programs---such as the space, time, or energy spent in computation and data transmission---the situation is more complex.

To express these properties, and to reason about them, we must adopt a cost model that assigns a cost to single instructions, or to blocks of instructions.

A compositional cost model---assigning the same cost to all occurrences of one instruction---would be ideal.

However, compiler optimisations are inherently non-compositional: each occurrence of a high level instruction may be compiled in a different way depending on its context.

Therefore both the cost model and intensional specifications are affected by the compilation process.

In the CerCo project (`Certified Complexity')~\cite{cerco:2011} we approach the problem of reasoning about intensional properties of programs as follows.

We are currently developing a compiler that induces a cost model on high level source code.

Costs are assigned to each block of high level instructions by considering the costs of the corresponding blocks of compiled code.

The cost model is therefore inherently non-compositional, but has the potential to be extremely \emph{precise}, capturing a program's \emph{realistic} cost.

That is, the compilation process is taken into account, not ignored.
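As an illustrative sketch only (not the CerCo implementation), a cost model induced by compilation can be pictured as assigning to each source-level block the summed cycle count of the object-code block it compiles to. The instruction names and cycle counts below are hypothetical, not taken from the MCS-51 data sheets:

```ocaml
(* Hedged sketch: a cost model induced by compilation. The object-code
   instructions and their cycle counts are illustrative only. *)
type obj_instr = Mov | Add | Sjmp

(* cycles taken by one object-code instruction (illustrative values) *)
let cycles = function Mov -> 1 | Add -> 1 | Sjmp -> 2

(* the cost of a source-level block is the total cost of the
   object-code block it was compiled to *)
let block_cost (compiled : obj_instr list) =
  List.fold_left (fun acc i -> acc + cycles i) 0 compiled
```

Under this picture, two occurrences of the same source instruction may receive different costs, because they compile to different object-code blocks.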

A prototype compiler, which provides exact rather than approximated costs, has been developed.

(The technical details of the cost model are explained in~\cite{amadio:certifying:2010}.)

We believe that our approach is applicable to certifying real time programs.

A user can certify that all `deadlines' are met whilst wringing as many clock cycles as possible from the processor, using a cost model that does not over-estimate.

We also see our approach as being relevant to compiler verification (and construction) itself.

An optimisation specified only extensionally is only half specified; though the optimisation may preserve the denotational semantics of a program, there is no guarantee that any intensional properties of the program will be improved.

Another potential application is toward completeness and correctness of the compilation process in the presence of space constraints.

A compiler could potentially reject a source program targeting an embedded system when the size of the compiled code exceeds the available ROM size.

Preservation of a program's semantics may only be required for those programs that do not exhaust the stack or heap.

Hence the statement of completeness of the compiler must take into account a realistic cost model.

With the CerCo methodology, we assume we can assign to the object code exact and realistic costs for sequential blocks of instructions.

This is possible with modern processors (see~\cite{bate:wcet:2011,yan:wcet:2008} for instance) but difficult, as the execution of a program has an influence on the speed of processing.

Caching, memory effects, and other advanced features such as branch prediction all have a profound effect on execution speeds.

For this reason CerCo decided to focus on 8-bit microprocessors.

These are still widely used in embedded systems, with the advantage of an easily predictable cost model due to their relative paucity of features.

We have fully formalised an executable formal semantics of a family of 8-bit Freescale microprocessors~\cite{oliboni:matita:2008}, and provided a similar executable formal semantics for the MCS-51 microprocessor.

The latter is what we describe in this paper.

The focus of the formalisation has been on capturing the intensional behaviour of the processor.

However, the design of the MCS-51 itself has caused problems in our formalisation.

For example, the MCS-51 has a highly unorthogonal instruction set.

To cope with this unorthogonality, and to produce an executable specification, we have exploited the dependent type system of Matita, an interactive proof assistant.

The MCS-51 has numerous disjoint memory spaces addressed by pointers of different sizes.

In our prototype implementation, we use a map data structure (from O'Caml's standard library) for each space.

Matita's standard library is small, and does not contain a generic map data structure.

We therefore took the opportunity to craft a special-purpose, dependently typed data structure that enforces the correspondence between the size of a pointer and the size of its memory space.

We assumed that large swathes of memory would often be uninitialized.

We picked a modified form of trie of fixed height $h$.

Paths are represented by bitvectors (already used in our implementation for addresses and registers) of length $h$.
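In O'Caml-style notation, such a trie and its lookup can be sketched as follows. This is an illustrative analogue only: the Matita definition additionally indexes tries by their height using dependent types, so that a path's length is statically guaranteed to match the trie's height.

```ocaml
(* Hedged sketch: an O'Caml analogue of the height-indexed Matita trie.
   Here the height invariant is not enforced by the type system. *)
type 'a trie =
  | Leaf of 'a                  (* value stored at height 0 *)
  | Node of 'a trie * 'a trie   (* branch on one bit of the path *)
  | Stub                        (* uninitialized data, at any height *)

(* Traverse a path (a list of booleans standing in for a bitvector);
   encountering Stub returns the given default value. *)
let rec lookup default trie path =
  match trie, path with
  | Stub, _ -> default
  | Leaf v, [] -> v
  | Node (l, _), false :: rest -> lookup default l rest
  | Node (_, r), true :: rest -> lookup default r rest
  | _, _ -> default  (* path/height mismatch; ruled out statically in Matita *)
```

The final catch-all branch handles malformed paths, a case that the dependently typed Matita version excludes by construction.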

\texttt{Stub} is a constructor that can appear at any point in a trie.

It represents `uninitialized data'.

Performing a lookup in memory is now straightforward.

We traverse a path, and if we encounter \texttt{Stub}, we return a default value\footnote{All manufacturer data sheets that we consulted were silent on the subject of what should be returned if we attempt to access uninitialized memory. We defaulted to simply returning zero, though our \texttt{lookup} function is parametric in this choice. We do not believe that this is an outrageous decision, as SDCC for instance generates code which first `zeroes out' all memory in a preamble before executing the program proper. This is in line with the C standard, which guarantees that all global variables will be zero initialized piecewise.}.

Compilers that support separate compilation cannot directly compute these offsets and select the appropriate jump instructions.

These operations are needlessly burdensome even for compilers that do not support separate compilation, and are thus handled by the assembler, as we decided to do.

While introducing pseudoinstructions, we also introduced labels for locations to jump to, and for global data.

To specify global data via labels, we introduced a preamble before the program that associates each label with the size of the space reserved for its data.

A pseudoinstruction \texttt{Mov} moves (16-bit) data stored at a label into the (16-bit) register \texttt{DPTR}.

Our pseudoinstructions and labels induce an assembly language similar to that of SDCC.

All pseudoinstructions and labels are `assembled away' prior to program execution.

Jumps are computed in two stages.

A map associating memory addresses to labels is built, before pseudojumps are replaced with concrete jumps to the correct address.

The algorithm currently implemented does not try to minimize the object code size by always picking the shortest possible jump instruction.
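The two stages can be sketched in O'Caml as follows. The pseudoinstruction type, instruction sizes, and the single `LJMP` jump form are all hypothetical simplifications; the real assembler must choose among the MCS-51's several jump instructions of different sizes.

```ocaml
(* Hedged sketch of the two-stage assembly of pseudojumps.
   Instruction names and sizes are illustrative only. *)
type pseudo = Op of string | Label of string | Jmp of string

let size = function
  | Op _ -> 1
  | Label _ -> 0   (* labels occupy no code memory *)
  | Jmp _ -> 2     (* simplification: always a 2-byte jump *)

(* Stage one: build a map from labels to memory addresses. *)
let label_map prog =
  let _, m =
    List.fold_left
      (fun (addr, m) i ->
        match i with
        | Label l -> (addr, (l, addr) :: m)
        | _ -> (addr + size i, m))
      (0, []) prog
  in m

(* Stage two: replace pseudojumps with concrete jumps to addresses. *)
let assemble prog =
  let m = label_map prog in
  List.filter_map
    (function
      | Label _ -> None
      | Jmp l -> Some (Op (Printf.sprintf "LJMP %d" (List.assoc l m)))
      | Op _ as i -> Some i)
    prog
```

Because every pseudojump here is given the same fixed size, stage one can compute addresses in a single pass; picking the shortest jump for each occurrence would make the two stages mutually dependent, which is why that optimisation is non-trivial.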

The complexity of the memory model is captured in the \texttt{get\_arg\_XX} and \texttt{set\_arg\_XX} functions, which get and set data of size \texttt{XX} from memory, considering all addressing modes.

This differs from the O'Caml emulator, which executed a program indefinitely.

A callback function was also accepted as an argument, which could `witness' the execution as it happened, providing a print-out of the processor state.

Due to Matita's termination requirement, \texttt{execute} cannot execute a program indefinitely.

An alternative would be to produce an infinite stream of statuses representing an execution trace; infinite streams are encodable in Matita as co-inductive types.
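In O'Caml the same idea can be approximated with the standard lazy \texttt{Seq.t} type, where laziness plays the role of co-induction. The \texttt{status} record and \texttt{step} function below are hypothetical stand-ins for the emulator's processor state and single-step transition:

```ocaml
(* Hedged sketch: an execution trace as a lazy stream of statuses.
   `status` and `step` are drastically simplified stand-ins. *)
type status = { pc : int }

let step (s : status) : status = { pc = s.pc + 1 }

(* The potentially infinite trace of statuses from an initial status;
   laziness means only the prefix we inspect is ever computed. *)
let rec trace (s : status) : status Seq.t =
  fun () -> Seq.Cons (s, trace (step s))

(* Forcing a finite prefix of the trace is always safe. *)
let rec take n (s : 'a Seq.t) =
  if n = 0 then []
  else match s () with
    | Seq.Nil -> []
    | Seq.Cons (x, rest) -> x :: take (n - 1) rest
```

A termination checker accepts consumers like \texttt{take} because they only ever force finitely many elements, which is the same discipline Matita imposes on co-recursive definitions.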

% Have lemmas proving that if an element is a member of a sub, then it is a member of a superlist, and so on

The final, missing component is a pair of type coercions from \texttt{addressing\_mode} to \texttt{subaddressing\_mode} and from \texttt{subaddressing\_mode} to \texttt{Type$\lbrack0\rbrack$}, respectively.

The first is a forgetful coercion, while the second opens a proof obligation wherein we must prove that the provided value is in the admissible set.

Coercions of this kind were first introduced in PVS to implement subset types~\cite{pvs?}, and later added to Coq~\cite{russell}.

In Matita all coercions can open proof obligations.

Proof obligations impel us to state and prove a few auxiliary lemmas related

We give a proof (the expression \texttt{(subaddressing\_modein} $\ldots$ \texttt{a)}) that the argument $a$ is in the set $\llbracket$ \texttt{dptr} $\rrbracket$ to the match expression.

In every case but \texttt{DPTR}, the proof is a proof of \texttt{False}, and the system opens a proof obligation $\bot$ that can be discarded using \emph{ex falso}.

Attempting to match against a disallowed addressing mode (replacing \texttt{False} with \texttt{True} in the branch) produces a type-error.

Other dependently and non-dependently typed solutions we tried were clumsy in practice.

As we need a large number of different combinations of addressing modes to describe the whole instruction set, it is unfeasible to declare a data type for each one of these combinations.

The current solution is the one that best matches the corresponding O'Caml code, to the point that the translation from O'Caml to Matita is almost syntactical.

We would like to investigate the possibility of changing the code extraction procedure of Matita to recognise this programming pattern and output O'Caml code using polymorphic variants.
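For comparison, the polymorphic-variant technique in O'Caml can be sketched as follows. The mode names and operations are hypothetical simplifications of the real code, which covers the MCS-51's full set of addressing-mode combinations:

```ocaml
(* Hedged sketch: polymorphic variants let each operation accept only a
   subset of addressing modes, checked at compile time. Names are
   illustrative only. *)
type addressing_mode =
  [ `DPTR | `REGISTER of int | `DIRECT of int ]

(* This operation accepts only the DPTR mode: passing any other variant
   is a type error, mirroring the Matita version, where disallowed
   branches carry a proof of False. *)
let set_dptr (m : [ `DPTR ]) (v : int) =
  match m with
  | `DPTR -> Printf.sprintf "DPTR := %d" v

(* A more permissive operation accepts a wider subset of modes. *)
let get_arg_8 (m : [ `REGISTER of int | `DIRECT of int ]) =
  match m with
  | `REGISTER r -> Printf.sprintf "read register %d" r
  | `DIRECT a -> Printf.sprintf "read direct address %d" a
```

The subset types expressed here by variant rows correspond to the \texttt{subaddressing\_mode} sets expressed in Matita by dependent types, which is what makes the translation between the two versions almost syntactical.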

% Talk about extraction to O'Caml code, which hopefully will allow us to extract back to using polymorphic variants, or when extracting vectors we could extract using phantom types

The O'Caml emulator has code for handling timers, asynchronous I/O and interrupts (these are not yet ported to the Matita emulator).

All three of these features interact with each other in subtle ways.

Interrupts can `fire' when an input is detected on the processor's UART port, and, in certain modes, timers reset when a high signal is detected on one of the MCS-51's communication pins.

To accurately model timers and I/O, we add an unbounded integral field \texttt{clock} to the central \texttt{status} record.

Further, if $\pi_1(k) = \mathtt{Some}~\langle \tau',i,\epsilon,k'\rangle$, then at time $\tau'$ the environment will send the asynchronous input $i$ to the processor and the status will be updated with the continuation $k'$.

This input is visible to the processor only at time $\tau' + \epsilon$.

The time required to perform an I/O operation is partially specified in the data sheets of the UART module.

This computation is complex so we prefer to abstract over it.

We leave the computation of the delay time to the environment.

We use only the P1 and P3 lines despite the MCS-51 having four output lines, P0--P3.

We spent considerable effort attempting to ensure that what we have formalised is an accurate model of the MCS-51 microprocessor.

First, we made use of multiple data sheets, each from a different semiconductor manufacturer. This helped us spot errors in the specification of the processor's instruction set and behaviour in, for instance, a data sheet from Philips.

This machine code exhibits a high degree of non-uniformity similar to the MCS-51.

However, only a small subset of the instruction set is considered, and they over-approximate the unorthogonality of the instruction set, largely dodging the problems we had to face.

Further, it seems that the definition of the decode function is potentially error-prone.

The CerCo project is interested in the certification of a compiler for C that induces a precise cost model on the source code.

Our cost model assigns costs to blocks of instructions by tracing the way that blocks are compiled, and by computing exact costs on generated assembly code.

To perform this accurately, we have provided an executable semantics for the MCS-51 family of processors, better known as 8051/8052.

The formalisation was done twice, first in O'Caml and then in Matita, and captures the exact timings of the processor.

Moreover, the O'Caml formalisation also considers timers and I/O.

Adding support for I/O and timers in Matita is on-going work that should not present any major problems; it was delayed only because it is not immediately useful for the formalisation of the CerCo compiler.

The formalisation is done at machine level and not at assembly level; we also formalise fetching and decoding.

We separately provide an assembly language, enhanced with labels and pseudo-instructions, and an assembler from this language to machine code.

We introduce cost labels in the machine language to relate the data flow of the assembly program to that of the C source language, in order to associate costs to the C program.

For the O'Caml version, we provide a parser and pretty printer from code memory to Intel HEX format.

Hence we can perform testing on programs compiled using any free or commercial compiler.

Our main difficulty in formalising the MCS-51 was the unorthogonality of its memory model and instruction set.

These problems are easily handled in O'Caml by using advanced language features like polymorphic variants and phantom types, simulating Generalized Algebraic Data Types.

In Matita, we use dependent types to recover the same flexibility, to reduce spurious partiality, and to grant invariants that will be useful in the formalisation of the CerCo compiler.

The formalisation has been partially verified by computing execution traces on selected programs and comparing them with an existing emulator.

All instructions have been tested at least once, but we have not yet pushed testing further, for example with random testing or by using development boards.

I/O in particular has not been tested yet, and it is currently unclear how to provide exhaustive testing in the presence of I/O.

Finally, we are aware of having over-specified the processor in several places, by fixing a behaviour hopefully consistent with the real machine where manufacturer data sheets are ambiguous or under-specified.