Special issue on Logical frameworks and metalanguages

There is both a great unity and a great diversity in presentations of logic. The diversity is staggering indeed – propositional logic, first-order logic, higher-order logic
belong to one classification; linear logic, intuitionistic logic, classical logic, modal
and temporal logics belong to another one. Logical deduction may be presented in
Hilbert style with combinators, as a natural deduction system, as a sequent calculus, as
proof nets of one variety or another, etc. Logic, originally a field of philosophy, turned
into algebra with Boole, and more generally into meta-mathematics with Frege and
Heyting. Professional logicians such as Gödel and later Tarski studied mathematical models, consistency and completeness, computability and complexity issues, set
theory and foundations, etc. Logic became a very technical area of mathematical
research in the last half century, with fine-grained analysis of expressiveness of subtheories of arithmetic or set theory, detailed analysis of well-foundedness through
ordinal notations, logical complexity, etc. Meanwhile, computer modelling developed
a need for concrete uses of logic, first for the design of computer circuits, then more
widely for increasing the reliability of software through the use of formal specifications and proofs of correctness of computer programs. This gave rise to more
exotic logics, such as dynamic logic, Hoare-style logic of axiomatic semantics, logics
of partial values (such as Scott's denotational semantics and Plotkin's domain theory) or of partial terms (such as Feferman's free logic), etc. The first actual attempts
at mechanisation of logical reasoning through the resolution principle (automated
theorem proving) had been disappointing, but their shortcomings gave rise to a considerable body of research, developing detailed knowledge about equational reasoning through canonical simplification (rewriting theory) and proofs by induction
(following Boyer and Moore's successful integration of primitive recursive arithmetic
within the LISP programming language). The special case of Horn clauses gave rise
to a new paradigm of non-deterministic programming, called Logic Programming,
later developing into Constraint Programming and further blurring the scope of logic.
In order to study knowledge acquisition, researchers in artificial intelligence and
computational linguistics studied exotic versions of modal logics such as Montague's
intensional logic, epistemic logic, dynamic logic or hybrid logic. Others tried
to capture common sense, and modelled the revision of beliefs with so-called non-monotonic logics. For the careful craftsmen of mathematical logic, this was the final
outrage, and Girard pronounced his anathema on such “montres à moutarde” (“mustard watches”).

Formalising mathematics in dependent type theory often requires representing sets as setoids,
i.e. types with an explicit equality relation. This paper surveys some possible definitions of
setoids and assesses their suitability as a basis for developing mathematics. According to
whether the equality relation is required to be reflexive or not, we have total or partial setoids,
respectively. There is only one definition of total setoid, but four different definitions of partial
setoid, depending on four different notions of setoid function. We prove that one approach
to partial setoids is unsuitable, and that the other approaches can be divided into two
equivalence classes. One class contains definitions of partial setoids that are equivalent to total
setoids; the other class contains an inherently different definition, which has been useful in the
modeling of type systems. We also provide some elements of discussion on the merits of each
approach from the viewpoint of formalizing mathematics. In particular, we exhibit a difficulty
with the common definition of subsetoids in the partial setoid approach.
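
As a minimal sketch of the distinction (in OCaml, with hypothetical module names, and with Boolean relations standing in for the proof-carrying relations a dependently typed formalisation would use): a total setoid packages a carrier with an equivalence relation, while a partial setoid demands only a partial equivalence relation, symmetric and transitive but not necessarily reflexive, so that an element of the carrier exists precisely when it is related to itself.

module type TOTAL_SETOID = sig
  type t
  val eq : t -> t -> bool   (* intended: reflexive, symmetric, transitive *)
end

module type PARTIAL_SETOID = sig
  type t
  val eq : t -> t -> bool   (* intended: symmetric and transitive only *)
end

(* The domain of a partial setoid: the elements related to themselves. *)
module Domain (S : PARTIAL_SETOID) = struct
  let defined (x : S.t) : bool = S.eq x x
end

(* Example: "equal and non-negative" on int is symmetric and transitive but
   not reflexive, giving a partial setoid whose domain is the non-negative
   integers. *)
module NonNegInt : PARTIAL_SETOID with type t = int = struct
  type t = int
  let eq x y = x = y && x >= 0
end

A setoid function is then a map between carriers that respects the two equality relations.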

TinkerType is a pragmatic framework for compact and modular description of formal systems
(type systems, operational semantics, logics, etc.). A family of related systems is broken down
into a set of clauses – individual inference rules – and a set of features controlling the inclusion
of clauses in particular systems. Simple static checks are used to help maintain consistency
of the generated systems. We present TinkerType and its implementation and describe
its application to two substantial repositories of typed lambda-calculi. The first repository
covers a broad range of typing features, including subtyping, polymorphism, type operators
and kinding, computational effects, and dependent types. It describes both declarative and
algorithmic aspects of the systems, and can be used with our tool, the TinkerType Assembler, to
generate calculi either in the form of typeset collections of inference rules or as executable ML
typecheckers. The second repository addresses a smaller collection of systems, and provides
modularized proofs of basic safety properties.
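
The clauses-and-features idea can be conveyed by a small OCaml miniature (hypothetical and much simplified; it is not TinkerType's actual description language): each named clause lists the features it needs, and assembling a system filters the repository against the enabled feature set, after a simple static check that the requested features are known.

type feature = string

type clause = {
  name  : string;         (* e.g. a typing rule such as "T-Sub" *)
  needs : feature list;   (* features required for the clause to apply *)
  rule  : string;         (* the inference rule itself, here just text *)
}

let known_features = ["subtyping"; "polymorphism"]

let repository : clause list = [
  { name = "T-Var"; needs = [];
    rule = "x:T in G  =>  G |- x : T" };
  { name = "T-Sub"; needs = ["subtyping"];
    rule = "G |- t : S   S <: T  =>  G |- t : T" };
  { name = "T-TAbs"; needs = ["polymorphism"];
    rule = "G,X |- t : T  =>  G |- \\X.t : All X.T" };
]

(* Static check on the requested features, then selection of the clauses. *)
let assemble (enabled : feature list) : clause list =
  List.iter
    (fun f ->
       if not (List.mem f known_features) then
         invalid_arg ("unknown feature: " ^ f))
    enabled;
  List.filter
    (fun c -> List.for_all (fun f -> List.mem f enabled) c.needs)
    repository

(* assemble ["subtyping"] selects T-Var and T-Sub, but not T-TAbs. *)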

A lambda-free logical framework takes parameterisation and definitions as the basic notions to
provide schematic mechanisms for specification of type theories and their use in practice. The
framework presented here, PAL+, is a logical framework for specification and implementation
of type theories, such as Martin-Löf's type theory or UTT. As in Martin-Löf's logical
framework (Nordström et al., 1990), computational rules can be introduced and are used to
give meanings to the declared constants. However, PAL+ only allows one to talk about the
concepts that are intuitively in the object type theories: types and their objects, and families
of types and families of objects of types. In particular, in PAL+, one cannot directly represent
families of families of entities, which could be done in other logical frameworks by means of
lambda abstraction. PAL+ is in the spirit of de Bruijn's PAL for Automath (de Bruijn, 1980).
Compared with PAL, PAL+ allows one to represent parametric concepts such as families of
types and families of non-parametric objects, which can be used by themselves as totalities
as well as when they are fully instantiated. Such parametric objects are represented by local
definitions (let-expressions). We claim that PAL+ is a correct meta-language for specifying
type theories (e.g., dependent type theories), as it has the advantage of exactly capturing the
intuitive concepts in object type theories, and that its implementation reflects the actual use
of type theories in practice. We shall study the meta-theory of PAL+ by developing its typed
operational semantics and showing that it has nice meta-theoretic properties.
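
To convey the flavour of lambda-freeness, the following hypothetical OCaml sketch (not PAL+'s actual grammar) gives a term syntax in this spirit: there is no constructor for lambda abstraction, and the only binding construct is a local definition with an explicit parameter list, so parametric objects exist only as named, let-bound definitions.

type name = string

type term =
  | Var of name
  | App of name * term list      (* a constant or variable applied to arguments *)
  | Let of defn * term           (* local definition: the only binder *)

and defn = {
  dname  : name;                 (* the locally defined constant *)
  params : (name * term) list;   (* its parameters, with their types/kinds *)
  body   : term;                 (* the right-hand side of the definition *)
}

(* A bare \x.x cannot be written; the parametric identity is introduced as
   a local definition and then used by name, as in  let id (x : A) = x in id(a):
   Let ({ dname = "id"; params = [("x", Var "A")]; body = Var "x" },
        App ("id", [Var "a"])) *)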

We show how to incorporate rewriting into the Calculus of Constructions and we prove that
the resulting system is strongly normalizing with respect to beta and rewrite reductions. An
important novelty of this paper is the possibility to define rewriting rules over dependently
typed function symbols. We prove strong normalization for any term rewriting system such
that all function symbols satisfy the so-called star dependency condition and every rule is
accepted by the Higher Order Recursive Path Ordering (which is an extension of the method
created by Jouannaud and Rubio for the setting of the simply typed lambda calculus). The
proof of strong normalization is done by using a typed version of reducibility candidates due
to Coquand and Gallier. Our criterion is general enough to accept definitions by rewriting
of many well-known higher-order functions, for example dependent recursors for inductive
types or proof-carrying functions. This makes it a very good candidate for inclusion in a
proof assistant based on the Curry-Howard isomorphism.
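
The ordering itself is higher-order, but its first-order ancestor is compact enough to sketch. The following OCaml code (a textbook construction, not the paper's HORPO) implements the lexicographic variant of the recursive path ordering over untyped first-order terms, parameterised by a precedence on function symbols.

type term = Var of string | Fun of string * term list

let rec occurs x = function
  | Var y -> x = y
  | Fun (_, args) -> List.exists (occurs x) args

(* lpo_gt gt_prec s t: s is strictly greater than t in the lexicographic
   path ordering induced by the precedence gt_prec on function symbols. *)
let rec lpo_gt gt_prec s t =
  match s, t with
  | Var _, _ -> false                        (* variables dominate nothing *)
  | Fun _, Var x -> occurs x s               (* s > x iff x occurs in s *)
  | Fun (f, ss), Fun (g, ts) ->
      (* (1) some immediate subterm of s already covers t *)
      List.exists (fun si -> si = t || lpo_gt gt_prec si t) ss
      || (* (2) bigger head symbol, and s dominates every argument of t *)
      (gt_prec f g && List.for_all (lpo_gt gt_prec s) ts)
      || (* (3) equal head symbols: arguments compared lexicographically *)
      (f = g
       && List.for_all (lpo_gt gt_prec s) ts
       && lex_gt gt_prec ss ts)

and lex_gt gt_prec ss ts =
  match ss, ts with
  | s :: ss', t :: ts' ->
      if s = t then lex_gt gt_prec ss' ts' else lpo_gt gt_prec s t
  | _ -> false

(* With mul > add in the precedence, the ordering orients distributivity:
   mul(x, add(y, z)) > add(mul(x, y), mul(x, z)). *)
let () =
  let gt f g = (f = "mul" && g = "add") in
  let x = Var "x" and y = Var "y" and z = Var "z" in
  let lhs = Fun ("mul", [x; Fun ("add", [y; z])]) in
  let rhs = Fun ("add", [Fun ("mul", [x; y]); Fun ("mul", [x; z])]) in
  assert (lpo_gt gt lhs rhs)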

This paper discusses an application of the higher-order abstract syntax technique to general-purpose theorem proving, yielding shallow embeddings of the binders of formalized languages.
Higher-order abstract syntax has been applied with success in specialized logical frameworks
which satisfy a closed-world assumption. As more general environments (like Isabelle/HOL
or Coq) do not support this closed-world assumption, higher-order abstract syntax may yield
exotic terms, that is, datatypes may produce more terms than there should actually be in the
language. The work at hand demonstrates how such exotic terms can be eliminated by means
of a two-level well-formedness predicate, further preparing the ground for an implementation
of structural induction in terms of rule induction, and hence providing fully-fledged syntax
analysis. In order to apply and justify well-formedness predicates, the paper develops a proof
technique based on a combination of instantiations and reabstractions of higher-order terms.
As an application, syntactic principles like the theory of contexts (as introduced by Honsell,
Miculan, and Scagnetto) are derived, and adequacy of the predicates is shown, both within a
formalization of the π-calculus in Isabelle/HOL.
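
The problem of exotic terms is easy to reproduce in a sketch. The following OCaml datatype (hypothetical, standing in for the Isabelle/HOL datatypes discussed in the paper) represents object-level binders by meta-level functions; because such a function may inspect the syntactic structure of its argument, the type contains values corresponding to no term of the object language.

type term =
  | Var of string
  | App of term * term
  | Lam of (term -> term)   (* object-level binder as a meta-level function *)

(* A genuine term: the identity \x. x. *)
let id_tm = Lam (fun x -> x)

(* An exotic term: its body performs case analysis on the argument, which
   no lambda term of the object language can do. *)
let exotic = Lam (fun x -> match x with App _ -> Var "y" | _ -> x)

(* The instantiation half of the paper's proof technique, roughly: applying
   each binder to a fresh variable turns higher-order syntax into an
   ordinary first-order tree, on which structural induction is available.
   Separating genuine terms from exotic ones is the job of the inductive
   well-formedness predicate developed in the paper, which has no direct
   OCaml analogue. *)
type fo = FVar of string | FApp of fo * fo | FLam of string * fo

let reify (tm : term) : fo =
  let counter = ref 0 in
  let fresh () = incr counter; "#" ^ string_of_int !counter in
  let rec go = function
    | Var x -> FVar x
    | App (s, t) -> FApp (go s, go t)
    | Lam f -> let x = fresh () in FLam (x, go (f (Var x)))
  in
  go tm

(* reify id_tm = FLam ("#1", FVar "#1") *)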