I've been reading up a bit on the fundamentals of formal logic, and have accumulated a few questions along the way. I am pretty much a complete beginner to the field, so I would very much appreciate it if anyone could clarify some of these points.

A complete (and consistent) propositional logic can be defined in a number of ways, as I understand it, which are all equivalent. I have heard it can be defined with one axiom and multiple rules of inference, or with multiple axioms and a single rule of inference (e.g. modus ponens), or somewhere in between. Are there any advantages/disadvantages to either? Which is more conventional?

What exactly is the relationship between an nth-order logic and an (n+1)th-order logic, in general? An explanation in mathematical notation would be desirable here, as long as it's not too advanced.

Any formal logic above (or perhaps including?) first order is sufficiently powerful to be rendered inconsistent or incomplete by Gödel's Incompleteness Theorem: true or false? What are the advantages/disadvantages of using lower/higher-order formal logics? Is there a lower bound on the order of logic required to prove all of known mathematics today, or would you in theory have to use an arbitrarily high-order logic?

What is the role type theory plays in formal logic? Is it simply a way of describing nth-order logic in a consolidated theory (but orthogonal to formal logic itself), or is it some generalisation of formal logic that explains everything by itself?

Hopefully I've phrased these questions in some vaguely meaningful/understandable way, but apologies if not! If anyone could provide me with some details on the various points without assuming too much prior knowledge of the fields, that would be great. (I am an undergraduate Physics student, with a background largely in mathematical methods and the fundamentals of mathematical analysis, if that helps.)

Note: I tried to add the propositional-logic and first-order-logic tags, but no luck, being a new user.
–
NoldorinFeb 21 '10 at 1:02

"A complete (and consitent) propositional logic can be defined in a number of ways, as I understand, which are all equivalent." There do exist many equivalent axiomizations of propositional logic. However, axiomiations of propositional logic which include functioral variables (as opposed to just propositional variables) or quantifiers qualify as richer than axiomizations you usually see in textbooks. A theorem which has functioral variables in it cannot get derived from a "standard" set of axioms, while there do exist functioral variable theorems than can deduce a standard axiom set.
–
Doug SpoonwoodMay 6 '14 at 16:07

4 Answers

This is a long list of questions! These are all related to a certain extent, but you might consider breaking it up into separate questions next time.

Proof theorists tend to prefer systems with many rules and few axioms such as natural deduction systems and Gentzen systems. The reason is that these are much easier to manipulate and visualize. (Look at the statement and proof of the Cut Elimination Theorem for an illustrative example.) Model theorists tend to prefer lots of axioms and just modus ponens, like Hilbert systems. The reason is that the semantics of such systems are easy to handle, and semantics is what model theorists really care about.

The real difference between propositional and first-order logic is quantification. Quantifiers are naturally more expressive than logical connectives.
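A standard way to see the gap (a textbook illustration, not specific to this answer): over a finite domain a universal quantifier is just a finite conjunction, but over an infinite domain no formula built from connectives alone can replace it:

```latex
\text{domain } \{a, b\}: \quad \forall x\, P(x) \;\equiv\; P(a) \land P(b)
\qquad\text{vs.}\qquad
\text{domain } \mathbb{N}: \quad \forall x\, P(x) \text{ has no finite equivalent } P(0) \land P(1) \land \dots \land P(k).
```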

There are many, many flavors of higher-order logic. The two main classifications are set-based systems and function-based systems. There are variants which don't really fit this division and there are variants which mix both. With strong enough internal combinatorics, these are all equivalent.

The set-based systems have a set A of type 0 objects, which are considered atomic. Type 1 objects are elements of the powersets ℘(A^n). When the base theory has an internal pairing function (as arithmetic and set theory do), the exponent n can be dropped. Then type 2 objects are elements of the second powerset ℘(℘(A)), and similarly for higher types. Set-based higher types are somewhat inconvenient without an internal pairing function.

Function-based systems are similar, with functions A^n → A as type 1. Again, with an internal pairing function, the higher types streamline to A → A, (A → A) → A, ((A → A) → A) → A, etc. However, it is common to use more complex types in these systems.
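To line the two hierarchies up side by side (a sketch in the notation above; the single-argument forms at types 2 and 3 assume the internal pairing function already mentioned):

```latex
\begin{array}{c@{\qquad}l@{\qquad}l}
\text{type} & \text{set-based} & \text{function-based} \\
0 & a \in A                & a \in A \\
1 & S \in \wp(A^n)         & f : A^n \to A \\
2 & T \in \wp(\wp(A))      & F : (A \to A) \to A \\
3 & U \in \wp(\wp(\wp(A))) & G : ((A \to A) \to A) \to A
\end{array}
```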

Be careful how you read Gödel's Incompleteness Theorem. For first-order logic, the hypotheses state that the theory in question must be recursively enumerable and that it must be powerful enough to interpret a reasonable amount of arithmetic. Those are important hypotheses. (And they explain why no such theorem exists for propositional logic.) There are many variants for higher-order systems, but you need to be even more careful when stating them.

You can appreciate the difference between the different levels of higher-order logic by reading about Gödel's Speed-Up Theorems.

Most of mathematics today is based on set theory, at least in a theoretical sense. In set theory, higher types are interpreted internally using powersets and sets of functions. They also extend transfinitely and this transfinite hierarchy of higher types was shown necessary to prove theorems very low in the hierarchy, such as Martin's Borel Determinacy Theorem.

Thanks for your reply, Francois. It certainly clarifies at least several things for me. I do now realise how I've barely touched the surface of this subject, however. Might you be able to recommend a good book on formal logic (propositional, first-order, and higher-order), especially in the context of proofs/proof theory? (For someone with a fairly basic background in abstract mathematics, such as myself.)
–
NoldorinFeb 21 '10 at 14:20

Also, a few queries about your response: 4. As I understand, first-order theory and everything higher is subject to Gödel's Incompleteness Theorem. In addition, there exist proofs in mathematics that require an arbitrarily high-order logic. 5. I will certainly read up on categorical logic - however, does type theory not have its own role to play still? Some of the Wikipedia pages seem to suggest it does appear in proof theory/formal logic.
–
NoldorinFeb 21 '10 at 14:31


There are many books, but I can't think of one that fits all your requirements. Have a look at the following, maybe you'll like some: Troelstra & Schwichtenberg Basic Proof Theory; Girard Proofs and Types; Lambek & Scott Introduction to Higher Order Categorical Logic.
–
François G. Dorais♦Feb 21 '10 at 15:25

Re 4: Higher-order logic is very subtle. To have analogues of Gödel's Incompleteness Theorem, you must have some form of an effective presentation of the deduction system; not all higher-order systems have this property (but the usable ones do).
–
François G. Dorais♦Feb 21 '10 at 15:27


Note that the Girard book is available online: paultaylor.eu/stable/Proofs+Types.html. I second the recommendation for the Lambek & Scott book; it's a pleasant read with a nice selection of history, and I found it suitable even for a beginner such as myself.
–
L SpiceFeb 21 '10 at 17:04

" What are the advantages/disadvantages of using lower/higher-order formal logics?"

It seems that "strong" logics are not as well behaved as first-order logic. By Lindström's theorem, FOL is the strongest logic that satisfies both the downward Löwenheim–Skolem theorem (down to ℵ₀) and the compactness theorem (we say that a logic L1 is stronger than L2 when every elementary class of a theory of L2 is also an elementary class for a theory of L1). As the compactness property and the LS property are crucial for many important constructions in model theory, it seems that it's much harder to develop model theory for stronger logics.

This is correct, and this is one of the many serious difficulties with standard higher-order semantics. However, Henkin semantics reduces higher-order logic to first-order logic, which therefore has all of these desirable properties.
–
François G. Dorais♦Feb 21 '10 at 17:37


Although it's possible to reduce many problems in stronger logics (logics with generalized quantifiers, infinitary logics, etc.) to the case of FOL, the reduction itself is far from trivial. For example, it takes a lot of effort to prove that FOL + cofinality quantifiers has the compactness/completeness property, and the existence of many other basic model-theoretic properties for these logics (such as the Beth property and other interpolation properties) is still unknown. P.S. Shelah wrote a nice paper on this subject titled "Compact logics in ZFC".
–
HaimFeb 21 '10 at 20:45

Yes, there are advantages/disadvantages in where the balance lies between the number of inference rules and the number of axioms when defining a logic. As François Dorais says in his answer, it depends on what you want to do with the logic.

All logics are for representing proofs, including propositional logic. The higher the order of the logic, the more powerful it is in the sense of its language being more expressive and its deduction being more general.

The criterion that determines the order of a logic relates to the kinds of value that can be quantified over. In a zeroth-order logic, there are just values and quantification is not supported (e.g. propositional logic, where the values are boolean values). In a first-order logic, there are functions, which are distinct from values; only values can be quantified over (e.g. first-order predicate logic, natural number arithmetic). In a second-order logic, functions may take first-order functions as arguments, and first-order functions may be quantified over. In a third-order logic, second-order functions may themselves be arguments to functions and be quantified over, and so on. In a higher-order logic (a distinct concept from that of an nth-order logic), there is no fundamental distinction between functions and values, and all functions can be quantified over.
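One way to make the distinction concrete is to restrict attention to a small finite domain, where quantifiers become loops. The following Python sketch (my own illustration; names like `is_bijection` are invented for the example) shows a first-order quantifier ranging over values and a second-order quantifier ranging over functions:

```python
from itertools import product

DOM = [0, 1, 2]  # a tiny finite "universe", so quantifiers become loops

# First-order quantification: the variable ranges over values.
def forall_values(pred):
    return all(pred(x) for x in DOM)

# Second-order quantification: the variable ranges over all functions
# DOM -> DOM, here enumerated exhaustively as dictionaries.
def forall_functions(pred):
    funcs = ({x: y for x, y in zip(DOM, outs)}
             for outs in product(DOM, repeat=len(DOM)))
    return all(pred(f) for f in funcs)

# A second-order predicate: it takes a function as its argument.
def is_bijection(f):
    return sorted(f[x] for x in DOM) == sorted(DOM)

print(forall_values(lambda x: x == x))  # True: every value equals itself
print(forall_functions(is_bijection))   # False: e.g. constant functions fail
```

Over an infinite domain no such enumeration exists, which is precisely why quantifying over functions is a genuine jump in strength rather than a convenience.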

Note that usage of predicates can be considered as equivalent to usage of sets (some predicate returning "true" for a given value can be considered as equivalent to the value being an element of some set). Note also that a given logic may or may not fundamentally distinguish between general values and boolean values, and between functions and predicates (functions that return a boolean value).
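The predicate/set correspondence above can be spelled out in a few lines (a toy Python sketch; the names are mine, purely illustrative):

```python
# A predicate and the set it carves out of a universe are two views of
# the same object: P(x) holds exactly when x is in the set {x | P(x)}.
def is_even(n):                    # predicate: characteristic function
    return n % 2 == 0

universe = range(10)
evens = {n for n in universe if is_even(n)}   # predicate -> set

def in_evens(n):                   # set -> predicate: membership test
    return n in evens

# The two directions agree everywhere on the universe:
assert all(is_even(n) == in_evens(n) for n in universe)
```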

Gödel's (First) Incompleteness Theorem only relates to logics capable of at least expressing natural number arithmetic - any such logics are incomplete (unless they are inconsistent, in which case they are trivially complete). His Second Incompleteness Theorem relates to whether such logics are capable of proving their own consistency.

The advantage of a less powerful logic is that it is easier to reason about and tends to be easier to write algorithms for, in the sense that (depending on what the algorithm is intended to do) these algorithms will tend to be more complete and/or more efficient and/or more likely to terminate (e.g. algorithms for proving statements in the logic). The advantage of a more powerful logic is that it is more expressive and thus capable of representing and/or proving more of mathematics.

There are certainly higher-order logics that are not capable of expressing the whole of mathematics today (e.g. not capable of fully expressing category theory). I'm afraid I don't know enough to say whether there are/aren't any formal logics that are capable of expressing all of contemporary mathematics.

Type theory is the study of type systems. Presumably your question relates to the purpose of using a type system in a formal logic? (Apologies if I got the wrong end of the stick.)

Whether or not a logic uses a type system is another fundamental distinguishing attribute between logics. The alternative is to base the logic on set theory. Either way, the logic must somehow avoid consistency problems, like Russell's Paradox that exposed the inconsistency of Frege's formal logic, or the Kleene-Rosser Paradox that exposed the inconsistency of Church's original lambda calculus.

The purpose of a type system is to impose extra well-formedness restrictions on a formal language in addition to the restrictions imposed by the language's syntax. This is a way of ensuring that paradoxes can be avoided, as well as a practical way of helping the user of a logic avoid writing meaningless statements (such as "the number 5 is a vector space"). Russell actually invented type theory to solve the problem raised by his own paradox. Church used type theory to come up with an alternative, consistent lambda calculus.
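For concreteness, here is Russell's Paradox in symbols and the way simple types block it (a standard textbook presentation, not specific to this answer):

```latex
R = \{\, x \mid x \notin x \,\}
\quad\Longrightarrow\quad
R \in R \iff R \notin R.
```

In a simply typed setting, x ∈ y is only well-formed when y lives one type above x, so x ∉ x cannot even be written; likewise the self-application x x behind the Kleene-Rosser Paradox is untypable, since it would require x : σ → τ and x : σ simultaneously.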

@Mark: Thanks for your response. This is a great answer; it answers many of my questions without being needlessly technical. Just a couple of little clarifications really: a) how exactly do nth-order logic and higher-order logic differ? (I always understood them to be the same thing.) Does higher-order logic imply the use of type theory/category theory? b) How does a formal logic with a type theory relate to its semantics? They seem closely related, but I can't say much more.
–
NoldorinJan 31 '11 at 17:19

@Noldorin: I forgot to say in the answer to (4) that there are 1st-order logics that are not as powerful as natural number arithmetic and so the Incompleteness Theorem says nothing about these. But 1st-order logic with natural numbers is necessarily incomplete.
–
Mark AdamsFeb 5 '11 at 15:10

a) In higher-order logic, variables can range over any values. In nth-order logic there is an explicit distinction between different kinds of variable, which must range over (n-1)th-order values or lower. So in a logic dealing with natural numbers, a 0th-order value would be a natural number, a 1st-order value would be a function over natural numbers, e.g. "+", and a 2nd-order value would be a function that takes function arguments, e.g. "IsBijection", etc. Higher-order logic does not necessarily imply use of type theory/category theory.
–
Mark AdamsFeb 5 '11 at 15:26

b) I'm not particularly familiar with semantics, but as I understand it semantics deals with values, or at least equivalences between values. The type system of a formal language would partition values in the semantics, forbidding equivalences between values of different type.
–
Mark AdamsFeb 5 '11 at 15:32