One standard approach to proving the compactness of first-order logic is to prove completeness, of which compactness is an easy corollary. I am told this approach is deprecated nowadays, as compactness is seen as the "deeper" result.

Could someone explain this intuition? In particular, are there logical systems which are compact but not complete?

EDIT: Andreas brings up an excellent point, which is that the Completeness theorem is really a combination of two very important but almost unrelated results. First, the Compactness theorem. Second, the recursive enumerability of logical validities. Note that neither of these results depends on the details of the syntactic or axiomatic system.

What is the connection between these two aspects of completeness? Are there logical systems that have one of these properties but not the other?

As to the last question, I think so. If we reason ABOUT intuitionistic logic USING classical logic, then completeness fails: a consistent constructive theory can contradict classical tautologies, and therefore need not be satisfiable from the classical point of view. But it looks like compactness still holds, based on what I'm reading below: compactness is semantic, and so depends only on the "meta" system, which is still classical, not on the "object" system (though I'm not sure here; is that right?)
– Daniel Mehkeri, Jun 25 '11 at 23:25

For an example of a system that has compactness but not completeness, take usual first-order logic, keep the semantics the same, and remove all the inference rules. Now nothing at all is provable, but compactness is unchanged. The point of completeness is to express a harmony between a semantics and a deductive system; you can break completeness by changing either.
– Carl Mummert, Jun 26 '11 at 3:01

There are a lot of answers here from model theorists talking about why compactness is important in model theory, but it's worth noting that the question isn't restricted to model theory. As Joel pointed out in the answer Qiaochu linked to, proof theorists might focus on the completeness theorem rather than the compactness theorem.
– Mike Shulman, Jun 26 '11 at 5:10

10 Answers

The point is that we care about the models, rather than about the proofs. The compactness theorem---the claim that a theory is satisfiable iff every finite subset of it is satisfiable---is fundamentally connected to the models, and the possibility of truth in these models. To use it, you need to understand your theory, the models of your theory, and the models of finite pieces of your theory. And this is what we want to be thinking about and what we know about. In particular, when working with models, we can use all the mathematical tools and constructions at our disposal, with no need to remain inside any first-order language or formal system (well, perhaps we have set theory as our background system). We are free to reason about the models via reducts and ultrapowers and limits of systems of morphisms and so on, using any mathematical method at all.

The completeness theorem, in contrast, is fundamentally connected with the details of a formal deduction system. And so when using it, one is thinking about whether certain tautologies might be provable or not, or whether a certain formal consequence is allowed in the system or not.
But when we are studying a certain first-order class of groups or rings or whatever, such details about the proof system might seem to be an irrelevant distraction.

Of course, as has been brought up elsewhere (and as you mentioned in your answer to the other question) the assertion that "we" care about the models, rather than the proofs, depends upon who "we" refers to. One might argue that at a foundational level, what we really care about is always the proofs. For instance, even when studying model theory, I would venture to assert that model theorists reach their conclusions by proving them. (-:
– Mike Shulman, Jun 26 '11 at 19:19

@Joel: I agree completely with your answer (for what that's worth -- you are 1000 times the expert I am on this subject). When I taught introductory model theory last summer and spoke to some colleagues who had had encounters with it, I came to the idea that perhaps model theory could be recast so as not to be part of mathematical logic at all. I found some sympathy for this in Poizat's introductory text, which phrases things in terms of "local isomorphisms" and "back and forth" but I didn't have the time (and perhaps not the expertise) to really follow up on this. What about you?
– Pete L. Clark, Jun 27 '11 at 4:33

Mike, yes indeed, and perhaps I stated the view more carefully in my answer to the other question. Of course the proof theorists are undertaking a fascinating and important foundational study. And although mathematicians generally strive to prove their results, wouldn't you say that this arises mostly from a concern with the mathematical objects themselves rather than with a concern specifically with the proof objects? (And one might view proof theory as a case where the proof objects become the mathematical object of study.)
– Joel David Hamkins, Jun 27 '11 at 21:14

Pete, you are very kind, but I think you are being modest. Like many maturing subjects, model theory increasingly touches other mathematical areas, and while the majority of model theorists I know continue to self-identify as logicians, I also know a number of model theorists who don't quite know what label to apply to their work. Of course, it is often work that crosses established boundaries that is the most valuable in mathematics, and perhaps the most difficult. Perhaps a similar situation has arisen in set theory, which has become vast, now touching many areas of mathematics.
– Joel David Hamkins, Jun 27 '11 at 21:25

Pete, I might also add that the methods of local isomorphism and back-and-forth were invented by Cantor in his proof of the uniqueness of the countable dense endless linear order, and logicians perhaps like to regard them as among the fundamental contributions of logic.
– Joel David Hamkins, Jun 27 '11 at 21:41

From the point of view of modern model theory, the Compactness Theorem is central to almost everything in the model theory of first order logic, while the Completeness Theorem is almost irrelevant. Completeness comes into play when proving the decidability of complete recursively axiomatized theories, or in dealing with recursively saturated models, for example, but these are not very central. Even when we want to apply Completeness, what's important is that the set of logical consequences of $T$ is recursively enumerable in $T$--the details of the proof system are of no importance whatsoever. This explains why in my model theory book, while I do explain that Compactness is an almost trivial consequence of Completeness, I give a direct Henkin-style proof.

Nevertheless, I view the Completeness Theorem as one of the great intellectual achievements of our subject. The fact that the semantic notion of "logical consequence" can be captured by the syntactic notion of "proof" is really surprising. The first is, a priori, $\Pi_1$ over the universe of sets, while the second is recursively enumerable. I find this amazing.

You write: "Completeness comes into play when proving the decidability of complete recursively axiomatized theories ...". Don't you use the notion of "completeness" here in another sense (namely, as a characteristic of a theory, as opposed to the one that appears in the Completeness Theorem, which characterizes a logic)?
– Gyorgy Sereny, Jun 26 '11 at 17:45

@Gyorgy--I am using "completeness" in two senses. We want to look at a "complete theory" $T$, i.e., one where $T\models \phi$ or $T\models\neg\phi$ for all sentences $\phi$. But we are also using the Completeness Theorem to argue that for any $T$ the set of logical consequences of $T$ is recursively enumerable in $T$.
– Dave Marker, Jun 26 '11 at 22:28

Thank you for your answer. As a matter of fact, I take it for granted, that the completeness of a theory is a syntactic property, that is, one formulated in terms of provability rather than in terms of semantic consequence: T is complete just in case $T\vdash\sigma$ or $T\vdash\lnot\sigma$ for all sentences $\sigma$. In this case, the decidability of complete recursively axiomatized theories can be shown without the Completeness Theorem.
– Gyorgy Sereny, Jun 30 '11 at 15:40

Really, compactness is seen as the deeper result? I have to say that I am also more interested in the models rather than in formulas and deduction systems, and hence like to teach students proofs of the compactness theorem that do not use the completeness theorem. Typically I prove compactness using ultraproducts.

On the other hand, I still believe that the completeness theorem for first order logic is the most important theorem of mathematical logic. The theorem tells you that in principle there are computer-checkable proofs for all "true" theorems. I know that many mathematicians are not really concerned with having a solid foundation for the concept of a "proof" (you know it when you see it). But having this formal concept of proof in the background helps tremendously when you want to fight off people who present their $n$-th proof of the inconsistency of PA or ZFC, or that there are no infinite sets, or that CH holds. Also, the completeness theorem does explain why we can do mathematics the way we do, even though nobody ever really writes formal proofs of anything unless the correctness of the proofs is seriously challenged (see the FLYSPECK project at http://code.google.com/p/flyspeck/).

Also, I think that the proof of the completeness theorem is deeper than that of the compactness theorem. In some sense the proof of the completeness theorem (I am thinking of the proof where one builds a canonical model of a maximally consistent Henkin theory) is more straightforward than, for example, the ultraproduct proof of the compactness theorem, but it is more complicated in the details and certainly less accessible to mainstream mathematics.

Stefan, the Henkin idea can also be used directly to prove the compactness theorem (see mathoverflow.net/questions/45487/…), and I think actually that this use of the Henkin method to prove compactness may even be easier than when it is used for completeness. But of course I agree with your main point that completeness is a fundamentally important theorem.
– Joel David Hamkins, Jun 25 '11 at 16:09

Joel, thanks for pointing this out. I had read the answer that you linked to at some point, but had forgotten about the Henkin proof of the compactness theorem. This is in some sense more transparent than the Henkin proof of the completeness theorem, since you don't have to go through the details of the deduction system.
– Stefan Geschke, Jun 25 '11 at 16:52

This is a side comment. There are several answers that explain why compactness is so important in model theory, and I agree with what they say. But I want to point out that the "in model theory" part is key here. In the overall study of logic, not restricted to model theory, both compactness and completeness are important, and each of those has areas of logic that favor it. Model theory, being a semantic field, naturally identifies more with semantic notions.

In mathematics outside logic, I think there is more implicit use of completeness than of compactness. Every time I prove that an identity is derivable from the axioms of a group by working semantically and showing that the identity holds in every group, I am implicitly using the completeness theorem. It is easy to miss this or take it for granted, because the completeness theorem is so well known.
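For instance, to conclude that the identity $(xy)^{-1}=y^{-1}x^{-1}$ is formally derivable from the group axioms, it is enough to verify semantically that it holds in every group:

$$(xy)(y^{-1}x^{-1}) = x(yy^{-1})x^{-1} = xex^{-1} = xx^{-1} = e,$$

so $y^{-1}x^{-1}$ is the inverse of $xy$ in every group, and the completeness theorem then guarantees that a formal derivation from the axioms exists, without our ever having to exhibit one.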

There are systems that do not have complete deduction systems; one example is second-order logic with full second-order semantics. In this system it is perfectly possible for something to be true in every model without being provable in our usual proof system. Therefore, when we study this system in logic, we have to keep a close watch on whether we have shown something is provable, or just shown that it is logically valid.

Imagine the difficulties in an alternate world where mathematicians have to distinguish between "true in all groups" and "provable from the axioms of a group". The completeness theorem is what lets us ignore this. By comparison, it's more difficult to see reflections of the compactness theorem in everyday mathematics.

It might be worth noting that, when you have shown semantically that an identity holds in every group, you can conclude not only that the identity is formally derivable from the group axioms by means of first-order logic but also that it is derivable by purely equational reasoning (assuming you've expressed the group axioms as identities). In other words, you can invoke the completeness theorem of equational logic.
– Andreas Blass, Jun 26 '11 at 4:20

How strong is the statement "provable from the axioms of a group"? Does any such statement hold for all group objects in all categories with finite products? (If so, that seems to be a nice motivation for caring about this point of view - you prove a first-order statement for all groups and subsequently it must be true for all group objects!)
– Qiaochu Yuan, Jun 26 '11 at 13:47

@Qiaochu: If what's being proved is an identity (a universally quantified equation between terms), then it will hold for group objects in categories with finite products. That follows from my previous comment (about equational deducibility) plus the fact that equational logic is sound in categories with finite products. (I suspect you don't even need the finite products, if you define a group object in a category as an object $G$ plus a lifting of the set-valued functor Hom$(-,G)$ to a group-valued functor.) For the non-equational case, see my next comment.
– Andreas Blass, Jun 27 '11 at 0:31

If a consequence $\phi$ of the group axioms is a first-order sentence but not an identity, then it's deducible from the group axioms in first-order logic. For that deduction to ensure $\phi$ for all group objects in a category $C$, you'd need to assume that $C$ is something like a Boolean pretopos, so that first-order formulas can be interpreted in structures in $C$ and first-order reasoning is sound. (If your proof of $\phi$ from the group axioms can be done in intuitionistic first-order logic, then you wouldn't need $C$ to be Boolean.)
– Andreas Blass, Jun 27 '11 at 0:35

I will address the most recent version of the question, which asks about the relationship between the following two features of a logic $L$, but note that a careful discussion of this topic should first clearly define what counts as a logic.

(1) Abstract Completeness of $L$: The set of valid $L$-sentences is recursively/computably enumerable [hereafter: r.e.].

(2) Compactness of $L$: A set $S$ of sentences of $L$ has a model if every finite subset of $S$ has a model.

(1) does not imply (2).

For example, let $Q$ be the quantifier expressing "there are uncountably many", i.e., $Qx\phi(x)$ holds in a structure $\cal{M}$ with universe $M$ iff the set of $m\in M$ such that $\phi(m)$ holds in $\cal{M}$ is uncountable. Let $L_{FO}(Q)$ be the result of augmenting first order logic $L_{FO}$ with the new quantifier $Q$. Vaught proved that the set of valid sentences of $L_{FO}(Q)$ is r.e. Later [1970] in a seminal paper, Keisler gave an elegant axiomatization of $L_{FO}(Q)$.

On the other hand, it is easy to see that $L_{FO}(Q)$ does not satisfy compactness: for $\alpha < \aleph_1$ introduce constant symbols $c_{\alpha}$ and consider the set $S$ of sentences consisting of $\lnot Qx (x=x)$ [expressing "the universe is not uncountable"] plus sentences of the form $c_{\alpha}\neq c_\beta$ for $\alpha < \beta < \aleph_1$. It is easy to see that every finite subset of $S$ has a model, but $S$ itself does not have a model.
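To spell out the finite-satisfiability half of this: a finite $S_0\subseteq S$ mentions only finitely many of the constants $c_\alpha$, so we may interpret them as distinct natural numbers in a structure with universe $\Bbb{N}$:

$$S_0 \subseteq \{\lnot Qx\,(x=x)\} \cup \{c_{\alpha_1}\neq c_{\beta_1},\dots,c_{\alpha_n}\neq c_{\beta_n}\}.$$

The universe $\Bbb{N}$ is countable, so $\lnot Qx\,(x=x)$ holds, and $S_0$ is satisfied. A model of all of $S$, on the other hand, would have to interpret $\aleph_1$ many constants by pairwise distinct elements while having a countable universe, which is impossible.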

I should point out that $L_{FO}(Q)$ has a limited form of compactness known as countable compactness: if $S$ is a countable set of sentences of $L_{FO}(Q)$, then $S$ has a model if every finite subset of $S$ has a model [Vaught, ibid].

All of the above features of $L_{FO}(Q)$ [abstractly complete, countably but not fully compact] are shared by a number of other generalized quantifiers, including the stationary quantifier [introduced in the late 1970s and intensely studied in the 1980s]. However, as shown by Shelah, there are other generalized quantifiers that generate fully compact logics that are also abstractly complete.

(2) does not imply (1) either.

For example, consider the "logic" whose nonlogical symbols are the arithmetical ones, and whose axioms are the usual axioms of first order logic plus all the axioms of true arithmetic, i.e., arithmetical sentences that hold in $\Bbb{N}$. The semantics of the logic is the same as that of first order logic, so compactness continues to hold; but clearly abstract completeness fails by Tarski's theorem on the undefinability of truth, which says that $Th(\Bbb{N})$ is not arithmetical, let alone r.e.

PS: A "naturally occurring logic" that also serves to show that (2) does not imply (1) is the existential fragment of second order logic; its compactness follows from the usual proofs of compactness for first order logic [including the ultraproduct proof], but its set of valid sentences is known to be co-r.e. but not r.e.

Because of my natural inclination toward semantics rather than syntax, I tend to view the completeness theorem (for first-order logic) as being essentially the conjunction of two rather different facts. One is the compactness theorem. The other is the recursive enumerability of the set of valid sentences (say in any finite vocabulary). Both of these facts are often deduced from the completeness theorem, though they can also be proved by other methods (I like a version of Herbrand's theorem that doesn't mention any axioms or rules). But there is also a sort of converse, if one is willing to accept an unorthodox (but in my opinion fairly reasonable) notion of deduction. Namely, fix an algorithm $A$ enumerating the valid sentences, and define a "deduction" of a conclusion $\phi$ from a set $H$ of hypotheses to be a finite conjunction $\eta$ of members of $H$ together with a computation showing that $\eta\to\phi$ is enumerated by $A$. The completeness of this "deductive" system is an immediate consequence of the compactness theorem plus the fact that $A$ enumerates the valid sentences.
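In symbols: a "deduction" of $\phi$ from $H$ is a pair $\langle\eta,\,c\rangle$, where

$$\eta = \eta_1 \wedge \dots \wedge \eta_k, \qquad \eta_i \in H,$$

and $c$ is a computation witnessing that $A$ enumerates $\eta\to\phi$. Soundness is immediate, and completeness follows from compactness: if $H\models\phi$, then $\{\eta_1,\dots,\eta_k\}\models\phi$ for some finite subset of $H$, so $\eta\to\phi$ is valid and is therefore eventually enumerated by $A$.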

The "other fact" is not just recursive enumerability of valid sentences, but of the finitary consequence operator (in other words, finite strong completeness). This distinction is of utmost importance in logical systems lacking the deduction theorem.
– Emil Jeřábek, Jun 27 '11 at 15:01

Here is an interesting quote from Bruno Poizat's "A Course in Model Theory":

The compactness theorem, in the forms of Theorems 4.5 and 4.6, is due to Gödel; in fact, as explained in the beginning of Section 4.3 [Henkin's Method], the theorem was for Gödel a simple corollary (we could even say an unexpected corollary, a rather strange remark!) of his "completeness theorem" of logic, in which he showed that a finite system of rules of inference is sufficient to express the notion of consequence (see Chapter 7). It could also have been taken from [Herbrand 1928] or [Gentzen 1934], in which results of the same sort were proven.

This unfortunate compactness theorem was brought in by the back door, and we might say that its original modesty still does it wrong in logic textbooks. In my opinion it is much more essential and primordial (and thus also less sophisticated) than Gödel's completeness theorem, which states that we can formalize deduction in a certain arithmetic way; it is an error in method to deduce it from the latter.

If we do it this way, it is by a very blind fidelity to the historic conditions that witnessed its birth. The weight of this tradition is apparent even in a work like [Chang-Keisler 1973], which was considered a bible of model theory in the 1970s; it begins with syntactic developments that have nothing to do with anything in the succeeding chapters. This approach---deducing Compactness from the possibility of axiomatizing the notion of deduction---once applied to the propositional calculus gives the strangest proof on record of the compactness of $2^\omega$!

It is undoubtedly more "logical," but it is inconvenient, to require the student to absorb a system of formal deduction, ultimately quite arbitrary, which can be justified only much later when we can show that it indeed represents the notion of semantic consequence. We should not lose sight of the fact that the formalisms have no raison d'être except insofar as they are adequate for representing notions of substance.

The remark about the compactness of $2^\omega$ is interesting. I personally don't have any use for deductive systems in propositional calculus. People closer to computer science might see this differently, though. I would always derive the compactness theorem for propositional logic from the topological compactness of $2^\omega$ (or something similar).
– Stefan Geschke, Jun 25 '11 at 17:00

I was about to say that the answers given so far are all wrong and misleading, but thankfully I recalled that I am not a mathematician :-)

There are mainly two approaches to the concept of "logic".

The classical (or mathematical) approach to logic. Roughly speaking, a logic consists of two classes: a set of formulae and a class of models, together with a satisfiability relation saying which formulae are true in which models. Then we may develop a proof system (or various proof systems) for the logic, which helps us --- in a systematic and coherent way --- derive satisfiability of formulae. Desirable properties of such proof systems are soundness (what we have derived is true) and completeness (what is true, we can derive). There is also compactness (if something follows from a theory, then it follows from a finite subset of the theory), which refers to the logic itself (here: the satisfiability relation). This is how mathematicians are taught logic.

The modern (or computer-science) approach to logic. Logic is a kind of formal system (deductive system). To help prove facts about such a system, we may introduce the concept of "models" (or various concepts of models) for the logic. Desirable properties of such classes of models are that the deductive system over them is sound and complete (that is, for a given system we develop an appropriate concept of models such that the proof system is sound and complete; if we add or remove axioms or rules, then we have to restrict or extend our class of models; this is most easily seen in temporal logics --- for example, LTL is sound and complete with respect to linear models). There is also compactness (if something follows from a theory, then it follows from a finite subset of the theory), which refers to the logic itself (here: the proof system; if a logic allows only finitary proofs, then it is obviously compact). In this approach, the deductive system is fundamental. This is how computer scientists are taught logic.

Of course, in the presence of soundness and completeness, classical compactness and modern compactness coincide.

So, moving back to your question: I do agree with other answers saying that completeness and compactness are just very different concepts, so neither is "deeper". However, I do not think that the classification of what belongs to models and what belongs to proofs is that obvious --- it is all about how you think of logic.

I don't understand why this answer was downvoted. Perhaps there are some mathematicians who don't realize that it's true that computer scientists have a different approach to logic, and that it might be equally valid? True, it may be unnecessarily combative to call #2 the "modern" approach. But on the other hand, it's worth pointing out that in order to even start talking about "models", you need some sort of ambient set-theory or something, which makes #1 somewhat questionable as a foundational perspective.
– Mike Shulman, Jun 26 '11 at 5:03

@Mike Shulman: I voted it up from -2 to -1 but I would have voted down if it had a positive score. I agree the word choice of "modern" isn't ideal, and I think the differences between the fields are being exaggerated. Even in mathematical logic, people in constructive mathematics and proof theory more generally are likely to start with a deductive system and move on to models. From my point of view, that is the "classical" approach, while the "semantics first" approach from model theory is more recent, as the influence of formalism from the early 20th century is receding.
– Carl Mummert, Jun 26 '11 at 12:23

Re: "semantics", I've recently learned that people in computer science and proof theory say "operational semantics" to refer to things that I, with a background in mathematical logic, might be more inclined to regard as part of "syntax" or "proof theory". Then they say "denotational semantics" for what I was trained to call just "semantics". After all, "semantics" literally means "meaning", and so does "denotation", so if one has to qualify "semantics" with "denotational" then it seems like a word got misused somewhere. (cont.)
– Mike Shulman, Jun 26 '11 at 19:12

To me, Michal's last three comments tend to confirm Mike's previous comment that some people use "semantics" to mean a sort of syntax. In particular, Michal seems happy to dismiss unnamed elements, so that models amount to just term models. My own (admittedly limited) experience with "operational semantics" also tends to confirm what Mike said; whenever I've seen an operational semantics explicitly exhibited, it looked (to me) just like a proof system. But see my next comment (this one's about to hit the length limit) for "on the other hand".
– Andreas Blass, Jun 27 '11 at 0:19

On the other hand, I think that this conflation of syntax and semantics is one of the beauties of (some versions of) category-theoretic semantics. I really like the idea that, for example, a (set-based) model of an equational theory is the same sort of thing as an interpretation into another equational theory, namely a finite-product-preserving functor. From a sufficiently abstract point of view, a syntactic interpretation of one theory $T$ in another $T'$ is a semantic model of $T$ in the "universe" generated by a generic model of $T'$ ("generic" in the category sense, not forcing).
– Andreas Blass, Jun 27 '11 at 0:25

I think that everything important that can be said about the differences between the Compactness and Completeness Theorems and their proofs from the technical point of view has been said. (I also like most the detailed and elucidating answer given by Joel David Hamkins at "In model theory, does compactness easily imply completeness?".) On the other hand, one of the most important differences between these theorems is a non-technical one, and indeed some previous answers contain hints to this effect. The Completeness Theorem has an obvious metamathematical (or even philosophical) flavour, as opposed to the Compactness Theorem. Actually, it is about the relation between the two most important mathematical notions, i.e., those of proof and truth.

And here I would like to argue with those (Carl Mummert and Stefan Geschke) who claim that sometimes the Completeness Theorem is used in everyday mathematics. Actually, as I see it, it is about everyday mathematics, but it does not belong to everyday mathematics.

Actually, contrary to what Carl Mummert says, I doubt that, in everyday mathematics, anybody at any time uses the completeness theorem in either an explicit or an implicit way. Obviously, one can successfully work in any field of mathematics (apart from those intimately connected to logic) without any knowledge of mathematical logic. (Clearly she or he has to have a good sense of logic, but this is a completely different matter.) In other words (unlike Carl Mummert), I cannot imagine any "difficulties in an alternate world where mathematicians have to distinguish between 'true in all groups' and 'provable from the axioms of a group'". The reason is simple. I do not think that anyone proves "that a group identity is derivable from the axioms of a group by working semantically and showing that the identity holds in every group." Though I am not a group theorist, I think that no group theorist is interested in the statements that are provable from the axioms of group theory alone. (On the other hand, of course, the most important elementary statements needed to begin group theory at all are usually derived directly from the axioms.) Most mathematicians work in intuitive set theory and freely make use of the different possibilities that this rich theory offers (independently of whether she or he is aware of the existence of ZFC). (Actually, the notion of a group itself is defined as a model, that is, generally in terms of sets rather than as a first order theory. And, of course, this kind of definition is very practical, since otherwise every course on groups would have to be preceded by an introduction to logic.) I think that the pure first order theory of groups has only theoretical or didactic significance, as a nice, widely known example of a first order theory.

Likewise, I do not agree with Stefan Geschke that "the completeness theorem does explain why we can do mathematics the way we do." Just the other way around. Clearly, metamathematics is the study of real mathematics by exact mathematical means. Therefore, its notions are intended to mimic those of everyday informal mathematics as faithfully as possible. So a metamathematical result cannot explain or justify anything. What it can do is describe in exact terms and clarify the way mathematics is normally done (and, of course, draw consequences about everyday mathematics from the results of this description). But its results do not affect the way mathematics is normally done. Obviously, we would do everyday mathematics in exactly the same way if the Completeness Theorem did not hold, just as those mathematicians do who have never heard of this theorem. And indeed, we do arithmetic in exactly the same way as mathematicians before Gödel (who might well have thought that true arithmetic was recursively axiomatizable) did.

Compactness is a "semantic" theorem, whose statement involves no "syntactic" concepts such as proofs or provability. So it seems one should not need the latter concepts to prove compactness (and of course, one does not).