Completeness Theorem: A theory T is syntactically consistent -- i.e., for no statement P can the statement "P and (not P)" be formally deduced from T -- if and only if it is semantically consistent: i.e., there exists a model of T.

It is well-known that the Compactness Theorem is an almost immediate consequence of the
Completeness Theorem: assuming completeness, if T is inconsistent, then one can deduce
"P and (not P)" in a finite number of steps, hence using only finitely many sentences of T.
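Schematically, that argument is a chain of implications (a restatement of the paragraph above, writing $T_0$ for the finite subtheory and using Soundness for the last step):

```latex
% Compactness as a corollary of Completeness, in contrapositive form:
% if T has no model, then some finite subset of T has no model.
\begin{align*}
T \text{ has no model}
  &\implies T \vdash P \land \lnot P
      && \text{(Completeness)}\\
  &\implies T_0 \vdash P \land \lnot P \text{ for some finite } T_0 \subseteq T
      && \text{(a deduction uses finitely many sentences)}\\
  &\implies T_0 \text{ has no model}
      && \text{(Soundness)}
\end{align*}
```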

The traditional proof of the completeness theorem is rather long and tedious: for instance, the book Models and Ultraproducts by Bell and Slomson takes two chapters to establish it, and Marker's Model Theory: An Introduction omits the proof entirely. There is a quicker proof due to Henkin (it appears e.g. on Terry Tao's blog), but it is still relatively involved.

On the other hand, there is a short and elegant proof of the compactness theorem using ultraproducts (again given in Bell and Slomson).

So I wonder: can one deduce completeness from compactness by some argument which is easier than Henkin's proof of completeness?

As a remark, I believe that these two theorems are equivalent in a formal sense: i.e., they are each equivalent in ZF to the Boolean Prime Ideal Theorem. I am asking about a more informal notion of equivalence.

UPDATE: I accepted Joel David Hamkins' answer because it was interesting and informative. Nevertheless, I remain open to the possibility that (some reasonable particular version of) the completeness theorem can be easily deduced from compactness.

$\begingroup$I think of the completeness theorem as being mainly the conjunction of two quite separate facts. One is compactness. The other is the recursive enumerability of the set of valid formulas (in a recursive vocabulary). Both of these follow from completeness. Conversely, if you have these two consequences, you can get completeness by defining a deduction of $A$ from hypotheses $H$ to be a finite subset $X_1,\dots,X_n$ of $H$ plus a record of the computation showing that $(X_1\land\dots\land X_n)\to A$ is in the r.e. set of valid formulas.$\endgroup$
– Andreas Blass, Jul 18 '17 at 23:19

3 Answers

There are indeed many proofs of the Compactness theorem. Leo Harrington once told me that he used a different method of proof every time he taught the introductory graduate logic course at UC Berkeley. There is, of course, the proof via the Completeness Theorem, as well as proofs using ultrapowers, reduced products, Boolean-valued models and so on. (In my day, he used Boolean-valued models, but that was some time ago, and I'm not sure if he was able to keep this up since then!)

Most model theorists today appear to regard the Compactness theorem as the significant theorem, since the focus is on the models---on what is true---rather than on what is provable in some syntactic system. (Proof-theorists, in contrast, may focus on the Completeness theorem.) So it is not because Completeness is too hard that Marker omits it, but rather just that Compactness is the important fact. Surely it is the Compactness theorem that has deep applications (or at least pervasive applications) in model theory. I don't think formal deductions appear in Marker's book at all.

But let's get to your question. Since the exact statement of the Completeness theorem depends on which syntactic proof system you set up---and there are a huge variety of such systems---any proof of the Completeness theorem will have to depend on those details. For example, you must specify which logical axioms are formally allowed, which deduction rules, and so on. The truth of the Completeness Theorem depends very much on the details of how you set up your proof system, since if you omit an important rule or axiom, then your formal system will not be complete. But the Compactness theorem has nothing to do with these formal details. Thus, there cannot be a hands-off proof of Completeness from Compactness that does not engage with the details of the formal syntactic proof system. Any proof must establish some formal properties of the formal system, and once you are doing this, the Henkin proof is not difficult (surely it fits on one or two pages). When I prove Completeness in my logic courses, I often remark to my students that the fact of the theorem is a foregone conclusion, because at any step of the proof, if we need our formal system to be able to make a certain kind of deduction or have a certain axiom, then we will simply add it if it isn't there already, in order to make the proof go through.

Nevertheless, Compactness can be viewed as an abstract Completeness theorem. Namely, Compactness is precisely the assertion that if a theory is not satisfiable, then it is because some finite subset of the theory is not satisfiable. If we were to regard these finite obstacles as abstract formal "proofs of contradiction", then it would be true that if a theory has no proofs of contradiction, then it is satisfiable.

The difference between this abstract understanding and the actual Completeness theorem is that all the usual deduction systems are highly effective in the sense of being computable. That is, we can computably enumerate all the finite inconsistent theories by searching for formal syntactic proofs of contradiction. This is the new part of Completeness that the abstract version from Compactness does not provide. But it is important, for example, in the subject of Computable Model Theory, where they prove computable analogues of the Completeness Theorem. For example, any consistent decidable theory (in a computable language) has a decidable model, since the usual Henkin proof of Completeness is effective when the theory is decidable.

Edit: I found in Arnold Miller's lecture notes an entertaining account of an easy proof of (a fake version of) Completeness from Compactness (see page 58). His system amounts to the abstract formal system I describe above. Namely, he introduces the MM proof system (for Mickey Mouse), where the axioms are all logical validities and the only rule of inference is Modus Ponens. In this system, one can prove Completeness from Compactness easily as follows: We want to show that $T$ proves $\varphi$ if and only if every model of $T$ is a model of $\varphi$. The forward direction is Soundness, which is easy. Conversely, suppose that every model of $T$ is a model of $\varphi$. Thus, $T+\lnot\varphi$ has no models. By Compactness, there are finitely many axioms $\varphi_0,\dots,\varphi_n$ in $T$ such that there is no model of them plus $\lnot\varphi$. Thus, $(\varphi_0\land\dots\land\varphi_n)\to\varphi$ is a logical validity. And from this, one can easily make a proof of $\varphi$ from $T$ in MM. QED!
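The chain of implications in Miller's argument can be condensed into one display (a restatement; the curried validity in the third line is one standard way to finish using Modus Ponens alone):

```latex
% MM-Completeness from Compactness in one chain of implications:
\begin{align*}
T \models \varphi
  &\implies T + \lnot\varphi \text{ has no model}\\
  &\implies \{\varphi_0,\dots,\varphi_n,\lnot\varphi\} \text{ has no model, for some } \varphi_i \in T
      && \text{(Compactness)}\\
  &\implies \varphi_0 \to (\varphi_1 \to \dots \to (\varphi_n \to \varphi)\dots) \text{ is valid, hence an MM axiom}\\
  &\implies T \vdash_{\mathrm{MM}} \varphi
      && \text{(}n+1\text{ applications of Modus Ponens)}
\end{align*}
```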

But of course, it is a joke proof system, since the collection of validities is not computable, and Miller uses this example to illustrate the point as follows:

The poor MM system went to the Wizard of OZ and said, “I want to be more like all the other proof systems.” And the Wizard replied, “You’ve got just about everything any other proof system has and more. The completeness theorem is easy to prove in your system. You have very few logical rules and logical axioms. You lack only one thing. It is too hard for mere mortals to gaze at a proof in your system and tell whether it really is a proof. The difficulty comes from taking all logical validities as your logical axioms.” The Wizard went on to give MM a subset Val of logical validities that is recursive and has the property that every logical validity can be proved using only Modus Ponens from Val.

And he then goes on to describe how one might construct Val, and gives what amounts to a traditional proof of Completeness.

$\begingroup$Also, for whatever it's worth, I don't understand your remark about completeness being a foregone conclusion. It is certainly not clear that incompleteness can be remedied by adding further axioms on the fly (cf. the Incompleteness Theorem!).$\endgroup$
– Pete L. Clark, Dec 18 '09 at 22:13


$\begingroup$Come to think of it, I guess you can take $F$ a nonprincipal ultrafilter on the set of primes, define for each prime number $p$ an algebraically closed field $K_p$ of characteristic $p$, and let $K$ be the ultraproduct of the $K_p$'s with respect to $F$. Then Łoś's theorem asserts that any first-order sentence that is true in every algebraically closed field of positive characteristic is true in $K$, an algebraically closed field of characteristic zero.$\endgroup$
– Pete L. Clark, Dec 18 '09 at 23:03


$\begingroup$Yes, that proof was merely about the finite obstacle, which Compactness provides. The situations where one seems to need Completeness over Compactness, as I mentioned in my answer, have to do with the effectivity of the finite obstacle, for example, when the question concerns the computability of a theory or model, or whether there is a computable procedure for eliminating quantifiers, and so on.$\endgroup$
– Joel David Hamkins, Dec 18 '09 at 23:38


$\begingroup$What I meant about Completeness being a foregone conclusion, is that when you start proving Completeness, you periodically need to know various things about the formal system you defined. So, if you are not so interested in having the optimal proof system, then you can simply add them to the system on the fly as the proof proceeds. Of course, this method only works because the theorem is true! But it does mean that you don't have to remember the exact proof system in advance, as long as you remember the essential proof outline. $\endgroup$
– Joel David Hamkins, Dec 18 '09 at 23:42


$\begingroup$I added a description of A. Miller's entertaining MM system at the end.$\endgroup$
– Joel David Hamkins, Jan 25 '10 at 3:41

I think you're looking for the Fraïssé School of Model Theory, which is based strictly on structures and types as primitives and avoids all syntax. I don't know of a good source for the "extremist Fraïssean approach," but Bruno Poizat's "A Course in Model Theory" is a good bridge (if you can tolerate Poizat's eccentric, and sometimes polemic, style).

Poizat starts off defining types (via back & forth) in Chapter 1, then he (apologetically) introduces formulas in Chapter 2. In Chapter 4, he proves the Compactness Theorem using ultrapowers and then presents the Henkin method as an afterthought. (He does more formal deduction later in Chapter 7, but only in order to prove the Incompleteness Theorems.) In the notes at the end of Chapter 4, Poizat writes:

The compactness theorem, in the forms of Theorems 4.5 and 4.6, is due to Gödel; in fact, as explained in the beginning of Section 4.3 [Henkin's Method], the theorem was for Gödel a simple corollary (we could even say an unexpected corollary, a rather strange remark!) of his "completeness theorem" of logic, in which he showed that a finite system of rules of inference is sufficient to express the notion of consequence (see Chapter 7). It could also have been taken from [Herbrand 1928] or [Gentzen 1934], in which results of the same sort were proven.

This unfortunate compactness theorem was brought in by the back door, and we might say that its original modesty still does it wrong in logic textbooks. In my opinion it is a much more essential and primordial (and thus also less sophisticated) result than Gödel's completeness theorem, which states that we can formalize deduction in a certain arithmetic way; it is an error in method to deduce it from the latter.

If we do it this way, it is by a very blind fidelity to the historic conditions that witnessed its birth. The weight of this tradition is apparent even in a work like [Chang-Keisler 1973], which was considered a bible of model theory in the 1970s; it begins with syntactic developments that have nothing to do with anything in the succeeding chapters. This approach---deducing Compactness from the possibility of axiomatizing the notion of deduction---once applied to the propositional calculus gives the strangest proof on record of the compactness of $2^\omega$!

It is undoubtedly more "logical," but it is inconvenient, to require the student to absorb a system of formal deduction, ultimately quite arbitrary, which can be justified only much later when we can show that it indeed represents the notion of semantic consequence. We should not lose sight of the fact that the formalisms have no raison d'être except insofar as they are adequate for representing notions of substance.

There are two key points in there. The first, which comes through rather clearly, is that Model Theory could ultimately be done without any formal syntax and deduction rules. The second, much more subtle point, is present only in the parenthetical remark "and thus also less sophisticated" in the second paragraph. It sounds like Poizat is saying that the Completeness Theorem does not follow from the Compactness Theorem. But it does follow, at least in some abstract sense. The Compactness Theorem does imply that there is some system of finitary rules for deduction which are complete for semantic consequence. The only "sophisticated" part missing is that this set of rules has a simple description. In particular, the Incompleteness Theorems are not consequences of the Compactness Theorem.

About the equivalence of the compactness, completeness, prime ideal theorems over ZF: what really matters here is the case when the language $L$ over which the theory $T$ is defined is not well-ordered. Otherwise, the Henkin proof gives a model of $T$ without using any form of axiom of choice. In particular, it is OK when the considered language is countable.

Now, the implication compactness $\Rightarrow$ completeness in the general case goes as follows (although it still uses completeness for well-ordered theories, which is a theorem of ZF).

Fix a first-order language $L$. Let $T$ be a syntactically consistent theory. Then any finite $F\subseteq T$ is syntactically consistent. Define $L_F$ to be the language whose operational and relational symbols are the ones occurring in $F$. Since $F$ is finite, $L_F$ is finite. Then $F$ is a syntactically consistent theory in the language $L_F$. We have completeness for countable languages, so we have a model $M_F$ of $F$ treated as a theory over $L_F$. The model $M_F$ can easily be extended to a model $M_F'$ of $F$ treated as a theory over $L$ (just give the unused symbols trivial interpretations: empty relations and constant operations). Thus every finite subset of $T$ has a model over $L$, so by compactness $T$ has a model.
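Schematically, the reduction reads (a condensed restatement of the argument just given):

```latex
% Completeness for an arbitrary language L, reduced via Compactness
% to Completeness for finite (hence well-orderable) languages:
\begin{align*}
T \text{ consistent over } L
  &\implies \text{every finite } F \subseteq T \text{ is consistent over the finite language } L_F\\
  &\implies \text{every such } F \text{ has a model } M_F
      && \text{(Completeness, finite language)}\\
  &\implies \text{every such } F \text{ has a model } M_F' \text{ over } L
      && \text{(trivial interpretations)}\\
  &\implies T \text{ has a model}
      && \text{(Compactness)}
\end{align*}
```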

$\begingroup$Very nice. This shows that Compactness provides a clean reduction of Completeness for uncountable languages to finite languages. So if we have Compactness, we can avoid the transfinite issues in Completeness that arise for uncountable languages (which are sometimes difficult issues for students).$\endgroup$
– Joel David Hamkins, Jan 25 '10 at 15:00