The following is a collection of doubts, some of which may have concrete answers while others may not. Any kind of help will be welcome.

Reading Peter Smith's "Gödel Without (Too Many) Tears", particularly where he gives a nonstandard model of Q, I began wondering if the reason for the existence of nonstandard models of arithmetic has anything to do with incompleteness theorems.

I do not know whether categoricity implies completeness (in the sense of every sentence being decidable by proof), but in any case it seems reasonable, when formalizing a given (informal) theory, to try to "force" the formal theory to talk "almost exclusively" about the intended interpretation. So I started wondering whether some axiom (or axiom schema) could be added to PA in order to forbid its most obvious nonstandard models.

The first idea along this line was: we have our class of terms 0, S0, SS0, etc. So, if we found a way to say that every x is equal to some term, we would be done.

But then I realized that our terms are defined inductively and that we are implicitly making the assumption "and nothing else is a term", very similar to the desired "and nothing else is a number" we would like to add to PA. This thought worried me somewhat: every metatheoretic concept (terms, formulas, and even proofs!) rests on assumptions like these! (I still have not found a way out of these worries.)

Leaving that aside: what if we move to a stronger theory (with different axioms, but with an extension by definitions that proves every axiom of PA), for example ZFC? The natural numbers then become 0 (the empty set) together with the von Neumann ordinals (obtained by Pairing and Union) that contain no limit ordinal. The set of natural numbers is obtained from Infinity, selecting them by Comprehension. Kunen says on page 23 of his "The Foundations of Mathematics" that the circularity in the informal definition of natural number is broken "by formalizing the properties of the order relation on omega". Could nonstandard models survive this formalization?

Well, I think I've read somewhere that being omega is absolute, so forcing would not be a way to obtain such nonstandard models. Also, I am not sure whether (the extension by definitions from) ZFC is a conservative extension of PA, but if it were, it would not be able to prove anything about natural numbers (expressible in the language of arithmetic) that PA alone cannot prove. So somehow it looks like nonstandard models must manage to survive! Maybe this is due to the notion of being a subset of a given set not being particularly clear (although it looks like it should not be problematic for hereditarily finite sets).

As François mentions, the nonstandard models are there as a consequence of Löwenheim-Skolem. However, note that the first incompleteness theorem actually produces a 'witness' sentence for the existence of models which are not *elementarily equivalent* to the standard one. Even complete theories have models of all infinite cardinalities by Löwenheim-Skolem, but those models are all elementarily equivalent. The incompleteness theorem guarantees this is not the case for PA.
– Brendan Cordy, Jun 1 '10 at 15:34

4 Answers

Unfortunately, nonstandard models will survive any such attempt. This is guaranteed by the Löwenheim-Skolem Theorem, which says that if a countable first-order theory T has an infinite model then it has a model of every infinite cardinality. Since an uncountable model necessarily has nonstandard elements, this guarantees that there are nonstandard models of T (and even countable ones).
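As a side note, there is a standard textbook argument, via the compactness theorem rather than Löwenheim-Skolem, that makes such a model concrete (this sketch is not part of the answer above):

```latex
% Add a fresh constant symbol c to the language, and write \underline{n}
% for the numeral SS...S0 with n occurrences of S. Consider the theory
\[
  T' \;=\; T \,\cup\, \{\, c \neq \underline{n} \;:\; n = 0, 1, 2, \ldots \,\}.
\]
% Every finite subset of T' is satisfied in the standard model by
% interpreting c as a sufficiently large number, so by compactness T'
% has a model; in it, c denotes an element distinct from every numeral,
% i.e., a nonstandard element.
```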

Actually, in your case you need a "two-cardinal" version of Löwenheim-Skolem. In your ZFC example, you move to a theory which interprets arithmetic inside a definable substructure (the set ω). That definable substructure might still be countable even if the model itself is uncountable. Nevertheless, one can still blow up the size of the natural-number substructure, via the ultrapower construction for example.

To evade the Löwenheim-Skolem Theorem, one has to move beyond first-order logic. For example, in infinitary logic one allows infinite disjunctions such as
$$\forall x(x = 0 \lor x = S0 \lor x = SS0 \lor \cdots)$$
which ensures that the model is standard. Also, second-order logic allows quantification over arbitrary sets under the standard interpretation, which again prohibits nonstandard models. This is the characterization of $\mathbb{N}$ most commonly used by working mathematicians.
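For concreteness, the second-order induction axiom referred to here is standardly written with a quantifier ranging over arbitrary subsets $X$ of the domain:

```latex
\[
  \forall X \,\bigl(\, (0 \in X \;\wedge\; \forall x\,(x \in X \rightarrow Sx \in X))
    \;\rightarrow\; \forall x\,(x \in X) \,\bigr)
\]
% Under full second-order semantics this forces every element to lie in
% the closure of 0 under S, ruling out nonstandard elements.
```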

Although I did not state it clearly, the idea is not to get rid of every nonstandard model (nor of every countable one), which, as you mention, is impossible. I would be happy if one could get rid of one of them (which could be impossible too; I certainly don't know). Restating the second part of the question: would moving to a stronger theory such as ZFC remove any of the nonstandard models of the weaker one (PA)? Moving from Q to PA certainly does! (See Smith's notes for an example.) And, more importantly: does this have any relation to incompleteness?
– Marc Alcobé García, Jun 1 '10 at 14:48

No, moving to ZFC doesn't help. This is what the second paragraph is about: no matter which theory you decide to interpret arithmetic in, if there is a model at all then there must be one with nonstandard integers. Perhaps surprisingly, this is not related to incompleteness per se; these are just properties of first-order logic.
– François G. Dorais♦, Jun 1 '10 at 14:56


To clarify, moving to ZFC does reject some nonstandard models, but not all such models. For example, since ZFC proves Con(PA), no model of PA + ¬Con(PA) can be interpreted as the omega of a model of ZFC.
– François G. Dorais♦, Jun 1 '10 at 16:23

Another question would then be whether ZFC proves any mathematically interesting arithmetic statement (other than Con(PA) or some Gödel sentence for PA) that PA cannot prove.
– Marc Alcobé García, Jun 2 '10 at 14:07

I think François Dorais has done a good job of answering your questions as stated, but let me add some comments that may get at the issues worrying you under the surface. Many people I've met seem to view nonstandard models as demonstrating some kind of "flaw" in the set of axioms. They seem to have a tacit expectation that the purpose of writing down the first-order axioms of Peano Arithmetic is to single out the natural numbers from among all other mathematical structures. But that is not their purpose. If you want to single out the natural numbers, then you should proceed in the normal mathematical manner: say what it means for two structures to be isomorphic, and prove that the natural numbers are unique up to isomorphism.

First-order logic is weak. We say that two structures are *elementarily equivalent* if they satisfy exactly the same set of first-order sentences. It is the norm, rather than the exception, for non-isomorphic structures to be elementarily equivalent. In the case of the natural numbers, the set of first-order sentences satisfied by the natural numbers is usually denoted $Th(\mathbb{N})$. This is an extremely rich set of statements about the natural numbers. (Incompleteness is tangentially relevant here: for any sentence $S$ in the first-order language of arithmetic, either $S$ or its negation is in $Th(\mathbb{N})$, so incompleteness tells us that we can never hope to capture $Th(\mathbb{N})$ with a recursive set of axioms.) Nevertheless, there are plenty of nonstandard models that are elementarily equivalent to $\mathbb{N}$ (i.e., satisfy all the sentences in $Th(\mathbb{N})$).
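In symbols, the two notions just described are simply:

```latex
\[
  Th(\mathbb{N}) \;=\; \{\, \varphi \;:\; \mathbb{N} \models \varphi \,\},
  \qquad
  \mathcal{M} \equiv \mathcal{N}
  \;\Longleftrightarrow\;
  Th(\mathcal{M}) = Th(\mathcal{N}).
\]
```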

There are lots of reasons to study first-order languages, but hoping to use elementary equivalence to capture isomorphism is not one of them. The existence of non-isomorphic but elementarily equivalent structures does not demonstrate any kind of "flaw" in first-order logic, any more than the existence of homeomorphic but non-diffeomorphic manifolds demonstrates a flaw in the axioms for a topological space.

I realize I have mixed two things. I actually have no problem with Löwenheim-Skolem and the existence of elementarily equivalent models of PA of every infinite cardinality. That is one reason for the existence of nonstandard models of arithmetic, but not the only one. In the case of omega in ZFC, I didn't know about the ultrapower construction, but, as Brendan Cordy said, the first incompleteness theorem gives us an undecidable arithmetic statement, and hence a different reason: we then have different models that cannot be elementarily equivalent.
– Marc Alcobé García, Jun 2 '10 at 14:00

It's worth pointing out explicitly that this requires assuming ECT in the metatheory. There are certainly nonstandard models of ECT in usual first-order logic.
– Carl Mummert, Jun 15 '10 at 11:44

Yeah: it follows from ECT that there are no nonstandard models of HA; it does NOT follow from HA that there are no nonstandard models of ECT. But it does not follow that there are, either. It requires assuming classical logic in the metatheory to show that.
– Daniel Mehkeri, Jun 15 '10 at 22:48

Mightn't it be enough just to add axioms that contradict ECT and also permit the desired construction? For example, the compactness theorem of logic might be provable from the fan theorem; I don't know if it is or not. But since ECT implies the negation of the fan theorem intuitionistically, and since the compactness theorem for countable theories is equivalent to the fan theorem classically, this is at least plausible. I know there is some existing work on intuitionistic model theory (e.g. jstor.org/pss/2271944) but I'm not familiar enough to skim for this result.
– Carl Mummert, Jun 16 '10 at 11:29

ECT can be weakened too. The paper I cited uses a weaker form, and the fan theorem is consistent with that form, so your example doesn't quite work. But you were probably thinking of WKL. Classically WKL is equivalent to the fan theorem, but constructively WKL implies a restricted form of the law of excluded middle called LLPO. So it might work.
– Daniel Mehkeri, Jun 19 '10 at 15:37

The Löwenheim-Skolem theorem holds for first-order languages. If you replace the induction axiom schema of first-order PA with a single axiom quantifying over properties, you get second-order PA. Second-order PA is categorical but incomplete.

With second-order semantics, we have to distinguish between syntactic completeness and semantic completeness, because the two are no longer the same. Being categorical is stronger than being semantically complete, and PA with second-order induction is semantically complete. No effective second-order theory with equality that has an infinite model is syntactically complete, so the fact that PA with second-order semantics is not syntactically complete is not really its fault. That is: it's not usually interesting to ask whether a second-order theory is syntactically complete, because it isn't.
– Carl Mummert, Jun 15 '10 at 11:35
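The implication mentioned in the last comment (categoricity is stronger than semantic completeness) can be spelled out in one line; this is a standard observation, not part of the original thread:

```latex
% If T is categorical, all models of T are isomorphic, and isomorphic
% structures satisfy the same sentences; hence for every sentence \varphi,
\[
  T \models \varphi \quad\text{or}\quad T \models \neg\varphi,
\]
% which is exactly semantic completeness. Syntactic completeness
% (T \vdash \varphi or T \vdash \neg\varphi) still fails for second-order
% PA, since full second-order logic has no complete effective proof system.
```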