
October 14, 2009

A Dual for Set?

Posted by David Corfield

• The category of sets is a well-pointed topos with a natural numbers object satisfying the axiom of choice.

Do we take then models of ETCS as forming the 2-category of well-pointed toposes with a natural numbers object satisfying the axiom of choice? And is Set initial in this 2-category? If so, is it then an initial (2)-algebra of sorts? And if all that, is there an analogous terminal (2)-coalgebra?

I know about the idea of well-founded and non-well-founded sets as minimal and maximal fixed points for the powerset functor on the category of classes, as in algebraic set theory. I was wondering if there is something similar going on in a structural set theory such as ETCS.

Posted at October 14, 2009 3:10 PM UTC


Re: A Dual for Set?

Do we take then models of ETCS as forming the 2-category of well-pointed toposes with a natural numbers object satisfying the axiom of choice?

That’s certainly reasonable, although you have to specify what the morphisms and 2-cells are. There are two obvious choices for morphisms between toposes: geometric morphisms and logical functors. Logical functors are probably what you want in this context. And I think there are various reasons why noninvertible transformations between logical functors aren’t well-behaved, so probably this is really only a (2,1)-category, but that’s okay.

And is Set initial in this 2-category?

I’m almost positive the answer is “no.” Presumably in order to ask this question, you have some ambient set+class theory in which you can define the 2-category of all (possibly large) models of ETCS in such a way that it contains the category Set of (small) sets. Now usually the initial object in a (2-)category of toposes is a “free topos” or “term model,” so if this 2-category has an initial object I would expect it to be of that sort. (I’m actually doubtful that it even has an initial object, since well-pointedness and choice aren’t obviously expressible in the type-theoretic form used for constructing a free topos.)

Moreover, there are also plenty of small models of ETCS, and it seems pretty unreasonable to expect a logical functor from Set to a small topos. Note that free toposes are countable, since they’re built out of logical terms, but also, for instance, the category of sets of cardinality less than ℶ_ω is a small model of ETCS (as long as your ambient theory is strong enough to define ℶ_ω).

It is true that Set is initial in the 2-category of cocomplete models of ETCS—but that’s because (up to equivalence) it’s the only object of that 2-category!

Re: A Dual for Set?

It is true that Set is initial in the 2-category of cocomplete models of ETCS—but that’s because (up to equivalence) it’s the only object of that 2-category!

This is only true if the morphisms in that 2-category are cocontinuous logical functors. If they’re arbitrary logical functors, then Set might still have nontrivial endomorphisms. I don’t know whether anyone has asked that question, let alone answered it; the answer might even conceivably relate to the possible existence of large cardinals. So that’s another reason why I would expect the answer to your original question to be “no”: Set might have endomorphisms in the 2-category of models of ETCS.

Re: A Dual for Set?

From what I’m able to gather, that sounds right to me. From the Wikipedia article on measurable cardinal:

Equivalently, κ is measurable means that it is the critical point of a non-trivial elementary embedding of the universe V into a transitive class M.

where “elementary embedding” corresponds to “logical functor” and “critical point” means κ doesn’t map to itself. In other words, the existence of a measurable cardinal in Set means (in structural terms) that there exists a logical endomorphism on Set not isomorphic to the identity, if I have my facts straight.

Re: A Dual for Set?

in which he shows that the nonexistence of a measurable cardinal in Set is equivalent to the statement that every exact endofunctor on Set (exact = preserving finite limits and finite colimits) is isomorphic to the identity. That exactness condition is of course much weaker than the condition of being logical.

Re: A Dual for Set?

Thanks, Todd. So if there are no measurable cardinals, then there are no nontrivial exact endofunctors of Set, and a fortiori no logical ones, whereas if there are measurable cardinals, then there are nontrivial exact endofunctors of Set. However, the question of nontrivial logical endofunctors of Set in the presence of measurable cardinals is, I think, subtler. The existence of a measurable cardinal is indeed equivalent to the existence of an elementary embedding j: V → M, but this doesn’t give a functor Set → Set, only a functor Set → Set_M, where Set_M is the category of sets and functions in the model M. An elementary embedding j: V → V requires the existence of a Reinhardt cardinal, which is immensely stronger than the existence of a measurable cardinal—so strong that it’s inconsistent with the axiom of choice. On the other hand, having an elementary embedding says much more than having a logical functor: the former preserves and reflects the truth of all sentences, while the latter only preserves (and does not reflect, unless it’s also conservative) the truth of bounded sentences.

Re: A Dual for Set?

The existence of a measurable cardinal is indeed equivalent to the existence of an elementary embedding j: V → M, but this doesn’t give a functor Set → Set, only a functor Set → Set_M where Set_M is the category of sets and functions in the model M.

Okay, thanks for this and other comments. I’m still trying to get myself straightened out here; maybe you can help.

I’m looking at Bell’s notes on measurable cardinals. Assume we have a model (V, ∈) of ZFC which has a measurable cardinal κ, in other words a cardinal κ which has an ultrafilter U closed under intersections of arbitrary subset collections of size less than κ. The M you refer to is basically the ultrapower V^κ/U, which is a model of ZFC essentially by Łoś’s theorem; we have to be a bit careful about the well-foundedness axiom (no infinite descending chains x_{n+1} ∈ x_n), but the fact that U is closed under countable intersections is used to take care of that. I’ll use the notation (M, ∈_U) to refer to this ultrapower construction.

Following Bell (which I’m guessing is a pretty standard treatment), we can isomorphically identify M (or rather the structure (M, ∈_U)) with a transitive substructure of (V, ∈). In other words, because (M, ∈_U) is well-founded, we may define an embedding

e: M → V

by recursion: letting f/U denote an element of the ultrapower, i.e., a U-equivalence class of a function f: κ → V, we recursively define

e(f/U) = { e(g/U) : g/U ∈_U f/U }

(I guess to do this properly, we have to use Scott’s trick.) So we have on the one hand an elementary embedding

j: V → M = V^κ/U

and on the other an embedding of transitive models

e: M ↪ V

and it seems to me that in some places Bell is really contemplating the composite

e ∘ j: V → V

because otherwise I am slightly mystified by statements such as “any ordinal less than κ is a fixed point of j” or “j restricted to ORD maps ORD into itself”.

If we rename e ∘ j just j, then I was vaguely putting forth the surmise that the embedding j: V → V induces a logical functor Set → Set. Maybe I was wrong to surmise that, but the situation is still murky to me.

Your information on Reinhardt cardinals tells me it’s hopeless to expect that j: V → V is an elementary self-embedding, but as you point out in your comment, that implies something much stronger than the functor Set → Set being logical. So I don’t know – what do you think?

On the other hand, having an elementary embedding says much more than having a logical functor: the former preserves and reflects the truth of all sentences, while the latter only preserves (and does not reflect, unless it’s also conservative) the truth of bounded sentences.
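As an aside, the recursive definition of e above is an instance of the transitive (Mostowski) collapse. Purely as an illustration of the shape of that recursion, here is the same idea run on a finite well-founded relation in Python; the element names and the relation are invented for the example, and of course the real construction is class-sized:

```python
# A finite well-founded, extensional "membership" relation, given by
# predecessor sets: pred[x] is the set of y with y "in" x.
pred = {
    "a": set(),           # collapses to {} (i.e. the von Neumann 0)
    "b": {"a"},           # collapses to {0} = 1
    "c": {"a", "b"},      # collapses to {0, 1} = 2
    "d": {"b"},           # collapses to {1}
}

def collapse(x):
    # e(x) = { e(y) : y in x }, defined by recursion on the
    # well-founded relation, exactly as in the definition of e above.
    return frozenset(collapse(y) for y in pred[x])

empty = frozenset()
assert collapse("a") == empty
assert collapse("b") == frozenset({empty})
assert collapse("c") == frozenset({empty, frozenset({empty})})
```

The image of `collapse` is a transitive set of hereditarily finite sets, mirroring how e identifies M with a transitive substructure of V.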

Re: A Dual for Set?

Yes, I think you’re right that set theorists frequently confuse the elementary embedding j: V → M with the composite V → M ↪ V. By “M” they almost always mean the substructure of V that is the transitive collapse of the actual ultrapower. And since to material set theorists a function is just a set (or class) of ordered pairs, it doesn’t come with information about its codomain, so a function V → M is the same as a function V → V that happens to land inside M.

However, it’s only when considered as a map V → M that j is an elementary embedding. I don’t know what one can say about the composite V → M ↪ V. I don’t think that in my scanty reading on the subject I’ve encountered anything about what, if anything, is preserved by M ↪ V.

what do you think?

Do you mean, do I think there might be a nontrivial logical endomorphism of Set? Well, so far we know that such a thing (1) implies the existence of a measurable cardinal, by Blass’s result, and (2) is implied by the existence of a Reinhardt cardinal. So it is, itself, a large-cardinal axiom of sorts. The question is, I guess, how much weaker it is than a Reinhardt cardinal, and in particular whether it’s enough weaker to be consistent with choice. I don’t think I have much intuition for that. We could see whether one could extract more out of Blass’s proof if the functor is logical and not just exact.

Re: A Dual for Set?

Echoing Mike, I’m pretty sure there is no initial object.

As he said, there is a problem with well-pointedness. For example, the first-order theory of ℕ has an undecidable sentence; if that sentence had a definite truth value “true” or “false” in an initial object, then that truth value would have to be preserved under any logical morphism, hence be either true in every model or false in every model, and the sentence would therefore be decidable.

Re: A Dual for Set?

An interesting question that occurs to me is whether the constructible universe L (or rather, its category of sets) is initial in some (2-)category of toposes. The minimality of L suggests that it might be initial in the 2-category of models of ETCS that contain all the ordinals, which would be a very interesting result. I’ve been wondering for a while whether there is some structural characterization of L.

Re: A Dual for Set?

Is it that any topos in this 2-category is perfectly capable of acting as Set?

That’s how I understand it. It’s just like the way that any set with an identified element and a successor operation satisfying a certain universality property is perfectly capable of acting as the natural numbers.

All that ever matters to sets is what the axioms say you can do with them. Beyond that, one model is just as good as any other.

Re: A Dual for Set?

It’s just like the way that any set with an identified element and a successor operation satisfying a certain universality property is perfectly capable of acting as the natural numbers.

Actually, I think that’s a misleading comparison. Any two natural-numbers-objects are uniquely isomorphic, but there are lots of inequivalent models of ETCS. In fact, NNOs are initial objects of a certain category of algebras. Initiality in the 2-category of models of ETCS would characterize a model up to equivalence, if such an initial model existed, but we’ve more or less established that it doesn’t.
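For concreteness, the initiality of an NNO amounts to primitive recursion: given any “algebra” (a set X with a chosen element z and an endomap s), there is exactly one map out of ℕ sending 0 to z and successor to s. A minimal Python sketch (the names `rec`, `z`, `s` are just for this illustration):

```python
def rec(z, s, n):
    # The unique algebra map h: ℕ → X determined by
    # h(0) = z and h(n+1) = s(h(n)).
    x = z
    for _ in range(n):
        x = s(x)
    return x

# The unique maps to two different algebras:
assert rec(0, lambda x: x + 2, 5) == 10     # h(n) = 2n into (ℕ, 0, +2)
assert rec("", lambda t: t + "|", 3) == "|||"  # tally marks into strings
```

Uniqueness is the point: any h satisfying the two equations agrees with `rec` on every n, which is exactly the sense in which the NNO is an initial algebra, whereas models of ETCS have no such universal property relating them.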

Re: A Dual for Set?

Or is that the wrong way of looking at things? Is it that any topos in this 2-category is perfectly capable of acting as Set?

I would say yes.

Let me state a question which I believe is completely analogous to your question, which I hope will make clear the sense in which it is the wrong question. Consider the collection of all models of ZFC. Does “the” model of ZFC have any special property inside this collection? Well, if you are a Platonist and believe that there is a “real” model of ZFC, then yes—it’s special because it is that “real” model. If not, then in order for “the” model of ZFC to have any meaning, you have to have chosen a particular model—in which case it’s special because it’s the one you have chosen.

I think the question about special properties of Set is completely analogous. As you say, any topos in this 2-category is perfectly capable of “acting as Set” (modulo the irrelevant and easily-rectified issue of how ZFC is stronger than ETCS). What’s special about “the” category Set? It’s the one you’ve chosen to call Set.

(It is special because it is the unique cocomplete topos in that 2-category. But of course the notion of “cocomplete” depends on having an ambient notion of “set”, so this is circular. Every model of ETCS is the unique model of ETCS that is cocomplete with respect to itself.)

Re: A Dual for Set?

Is it that any topos in this 2-category is perfectly capable of acting as Set?

To elaborate a little more: while this is true, we should remember that the choice of which topos in this 2-category we call “Set” affects the rest of mathematics. We are perfectly free to choose any such topos to call Set, but we have to fix one such topos before we do anything else, because notions like group, manifold, locally small category, etc. all depend on having a prior notion of what a “set” is. I expect you know this, but just saying that “any model of ETCS is capable of acting as Set” might give some people the incorrect impression that models of ETCS could be swapped out for each other without changing the rest of mathematics, like yanking a tablecloth out from under a fancy dinner setting and replacing it with a new tablecloth before anything can spill.

Re: A Dual for Set?

Can you give an example from “real mathematics” (ie groups, manifolds, etc) showing that the choice of model for ETCS affects things?

Try the Whitehead problem. Every free abelian group A satisfies Ext^1(A, ℤ) = 0; must every abelian group A that satisfies Ext^1(A, ℤ) = 0 be free?

If you formalise group theory within ETCS as a set theory, then this is undecidable. (It is still undecidable in ZFC, equivalently in ETCS + Collection.) In some models it is true, while other models have counterexamples.

Re: effect of foundations on “real” mathematics

Thanks. That’s great.

It would be nice to have a statement of the following kind: answers to reasonable questions in real mathematics do not depend on the choice of foundations. This is probably hopelessly naive, but I’d like to pursue it a bit, if people don’t mind.

You could say that the problem with the Whitehead problem is that freeness is not a good concept. Why exactly isn’t it? One possible answer is that it involves an existential quantifier—a module is said to be free if it has a basis. This condition for goodness is no doubt too strict, but ignoring that, we have question #1: Is there a statement not involving existential quantifiers whose truthfulness depends on the choice of a model for ETCS?

A more refined reason why freeness is not a good concept is that it is not local: a locally free module is not in general the same as a free module. (Hmmm, are they the same over ℤ? Certainly for finitely generated modules they are. But over ℤ, flatness is the same as torsion-freeness, which is not the same as freeness. I don’t know which of these, if either, is equivalent to local freeness.) Anyway, this gives question #2: Is there a statement whose existential quantifiers are only of a local nature and whose truthfulness depends on the choice of foundations?

Question #3: Do any experts here on topos theory and syntax know the right way of making “existential quantifier of a local nature” precise?

This is, of course, the Continuum Hypothesis. There are models of ETCS that satisfy it, and others that violate it (assuming that there’s a model of ETCS at all).

This is a bit of a trivial answer to your question, because it would perhaps be more natural to use existential quantifiers in the formulation: ‘for every set AA such that there exist injections…’. But it’s an answer.

(The subject matter isn’t trivial at all, in my opinion, even though it’s outside most people’s interests.)

Re: effect of foundations on “real” mathematics

answers to reasonable questions in real mathematics do not depend on the choice of foundations.

Do you have some intuitive notion of “reasonable” or “real” in mind other than “something that doesn’t depend on the foundations?” I don’t see anything intuitively unreasonable or unreal about the Whitehead problem, or the continuum hypothesis, or projective determinacy, or Vopenka’s principle, or any of the many other mathematical statements whose truth depends on foundational axioms.

Is there a statement not involving existential quantifiers whose truthfulness depends on the choice of a model for ETCS?

At least in classical logic, which is the logic of ETCS, any existential quantifier can be removed by turning it into a universal one: ∃ ≡ ¬∀¬, and conversely. So I don’t really see how to tell whether a statement “involves” existential quantifiers.
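That classical equivalence, ∃x.P(x) ≡ ¬∀x.¬P(x), can at least be sanity-checked over a finite domain; a small Python sketch with arbitrary example predicates:

```python
# De Morgan duality of quantifiers, checked on a finite domain of integers.
domain = range(-5, 6)

def P(x):
    return x * x == 9   # holds for x = ±3, so ∃x.P(x) is true here

def Q(x):
    return x * x == 2   # holds for no integer, so ∃x.Q(x) is false here

for predicate in (P, Q):
    exists = any(predicate(x) for x in domain)                 # ∃x.P(x)
    not_forall_not = not all(not predicate(x) for x in domain) # ¬∀x.¬P(x)
    assert exists == not_forall_not

assert any(P(x) for x in domain) is True
assert any(Q(x) for x in domain) is False
```

Of course this only illustrates the propositional duality; it says nothing about detecting quantifiers syntactically, which is exactly the difficulty raised above.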

I also have the feeling that you’re pretty close to running into Gödel’s Incompleteness Theorem.

Re: effect of foundations on “real” mathematics

I’ve read about foundations but am not an expert. Take ZFC, where C is the axiom of choice, which enables many results to be proven. The axiom of choice was proven to be independent of the other axioms. I don’t think that there is such a thing as absolute truth about what axioms should be included in a system. The criteria appear to be a mutual agreement on usefulness and a self-evident reasonableness. Mathematics is a language, which means it will always be arbitrary, symbolic, and an abstraction. I think the idea that the Church–Turing Thesis will remain forever unproven is a related concept/question, physically speaking. What is the foundation that computability depends upon? Maybe my reply is too elementary.

Re: effect of foundations on “real” mathematics

Here’s another example which I think all mathematicians should spend a little time contemplating: there exist particular, specific Diophantine equations for which the existence or nonexistence of solutions is undecidable in ZFC (or whatever your preferred foundation is). Moreover, in principle it’s completely clear how to go about writing down such an equation: start from a sentence T that you know is undecidable, and use Gödel encoding to write the property “x is a natural number coding for a proof of T” as asserting that x satisfies some polynomial. I believe that at least for simple systems like Peano Arithmetic, such polynomials have been written down with as few as 9 variables and of degree as low as 4 (though not both at once).
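The flavor of Gödel encoding can be conveyed by the classic prime-exponent trick, which codes a finite sequence of natural numbers as a single number. This toy Python sketch is not the encoding used for actual proofs, which is far more elaborate, but it shows the mechanism of turning syntax into arithmetic:

```python
def primes():
    # Naive prime generator (trial division; fine for a tiny demo).
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def godel_encode(seq):
    # Code (a_1, ..., a_k) as 2^a_1 * 3^a_2 * 5^a_3 * ...
    code = 1
    for p, a in zip(primes(), seq):
        code *= p ** a
    return code

def godel_decode(code, length):
    # Recover the sequence by reading off the prime exponents.
    seq = []
    for _, p in zip(range(length), primes()):
        a = 0
        while code % p == 0:
            code //= p
            a += 1
        seq.append(a)
    return seq

assert godel_encode([1, 2, 3]) == 2 * 3**2 * 5**3   # = 2250
assert godel_decode(2250, 3) == [1, 2, 3]
```

Because encoding and decoding are ordinary arithmetic, statements about sequences of symbols (such as “x codes a proof of T”) become statements about numbers, which is the first step toward expressing them by polynomials.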

Re: effect of foundations on “real” mathematics

start from a sentence T that you know is undecidable, and use Gödel encoding to write the property “x is a natural number coding for a proof of T” as asserting that x satisfies some polynomial.

I don't understand this. If x is a natural number coding for a proof of T, then T is provable. If I know that T is undecidable, then I know that there is no such x (nor is there any natural number coding for a refutation of T). It seems to me that we need to take a sentence whose undecidability is undecidable.

Re: effect of foundations on “real” mathematics

If x is a natural number coding for a proof of T, then T is provable. If I know that T is undecidable, then I know that there is no such x

I think this is essentially the standard argument for why a “Gödel sentence,” which asserts its own unprovability, must “actually” be true. But it’s still true that ZFC can prove the undecidability of such a sentence without proving that the sentence is true, because saying “sentence T is undecidable in ZFC” is always an (arguably sloppy) shorthand for saying “if ZFC is consistent, then it proves neither T nor ¬T.” So just because ZFC proves this statement doesn’t mean that it proves T, since it can’t prove its own consistency (unless it is actually inconsistent).

“This paper describes the enormous impact of Gödel’s work on mathematical logic and recursion theory. After a brief description of the major theorems that Gödel proved, it focuses on subsequent work extending what he did, sometimes by quite different methods. The paper closes with a new result, applying Gödel’s methods to show that if scientific epistemology (what Chomsky calls our “scientific competence”) could be completely represented by a particular Turing machine, then it would be impossible for us to know that fact.”

Yes, but HOW? Re: effect of foundations on “real” mathematics

Begs the question, pondered deeply by n-Category folks, HOW, if at all, “scientific epistemology could be completely represented by a particular Turing machine” – as that would seem to require a formalization of at least Applied Math as used in Science, PLUS formalization of what we THINK is the Physics of the universe that defines what is possible for intelligent organisms, measuring equipment, and the ambient reality.

How much of this HAS been done? What guidelines do we even have to find a path that leads towards any pair of these?

I’m not asking rhetorically.

Maybe 1/4 of all my draft monographs grapple with even a fragment of this. Such as Intelligent Agents with several kinds of liars. Such as Quantum Cosmology with various toy models of QM and GR. Such as Decimal Godelization to measure the parameters of the fractal structure of formalized Propositional Calculus.

scientific competence completely represented by a TM

‘could be completely represented by a particular Turing machine, then it would be impossible for us to know that fact.’
———————————-
http://www.cs.rochester.edu/~nelson/courses/csc_173/computability/undecidable.html
“* Given a Pascal compiler written in Pascal, we might want to know if the compiler halts when given itself as input. [There are lots of similar results.]

Given the program HALTS, we can construct a new (more limited) program that tests whether a program P halts when the input data is a copy of P. [leads to the Conclusion]
This is a contradiction! Since our only assumption was the existence of HALTS, procedure HALTS cannot exist.”
————————————

I don’t think there is a claim that such a grand Scientific Competent TM (SCTM) program exists. For one thing, if it had been constructed or crafted, then it would be distinguishable. This claim uses “could” meaning if it existed, it could not be distinguished as the SCTM by itself because that would be an undecidable procedure.

On another note, have you heard of Wolfram’s claim that the universe may be a universal CA rule that has evolved over billions of years? He wasn’t able to find it, but he talked about it. The problem is that testing such a candidate would take maybe 9 billion years; one can’t really distinguish it, or match it up with our current perception of reality, any sooner. That is more intractable than undecidable. Anyway, I agree with you that the SCTM is not at all feasible.

Steve Witham: “Ideas like that we’re running within a Turing machine, or a perfect simulation, or that if the laws of physics can be represented as a computer program, then we might be running within a computer, are all fun except they’re completely unfalsifiable. So they can’t be ‘facts’ at all.”

JVP: “The Matrix” hinted at types of falsification. For example, the same black cat walks past the same doorway twice in a row in rapid succession. So, what experiments can we design to tell? Is the Bell Inequality a hint of the Operating System? Is the apparent incompatibility of QM and GR a problem of syntax or semantics?

Kenneth Rapp: “One interesting bit about the Matrix was what the machines described as humans’ tendency to reject its reality… in the case of the first Matrix because it was ‘too perfect.’ Perhaps a well designed Matrix-type simulation would allow for tantalizing clues as to a higher or more true reality to sort of placate this restless cynicism in the human mind, without actually allowing anything to bear fruit. Perhaps they put the stars so far out to let us waste time trying to reach them.”

About Rapp: I thought it might be some kind of cosmic imprint when I read “Fractals Everywhere”, but I don’t think it’s designed. Thanks for the Matrix thoughts. I found the full “After Godel” paper online which has a sensible explanation.

www.cs.tau.ac.il/~nachum/papers/LJ/After_Goedel.pdf by Hilary Putnam [self-reference]
“I shall close by describing some recent research of mine. What provoked this research is actually a conversation I had with Noam Chomsky more than twenty years ago. I asked Noam whether he thought our total competence – not just the competence of the “language organ” that he postulates, but also the competence of the “scientific faculty” that he also postulates – can be represented by a Turing Machine (where the notion of “competence” is supposed to distinguish between our true ability and our “performance errors”). He said “yes”. I at once thought that a Gödelian argument could be constructed to show that if that were right, then we could never know – not just never prove mathematically, but never know even with the help of empirical investigation – which Turing Machine it is that simulates our “scientific competence”.”

Re: effect of foundations on “real” mathematics

It seems to me that we need to take a sentence whose undecidability is undecidable.

Actually, I think you have a point; I was too hasty. I think I was unconsciously thinking of the Gödel sentence which asserts its own unprovability. Probably the neatest form of the argument is the one alluded to by Todd, namely:

The set Thm of all natural numbers which code for theorems of ZFC is recursively enumerable (just start listing all the consequences of all the axioms, applying all the rules of logical inference).

Therefore, by the Matiyasevich–Robinson–Davis–Putnam theorem, the set Thm is Diophantine. This means that there is a polynomial p with integer coefficients such that n ∈ Thm if and only if there exist integers a_1, …, a_k such that p(n, a_1, …, a_k) = 0.

Let n be the Gödel number of the statement “0 = 1”. Then the statement “n ∉ Thm” is equivalent to “ZFC is consistent.” Thus, by the Second Incompleteness Theorem, assuming that ZFC is consistent (so that n ∉ Thm), ZFC cannot prove that n ∉ Thm. And it can’t prove that n ∈ Thm either, since it is consistent and n is not actually in Thm.

Therefore, if we define q(x_1, …, x_k) = p(n, x_1, …, x_k), ZFC can neither prove that q has a solution in integers (since that would imply that n ∈ Thm) nor prove that it doesn’t (since that would imply that n ∉ Thm). Unless, of course, ZFC is inconsistent, in which case it can prove both.

Re: effect of foundations on “real” mathematics

Is this really true? Are you saying there is a polynomial P with integer coefficients in some finite number of variables such that whether P has a solution in integers depends on the choice of foundations? I don’t know much about foundations, but it can’t be that the value of a polynomial at a point depends on the foundations you choose, can it? I must be misunderstanding something.

By the way, I’m certainly not attacking work on foundations. I just want to try to understand better why foundational issues are never relevant in my own work.

Re: effect of foundations on “real” mathematics

there exist particular, specific Diophantine equations for which the existence or nonexistence of solutions is undecidable

Is this really true? Are [you] saying there is a polynomial P with integer coefficients in some finite number of variables such that whether P has a solution in integers depends on the choice of foundations?

Basically, yes.

I do want to point out that “choice of foundations” can mean multiple things. Often when we say “choice of foundations” we are referring to the choice of an axiom system for foundations, e.g. ZFC, NBG, MK, ETCS, SEAR, PA, ITT, etc. This can have an effect on what we can prove to be true. But even after we pick a particular axiom system, there are many possible models of those axioms. Everything that we can prove in, say, ZFC, will, of course, be true in all of its models, but a statement which is neither provable nor disprovable from the axioms of ZFC will necessarily be true in some models and false in others. But when we do mathematics, we don’t usually “choose” a particular model of our axioms, if only because in order to actually specify such a model we’d need a more powerful axiom system in the meta-logic.

What I was thinking of is the latter sort of “choice.” That is, there exists a polynomial P with integer coefficients in some finite number of variables such that the statement “P has a solution in integers” is true in some models of ZFC and false in others. However, it’s also true that there exists such a polynomial P such that “P has no solution in integers” is provable in ZFC but not in ETCS, and a different polynomial Q such that “Q has no solution in integers” is provable in MK but not in ZFC. These implications don’t reverse, since ZFC is strictly stronger than ETCS and MK is strictly stronger than ZFC.

I don’t know much about foundations, but it can’t be that the value of a polynomial at a point depends on the foundations you choose, can it?

No, it can’t, not for any particular point that you can write down. What changes between models is how many integers there are.

If you believe that there exists an “actual” or “standard” set of integers, as many people seem to do, and that ZFC is consistent, as many people also seem to do, it follows that if the solvability of P in integers is undecidable in ZFC, then P doesn’t have a solution in “standard” integers. For if it did, then we could write down that solution, and that would be a proof (in ZFC, or even PA) that it does have a solution. The point is that models of ZFC can contain “nonstandard” integers which “satisfy” that polynomial.

However, ever since I learned Gödel’s Incompleteness Theorem, I’ve never really understood why people believe that there exists a “standard” set of natural numbers. I’m not even sure what sort of “existence” is meant here.

I just want to try to understand better why foundational issues are never relevant in my own work.

I would answer that there’s probably no deep reason for it; they just haven’t happened to come up in the particular work that you were doing. ZFC is a very powerful axiom system, so it’s fairly rare that you come across something that it can’t decide one way or the other. Even ETCS, though much weaker than ZFC in the absolute scheme of things (unless you add an axiom of replacement), is quite powerful for everyday mathematics.

Re: effect of foundations on “real” mathematics

Thanks, Mike. So it seems you’re saying there is a Turing machine that halts in one model of ZFC but not in all. For example, consider the Turing machine that searches for solutions to the Diophantine equation that you asserted above exists. Is that right? I hope I don’t come across as laughably naive if I say this astounds me. Or is there some trickery about what the definition of an integer is? If not, someone should code this Turing machine up and search for solutions. We might be able to discover which model of ZFC we’re living in! :)

Keep in mind that a Turing machine, despite its description in terms of paper tape, is not really a physical object. It's not just that we usually think about it as a Gedankenexperiment; we could not build one if we tried. Besides having an infinitely long tape, it may run for an infinitely long time. We can approximate a Turing machine by a physical computer, but that is only an approximation.

Suppose that you pick one of these undecidable polynomials, write a computer program to search for solutions, and run it. Assuming no errors in your programming or in the operation of your computer, if that program ever finds a solution, then ZFC is inconsistent, and that is as definite as anything. But if the program doesn't find a solution first, then the computer will eventually run out of memory or break down from the inexorable march of entropy.

So whether this program finds a solution doesn't depend on ‘which model of ZFC we're living in’. It depends on whether there is a solution that it will find before the approximation breaks down. If you specify the size of the memory and the number of steps that the computer can take before it stops working, then you can in principle calculate whether the program will find a solution, and this is independent of the model.

When you abstract away the physical limitations and ask whether it will find a solution if it runs forever, then you get something potentially model-dependent. But now you are no longer talking about anything real.
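This distinction between the budgeted physical search and the idealized one can be sketched in a few lines of code. Everything here is a stand-in: `derives_contradiction` is a hypothetical decidable proof-checker for some theory T (proof-checking is decidable, but no such toy one-liner encodes a real theory). The point is only that with a finite budget the outcome is a definite, model-independent calculation; model-dependence can enter only in the budget-free idealization.

```python
from itertools import count

def search_for_inconsistency(derives_contradiction, budget=None):
    """Enumerate candidate proofs (coded as natural numbers) and halt
    as soon as one derives 0 = 1.  With budget=None this is the ideal
    Turing machine: it loops forever if the theory is consistent.
    A finite budget models a physical computer's finite lifetime and
    makes the answer a definite, calculable fact."""
    candidates = count() if budget is None else range(budget)
    for n in candidates:
        if derives_contradiction(n):
            return n      # found a proof of 0 = 1: inconsistent
    return None           # budget exhausted: no verdict either way

# Toy stand-in: a "theory" whose candidate proof number 12 derives 0 = 1.
toy_checker = lambda n: n == 12
print(search_for_inconsistency(toy_checker, budget=100))            # 12
print(search_for_inconsistency(lambda n: False, budget=100))        # None
```

With `budget=None` and a consistent "theory", the call would never return, which is exactly the unrealizable idealization Toby describes.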

Re: effect of foundations on “real” mathematics

I find myself going a bit nuts over something; I hope someone can help.

It seems to me it should in principle be possible to write down a Diophantine equation which has a solution if and only if NBG is inconsistent. (Basically, the set of deductions in NBG whose conclusion is 0 = 1 is recursively enumerable.) That is a lesson we have from Matiyasevich and others. So we have some explicit Diophantine equation whose locus is a subset of a lattice ℤ^m.

So, it seems we can build a Turing machine, some finite state machine, which enumerates all points in ℤ^m and checks each in turn to see whether it fits the Diophantine equation. If it ever does, it prints out “NBG IS INCONSISTENT STOP” and halts.

Keep in mind this is some finite computer, some machine having a finite number n of possible internal states, with an infinitely long tape as scratch pad.
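The enumerate-and-check machine described above can be sketched as follows. The Pell equation is a toy stand-in, not the (astronomically larger) equation encoding Con(NBG), and the `max_norm` bound stands in for our finite patience; the real machine would enumerate forever.

```python
from itertools import product

def first_solution(poly, dims, max_norm):
    """Enumerate the points of Z^dims in expanding boxes and return
    the first zero of poly, or None if no solution has coordinates of
    absolute value <= max_norm.  The idealized machine runs with no
    bound at all, leaving a check mark for each point it rejects."""
    for n in range(max_norm + 1):
        for point in product(range(-n, n + 1), repeat=dims):
            # only test points on the boundary of the current box,
            # so each point is checked exactly once
            if max(map(abs, point), default=0) == n and poly(*point) == 0:
                return point
    return None

# Stand-in equation: Pell's x^2 - 2y^2 = 1, which does have solutions.
pell = lambda x, y: x * x - 2 * y * y - 1
print(first_solution(pell, dims=2, max_norm=5))  # (-1, 0)
```

For an equation with no solutions the idealized search never halts, which is the whole difficulty.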

Now, on the other hand, there is this bedeviling thing called the busy beaver function, S. My understanding is that S(n) is the least upper bound on how many times an arbitrary Turing machine with n internal states (and using a 2-symbol alphabet {0, 1}) can write the number 1 before it halts. For example, suppose our Turing machine writes the number 1 every time it checks that the current number on its list is not a solution to the Diophantine equation.
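For tiny numbers of states this quantity (Radó's Σ, in the usual notation: the most 1s a halting n-state machine can leave on its tape) can actually be brute-forced. The sketch below does this for 2 states, but note it cheats in exactly the way this thread worries about: it imposes a step cutoff (100) that happens to be safe for 2-state machines, since every halting 2-state machine is known to halt within 6 steps, yet a cutoff like this cannot be justified for general n.

```python
from itertools import product

def run(machine, max_steps):
    """Simulate a 2-symbol Turing machine on an initially blank tape.
    machine maps (state, symbol) -> (write, move, next_state), where
    next_state 'H' means halt after writing and moving.  Returns the
    number of 1s left on the tape if the machine halts within
    max_steps, else None."""
    tape = {}
    pos, state = 0, 0
    for _ in range(max_steps):
        write, move, nxt = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == 'H':
            return sum(tape.values())
        state = nxt
    return None  # did not halt within the cutoff

# Every possible transition: write 0/1, move left/right, go to A/B/halt.
options = list(product((0, 1), (-1, 1), (0, 1, 'H')))
best = 0
for t in product(options, repeat=4):  # one transition per (state, symbol)
    machine = {(0, 0): t[0], (0, 1): t[1], (1, 0): t[2], (1, 1): t[3]}
    ones = run(machine, max_steps=100)
    if ones is not None:
        best = max(best, ones)

print(best)  # 4: the 2-state busy beaver champion writes four 1s
```

Already at 5 states no one knows a provably safe cutoff, so this method stops working almost immediately.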

But now I have some real difficulty. So we’re all set, we run our Turing machine, and we wait. We wait and see if it prints out more than S(n) 1’s (or check marks). If we get past this point, then by definition of S(n), if the machine hasn’t halted by then, it’s never, ever going to halt, and: NBG is consistent!

Aaaaaaaggggghhhhh!!!!!

I mean, I guess in principle the NBG machine can be designed with, I don’t know, let’s say 1000 internal states tops. (Probably Conway can cook up some fractran program for the job.) In principle, S(1000) is computable (the function S(n) itself grows faster than any computable function, but I think each individual value like S(1000) is in principle computable). So in principle, we could even figure out how long we have to wait.

So, all we have to do is wait for some determinate number of kalpas of time and we find out NBG is consistent by arithmetic means? Somewhere I feel I must have made a stupid error. Where?

Re: effect of foundations on “real” mathematics

I don’t think you’ve made a mistake, Todd. I think you’ve simply shown that calculating S(1000) is at least as hard as proving the consistency of NBG (though I don’t see any reason that even in principle we *ought* to be able to calculate S(1000)).

Re: effect of foundations on “real” mathematics

Thanks, Tom. I’m not sure I made clear the extent of my discomfiture though.

My main difficulty is that even if we lay aside the difficulty of placing an upper bound on S(1000) (or whatever it is), the apparent conclusion is that consistency of NBG is decidable. That is, by the busy beaver business, there exists a finite calculation (in PA even) which would decide consistency of NBG: just run the finite state machine long enough.

But NBG is strong enough to prove the consistency of PA. (If NBG is inconsistent, it would of course also prove inconsistency of PA.) Now we seem to be in a terrible bind: there exists a finite calculation in PA which would tell us either that

NBG is consistent, and therefore PA is consistent. In this case, the finite calculation in PA concludes Con(PA); therefore, by Gödel incompleteness, PA is inconsistent. So, from the premise NBG is consistent, we are able to conclude both Con(PA) and not-Con(PA). Therefore NBG is inconsistent. Or…

The calculation tells us that NBG is inconsistent. But either way, we seem to have concluded that NBG is inconsistent.

(I picked NBG for this thought experiment to be on the safe side, because it’s finitely axiomatizable, but I’m not sure this is crucial for the thought experiment to go through. The punch line though seems to be devastating.)

Re: effect of foundations on “real” mathematics

(Why are you even using NBG at all? Couldn’t you argue in exactly the same way replacing it with PA?)

There’s a good chance yes, but as I said here, I decided to play it safe by working with a finitely axiomatizable theory, in this case NBG. But if that’s a red herring, then the overall structure of the (surely fallacious) argument might be clearer by replacing NBG by PA.

For any recursively enumerable theory T, it is well known that there is a Turing machine which halts if and only if T is inconsistent.

Yes, this is due to some combination of Church, Kleene, Rosser, and Turing; I don’t know the exact details of who did what.

From this you may well be able to prove or disprove a formula P which is true iff Con(T) is true, but how do you know you can prove P ↔ Con(T) in T?

(Let me say up front that I am of course not at all worried that I might have destroyed the foundations of mathematics with a bombshell of an argument. I suspect the problem is that there are subtleties to the busy beaver stuff that I haven’t grasped. Still, I’m annoyed that I haven’t pinned down the error.)

For the sake of argument, let’s assume P is the formula “there exists no solution to the Diophantine equation whose locus encodes the r.e. set of proofs of 0 = 1”, and let’s assume T is PA. (By the way, I should explain that the reason I am bringing in this translation into Diophantine equations is to make sure the sentence we’re trying to decide is one of low logical complexity. For example, in the Wikipedia article, there is discussion of an application of busy beaver to a proof of the Goldbach conjecture, which is a Π₂ sentence. I have a feeling that busy beaver arguments will not apply so readily to sentences of greater logical complexity, unless one introduces oracles or something.)

Anyway, to your question. Yes, Con(T) and P ↔ Con(T) are statements about T, in other words are outside the formal framework of T, even if they are encodable in T via mechanisms at the meta-level. You may be hinting at a problem with my argument (and surely there is a problem with my argument), but I’m not quite seeing it.

At the risk of being redundant, I’ll say again: it’s this busy beaver stuff and the sheer finite bounds it seems to place on the problem of deciding difficult or intractable problems like GC that makes me incredibly uneasy, so what I’m trying to do is put this busy beaver application to an acid test: if it can apply to GC, then why couldn’t it apply to Con(T)?

Re: effect of foundations on “real” mathematics

Todd, I think I understand your difficulty.

You’ve (correctly) asserted the existence of a particular Turing machine X that halts iff T is inconsistent (the machine that checks all proofs in T and looks for inconsistencies). Directly from this machine we can construct the formula Con(T).

From X and S(1000) you construct a Turing machine Y that halts with “no” if T is inconsistent, and “yes” if T is consistent. Directly from this machine we construct a formula P, whose validity is equivalent to that of Con(T), because X and Y are equivalent Turing machines (nearly: you have to be a bit careful with halting, because X might not halt).

You claim that since Y halts, P is decidable. (This is true by the theorem that says every recursive function is representable in PA). From that you claim that Con(T) is decidable, and deduce the inconsistency of T.

The problem is that although the validity of P and Con(T) are the same, there will be no proof in T of this fact, i.e. P <-> Con(T) is not a theorem of T.

You may be (probably will be) able to prove in NBG or ZF that P <-> Con(T), but that’s no surprise. ZF already proves Con(T)!

Re: effect of foundations on “real” mathematics

Cheap arguments aside, is it clear to you what my real difficulty is here? And do you claim you’ve somehow addressed the difficulty?

I guess that I don't see the difficulty. You claim that each S(n) is computable, citing Wikipedia, but Wikipedia's argument is cheap. If my cheap argument doesn't lead you to a difficulty, then why does theirs?

Re: effect of foundations on “real” mathematics

OK, with the help of Daniel's comment, I think that I've found a result that, while not making me worry, is at first surprising. Let's suppose that we can test NBG with a 1000-state Turing machine (which is not essential), and suppose that NBG is consistent (which is essential). Then we cannot prove, for any numeral (that is, any string, in the language of NBG, of applications of the successor operation to zero) n, that n > S(1000). So even though presumably there is some number m greater than S(1000) (although one needs excluded middle in the metasystem to prove this), we cannot prove in NBG that this is so (for any particular m given as a numeral in the language of NBG).

Re: effect of foundations on “real” mathematics

I’ll have to think about this. Maybe this is the explanation, but I find it really very creepy and mind-bending. (But maybe I’m getting used to the idea even as I write this.) I’d say it belongs in a Hall of Fame of creepy results. It reminds me a bit of some things said by Edward Nelson, who rejects the Church-Turing thesis.

Re: effect of foundations on “real” mathematics

You're welcome; I'm happy to have helped.

And I wasn't trying to be subtle and cryptic. In retrospect, I should have leapt on your claim that S(1000) is computable and attacked it as meaningless (as Arnold Neumaier implicitly knew, although he applied the insight in the wrong context). But it took me a little while to notice that myself, and by then it was too late; I was already writing something different. (^_^)

Re: effect of foundations on “real” mathematics

Todd Trimble wrote: “So, all we have to do is wait for some determinate number of kalpas of time and we find out NBG is consistent by arithmetic means? Somewhere I feel I must have made a stupid error. Where?”
————————————

Probably, even your stupid errors are quite subtle. I was wondering about this topic and found a summary of NBG (von Neumann–Bernays–Gödel set theory) on the FOM mailing list by the perspicacious Joe Shipman. I’ll provide an excerpt so you can decide if his comments are worth your attention.

http://cs.nyu.edu/pipermail/fom/2001-May/004922.html
“However, though Friedman is correct that “in any finite language supporting a small amount of arithmetic, any system that contains full induction must prove the consistency of any finitely axiomatized subtheory”, so one cannot embed PA in a finitely axiomatized system in the language of arithmetic, one CAN embed PA in a finitely axiomatized system in the language of set theory. This is what I was asking about before, which no one picked up on – are there interesting finitely axiomatizable theories of sets that allow one to do all or most of “ordinary mathematics”?”

Re: effect of foundations on “real” mathematics

Todd wrote:
“It seems to me it should in principle be possible to write down a Diophantine equation which has a solution if and only if NBG is inconsistent. (Basically, the set of deductions in NBG whose conclusion is 0 = 1 is recursively enumerable.)

So, it seems we can build a Turing machine, some finite state machine, which enumerates all points in Z^m and checks each in turn to see whether it fits the Diophantine equation. If it ever does, it prints out “NBG IS INCONSISTENT STOP” and halts.”
——————————-

SH: Well, I wondered if it could effectively be determined which Diophantine equation would produce a solution iff NBG is inconsistent; is your ‘in principle’ a correct assumption?

I realize I’m far more likely than you to have overlooked something important.
================================================

“Computability and Logic” by Boolos, Burgess and Jeffrey, Chapter 11, “The Undecidability of First-Order Logic”
“It follows that if there were an effective procedure for deciding when a finite set of sentences implies another sentence, then the halting problem would be solvable; whereas, by Turing’s thesis, the latter problem is not solvable, since it is not solvable by a Turing machine. The upshot is, one gets an argument, based on Turing’s thesis for (the Turing-Buchi proof of) Church’s theorem, that the decision problem for implication is not effectively solvable.”
========================================

SH: My idea is that the lack of any method “for deciding when a finite set of sentences implies another sentence” defeats the idea that one could selectively discover which Diophantine equation from that set would entail “NBG IS INCONSISTENT STOP”. I read this about NBG from Wikipedia:

NBG A statement in the language of ZFC is provable in NBG if and only if it is provable in ZFC.
NBG, unlike ZFC and MK, can be finitely axiomatized.

which I at the moment doubt weakens my argument: if a finite set doesn’t imply a specific meta-result, neither does an infinite set. Sigh, every once in a while some understanding that I’ve struggled for years to obtain is skewered. I almost didn’t submit this because it seems too likely that you have considered this and think my idea is apples and oranges.

One thing I’ve noticed is that the recursively-enumerable property is often mentioned, but it is the lack of recursiveness that produces undecidability because recursively-enumerable doesn’t also always mean recursive.

Yuri Matiyasevich and Julia Robinson.
A quote from their article: “The consistency of a recursively axiomatizable theory is equivalent to the assertion that some definite Diophantine equation has no solutions.”

‘The halting problem was one of the first to be shown recursively undecidable. The formulation of recursive undecidability of the halting problem and many other recursively undecidable problems is based on Gödel numbers. For instance, the problem of deciding for any given x whether the Turing machine whose Gödel number is x is total is recursively undecidable. Hilbert’s tenth problem [Diophantine] is perhaps the most famous recursively undecidable problem.’

Hilbert’s tenth problem
“Given a Diophantine equation with any number of unknown quantities and with rational integral numerical coefficients: To devise a process according to which it can be determined in a finite number of operations whether the equation is solvable in rational integers.”

Re: effect of foundations on “real” mathematics

Mike pointed me here because I have a passing interest in CS. Unfortunately I haven’t had time to read all of the comments, but I think I get the gist of what’s going on.

The key element here is that it’s not possible to prove a bound on S(n) for every n. Thus, there must be some n for which it is impossible to bound S(n) (or else you could prove a bound for every n!). In fact, you can probably do better and say that there is some value above which S cannot be bounded, since if m < n then a bound for S(n) is also a bound for S(m).

The proof: suppose that it were possible. Then one could construct a Turing machine M which, on input n, outputs a bound for S(n). But then we can construct a Turing machine with k states that runs for more than S(k) steps. We call this machine N: N runs M(k), writes 0 on its tape, and increments the value on its tape until it comes to a value greater than M(k). This N runs for greater than S(k) steps, contradiction.
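The construction of N from M in the proof above can be sketched in code. No genuine bounding machine M exists (that is the conclusion of the proof), so `fake_M` below is a hypothetical stand-in; the sketch only shows the shape of the diagonalization, not an actual contradiction.

```python
def make_N(M, k):
    """Build the diagonal machine N: compute the purported bound M(k),
    then deliberately run past it before halting.  If N itself could
    be realized with k states, it would run for more than S(k) steps,
    contradicting the assumption that M(k) bounds S(k)."""
    def N():
        bound = M(k)            # step 1: compute the claimed bound
        steps = 0
        while steps <= bound:   # step 2: idle past the bound, then halt
            steps += 1
        return steps
    return N

fake_M = lambda k: 50   # hypothetical "bounder" -- no real one can exist
print(make_N(fake_M, k=1000)())  # 51: one more step than the claimed bound
```

Since N always halts but outruns whatever bound M claims, M cannot have been a correct bounder after all.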

We could also use provable bounds on S(n) to decide the halting problem, or to compute S(n) itself, both of which are impossible, via similar proofs.

The upshot being that you can’t actually prove a bound on S(n) in general. The naive way of just running every Turing machine with n states doesn’t work (because you’re never sure when one is looping), and neither does any more subtle method.

Re: effect of foundations on “real” mathematics

Thanks, Daniel. But darn it, the uncomputability of S isn’t my problem here. At least, I don’t think it is!

The argument is that once the problem of deciding whether there is a proof of 0 = 1 in NBG is converted into existence of a solution of some fixed Diophantine equation (in some fixed ℤ^m), then we can build a computer – an actual computer or Turing machine, with a specific number of internal states, let’s say 1000, which will work on a specified enumeration of points in ℤ^m and systematically check them one by one to see whether one of them satisfies the equation (and once it does so, it halts); it leaves a check mark for each point which doesn’t satisfy the equation. By the busy beaver argument, it would seem that if it hasn’t halted after S(1000) check marks, then it’s never, ever going to halt. And therefore, it would never find a solution to the Diophantine equation, even if we let it continue running forever. And therefore, no proof of 0 = 1 in NBG exists!

This has nothing to do with computing S(n) for all n, AFAICT.

(Mind you, I know I’m making a fool of myself in this thread. I think I’m even beginning to sense the flaw, even though I can’t put it into words yet, but what the hay.)

Re: effect of foundations on “real” mathematics

So first of all, I should preface all this with the comment that I too am likely making a fool of myself — indeed, if this were another week I’d go back and read this (quite lengthy) thread to understand the question you’re asking more precisely, but things are just too busy right now for me to do so. However, I think I can respond to your concerns.

I agree completely with every sentence of your second paragraph until the final one — I claim that having S(1000) check marks is not sufficient, because you cannot prove the value of S(1000), and so you do not have a proof!

The reason I was talking about bounding S(n) for every n is that if you can’t do so, then there is some n for which you cannot find a bound. It is quite likely (in fact, my guess is certain) that such an n would be n = 1000 (i.e. the number of states of your hypothetical Turing machine).

Is that OK? I could easily be missing something, especially since I haven’t read over, for example, the definition of NBG — I just know that it’s finitely axiomatizable. (Although this argument would work just as well if the axioms were recursively enumerable.)

Re: effect of foundations on “real” mathematics

The key element here is that it’s not possible to prove a bound on S(n) for every n. Thus, there must be some n for which it is impossible to bound S(n) (or else you could prove a bound for every n!). [Emphasis mine – TT] In fact, you can probably do better and say that there is some value above which S cannot be bounded, since if m < n then a bound for S(n) is also a bound for S(m).

Encouraged by Toby’s level-headed example, I went back and read what you wrote a few more times, more slowly and carefully. The bit I’ve bold-faced still makes me go wow!, but I’m slowly coming around to what you’re saying.

It’s still very, very weird to me. In some Platonic realm [speaking very metaphorically], God knows when the Turing machine has passed the S(1000) mark [assuming 1000 is the right number], but the point is that on the basis of NBG, we can never know even in principle when that is. Wow!! Now that’s what I call uncomputability! (But it seems to me that the Wikipedia article, in reference to the Goldbach conjecture, was somewhat garbled on that point. Toby was subtly calling my attention to the fact that their argument was “cheap”, but even in retrospect I think he was being a little too subtle and cryptic for me to follow at the time.)

I’ll need a bit more time before your argument has really percolated through my thick head, but I sense you’ve pinned down my difficulty.

This is really mind-blowing stuff for me, and I appreciate your taking time to explain (and thanks to Mike for enlisting your help).

Re: effect of foundations on “real” mathematics

I have to agree that it’s mind-blowing, but that’s why I love this stuff so much! The “God knows when you’re past S(n)” is particularly amusing and strange. In some sense, though, all we’re saying is that the statement S(n) < m is not provable; considering that the definition of S is actually quite complex, the unprovability of S(n) < m shouldn’t be that surprising (I mean, it is that surprising, it just shouldn’t be!). After all, our definition of S involves somehow indexing over TMs which are themselves powerful enough to contain the axioms of our system, so something about this had better be unprovable!

Consider, for example, that your original discomfiture was over a Diophantine equation. You produced a specific Diophantine equation that has a solution if and only if NBG is inconsistent. Well, God knows if this equation has a solution. If NBG is inconsistent, there are integers sitting out there just waiting to be plugged in. But, if NBG is consistent, then we can never prove either way if this equation has a solution. To me, this idea of an equation where a solution is either sitting out there or isn’t is just as strange as the idea that S(n) is some fixed value, but we cannot prove even a bound on it! God knows the number, we don’t, because we can’t make axioms powerful enough to prove it… such are the limits of being a mathematician!

Like I said, I love this stuff. I hope that I have indeed been helpful, and I’ll happily continue to monitor this page to see if I’ll be of more use.

Re: effect of foundations on “real” mathematics

I picked Sol Feferman as a reference because he is one of the top five logicians in the world and an expert on Gödel. There are many different formal axiomatic systems, and you choose the one with the axioms that you want to use. There is more than one foundation to choose from. ZFC and PA are common shared choices, and PA is usually adequate for the ‘real mathematics’ that you mentioned.
—————————————

http://math.stanford.edu/~feferman/papers/Godel-IAS.pdf
“The nature and significance of Gödel’s incompleteness theorems” by Solomon Feferman
…
“Let’s start with a current formulation of Gödel’s first incompleteness theorem that is imprecise but can be made precise:

In any _sufficiently_ strong formal system there are true arithmetical statements that can’t be proved (in the system).
…
Taking the concept of truth in the integers for granted, we now understand everything in the above rough formulation of Gödel’s first incompleteness theorem except what it means to be “sufficiently strong”. Usually that is taken to mean that S includes the system PA of Peano Arithmetic;[2] that is a formal version of the axioms proposed for arithmetic by the Italian mathematician Giuseppe Peano in the 1890s. Its axioms assert some simple basic facts about addition, multiplication and the equality and less-than relations. Beyond those, its main axioms are all the instances of the principle of mathematical induction that can be expressed in the language of arithmetic.
[2] Actually, much weaker systems than PA are known to suffice for the theorem.”

SH: I think most systems which are too weak to experience incompleteness are also too weak to do your ‘real mathematics’. It depends on the axioms selected for each formal mathematical system. I think it is a foundational issue whether your chosen formal system is capable of displaying incompleteness or not. For instance, geometry has been proven to be complete. Gödelian incompleteness or undecidability rarely influence what applied mathematicians do. Maybe that is your concern.

Re: effect of foundations on “real” mathematics

MS: ever since I learned Gödel’s Incompleteness Theorem, I’ve never really understood why people believe that there exists a “standard” set of natural numbers. I’m not even sure what sort of “existence” is meant here.

I guess the reason is that mathematicians do not think that first order logic (on which Gödel’s results are based) fully describes mathematical intuition.

That logicians prefer first order logic is only due to the fact that first order logic gives rise to a much nicer theory.

In second order logic, standard models exist quite trivially, both for natural numbers and for ZF+global choice (assuming their consistency).

Thus a framework that forbids thinking in terms of standard models is to some extent defective.

Re: effect of foundations on “real” mathematics

I guess the reason is that mathematicians do not think that first order logic (on which Gödel’s results are based) fully describes mathematical intuition.

“Intuition” is a bit hazy. I’d rather talk about what first-order logic can do.

To this end, keep in mind that ZFC is a first-order theory, and that insofar as mathematicians believe that the mathematics they do is in principle expressible in ZFC, there is a first-order theory that suffices to express what mathematicians want to do.

In second order logic, standard models exist quite trivially, both for natural numbers and for ZF+global choice (assuming their consistency).

Really? Don’t standard models refer to an ambient universe of sets? If so, then in order for standard models to exist “quite trivially”, there has to be a background of sets which “exists” (quite trivially?). Now, can’t there be different ambient backgrounds of sets?

In each given background, there will be one standard model of ℕ, but the characteristics of ℕ will vary from background to background. For example, if one posits a background model Set and then passes to an ultrafilter quotient Set_U of Set, seen as a locally small category, then externally there will be more elements of the object ℕ in Set_U than there are elements of ℕ in Set, even though the natural numbers object in Set_U is internally uniquely specified so as to make it the internal standard model, say by universal properties or whatnot.

Re: effect of foundations on “real” mathematics

TT: ZFC is a first-order theory

You cannot discuss the limitations of first order logic by means of first order logic itself.

It is easy to convert ZF+global choice into a second order theory by adding (in an induction-axiom-like fashion) the statement that any model satisfying the first order axioms for ZF+global choice (in a suitably materialized form) contains the standard model.

The fact that this conversion is possible, and preserves all mathematical theory done on the basis of ZF+global choice (apart from metamathematical completeness and undecidability questions), morally implies that nothing in a mathematical foundation should essentially depend on having a first order logic in the background.

TT: In each given background, there will be one standard model of ℕ, but the characteristics of ℕ will vary from background to background.

In model theory, one conventionally takes the background as fixed and known. Otherwise one gets into an infinite regress: the model of the background also needs a background to be well-defined, etc.

Thus, in order to give mathematical definitions any formal sense at all, one needs to have a definite starting point somewhere in the background hierarchy.

So, if one starts with a standard model of ZF+global choice in some fixed generation above the working level, all subsequent levels are uniquely determined by standardness, and hence the model in which one works is, too.

Re: effect of foundations on “real” mathematics

In second order logic, standard models exist quite trivially, both for natural numbers and for ZF+global choice (assuming their consistency).

Just how is it that a model for ZF+global choice “exists quite trivially”? Show me a model of ZF+global choice.

You may think there’s a Platonic heavenly background of sets to which you can refer to create such a model, but I will not agree with that.

In the same way, you may think there is such a thing as “the standard integers”, but we may not agree on that either, since that refers to some mythical standard background which I do not believe exists.

In model theory, one conventionally takes the background as fixed and known. Otherwise one gets into an infinite regress: the model of the background also needs a background to be well-defined, etc.

As to the first sentence, speaking to what is conventional: many and perhaps most model theorists and set theorists are platonists: they accept the objective existence of a world of sets, which can serve as a fixed referent for theories such as ZFC. IMO, this has been a limitation, which may partly explain why many such mathematicians are reluctant to accept toposes as viable alternate universes of sets, to their loss. They think (and you appear to agree) that there is a real world of sets (dressed or extensional), and these other universes just aren’t the “real thing”. But that’s just an opinion.

If I may draw an analogy: it’s just like how it was in the time of Kant, when people believed that there was a fixed background of space and time in which events of the world played out, and moreover, people believed this background was Euclidean, and believed in the objective truth of Euclidean geometry. This was a limitation for quite a while: it took quite a long time before such beliefs were thoroughly shaken and people began to accept non-Euclidean geometry as equally viable, and eventually began to imagine living inside such geometries on an intrinsic level, without the need for an ambient extrinsic Euclidean space to “hold it”.

(To pursue this example further, I can well imagine some Euclidean geometer holdouts who, upon being shown Beltrami’s model of the hyperbolic plane, might say, “sure, this models non-Euclidean geometry, but those ‘lines’ – those aren’t real lines, you know. They’re curved.” In the same way, some diehard Platonic set theorist might say, “okay, so you’ve modeled some alternate set theory, but you know, those aren’t real sets.”)

As to the second sentence: I have no idea what the fixed background might be, no idea whether you and I are referring to the same background, and in any event I don’t believe in the objective existence of a universe of sets. There is no fixed and known background, as far as I’m concerned; that’s a philosophic belief that I don’t subscribe to.

To put it another way, I’ll repeat: show me this model of ZF+choice you think exists quite trivially. You can’t. But what we can agree on is a first-order theory to work within, such as ZFC (or something else, like ETCS).

There is no more infinite regress going on than there is infinite regress of geometries.

Thus, in order to give mathematical definitions any formal sense at all, one needs to have a definite starting point somewhere in the background hierarchy.

Baloney, I say. Foundations start with axiomatic theories. There is no fixed, known background as far as I’m concerned.

Re: effect of foundations on “real” mathematics

You cannot discuss the limitations of first order logic by means of first order logic itself.

I think this is basically false; all mathematics I’ve ever seen can be formalized in first-order logic (such as ZFC). In particular, the incompleteness theorem for first-order logic, and the existence of standard models for second-order theories (relative to a fixed background set theory), can all be proven in an ambient first-order logic.

In second order logic, standard models exist quite trivially,

This is true, but it is only provable relative to a background set theory. There are two ways to give semantics for second-order logic. On the one hand you can assume a fixed background notion of set, but that’s no good for a foundational theory that’s supposed to tell you what a “set” is, not assume it. On the other hand you can give the collection of predicates along with the collection of individuals (Henkin semantics), which makes it essentially equivalent to two-sorted first-order logic. This is why, in my view, second-order logic has no foundational status.

In model theory, one conventionally takes the background as fixed and known.

Yes, this is because model theory, like all mathematics, is done (or, at least, can be formalized) in some axiomatic system, such as ZFC or what-have-you. Working in some formal system is equivalent to assuming a background model of that theory and working in it.

Mathematics is an abstraction, parts of which describe aspects of the real world. But the people doing mathematics are also part of the real world, governed by the same equations of physics written in mathematics which describe the rest of the world. Likewise, proof theory and model theory are abstractions which attempt to describe mathematics. But they are also part of mathematics, and thus can be described by the same sorts of formal systems which describe the rest of mathematics. In both cases there is self-reference and self-applicability, but no infinite regress unless you insist that mathematics really is formal logic, rather than merely being described by it.

Riddle Of The Sphynx

Mathematics is an abstraction, parts of which describe aspects of the real world. But the people doing mathematics are also part of the real world, governed by the same equations of physics written in mathematics which describe the rest of the world. Likewise, proof theory and model theory are abstractions which attempt to describe mathematics. But they are also part of mathematics, and thus can be described by the same sorts of formal systems which describe the rest of mathematics. In both cases there is self-reference and self-applicability, but no infinite regress unless you insist that mathematics really is formal logic, rather than merely being described by it. (Mike Shulman).

Logic is a normative science, not a descriptive science.

You might be able to say that logic is the normative science of how we ought to conduct the business of description if we want that business to succeed, that is, to achieve the end of description, though perhaps only in part.

C.S. Peirce held that mathematics is fundamental to any normative science, and thus more basic than logic.

Re: Riddle Of The Sphynx

Logic is a normative science, not a descriptive science.

Actually, it’s both. At a fundamental level, it’s descriptive: logic can’t tell us what logic “should” be except by observing how people reason. But once the basic rules are established, logic can tell us what derived rules of inference are valid, and in that sense it can be normative.

Re: Riddle Of The Sphynx

As if telling parallel lines that they are forbidden to meet will somehow stop them. Or that telling a dialetheia that it cannot exist is sufficient to banish it.

I can see how intelligent people from Euclid to Kant could believe this about geometry, and how intelligent people from Aristotle to Peirce could believe this about logic, but surely we know better now.

Re: Riddle Of The Sphynx

Well, I never —

— heard of anybody suggesting that geometry is a normative science … until today … much less Euclid or Kant or Peirce.

The Big 3 normative sciences are classically reckoned to be Esthetics, Ethics, and Logic, whose respective objectives are Beauty, Justice, and Truth, each of whose pursuits is pursued under a legion of other names, of course.

Fortunately, the idea that logic is just another Babelography of formal languages and not a science of using language usefully is one whose time has come and gone, along with all those other confusions of the Late Grate Twinkieth Century.

Re: Riddle Of The Sphynx

Anachronicities aside — and no matter how we fill the cells of the array thus generated — let’s not confound the conceptual axes of normative vs. descriptive, a priori vs. a posteriori, and analytic vs. synthetic.

It’s been a while, but I’m pretty sure Kant had a lot to say about regulative principles, which are norms by any other name.

As far as geometry goes — sure we’ve all gotten the relativistic religion since the revelation that God geometrizes e pluribus as often as unum, but I have to say that Kant was far less absolutist than the average mathematician of his day, since he conceived space and time to be forms of the sensible intuition and thus bearing some sort of relation, at least, to the contingent class of participant observers we mortals be.

But all that is tangent to the point of Peirce I was trying to relate about the normative status of logic and the standing of mathematics in relation tuit.

Re: Riddle Of The Sphynx

You're right that I shouldn't describe geometry (even when construed in absolute terms) as ‘normative’; it's not saying anything about how people should behave, or otherwise about how the world should be.

But I still can't see logic (at least mathematical logic, those mediaeval lists of fallacies to avoid are another matter) as normative. What does it say about how we should think? If, for example, Peirce says that I should accept ((P→Q)→P)→P as always valid, then I simply don't agree; it is because I have learnt about logic that I don't agree.

Re: Riddle Of The Sphynx

But I still can’t see logic (at least mathematical logic, those mediaeval lists of fallacies to avoid are another matter) as normative. What does it say about how we should think? If, for example, Peirce says that I should accept ((P→Q)→P)→P as always valid, then I simply don’t agree; it is because I have learnt about logic that I don’t agree. (Toby Bartels)

It’s not really like me to think in dichotomies — I always strive for unity or else a third something to mediate the polarities, so I’d be the last to try and convince you that policy and theory can or should be utterly dissociated — I think it’s more a matter of recognizing independent goals and views with respect to what are frequently overlapping realms of existence and experience.

I think that part of the communicative skewness here may be that there are at least two ways of understanding what a normative science is.

I can’t be sure I get them precisely, but some people seem to say that genuine norms tell you what you ought to do in absolute terms, in particular, no matter what your frame of reference or your end-in-view might be, and they seem to dismiss other types of norms under the headings of “relativist” or “utilitarian”.

Another take on the issue is unabashedly “pragmatic” —

The sciences are called that because they seek and may even find knowledge, approximate, most likely, but good enough to serve in the meantime.

A normative science is one that seeks and may even find knowledge of how we ought to conduct a given type of practical activity if we want to actualize the specific object of that practice.

Re: effect of foundations on “real” mathematics

I’ve never worked through this stuff myself, but I gather that Mike may be referring to Matiyasevich’s theorem, as applied to the recursively enumerable set of Gödel-numbered sentences that ZFC is able to prove.

Re: effect of foundations on “real” mathematics

I gather that Mike may be referring to Matiyasevich’s theorem, as applied to the recursively enumerable set of Gödel-numbered sentences that ZFC is able to prove.

Yes, that’s the basic idea, although for this particular r.e. set I don’t think one has to appeal to the general result; one can just look at the notion of provability and write down the Diophantine equation.

Re: effect of foundations on “real” mathematics

Torkel Franzen, bless his memory, used the Matiyasevich-Robinson-Davis-Putnam theorem to prove the first incompleteness theorem. He wanted to dispel the notion that strange self-referential statements were required to prove the first incompleteness theorem. Todd, you were acknowledged in one book I looked at for your help with Peirce Algebras.

Re: effect of foundations on “real” mathematics

MS: there exist particular, specific Diophantine equations for which the existence or nonexistence of solutions is undecidable in ZFC (or whatever your preferred foundation is).

I don’t think this is a correct statement of the underlying theorem by Matijasevich.
Undecidability is about classes of problems, not about single problems.

If your statement were true, there would be a solution in some model of ZFC. But since this solution consists of a particular list of integers, one could presumably check with a finite algorithm that this solution is a solution in any model of ZFC.
(At least this holds if the integers are in the standard model.)

Re: effect of foundations on “real” mathematics

I believe that it’s true that the set of Diophantine equations is recursively-enumerable. But some specific equations or even entire sets are not recursive, thus are undecidable, but not the Diophantine set itself, because some equations are both r.e. and recursive.

“Every recursively-enumerable set is Diophantine. Because there exists a recursively enumerable set that is not computable, the unsolvability of Hilbert’s tenth problem is an immediate consequence.”

Which means that some Diophantine set is recursively-enumerable, but not computable because it’s not recursive. At least in Naive Set Theory, a set is defined by its membership.

Yuri Matijasevic and Julia Robinson.
A quote from their article: “The consistency of a recursively axiomatizable theory is equivalent to the assertion that some definite Diophantine equation has no solutions.”

I’m not sure of this, but I don’t think that the definite Diophantine equations can be listed, even if they can be known to exist.

This by itself doesn’t talk about undecidability phenomena, so I’m not sure why you thought to “correct” Mike in this way. (It has applications to undecidability and to Hilbert’s tenth problem of course, but as a stand-alone statement it doesn’t talk about that.)

I’m also puzzled what you mean by “undecidability is about classes of problems, not a single problem”. I think it might depend on just how one decides to divvy up “problems”, but here is an example of what is arguably a single problem: there exists an example of a finitely presented group G whose word problem is undecidable (no algorithm to decide whether two words in a specific group presentation of G are congruent modulo the normal relator subgroup). See the last example of the Examples section here.

Re: effect of foundations on “real” mathematics

TT: here is an example of what is arguably a single problem: there exists an example of a finitely presented group G whose word problem is undecidable (no algorithm to decide whether two words in a specific group presentation of G are congruent modulo the normal relator subgroup).

Yes. The undecidability of the word problem says that there is no algorithm to decide whether two arbitrary words are equal in G. Thus it refers to a class of individual problems.

But it does not say that there are two particular words in G for which it is undecidable whether they are equal.

Similarly, Matiyasevich’s theorem (as given in Wikipedia) implies that there is no algorithm to decide whether an arbitrary integral polynomial system has an integral solution. (This is also a theorem by M., and the one to which I was referring, even though Wikipedia dubs a different theorem M’s theorem.)

The class of problems under consideration can be narrowed (by stronger arguments) to have a limited degree or a limited number of variables.

But M’s theorem does not imply that there is a particular integral polynomial system for which it is undecidable whether this system has an integral solution.

But this is what Mike had claimed to be true, though without mentioning M as his source.

But I do not think that there is a setting very different from that of M that allows one to prove undecidability results.

Thus if Mike indeed meant his particular formulation to be true, I’d like him to point to a proof (in terms of M’s theorem or otherwise), detailed enough to show how it comes that the solvability of a particular system without any parameters is undecidable.

Re: effect of foundations on “real” mathematics

But M’s theorem does not imply that there is a particular integral polynomial system for which it is undecidable whether this system has an integral solution.

But this is what Mike had claimed as being true, though not mentioning M as source.

As Toby said, that’s not the meaning of “undecidable” that I was referring to. The meaning I was referring to has nothing to do with computability, but rather refers to independence from an axiomatic system.

Thus if Mike indeed meant his particular formulation to be true, I’d like him to point to a proof

Re: effect of foundations on “real” mathematics

MS: there exist particular, specific Diophantine equations for which the existence or nonexistence of solutions is undecidable in ZFC (or whatever your preferred foundation is).

AN: I don’t think this is a correct statement of the underlying theorem by Matijasevich.
Undecidability is about classes of problems, not about single problems.

AN: M’s theorem does not imply that there is a particular integral polynomial system for which it is undecidable whether this system has an integral solution.
But this is what Mike had claimed as being true, though not mentioning M as source.

MS: The meaning I was referring to has nothing to do with computability, but rather refers to independence from an axiomatic system.

AN: Thus if Mike indeed meant his particular formulation to be true, I’d like him to point to a proof.

I doubt that the coefficients of such a polynomial can be written explicitly in terms of decimal digits.

But if this is not part of the requirement, then I can improve your statement dramatically:

Theorem: There is a polynomial of degree 2 in 1 variable with integer coefficients for which it is undecidable whether it has an integral zero.

Proof: Consider the polynomial p(x) = x^2 - c, where c is the integer defined by c := 1 if the continuum hypothesis holds, and c := 2 otherwise. (This can easily be encoded in a formula valid in FOL+ZFC.)

Thus your statement is trivially true given the undecidability of CH.

Matiyasevich’s theorem, on the other hand, is highly nontrivial, since it assumes that both coefficients and solution values are computable integers.

I therefore do not believe that there exist particular, specific Diophantine equations (with coefficients given by computable integers; what else should specific mean?) for which the existence or nonexistence of solutions (with values given by computable integers) is undecidable in ZFC.

Re: effect of foundations on “real” mathematics

Undecidability is about classes of problems, not about single problems.

There are two senses of ‘undecidable’ that are getting mixed up here.

One is indeed about classes of problems. Simplifying a bit, it's about functions ℕ → {0,1} and whether they are computable. This is a class of problems, since for each integer we ask whether the value of the function at that integer is 1; the class of problems is decidable iff the function is computable. The word problem for a finitely presented group can be expressed in this way, since finite strings over a finite alphabet can be encoded as integers.

If we have a single problem, then we are asking whether a constant function is computable. And of course it is, by this cheap trick.
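The “cheap trick” can be made concrete. A sketch (the function names are mine): for any single fixed yes/no question, one of the two constant programs below is a correct decider, even though we may have no way of knowing which one.

```python
# A hypothetical sketch: for any single fixed yes/no question Q,
# one of these two constant programs decides Q correctly,
# even if we have no idea which one it is.
def decider_yes(question):
    """Answers 'yes' regardless of the question."""
    return True

def decider_no(question):
    """Answers 'no' regardless of the question."""
    return False

# If Q is in fact true, decider_yes is a correct decider for it;
# otherwise decider_no is.  Both are trivially computable, so a
# single statement is always "decidable" in the computability sense;
# the interesting notion for a single statement is independence from
# an axiomatic theory.
print(decider_yes("Is CH true?"), decider_no("Is CH true?"))  # prints: True False
```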

But there is another sense of undecidability. Given a theory T (such as PA or NBG) with a notion of proof and refutation, a statement P in the language of T is undecidable in T if and only if there exists neither a proof of P in T nor a refutation of P in T. That is what Mike was talking about. And it is in that sense that (when T satisfies the hypotheses of Gödel's famous theorem) the consistency of T is undecidable in T if (and only if, obviously) T is consistent.

Re: effect of foundations on “real” mathematics

That Wikipedia link seems to indicate that the theorem is akin to Goodstein’s theorem in that it’s not provable in Peano arithmetic. But Goodstein’s theorem is still true. I’m really finding it hard to believe that there is an undecidable statement (assuming, say, transfinite induction) regarding solutions to a finite set of integer equations.

Re: effect of foundations on “real” mathematics

Goodstein’s theorem is “true” under the assumption that you can do induction up to the ordinal ε_0. Likewise, what is decidable is always relative to an axiomatic theory: there is no absolute meaning to either “true” or “decidable”.
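For what it's worth, Goodstein sequences themselves are easy to compute for small seeds, even though PA cannot prove that every one of them reaches 0. A sketch in Python (the helper names are mine):

```python
def bump(n, b):
    """Write n in hereditary base-b notation, replace every b by b+1,
    and evaluate the result."""
    result, e = 0, 0
    while n:
        n, d = divmod(n, b)
        if d:
            # exponents are themselves rewritten hereditarily
            result += d * (b + 1) ** bump(e, b)
        e += 1
    return result

def goodstein(n, max_steps):
    """First terms of the Goodstein sequence starting at n (base 2)."""
    seq, b = [n], 2
    while n > 0 and len(seq) <= max_steps:
        n = bump(n, b) - 1  # bump the base, then subtract 1
        b += 1
        seq.append(n)
    return seq

print(goodstein(3, 10))  # reaches 0 quickly: [3, 3, 3, 2, 1, 0]
print(goodstein(4, 3))   # also reaches 0, but only after astronomically many steps: [4, 26, 41, 60]
```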

Re: effect of foundations on “real” mathematics

These are integers, ferchrissakes. You don’t have to be too much of a Platonist to […]

No, not much, but maybe a little. It's still an infinitary statement, after all. I was a bit surprised to see Todd write ‘no absolute meaning’ with such finality, since obviously lots of people seem to think that there is an absolute meaning; still, I don't see any absolute meaning in it either.

Specific finite integers are another matter. Even if the computation may be unfeasibly large, I think that I understand how in principle one ought to be able to decide, absolutely, what the last decimal digit of A(500,5000) is (where A is the Ackermann function). But to decide something for every integer? That's not obviously absolute to me.
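The in-principle/in-practice gap here is easy to exhibit: the Ackermann function is perfectly computable, and a standard memoized implementation (a sketch) answers instantly for tiny arguments, while A(500,5000) is hopelessly beyond any physical computation.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ack(m, n):
    """The two-argument Ackermann(-Peter) function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

# Tiny instances are easy; A(500, 5000) is another matter entirely.
print(ack(2, 3))       # 2n + 3 at m = 2: prints 9
print(ack(3, 3))       # 2^(n+3) - 3 at m = 3: prints 61
print(ack(3, 3) % 10)  # its last decimal digit: prints 1
```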

Re: effect of foundations on “real” mathematics

I was a bit surprised to see Todd write ‘no absolute meaning’ with such finality

I will confess to having some strong formalist tendencies. But I hope no one reads into that the idea that I think mathematics is a meaningless game of symbol manipulation; on the contrary, there is a plethora of meanings, and no one of them is distinguished (to put it roughly).

I have a feeling that such tendencies (which are common but not universal among category theorists in particular) run very counter to what people like Arnold believe in their heart of hearts, and that a lot of our disagreements over at the infamously long thread were basically philosophic in nature. The idea of such relativism in mathematics is obnoxious to many.

Re: effect of foundations on “real” mathematics

You don’t have to be too much of a Platonist to believe that either there exists an integer such that Goodstein’s theorem doesn’t hold or there doesn’t.

Maybe you don’t have to be “too much” of a Platonist, in the sense that you don’t have to believe that “very many” or “very complicated” mathematical objects exist Platonically. But I don’t see how “a little bit” of Platonism is any less problematic, philosophically, than a lot of it.

Re: effect of foundations on “real” mathematics

Is there actually no conflict between your disbelief and this quote?

“Computability and Logic” by Boolos, Burgess and Jeffrey, Chapter 11, “The Undecidability of First-Order Logic”
“It follows that if there were an effective procedure for deciding when a finite set of sentences implies another sentence, then the halting problem would be solvable; whereas, by Turing’s thesis, the latter problem is not solvable, since it is not solvable by a Turing machine. The upshot is, one gets an argument, based on Turing’s thesis for (the Turing-Buchi proof of) Church’s theorem, that the decision problem for implication is not effectively solvable.”

Re: A Dual for Set?

Mike pointed out that

Every model of ETCS is the unique model of ETCS that is cocomplete with respect to itself.

Is that expressible in a more category-theoretic way? Is there a functor ETCS × ETCS → ETCS which takes a pair of models of ETCS and forms the cocompletion of the first with respect to the second?

I’d be interested to hear of any constructions at the level of the category of models of ETCS. Is there, for example, a ‘choicification functor’ which maps well-pointed toposes with NNO (not necessarily with choice) to ETCS?

Re: models of ETCS

Actually, what I said above was a bit misleading. Given two arbitrary models of ETCS, say S_1 and S_2, it doesn’t even make sense to ask whether S_1 is cocomplete with respect to S_2. That ought to mean that any S_2-set of objects of S_1 has a coproduct. But the phrase “S_2-set of objects of S_1” has no meaning, since an S_2-set is just an object of the category S_2, with no a priori relationship to objects of the category S_1. You can only talk about cocompleteness relative to S_2 for categories/toposes indexed (or, equivalently, fibered) over S_2, which come equipped with a specified notion of “X-indexed family of objects” for any X ∈ S_2. So the precise version of what I said is that “given any model S of ETCS, its canonical self-indexing is the unique cocomplete S-indexed model of ETCS,” or equivalently “in the universe of mathematics based upon S as a foundation, S is the unique cocomplete model of ETCS.”

On the other hand, if your models of ETCS live inside some ambient set theory, then you can ask instead whether S_1 has coproducts (in the ambient sense) indexed by the ambient hom-sets hom_{S_2}(1,X), or equivalently whether the “naive indexing” of S_1 over S_2 is cocomplete. However, this isn’t necessarily true even when S_1 = S_2; in that case it is equivalent to asserting that S is either the category Set of all ambient sets or a Grothendieck universe therein. (I thought this was interesting enough that I recorded it.) And if S_1 and S_2 are two Grothendieck universes, of course saying that S_1 is cocomplete w.r.t. S_2 just means that |S_2| ≤ |S_1|.

Re: A Dual for Set?

What if one adds to ETCS the Replacement axiom of McLarty (p.48)? Are there still many inequivalent models?

I ask because I had the impression that (i) ETCS plus Replacement was in some sense ‘the same as’ ZFC, and (ii) there were many inequivalent models of ZFC (assuming there’s one at all). On the other hand, I find it hard to see a conceptual difference between Replacement and the existence of coproducts, which (given that we have coequalizers) is the same as cocompleteness. And if I understand Mike correctly, ETCS + cocompleteness has only one model.

I guess I’m missing some subtlety about internal and external cocompleteness. Help!

Re: the replacement axiom

What if one adds to ETCS the Replacement axiom of McLarty (p.48)? Are there still many inequivalent models?

Yes, either because it’s equivalent to ZFC, or by a direct application of the incompleteness theorem.

On the other hand, I find it hard to see a conceptual difference between Replacement and the existence of coproducts

I think the missing point is actually very subtle, and it confused me for a long time. It’s the difference between the first-order replacement axiom and the second-order replacement axiom.

The Replacement axiom, in McLarty’s categorial phrasing, does intuitively say that any set-indexed family of sets has a coproduct. However, since it is a first-order axiom schema, what it actually says is that for any first-order formula φ[x, Y] which is functional up-to-isomorphism from elements x: 1 → X to sets Y_x, there is a suitable X-indexed family of sets. This makes sense in pure first-order logic without any ambient set theory. By contrast, the second-order replacement/cocompleteness axiom would say that for any actual function (in the sense of some ambient set theory) from elements x: 1 → X to sets Y_x, there is a suitable X-indexed family of sets. To see how much stronger the second statement is, notice that there are only countably many first-order formulas, but uncountably many (indeed, a proper class of) set-valued functions.

Gödel’s Incompleteness Theorems only apply to first-order logic. Second-order logic, in which quantifiers can range over predicates in addition to over individuals, is more powerful. (One also thinks of the predicate variables as “sets of individuals,” in contrast to the “individuals” which first-order quantifiers range over, but this is confusing when the theory involved is a set theory like ZFC in which the “individuals” are themselves traditionally called “sets”—in this case the “predicate variables” are more like “classes”.) However, second-order logic is notoriously tricky. Its proof theory is poorly behaved. And its “intended” or “full” semantics, in which predicate variables range over “all” possible predicates, doesn’t make sense without an ambient set theory relative to which we interpret those quantifiers. (Alternately, we can include in the notion of “model” an additional collection of things for the predicates to range over; but this essentially reduces us back down to a first-order logic with two sorts called “individuals” and “predicates”—or, if the theory is ZFC, “sets” and “classes”, as in NBG or MK. This is called “Henkin semantics”.)

In summary, because McLarty’s replacement schema is still first-order, the Incompleteness Theorem applies and there are many different models. But if you assume instead a second-order replacement axiom, relative to some ambient set theory, then it uniquely characterizes Grothendieck universes, for basically the same reasons that I said above. The analogous statement for ZFC is that V_κ is a model of ZFC with the second-order replacement axiom if and only if κ is inaccessible, but V_κ can be a model of ZFC with first-order replacement much more frequently than that. In fact, by being clever with the Löwenheim–Skolem theorem, you can show that if κ is inaccessible, then there are κ many ordinals α < κ such that V_α models first-order ZFC.

Re: A Dual for Set?

I realise increasingly the depths of my ignorance. Something certainly appeals about structural characterisations of foundational mathematics. For example, I’d love it if Mike could show that the constructible universe is initial somewhere.

Re: Some Totals May Appear Smaller In Your Rearview Mirror

Re: A Dual for Set?

There’s something vexing about taking your version of Set to be non-extremal.

Unfortunately, I think the Incompleteness Theorem means that you will always continue to be vexed. (-: If Set exists at all, it can’t be characterized uniquely by any set of axioms not referring, explicitly or implicitly, to the notion of set. For instance:

I’d love it if Mike could show that the constructible universe is initial somewhere.

To the extent that this might be true, it could only be true relative to a previously chosen universe. The minimality of L refers only to transitive models of ZFC that contain all the ordinals in some previously chosen model of ZFC. But given two unrelated models of ZFC, their corresponding versions of L will likewise be unrelated.

FWIW, I read something of Lawvere’s once where he opined that the axiom of constructibility (V = L) gives a better model for Set because Set is supposed to be the topos of maximally-constant/unvarying sets (arbitrary toposes being thought of as “variable” sets), and the constructible sets are somehow “more constant” or “less variable.” I can’t remember exactly where this was, though. I’m not sure I agree.

Re: A Dual for Set?

Perhaps you meant Lawvere from the end of Foundations and applications: axiomatization and education

Yes, that’s it, thanks. He asserts on p223 that

Gödel’s “constructible” sets L in themselves have a tremendous amount of structure, but the relevant category derived from them is, remarkably, in general much more constant (“structureless”) than that derived from an original ambient model V.

But I can’t really find any justification for this. The closest I can see is his point that the sets in “forcing” models which violate CH are “variable” sets (= objects of sheaf toposes), and that sheaf models (= cohesive/variable sets) frequently violate CH because a variable subset of ℝ can be “sometimes countable and sometimes uncountable.” (Of course, L satisfies CH.) But I don’t see any reason to regard this as evidence that falsity of CH is itself evidence of “non-constancy.” After all, a set in a sheaf model can be sometimes of cardinality ℵ_0 and sometimes of cardinality ℵ_2, but that doesn’t mean there isn’t also a “constant” cardinality ℵ_1 in between ℵ_0 and ℵ_2.

Perhaps he is also thinking of the fact that the functor Set_L → Set_V is faithful, but not full, since he said on p222 that “In case the domain category is the more constant, then the comparison map [I(B^A) → (IB)^(IA)] is injective and may be interpreted as the inclusion of the maps which are more inert into those less inert.” But again, I don’t see any reason a faithful functor must be interpreted that way; it can just as easily (in fact, I would think more easily) be interpreted as forgetting extra structure, where the injective comparison map includes those maps which preserve the extra structure into those that don’t.

This is getting off-topic (and it’s completely my fault), but it reminds me of the arguments for and against CH in II.3.4 and II.3.11 of Penelope Maddy’s lovely paper “Believing the Axioms” (J. Symbolic Logic 53 no. 2). On the one hand, CH can be regarded as a “restrictive” hypothesis because it prevents us from introducing a set of cardinality intermediate between ℵ_0 and 2^{ℵ_0}, which we know is a perfectly consistent thing to have. On the other hand, not-CH can be regarded as restrictive because it prevents us from introducing a bijection between ℝ and ω_1, which we know is also a perfectly consistent thing to have.

Re: A Dual for Set?

FWIW, I read something of Lawvere’s once where he opined that the axiom of constructibility (V=LV=L) gives a better model for SetSet because SetSet is supposed to be the topos of maximally-constant/unvarying sets (arbitrary toposes being thought of as “variable” sets), and the constructible sets are somehow “more constant” or “less variable.” I can’t remember exactly where this was, though. I’m not sure I agree.

I know that Lawvere considers the axiom of choice a feature of the constancy of sets. Among constructivists, one sometimes sees this position (paraphrased):

I don't see why people worry about the axiom of choice when they use excluded middle. If you're going to accept nonconstructive uses of excluded middle, why not go all the way and accept the full axiom of choice?

To this, one might add:

If you're going to accept the full axiom of choice, why not go all the way and accept V=LV = L?

Personally, I don't see how the constancy of sets leads one to accept the axiom of choice (or even excluded middle), any more than you see how the constancy of sets leads one to accept V=LV = L. But perhaps it is the same intuition for Lawvere.

Re: A Dual for Set?

Another baby question, if you have the patience: Can you give an example of a statement which does not obviously depend on the excluded middle but which actually does? In my experience, it often plays only expository or psychological roles in arguments, so it would be nice to know a few really convincing examples where this is not the case.

Re: A Dual for Set?

Can you give an example of a statement which does not obviously depend on the excluded middle but which actually does?

That depends on what you mean by ‘statement’ and what you mean by ‘obvious’. Even in logic, it may not be obvious that Peirce's Law ((P→Q)→P)→P((P \to Q) \to P) \to P implies excluded middle, until you take QQ to be false. On the other hand, in a way everything is obvious once you understand the proof.
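To spell out how taking QQ to be false does the job, here is a reconstruction of the standard derivation (my own sketch, not from the original comment):

```latex
% Instantiate Peirce's Law with Q := \bot:
((P \to \bot) \to P) \to P,
  \quad\text{i.e.}\quad (\neg P \to P) \to P.
% Now take P := R \vee \neg R.  Intuitionistically,
\neg(R \vee \neg R) \;\to\; \neg R \;\to\; (R \vee \neg R),
% so the hypothesis \neg P \to P of this Peirce instance holds,
% and we conclude R \vee \neg R: excluded middle.
```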

In analysis, there are many statements that imply excluded middle, and many more that imply special cases of excluded middle that constructive analysts (or at least some of them) don't accept. For example, that every inhabited set of real numbers with an upper bound has a supremum is (if you define ‘real number’ right) equivalent to excluded middle; at the other extreme, that every real number that does not equal zero has an absolute value greater than some positive rational number is Markov's Principle (which is accepted by Markov's school of constructive analysis but not most others).
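Stated symbolically (my formalization of the two principles mentioned, with the caveat that "real number" must be defined suitably, e.g. as located Dedekind cuts):

```latex
% Least-upper-bound principle (equivalent to excluded middle):
\forall S \subseteq \mathbb{R}:\;
  S \text{ inhabited and bounded above} \;\Rightarrow\; \sup S \text{ exists}.
% Markov's Principle:
\forall x \in \mathbb{R}:\; \neg(x = 0) \;\Rightarrow\;
  \exists q \in \mathbb{Q},\; q > 0 \,\wedge\, |x| > q.
```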

In first-order arithmetic, there is a sense in which there are no such examples. Specifically, if PP is a statement that can be proved in Peano arithmetic (with a first-order induction scheme), and if PP does not feature disjunction or existential quantification, then PP can be proved in Heyting arithmetic. On the other hand, if PP does feature disjunction or existential quantification, then you ought to be suspicious that it's not constructive, shouldn't you?

Re: A Dual for Set?

Thanks for your help. It just seems to me that for every foundations-sensitive thing XX that comes up in my own work, there is an approximation to XX which is both foundations-insensitive and more interesting than XX, meaning we should forget about XX. This probably says more about me than about mathematics, but whatever the case, it’s clear that getting to the bottom of things would require more time than I’m willing to put in, at least for the foreseeable future.

Re: A Dual for Set?

It just seems to me that for every foundations-sensitive thing XX that comes up in my own work, there is an approximation to XX which is both foundations-insensitive and more interesting than XX, meaning we should forget about XX.

I'm not surprised by that! Dabbling in constructive mathematics has shown me that often the constructively valid version of a classical result is more interesting, and that is a special case of this phenomenon. Fred Richman writes about that here (pages 4–6).

Re: A Dual for Set?

“Dabbling in constructive mathematics has shown me that often the constructively valid version of a classical result is more interesting…”

Yes, that’s exactly what I’m saying.

To answer Mike’s question, here are a few that come to mind:

1. Let KK be a field. Then lots of people like to talk about Galois representations over KK, by which they mean representations of the Galois group Gal(K¯/K)\mathrm{Gal}(\bar{K}/K), where K¯\bar{K} denotes an algebraic closure of KK. Also, they often fix a choice of K¯\bar{K} once and for all. But it’s much better to define them to be sheaves on the etale topology of KK, which are certain functors on the category of all etale KK-algebras. The first definition doesn’t even get off the ground unless you assume the axiom of choice, and then it’s equivalent to the sheaf-theoretic definition. But the sheaf-theoretic definition is really the better concept.

2. Typically people define schemes to be topological spaces equipped with a sheaf of rings satisfying certain properties. There is a contravariant functor Spec\mathrm{Spec} from the category of (commutative) rings to the category of schemes. The points of Spec(A)\mathrm{Spec}(A) are the prime ideals of AA. This functor is fully faithful assuming the axiom of choice. But you don’t know you have any prime ideals without it. It’s much better (though many people disagree with this) to define schemes to be certain sheaves of sets on the opposite of the category of rings equipped with a certain topology. Then you don’t need prime ideals and hence the axiom of choice anymore. You just use the rings themselves.

On the other hand, according to what is currently known, I believe, this approach requires the use of universes. This is because the category of rings is not small. I think the right way around this is to look, not at all sheaves on this large site, but only the sheaves that are small colimits of representables. I suspect that not only would this eliminate the need for universes, but also that these sheaves are all you ever really cared about anyway, giving an example within the example.

3. Cohomology of sheaves using injective resolutions versus hyper-Cech cohomology. I haven’t thought much about the distinction here, so let me just say that I like the smell of the latter more than the former.

It’s a little embarrassing to care too much about these issues, at least in public, but it would be nice if a few people did.

Re: A Dual for Set?

James wrote:

Typically people define schemes to be topological spaces equipped with a sheaf of rings satisfying certain properties. There is a contravariant functor Spec from the category of (commutative) rings to the category of schemes. The points of Spec(A) are the prime ideals of A. This functor is fully faithful assuming the axiom of choice. But you don’t know you have any prime ideals without it. It’s much better (though many people disagree with this) to define schemes to be certain sheaves of sets on the opposite of the category of rings equipped with a certain topology. [Emphasis mine – TT] Then you don’t need prime ideals and hence the axiom of choice anymore. You just use the rings themselves.

You might like to know that Lawvere would strongly agree with this – I heard him say pretty much exactly the same thing in a talk he gave at U. Chicago I attended.

Without having a precise idea of the history myself, I think he was saying that the locally ringed space approach was popularized to a large extent by Dieudonné, but that Grothendieck himself favored the sheaf gros topos approach. (Perhaps the Bourbakist in Dieudonné didn’t care for the foundational subtleties the Grothendieck approach entailed.) Lawvere added some remarks on how much simpler conceptually the gros topos approach is.

Re: A Dual for Set?

Definitely many people agree. I’ve never been able to understand Lawvere’s papers, so I don’t know what to make of his opinion, but it seems many category theorists who I respect think very highly of him.

Regarding the historical point, as much as I would like to believe that Grothendieck preferred the approach using sheaves to that of locally ringed spaces, I don’t think I’ve seen any evidence of this in his writings. But I’ve only read a fraction of them, especially if you include his informal writings after 1970. He used choice all over the place, so I doubt he would have cared too much about getting around it, even though he created some of the ways of doing it.

Re: A Dual for Set?

Those are nice examples, thanks! When you said “foundations-sensitive” my mind interpreted that as referring to statements like large cardinals or GCH. I’ve definitely seen examples like these before, where the version of an argument that avoids PEM or AC is the “better” one and not just because it’s more general.

Re: A Dual for Set?

I have another question about foundations and Diophantine equations. Is it possible to come up with a Diophantine equation (=a finite system of polynomial equations with integral coefficients) that has a solution assuming the axiom of choice but not otherwise? Same question but with excluded middle instead of choice.

Re: A Dual for Set?

Every cocomplete, locally-small, well-pointed topos is equivalent to SetSet

Although you proved it as a corollary to a theorem that used excluded middle (at least as you stated it), I don't think that the corollary requires it. Once we get the embedding of SS in SetSet, we just need to show that it is essentially surjective, which we can do because SS is cocomplete.

open subtoposes of Set

Every cocomplete, locally-small, well-pointed topos is equivalent to SetSet

Although you proved it as a corollary to a theorem that used excluded middle (at least as you stated it), I don’t think that the corollary requires it. Once we get the embedding of SS in SetSet, we just need to show that it is essentially surjective, which we can do because SS is cocomplete.

Actually, I think you and I had an email exchange about the counterexample about a year ago. Let 𝒰\mathcal{U} be a truth value (identified with the subset {⋆|𝒰}⊆{⋆}=1\{\star | \mathcal{U}\} \subseteq \{\star\} = 1) such that ¬¬𝒰\neg\neg\mathcal{U} is valid and 𝒰\mathcal{U} is indecomposable and projective, and let SS be the topos Set/𝒰Set/\mathcal{U}. Then SS is a Grothendieck topos, hence cocomplete and locally small, and the assumptions on 𝒰\mathcal{U} ensure that it is well-pointed, but in an intuitionistic theory they don’t imply that 𝒰=⊤\mathcal{U}=\top.

well-pointed Set-topoi

Once we get the embedding of SS in SetSet, we just need to show that it is essentially surjective, which we can do because SS is cocomplete.

In addition to giving a counterexample, it might be helpful to say more explicitly how this argument fails. If SS is a well-pointed SetSet-topos (meaning it admits a geometric morphism to SetSet, which is equivalent to being locally small and cocomplete), then for any set XX there is a canonical map η:X→S(1,∐X1)\eta\colon X\to S\left(1,\coprod_X 1\right) which takes x∈Xx\in X to the xthx^{th} coprojection jx:1→∐X1j_x\colon 1\to \coprod_X 1. (This is the unit of the geometric morphism S→SetS\to Set.) However, intuitionistically, this map need be neither injective nor surjective. The counterexample shows this, but let’s also look at how the usual proof uses PEM.

For injectivity, we need to show that if jx=jyj_x=j_y, then x=yx=y. If x≠yx\neq y, then since coproducts in a topos are disjoint, the pullback of jxj_x and jyj_y is the initial object 00. But since jx=jyj_x=j_y, their pullback is also 11. By well-pointedness, 00 is not isomorphic to 11, so by contradiction, x=yx=y.

For surjectivity, we need to show that any map k:1→∐X1k\colon 1\to \coprod_X 1 is equal to jxj_x for some xx. Now for each xx, the pullback of kk and jxj_x is a subobject of 1, call it UxU_x. Since a well-pointed topos in an ambient Boolean logic is Boolean and two-valued, each UxU_x must be either 00 or 11. Since coproducts in a topos are stable under pullback, 1=∐XUx1 = \coprod_X U_x. But then if Ux=0U_x=0 for every xx, we would have 1=∐X0=01 = \coprod_X 0 = 0, contradicting well-pointedness. Thus there must be an xx with Ux=1U_x=1, which implies that k=jxk=j_x.

Re: A Dual for Set?

Thanks, Mike. (I assume that you mean these results actually, rather than apparently, require PEM, in that they are equivalent to it, rather than just are implied by it.) What implications this has for my own thoughts on foundations will take some time to sort out.

Re: A Dual for Set?

I didn’t mean that those statements necessarily are equivalent to PEM—only that for each of those statements there exist (necessarily non-Boolean) topoi in which they are false. I know for a fact that some of those statements do not imply full PEM. For instance, “every Dedekind real is a Cauchy real” is true in the absence of PEM whenever you have countable choice (which many constructivists like to assume). The last statement about toposes equivalent to SetSet also does not imply PEM. However, “every finitely indexed set is finite” does imply PEM, see here. I don’t know offhand about the others.

Re: A Dual for Set?

Personally, I don’t see how the constancy of sets leads one to accept the axiom of choice (or even excluded middle), any more than you see how the constancy of sets leads one to accept V=LV=L.

Yes, it does seem to be the same sort of argument. Booleanness, Choice, and CH can all be regarded as features of a topos that make it more like SetSet… if you assume that SetSet has those properties to begin with!

If you’re going to accept the full axiom of choice, why not go all the way and accept V=LV=L?

FWIW, Maddy argues (or, at least, presents an argument given by some people) that the axiom of choice is true in LL for “the wrong reasons.” In II.3.4 she says that the axiom of choice is supposed to act as a “maximizing” principle “implying the existence of complex, probably undefinable sets like a well-ordering of the reals”. By contrast (the argument goes), instead of choice implying the well-ordering theorem as in the “intuitive” order of things, in LL the axiom of choice is true because LL itself can be definably well-ordered. That is, in LL the axiom of choice “is not true because there are complex sets; rather it is true because there is an artificially simple well-ordering.”

Reminds me of Jerry Bona’s quip that “The Axiom of Choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn’s Lemma?”

Re: A Dual for Set?

In II.3.4 she says that the axiom of choice is supposed to act as a “maximizing” principle “implying the existence of complex, probably undefinable sets like a well-ordering of the reals”.

But isn't it just as well a “minimizing” principle, implying the nonexistence of certain sets, such as a set XX, a set YY, and a surjection f:X→Yf\colon X \to Y that does not split? As you alluded in a previous comment, the difference between these is somewhat arbitrary. If minimality (only projective sets exist) is your motivation for choice, then V=LV = L seems to be the right reason.

Re: A Dual for Set?

But isn’t it just as well a “minimizing” principle

I was just repeating what she said. (-: It does look like whether something is “maximizing” or “minimizing” depends on where you stand.

The shepherd drives the wolf from the sheep’s throat, for which the
sheep thanks the shepherd as a liberator, while the wolf denounces him
for the same act as the destroyer of liberty. – Abraham Lincoln

Re: effect of foundations on “real” mathematics

The argument is that once the problem of deciding whether there is a proof of 0 = 1 in NBG is converted into existence of a solution of some fixed Diophantine equation (in some fixed ℤm\mathbb{Z}^m), then we can build a computer – an actual computer or Turing machine, with a specific number of internal states, let’s say 1000, which will work on a specified enumeration of points in ℤm\mathbb{Z}^m and systematically check them one by one to see whether one of them satisfies the equation (and once it does so, it halts); it leaves a check mark for each point which doesn’t satisfy the equation. By the busy beaver argument, it would seem that if it hasn’t halted after S(1000) check marks, then it’s never going to halt. And therefore, it would never find a solution to the Diophantine equation, even if we let it continue running forever. And therefore, that no proof of 0 = 1 in NBG exists!

I don’t really see what’s so creepy about this — it just seems obvious. Maybe I’m missing the point. But to me it seems that the stuff about Diophantine equations and NBG is just a distraction. You can turn any question about whether a proposition follows from some recursively enumerable set of axioms in first-order logic into the question about whether some Diophantine equation has a solution. But why bother? It amounts to writing programs in a pathetically inconvenient programming language. It seems we can boil the problem down to this:

Take your favorite axiom system stronger than Peano arithmetic. Write a computer program to systematically list all the theorems this system proves and ring a bell if it finds a proof of both PP and not(P)not(P) for some proposition PP.

If the system is inconsistent, the bell will eventually ring. If it’s consistent, the bell will never ring.

Is this a decision procedure for the consistency of such axiomatic systems?

No: you don’t know how long to wait before you can be sure the bell will never ring.

Of course if you believe in classical logic you believe there exists some time with the following property: if the bell doesn’t ring by then, it never will. But it’s easy to prove that if your system is consistent, you can’t write any program that computes an upper bound on this time.
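John's bell-ringing procedure can be sketched in a few lines. Here `theorems` stands for any enumeration of a system's theorems (a hypothetical input; real proof enumeration is of course far more involved), with negation modeled by a toy `"not "` prefix. The loop returns, "ringing the bell," exactly when the enumeration is inconsistent; on a consistent system it runs forever, which is the semi-decidability John describes:

```python
def ring_bell_on_contradiction(theorems):
    """Scan an enumeration of theorems; return the proposition P
    as soon as both P and not(P) have been listed.  If the system
    is consistent, this loops forever -- there is no computable
    bound on how long to wait before giving up."""
    seen = set()
    for t in theorems:
        seen.add(t)
        # the negation-partner of t, in the toy "not " prefix syntax
        partner = t[4:] if t.startswith("not ") else "not " + t
        if partner in seen:
            return min(t, partner, key=len)  # bell! return the bare P
```

For example, `ring_bell_on_contradiction(["A", "B", "not A"])` returns `"A"`, while feeding it a consistent stream never terminates.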

Re: effect of foundations on “real” mathematics

JB wrote: “Take your favorite axiom system stronger than Peano arithmetic. Write a computer program to systematically list all the theorems this system proves and ring a bell if it finds a proof of both P and not(P) for some proposition P.”
————————————

Aren’t you talking about formal systems complex enough to fall under Gödel Inc.? I ask because I wondered why you stipulated Peano Arithmetic? Systems much weaker than PA support Gödel Inc. Also I think that theorems not provable cannot be listed (which doesn’t disagree with you). Originally, Gödel used PM, Principia Mathematica.

Re: effect of foundations on “real” mathematics

Aren’t you talking about formal systems complex enough to fall under Gödel Inc.?

Yes.

I ask because I wondered why you stipulated Peano Arithmetic? Systems much weaker than PA support Gödel Inc.

Yeah, but fewer people have heard of those other systems.

When I learned this stuff in school, I think my teachers used
the system Q, also known as Robinson arithmetic. And that’s very nice, because it drops mathematical induction, which is an axiom schema — so it just has finitely many axioms, but Gödel’s theorems still apply.

But if I said “Robinson arithmetic”, fewer people would understand what I was trying to get at.

Re: effect of foundations on “real” mathematics

John wrote:

I don’t really see what’s so creepy about this — it just seems obvious.

Well, gee, thanks. I guess I’m just stupid then.

But to me it seems that the stuff about Diophantine equations and NBG is just a distraction. You can turn any question about whether a proposition follows from some recursively enumerable set of axioms in first-order logic into the question about whether some Diophantine equation has a solution. But why bother? It amounts to writing programs in a pathetically inconvenient programming language.

Not that it matters at this point, since I think Daniel has explained the matter to my satisfaction (and you summarized it sort of), but the point of those machinations was to cover some conceivable objections I thought one could possibly raise, having to do with issues of logical complexity (nested alternations of ∃\exists and ∀\forall), which I suspected were irrelevant to my concerns but which I didn’t feel like addressing either. So it was just a maneuver to forestall possible objections.

If you found it distracting, feel free to ignore it.

But it’s easy to prove that if your system is consistent, you can’t write any program that computes an upper bound on this time.

Yes, that’s more or less the point. To me it was a little weird at first to think that NBG is powerless to provide a definitive upper bound on a specific constant like S(1000)S(1000), S(50)S(50), whatever it is, since these are provably well-defined according to Rado’s work, but the creepiness factor has definitely abated since I’ve gotten more used to it. In fact, it’s practically gone by now.

Re: effect of foundations on “real” mathematics

To me it was a little weird at first to think that NBG is powerless to provide a definitive upper bound on a specific constant like S(1000)S(1000), S(50)S(50), whatever it is, since these are provably well-defined according to Rado’s work

You don’t need anything so fancy as Turing machines and the busy beaver to find a specific constant on which NBG is powerless to provide a definitive upper bound (assuming NBG is consistent). Just define NN to be the length of the shortest proof of 0=10=1 in NBG, if one exists, and 00 otherwise. At least assuming classical logic, NN is a perfectly well-defined natural number, but if NBG could prove an upper bound on it then it could prove its own consistency by additionally checking all proofs up to that length.

Re: effect of foundations on “real” mathematics

John wrote:

I don’t really see what’s so creepy about this — it just seems obvious.

Todd wrote:

Well, gee, thanks. I guess I’m just stupid then.

Humph. I’d never suggest that. And you obviously know tons more logic than I do. Maybe I’ve just spent more time on this sort of puzzle. When you were learning useful stuff like category theory, I was wasting my time on computability and decidability.

Here’s something similar that really did weird me out:

Pick a recursively enumerable consistent set of axioms extending Peano arithmetic. And pick a programming language. Then there’s a constant KK for which you can’t prove that any particular integer requires a program of length ≥K\ge K to print it out.

Note: only finitely many integers can be printed out by programs of length <K\lt K. After all, there are only finitely many such programs! And this is easy to prove. So, we can certainly use our axioms to prove that infinitely many integers can’t be printed by programs of length <K\lt K. But we can’t prove this for any particular integer.
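The counting step here is elementary: over a binary alphabet there are exactly 2^K − 1 programs of length less than K. A minimal sketch (not tied to any particular programming language or machine model):

```python
def num_programs_shorter_than(K):
    """Count binary strings of each length n < K: 2**n of them,
    summing to 2**K - 1.  So at most 2**K - 1 integers can be
    printed by programs of length < K."""
    return sum(2 ** n for n in range(K))
```

So infinitely many integers need programs of length at least K, yet (by the theorem) no particular integer can be proved to be one of them.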

In more rhapsodic terms: we know things can be arbitrarily complicated, but there’s a limit on how complicated we can prove any particular thing is.

The constant KK depends on the programming language and the axiom system, but given this information we can compute an upper bound on it. I suspect it’s not very big. Unfortunately I can never get anyone to tell me an upper bound, and I’m too lazy to work it out myself!

Re: effect of foundations on “real” mathematics

When you were learning useful stuff like category theory, I was wasting my time on computability and decidability.

I didn’t know that about your past (although you did mention once that you wanted to do a senior thesis under Simon Kochen on quantum theory and logic, so I guess you had been doing some logic and presumably some recursion theory prior to that). Yes, as may be obvious from some comments, my knowledge of recursion theory is admittedly slender.

Unfortunately I can never get anyone to tell me an upper bound, and I’m too lazy to work it out myself!

Sorry, what was that link to again? You were trying to ask people over google groups or something?

Re: effect of foundations on “real” mathematics

Todd wrote:

I didn’t know that about your past…

I’ll take this as an excuse to ramble on a bit about my interest in logic.

When I was about 10 or so, my father checked van Heijenoort’s sourcebook on logic, From Frege to Gödel, out of the public library and gave it to me. It had a big impact on me, even though I didn’t understand much. I later discovered that my dad thought the title was From Frege to Googol. He knew I liked big numbers, and he thought this was a book about those.

Starting roughly around then, I became quite concerned about how we can know anything — and what it means to know something. My dad also had a book of essays by Ayer, and this pushed me in the direction of logical positivism. This was later tempered by an interest in Nietzsche, Buddhism, and Taoism.

In my junior year of high school I didn’t take any math classes, because I’d already finished calculus. That made me feel bad, so in my senior year I took some math classes at George Washington University. The two that made a big impact were taught by David Nelson: one on the propositional calculus and predicate calculus, and another on axiomatic set theory. I think both were taught out of Kleene’s Mathematical Logic. I remember the beginning of the second one, where Nelson held a few pieces of chalk in his hand and said “We think we know what a set is: something like this. But it’s not so simple.”

In my freshman year at college I took a course taught by Paul Benaceraff using Boolos and Jeffrey’s book Computability and Logic. It was very fun and easy, probably because there were a lot of nonmathematicians taking it.

Sometime shortly thereafter I took a course on Gödel’s incompleteness theorems taught by Saul Kripke. He made us do lots of homeworks full of huge well-formed formulas, because he didn’t want to raise a generation of students who took the Church–Turing thesis on faith. We had to actually write down the predicate PP in Robinson arithmetic such that P(n)P(n) iff nn was the Gödel number of a proposition that was provable in Robinson arithmetic. Luckily we were allowed to use abbreviations for various simpler functions and predicates required to define this one. But I remember a friend seeing my homeworks with enormous pencil-written formulas, full of smeared erasures where I’d made corrections, and saying that it looked depressing.

I graduated in 3 years. In my 2nd year I asked Simon Kochen if he would supervise a ‘junior thesis’ on mathematical logic and quantum mechanics. He asked me if I knew the spectral theorem. I said no, and he said “Well, then, I’m afraid I’m not really interested…” As I told you earlier, this may have been why I went into functional analysis. His remark pissed me off so much that I spent the summer before my senior year reading Reed and Simon’s Methods of Modern Mathematical Physics I: Functional Analysis, learning the spectral theorem. I fell in love with functional analysis, took a grad course on it from Elliot Lieb in my senior year, and went on to work with Irving Segal in grad school.

The next year I asked Edward Nelson if he would supervise a senior thesis on this topic. He asked me to explain it to him; I did, and he said he wasn’t interested. Grrr! I wasn’t having much luck talking to these guys. But I stubbornly persisted and managed to get John Burgess in the philosophy department to be my advisor.

I wrote the thesis, handed it to him, and then he told me I’d reinvented a bunch of standard stuff: for example, the definition of a computable function from the real numbers to the real numbers, or from a separable Hilbert space to a separable Hilbert space. He told me to read Moschovakis’ Descriptive Set Theory and use its notation and theorems instead of developing everything from scratch. So I did: I rewrote my thesis that way.

Meanwhile I was taking a class on axiomatic set theory that did the proof that the axiom of choice was consistent with ZFZF, using Jech’s Set Theory. But I didn’t enjoy this very much, and so I never went on to learn forcing. Philosophically I’d become much less interested in how we can know mathematical truths and much more concerned with how to decide what I should do — i.e., what if any basis there was for any sort of morality, or even any decision whatsoever. The independence of the axiom of choice seemed of piddling importance compared to this.

This problem threw me into a kind of depression — or perhaps my depression led me to dwell heavily on this problem. I spent my years in graduate school battling with this while simultaneously turning away from mathematical logic towards mathematical physics. I was pretty miserable lots of the time.

I solved this problem about 2 years after grad school, but that’s another story. I only got interested in logic again when I learned of its relation to category theory.

Re: effect of foundations on “real” mathematics

Well, the main point of that link was to provide people here with enough information to compute the bound:

For any finite consistent first-order axiom system AA extending the
usual axioms of arithmetic, let K(A)K(A) be the constant such that
no bit string can be proved, using AA, to have algorithmic entropy
greater than K(A)K(A). We can’t compute K(A)K(A) exactly, but there’s a
simple upper bound for it. As Gacs explains (in his Corollary 5.2), for some constant cc we have:

K(A)<L(A)+2log2L(A)+cK(A) \lt L(A) + 2 \log_2 L(A) + c

where L(A)L(A) denotes the length of the axiom system AA, encoded as
bits as efficiently as possible. I believe the constant cc is
computable, though of course it depends on details like what
universal Turing machine you’re using as your computer.

What I want to know is, how big in practice is this upper
bound on K(A)? I think it’s not very big! The main problem
is to work out a bound on cc.
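Plugging numbers into the displayed inequality is trivial once cc is known; the unknown machine constant is the whole problem. A sketch, where the value of `c` used in the example is purely illustrative and made up:

```python
import math

def gacs_upper_bound(L, c):
    """Upper bound on K(A) from the inequality
    K(A) < L(A) + 2*log2(L(A)) + c  (Gacs, Corollary 5.2).
    L is the length in bits of the axiom system A, efficiently
    encoded; c depends on the universal Turing machine chosen."""
    return L + 2 * math.log2(L) + c

# e.g. an axiom system compressible to 1 kilobit, with an
# illustrative (made-up) machine constant c = 1000:
bound = gacs_upper_bound(1024, 1000)  # 1024 + 2*10 + 1000 = 2044
```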

The secondary point was to note that I’ve been asking about this for at least 12 years!

You were trying to ask people over google groups or something?

‘Google Groups’? Pshaw! This was the heyday of the usenet newsgroup sci.physics.research, when ‘Google Groups’ probably didn’t exist — or at least Google hadn’t taken over the world yet.

Re: effect of foundations on “real” mathematics

Well, the main point of that link was to provide people here with enough information to compute the bound:

[…]

The secondary point was to note that I’ve been asking about this for at least 12 years!

You were trying to ask people over google groups or something?

‘Google Groups’? Pshaw! This was the heyday of the usenet newsgroup sci.physics.research, when ‘Google Groups’ probably didn’t exist — or at least Google hadn’t taken over the world yet.

And you’ll note who I was talking to back then!

It’s clear that when I click on the link you originally provided, I’m seeing something different from what you intended. What I see looks like some front page for Google groups. So no, I can’t note who you were talking with (someone over at sci.physics.research?): I have no idea who or what you’re talking about! Don’t know what other people who clicked your link saw.

But then you embed another link (within the quote you just gave, I guess from whatever you intended to link to) with the name ‘Gacs’ – that sends me to a page from some lecture notes of Peter Gacs, which I can read fine.

Re: A Dual for Set?

I’m really finding it hard to believe that there is an undecidable statement (assuming, say, transfinite induction) regarding solutions to a finite set of integer equations.

It may be hard to believe, but it’s pretty well-known to be true.

Take any statement that’s undecidable in your favorite axiom system — like “ZFC is consistent”, which is undecidable in ZFC. Then you can write down a Diophantine equation which has a solution iff this statement is true. And people have actually done it!

For example, Theorem 1 in James Jones’ paper ‘Undecidable Diophantine equations’ gives an explicit Diophantine equation in 12 unknowns and four integer parameters x,z,u,yx,z,u,y such that the equation has a solution if and only if x∈Wz,u,yx \in W_{z,u,y}. Here the set Wz,u,yW_{z,u,y} can be any recursively enumerable set you want — you just have to pick the integers z,u,yz,u,y correctly.

For example, by picking these numbers right, you can get W_{z,u,y} to be the list of Gödel numbers of provable theorems in ZFC. Then take x to be the Gödel number of the proposition ‘1 = 0’. So then you’ve got a Diophantine equation in 12 variables that has a solution iff ZFC is inconsistent!

Or, if you prefer, you can replace ZFC by ZFC supplemented with some axiom asserting the existence of some sort of ridiculously large cardinal. And then you’ve got a Diophantine equation that has a solution iff it’s inconsistent for such a cardinal to exist!

More generally: no matter how absurdly esoteric or undecidable some question about the foundations of mathematics may be, there’s a Diophantine equation in 12 variables whose solvability depends on this question. And we can write it down explicitly.
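The first half of this story (“has a solution iff …”) is easy to see concretely: solvability of a Diophantine equation is semi-decidable, because you can search through all tuples of naturals and halt as soon as one works. A minimal sketch in Python; the search procedure and the Pythagorean example are my own illustration, not the equation from Jones’ paper:

```python
from itertools import count, product

def diophantine_search(p, nvars):
    """Semi-decision procedure for Diophantine solvability.

    Searches all tuples of naturals in ever-growing boxes and halts
    exactly when p(...) == 0 for some tuple.  By the MRDP theorem no
    algorithm can also detect the *absence* of solutions, so for an
    unsolvable equation this loops forever.
    """
    for bound in count():
        for xs in product(range(bound + 1), repeat=nvars):
            if p(*xs) == 0:
                return xs

# Toy example: positive solutions of x^2 + y^2 = z^2.  The side
# condition x, y, z >= 1 is encoded by returning a nonzero value
# for tuples that violate it.
pythagorean = lambda x, y, z: x*x + y*y - z*z if min(x, y, z) >= 1 else 1
print(diophantine_search(pythagorean, 3))  # (3, 4, 5)
```

This is exactly why the solvable equations form a recursively enumerable set; deciding unsolvability is the part that runs into Gödel.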

(By the way, there’s no need for the sets W_{z,u,y} to be indexed by three integers. One will do. And it would have been cuter if Jones had stated the result that way.)

(Also by the way: where I said ‘you just have to pick the integers z, u, y correctly’, I should add that there’s nothing mysterious about how to do this. They may be big, but they’re computable.)

Re: A Dual for Set?

I think the problem may be that I’m misunderstanding the word undecidable. If all it means is that one can’t prove that a solution doesn’t exist, I’m fine with that. I was interpreting it to mean that the existence of a solution was independent of ZFC (or whatever).

Re: A Dual for Set?

I think the problem may be that I’m misunderstanding the word undecidable. If all it means is that one can’t prove that a solution doesn’t exist, I’m fine with that. I was interpreting it to mean that the existence of a solution was independent of ZFC (or whatever).

Those are the same thing. Being independent just means that you can’t prove it or disprove it. By the Completeness Theorem, that’s the same as saying that it’s true in some models and false in others.

Re: A Dual for Set?

Re: A Dual for Set?

Aaron Bergman wrote: “So, you are saying that there exists a Diophantine equation such that a solution exists in some models of ZFC but not in others? I’m back to not believing then.”

SH: Recall that Gödel’s incompleteness theorem says that any sufficiently complex system will be either inconsistent or incomplete: there will be true but unprovable statements in that formal mathematical system. Now Gödel says you can fix that by adding axioms which will establish consistency. Here is an example; ZFC is standardly known to be incomplete.

Handbook of Mathematical Logic. J. Barwise editor, p. 343:

“Our first large cardinal axiom states that there is an inaccessible cardinal.
The first thing to show is that it really is a new axiom, i.e., that it is not
provable in ZFC. We sketch how this is done.
What we need is a model of ZFC in which the new axiom is false. Now if there is no inaccessible cardinal, the class of all sets furnishes such a model. Now suppose that a is the least inaccessible cardinal. Since a and R(a) satisfy the three principles of Section 3, we suspect that R(a) is a model of ZFC. This is indeed the case; the proof rather resembles our derivation of the axioms of ZFC from the principles of Section 3. The fact that the new axiom does not hold in R(a) follows (with some work) from the fact that there is no inaccessible cardinal in R(a).

Does the new axiom have any interesting consequences? There is at least one. A famous theorem of Godel says that the consistency of ZFC cannot be proved in ZFC. This consistency can be proved from the new axiom. The crucial fact has already been stated: if a is inaccessible, then R(a) is a model of ZFC. Once we have a set which is a model of ZFC, it is easy to prove that ZFC is consistent.”

SH: This is an example of Godel’s comment that adding a new axiom can change which statements are true but not provable in some particular formal axiomatic system. However, Godel notes that when you add such an axiom to fix one problem, in the new system, the Incompleteness will continue to hold for different true but unprovable statements.

The Matiyasevich–Robinson–Davis–Putnam result, which was used to answer Hilbert’s tenth problem, was used by Torkel Franzén to prove Gödelian incompleteness while avoiding self-referential statements. Wikipedia writes:

“Hilbert’s challenge sought an algorithm which finds all solutions of a Diophantine equation. … Matiyasevich showed this problem to be unsolvable by mapping a Diophantine equation to a recursively enumerable set and invoking Gödel’s Incompleteness Theorem.”

Re: A Dual for Set?

Aaron wrote:

I think the problem may be that I’m misunderstanding the word undecidable. If all it means is that one can’t prove that a solution doesn’t exist, I’m fine with that. I was interpreting it to mean that the existence of a solution was independent of ZFC (or whatever).

Mike explained it already, but I want to emphasize: we can write down a Diophantine equation for which the existence of solutions is independent of ZFC (or whatever).

Indeed, the situation may be worse than you realize. You may think you know what the ‘standard model’ of arithmetic is: the natural numbers

0, 1, 2, 3, 4, …

with their usual operations. But those ‘…’ conceal a lot of mystery!

You see, there are also lots of ‘nonstandard’ models of arithmetic with extra natural numbers besides the ‘standard’ ones.
They obey all the axioms of Peano arithmetic, but they’re different.
In particular, there are Diophantine equations that have solutions in some of these ‘nonstandard’ models but not the ‘standard’ one. And we can write down such equations explicitly.

And no matter how many axioms you throw in to Peano arithmetic, as long as they are consistent and recursively enumerable, they cannot rule out all these other ‘nonstandard’ models. If the ‘standard’ natural numbers obey these axioms, so will infinitely many ‘nonstandard’ models. And there will always be Diophantine equations with solutions in these ‘nonstandard’ models but not the ‘standard’ one.

You may say “Forget those nonstandard models! I only care about the standard one!”

But there’s no way for you to write down axioms that single out the ‘standard’ one!

Re: A Dual for Set?

But isn’t the canonical copy of the naturals in ZF always standard? I naively assumed that in ZF, although arbitrary sets could be non-standard, our favorite von Neumann ordinals up to ω would always be standard.

Re: A Dual for Set?

Re: A Dual for Set?

The Other Tom wrote:

But isn’t the canonical copy of the naturals in ZF always standard?

I’m not sure what you mean by the ‘canonical copy’. Toby gave you one answer. My intuition says that every model of the Peano axioms contains the standard model as a submodel. If so, the standard model is canonical in some sense: namely, it’s minimal. So — for example — if a Diophantine equation has a solution in the standard model, it has a solution in all other models.

I hope some expert comments on this intuition of mine.

But even if this intuition is right, it still leaves us unable to write down a recursively enumerable set of axioms extending Peano arithmetic that picks out the standard model among all the rest.

Can we say anything about the meaning of the provability predicate when it’s given a non-standard natural as an argument?

e.g. for a Gödel sentence G, coded by the (standard) natural m, there must be a model where Prov(n, m) holds for a non-standard natural n. Can we ascribe any meaning there?

Somewhere I think I saw a very nice introduction to nonstandard models of Peano arithmetic which comments on this issue. Was it my old undergraduate textbook, Boolos and Jeffrey’s Computability and Logic?

For now I’ll just quote the Wikipedia on how to build some nonstandard models:

A second application of the compactness theorem shows that any theory that has arbitrarily large finite models, or a single infinite model, has models of arbitrary large cardinality (this is the Upward Löwenheim–Skolem theorem). So, for instance, there are nonstandard models of Peano arithmetic with uncountably many ‘natural numbers’. To achieve this, let T be the initial theory and let κ be any cardinal number. Add to the language of T one constant symbol for every element of κ. Then add to T a collection of sentences that say that the objects denoted by any two distinct constant symbols from the new collection are distinct (this is a collection of κ^2 sentences). Since every finite subset of this new theory is satisfiable by a sufficiently large finite model of T, or by any infinite model, the entire extended theory is satisfiable. But any model of the extended theory has cardinality at least κ.
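In symbols, the extended theory built in the quoted argument is (writing c_α, for α < κ, for the new constant symbols; this is a sketch of the standard statement):

```latex
T_\kappa \;=\; T \;\cup\; \{\, c_\alpha \neq c_\beta \;:\; \alpha < \beta < \kappa \,\}
```

Any finite subset of T_κ mentions only finitely many of the new constants, so it is satisfiable in a large enough finite model of T (or any infinite one) by interpreting those constants as distinct elements; compactness then yields a model of all of T_κ, which must have cardinality at least κ.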

When interpreted as a proof within a first-order set theory, such as ZFC, Dedekind’s categoricity proof for PA shows that each model of set theory has a unique model of the Peano axioms, up to isomorphism, that embeds as an initial segment of all other models of PA contained within that model of set theory. In the standard model of set theory, this smallest model of PA is the standard model of PA; however, in a nonstandard model of set theory, it may be a nonstandard model of PA. This situation cannot be avoided with any first-order formalization of set theory.

The countable nonstandard models of Peano arithmetic are pretty freaky:

It is natural to ask whether a countable nonstandard model can be explicitly constructed. It is possible to explicitly describe the order type of any countable nonstandard model: it is always ω + (ω* + ω) · η, which can be visualized as a copy of the natural numbers followed by a dense linear ordering of copies of the integers. However, a theorem by Stanley Tennenbaum, proved in 1959, shows that there is no countable nonstandard model of PA in which either the addition or multiplication operation is computable. This result shows it is difficult to be completely explicit in describing the addition and multiplication operations of a countable nonstandard model of PA.
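The order type itself, by contrast, is completely explicit: an initial copy of the naturals followed by densely many copies of the integers. Here is a small Python illustration with an ad-hoc encoding of my own: standard numbers are tagged ('std', n), and each nonstandard Z-block is indexed by a positive rational q, with integer position k inside the block:

```python
from fractions import Fraction

def order_key(e):
    """Sort key realizing 'naturals, then densely ordered Z-blocks'.

    ('std', n)    : the initial copy of the natural numbers.
    ('blk', q, k) : position k (any integer) inside the Z-block
                    indexed by the positive rational q; the
                    rationals supply the dense ordering of blocks.
    """
    if e[0] == 'std':
        return (0, Fraction(e[1]), 0)
    return (1, e[1], e[2])

sample = [
    ('blk', Fraction(3, 4), 0),
    ('std', 5),
    ('blk', Fraction(1, 2), 2),
    ('std', 0),
    ('blk', Fraction(1, 2), -3),
]
print(sorted(sample, key=order_key))
```

Tennenbaum’s theorem says it is the addition and multiplication on such a model that cannot be made computable; the order alone, as above, is harmless.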

Re: A Dual for Set?

I found your last post quite instructive. I collect mathematical logic and computability books, so from “Computability and Logic”, Boolos et al, perhaps this is a good quote.

“Note that for a correct ∀-rudimentary sentence ∀x A(x), we can conclude that each numerical instance A(0), A(1), A(2), … is provable from the axioms of Q, but this is not to say that ∀xA(x) itself is provable from the axioms of Q, and in general it is not. There are nonstandard interpretations of the language of arithmetic on which all the axioms of Q come out true, but some very simple ∀-universal sentences that are correct or true on the standard interpretation come out false. Works on set theory develop an extremely natural nonstandard model of Q, called the system of ordinal numbers, for which, among others, laws as simple as 1 + x = x + 1 fail. It would take us too far afield to stop to develop this model here, but some of its features are hinted at by the nonstandard interpretations of Q indicated in the problems at the end of the chapter. As we have already said, the fact that Q is a weak theory makes the following theorem (which automatically applies to any theory T containing Q) a strong theorem:

16.14 Lemma. Every rudimentary function is representable in Q (and by a rudimentary formula).”

Re: A Dual for Set?

I was looking at that Wikipedia page the other night when thinking about this, but, as best I can tell, it neglects the final Peano axiom (maybe I should mumble something about second order here?). My understanding is that with all the axioms included, PA describes the natural numbers up to isomorphism. There seem to be weaker things out there, but I’m not really sure why I should care.

Re: A Dual for Set?

as best I can tell, it neglects the final Peano axiom (maybe I should mumble something about second order here?). My understanding is that with all the axioms included, PA describes the natural numbers up to isomorphism.

You shouldn’t mumble something about “second order”—you should shout it from rooftops! Saying that the second-order Peano axioms determine ℕ up to isomorphism doesn’t say much more than saying that any model of ZFC contains a well-defined set called ℕ, since second-order axioms require the choice of a background set theory in order to be any stronger than first-order ones. But different choices of that background set theory (e.g. different choices of your model of ZFC) will result in different sets of natural numbers. I mentioned this here and here.

Re: A Dual for Set?

If I give you a solution to a Diophantine equation, I can phrase that solely using the Peano axioms (having established that every natural number is some successor of “zero”). Does it not therefore have to hold in all models thereof?

Re: A Dual for Set?

Aaron Bergman wrote:
If I give you a solution to a Diophantine equation, I can phrase that solely using the Peano axioms (having established that every natural number is some successor of “zero”). Does it not therefore have to hold in all models thereof?

—————————————————————

The Wiki gives an example of adding to the Peano axioms. Is that still a model of PA? How about if you remove an axiom?

Wiki:
In 1977, Paris and Harrington proved that the Paris-Harrington principle, a version of the Ramsey theorem, is undecidable in the axiomatization of arithmetic given by the Peano axioms but can be proven to be true in the larger system of second-order arithmetic.

Handbook of Mathematical Logic:
“Our first large cardinal axiom states that there is an inaccessible cardinal. The first thing to show is that it really is a new axiom, i.e., that it is not provable in ZFC. …
Does the new axiom have any interesting consequences? There is at least one.
This consistency can be proved from the new axiom.” SH: Is this still a model of ZFC?

——————————————————————

SH: What is a model? Planetmath:

“It is important to underline the qualitative difference between specifying a model actually and only in principle, since only for a restricted class of cases is it possible to actually specify a model, namely that class of cases in which the domain of the interpretation is finite. Thus it is possible to actually produce a realization of the abstract structure of a group, by choosing a finite group specifiable by a product table that one can actually compute. Such a model shows that the structure has a realization. The problem arises first when one meets with relatively simple systems of axioms for which a finite model can not be specified. …

But an infinite domain is not a concrete totality which we can consider to be an immediate object of perception, and so its introduction needs to be justified as much as the abstract theory that it was supposed to justify.

It would seem prima facie that the implicit definition of the integers, by means of axioms such as the Dedekind–Peano axioms, would be a paradigm case for the introduction of infinite totalities. But such a definition would be in turn dependent on an abstract axiomatic theory.

A model for such a theory would again be required and we would be therefore unable to justify the introduction of an infinite totality.”
—————————————
Wiki:
“ZFC minus the axiom of infinity already implies the Peano axioms which, as you know, are subject to Godel’s incompleteness theorems. Thus, in particular, ZFC without the axiom of infinity is consistent and strictly weaker than ZFC with the axiom of infinity. Moreover, if one gets the details right, ZFC minus the axiom of infinity plus its negation is equiconsistent with Peano arithmetic.”
——————————————————————-

SH: The next example shows that the fundamental assumption for PA that you made with Successor is still an arbitrary choice. Why does Successor seem so basic?
It’s because humans evolved in an environment where their perception interpreted reality as a string of causes and effects, causality, which was tied to their notion of time passing as a series of events, one after another. But this supposedly self-evident mathematical truth would not be so clear if we had evolved in an environment where time didn’t move only forward.
Do you think that non-locality and Feynman’s conjecture about electrons moving backwards in time support an axiomatic assumption of the successor concept as a primitive? Probably not, if our original physical perceptions of reality also included causality and time fluctuations.

—————————————————————-

Dan E. Willard:
“We will study several weak axiom systems that use the Subtraction and Division primitives (rather than Addition and Multiplication) to formally encode the theorems of Arithmetic. Provided such axiom systems do not recognize Multiplication as a total function, we will show that it is feasible for them to verify their Semantic Tableaux, Herbrand, and Cut-Free consistencies. If our axiom systems additionally do not recognize Addition as a total function, they will be capable of recognizing the consistency of their Hilbert-style deductive proofs. … A variation of Lob’s Theorem is also valid for systems α which recognize none of Addition, Multiplication or even Successor as total functions.”

Re: A Dual for Set?

If I give you a solution to a Diophantine equation, I can phrase that solely using the Peano axioms (having established that every natural number is some successor of “zero”). Does it not therefore have to hold in all models thereof?

If you give me an actual solution (assuming the existence of some ‘actual’ mathematical reality), then yes, it holds in all models. But some models have solutions to additional equations; these solutions are nonstandard.
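Concretely, checking a claimed solution is a single finite computation: plug the numerals in and evaluate. A toy sketch (the Pell equation and witness are my own example):

```python
def check_solution(p, xs):
    """Verify a claimed Diophantine solution by plain evaluation.

    Because the witnesses are concrete numerals, this check unwinds
    into a finite, quantifier-free calculation that every model of
    PA agrees on -- which is why an actual solution holds in all
    models.
    """
    return p(*xs) == 0

# Pell equation x^2 - 2*y^2 = 1, with witness (3, 2): 9 - 8 = 1.
pell = lambda x, y: x*x - 2*y*y - 1
print(check_solution(pell, (3, 2)))  # True
```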

If you don't believe in the existence of any ‘actual’ mathematical reality, then the previous paragraph still makes sense relative to appropriate background assumptions. But it has no absolute meaning.

Let us go back to assuming the existence of some ‘actual’ mathematical reality (or just fix some background assumptions). Even so, just because you can prove the existence of a solution, that doesn't mean that you can write it down as a numeral, that is as the result of applying the successor operation to zero for some number of times. You can, of course, prove that you can write it down as a numeral, but that doesn't mean that you actually can, even in principle; the question is whether your proof is valid. After all, it's one thing to be a mathematical realist, and yet another to be a realist who believes that some given axiom system is correct.

Assume that PA is consistent. Then some person P who believes that PA + ¬Con(PA) is correct has strange but consistent beliefs. Give P a Diophantine equation E (with all coefficients and exponents expressed directly as numerals) whose solubility expresses the inconsistency of PA; then P may announce that P has proved that E has a solution. But P will never be able to write down that solution explicitly as a numeral. And you, knowing that PA is consistent, will never accept their proof that E has a solution. Nevertheless, there is a model of PA in which E has a solution, indeed a model in which P's proof is valid. It is a nonstandard model, with nonstandard numbers.

Re: A Dual for Set?

I’ll come back to the rest later, but in the meantime, it would be helpful for me to clarify the following point. When people in this thread say a model of PA, does that include the second order induction axiom?

Re: A Dual for Set?

When people in this thread say a model of PA, does that include the second order induction axiom?

I think no, just the first-order induction axiom. That is what I'm doing.

What is a model of PA with the second-order induction axiom? You can use Henkin semantics, and then this is equivalent to a model of a first-order theory, sometimes known as second-order arithmetic or elementary analysis (although I'm not sure if either of these has a fixed standard meaning, or if so exactly what they are), which includes a rudimentary set theory (far weaker than ZFC) and is slightly stronger than PA. The same basic ideas apply.

Or you can take a model in sets themselves, in which case what is true of models depends very much on what you think is true of sets. As such, this is not particularly useful for studying the logical foundations of mathematics. Even for metamathematics in general, the nice theorems, such as those relating syntactic consistency to the existence of a model, don't work the same way. (I'm not sure of the details here.)

Re: A Dual for Set?

The problem is that I’m just a poor physicist here, and when I hear someone tell me that there exists a Diophantine equation whose solution set is independent of ZFC, I think of a Diophantine equation as an algebraic equation with integer coefficients, and I think of integers as those things that satisfy the Peano axioms I learned at CTY many summers ago and which include the second order induction axiom for the express purpose of getting rid of annoying nonstandard integers.

Now, it seems like a fun exercise to think about Diophantine equations over some first order version of Peano arithmetic, but I’m not sure I see why it’s necessary. Is there anything wrong with the integers that I know and love?

Re: A Dual for Set?

The problem is that I’m just a poor physicist here, and when I hear someone tell me that there exists a Diophantine equation whose solution set is independent of ZFC, I think of a Diophantine equation as an algebraic equation with integer coefficients, and I think of integers as those things that satisfy the Peano axioms I learned at CTY many summers ago and which include the second order induction axiom for the express purpose of getting rid of annoying nonstandard integers.

OK, and then you're understanding it in the way that we mean. One note: the solution set may be independent of PA (better: whether the solution set is inhabited — that is, whether there is a solution — is undecidable in PA) and yet decidable in ZFC. Yet there are other equations whose solution set is (if ZFC is consistent at all) undecidable in ZFC.

The coefficients (and exponents) in this Diophantine equation may be written explicitly as standard numerals, with a string of successor operations applied to zero. (Or you can ban coefficients and exponents, using only repeated addition and multiplication.) Somewhere in here, somebody quoted a reference that did this, although it was using a finite system of Diophantine equations, and one of the exponents was as large as 5^{60} (which means that nobody has ever actually written it out literally as a string of successor operations applied to zero, much as we might think this possible in principle).

It is the solution that may be nonstandard; the equation itself is complicated but explicit.
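For a sense of scale (a quick back-of-envelope computation, not from the quoted reference):

```python
n = 5 ** 60
# 5^60 is about 8.7e41, a 42-digit number; writing it as a unary
# numeral S(S(...S(0)...)) would take about 8.7e41 successor symbols.
print(len(str(n)))  # 42
```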

Re: A Dual for Set?

it seems like a fun exercise to think about Diophantine equations over some first order version of Peano arithmetic, but I’m not sure I see why it’s necessary.

It’s probably not necessary for you at all, as a physicist.

This whole discussion began with a question asking for examples from “real mathematics” in which the choice of a model for a foundational theory matters, or equivalently a statement in “real mathematics” which is independent of a foundational theory such as ZFC. I think the solvability of a particular Diophantine equation is undoubtedly a question of “real mathematics,” and it can be independent of ZFC, so that’s why I brought it up. However, problems in “real mathematics” which do depend on foundations are certainly quite rare, and I’m not aware of any of them that really have any bearing on physics.

which include the second order induction axiom for the express purpose of getting rid of annoying nonstandard integers.

As we’ve been saying, the second-order induction axiom only has that effect relative to the ambient model of set theory (and it only makes sense relative to some ambient model of set theory). It rules out models of PA containing integers that are “nonstandard” from the point of view of the ambient set theory; but that ambient set theory might itself contain integers that it regards as “standard” (so its second-order induction axiom can’t get rid of them) while a different model of ZFC regards them as “nonstandard”.

Is there anything wrong with the integers that I know and love?

Yes and no. There’s nothing wrong with working with “the integers” in the way you’re familiar with; that can probably all be formalized inside (say) ZFC, and ZFC can prove that “the integers” are uniquely defined, so that in any model of ZFC there is a uniquely characterized set called “the integers.” Different models of ZFC might have different sets that they call “the integers,” but all of them will satisfy the PA axioms (including second-order induction relative to the ambient model of sets), so anything you prove about “the integers” will be true about the set called “the integers” in any model of ZFC. The point is just that there are some statements about “the integers”, like the solvability of certain Diophantine equations, that you can’t prove or disprove, because they will be true in some sets of “the integers” and false in others.

Re: A Dual for Set?

One of the initially baffling and frustrating features of this sort of statement is the feeling you get, if you are at all of a Platonic persuasion, that you know perfectly well which natural number system you are talking about, namely the standard one, because you can see it in front of you and point to it. And, clearly, just by inspection, this theorem, while it may be true in some nonstandard natural numbers, is not true in the standard natural numbers—and you know because you can see them and see what they are like—and, besides, you have a proof, a simple, valid proof, that the statement is (say) false in the standard natural numbers.

All of which is perfectly reasonable and a quite correct way of thinking, right up to the point where you encounter a statement whose shortest proof is too complicated (or has a supporting foundational infrastructure which is too complicated) for you to ever understand: for instance, if the proof is so long that it would take you longer than your lifetime to read it, or longer than your lifetime to drive past the journal articles needed to contain it, or (if you are inclined to take things on authority) that the proof and its supporting concepts are so complex that the entire human race, during the entire course of its history, no matter how advanced it becomes and no matter how much artificial aid it is able to call on, hasn’t the slightest possibility of discovering it. Obviously such theorems exist (in the Platonic sense)—there’s no limit to the complexity of theorems. And obviously, since you have no chance of actually encountering a proof of the truth or falsity of the statement—certainly not one you can comprehend—you are going to be a little hazy on whether the statement is, in fact, true or false, in any given model; and since its truth or falsity is a distinguishing feature of whether it refers to standard or non-standard natural numbers, you’re going to be hazy about whether you are talking about the standard natural numbers or not.

Now, of course, if you were very much cleverer (and a very much faster reader) than you are, you could look at the statement, glance through the proof, and say “Yes, but clearly this statement cannot hold for the standard natural numbers, only for non-standard ones.” So this extra cleverness would give you further understanding of (or intuition about) the standard natural numbers, sufficient to distinguish them from non-standard ones to the level of the statement in question. But then there would be other, even harder theorems whose proofs you couldn’t understand, and so it would go on: no matter how clever you are, and no matter how fast you can read, your ability to distinguish standard from non-standard numbers peters out at some level of difficulty.

The game changes a little (though not, of course, fundamentally) if you have, say, a countably infinite number of neurons in your brain and can read infinitely fast (so that it only takes you a finite time to read through all ℵ0 pages of this month’s issue of The Platonic Journal of Mind-Bogglingly Hard Maths and absorb their contents). With this many neurons, you can implement a stronger-than-first order form of logic directly in your brain. For instance, you can understand and remember a conjunction of ℵ0 simple propositions, such as the conjunction of an assertion about each standard natural number individually. But you face a difficulty—apart from having to answer a sackload of snarky letters each day from first-order logicians asking you to prove that you don’t have a non-standard brain, which of course you can’t, even though, in fact, you don’t. The problem is that your more powerful logic has vastly greater expressive power, and can consequently express a vastly greater number of statements, some of them about natural numbers, but many of which (my intuition tentatively suggests ℵ1, but that is really a guess) are too complicated for you to understand, let alone prove or disprove. And so the problem goes on, never resolving itself, no matter how clever you are—even if you have some ludicrously vast cardinal number of neurons and can read incomprehensibly quickly. In fact, the cleverer you are, the more statements there are that you can express but not understand or at least not prove or disprove (but it’s the sense of intuitive understanding that’s the real issue here).

To go back to the world of Diophantine equations: some Diophantine equations are just, ineluctably, more difficult to deal with than others; and there is no limit to the difficulty involved; so you can never grapple with all of them, and consequently you can never fully grasp the quiddity of the standard naturals as opposed to non-standard ones: at some point, things inevitably get a bit fuzzy in your mind.

In a sense, this is an extension of the fact that one is subject to all kinds of natural limits simply by virtue of being a mere human being, except that it’s not really a question of the human part of the deal: you will be subject to some kind of limits no matter what kind of being you are.

non-standard math in physics

Aaron Bergman wrote, certainly expressing the feeling of several other readers, too:

I’m just a poor physicist here

Mike Shulman replied

It’s probably not necessary for you at all, as a physicist.

Or is it? I was wondering about this more than usual recently.

I am reminded of the remark Johnstone has in the intro of his old Topos Theory book on how MacLane asserted that topos theory will make category theory be less abstract.

There is no explanation given there, but if I understand correctly, the point is that with topos theory under control, one is bound to discover more models for much abstract nonsense in real life and hence have it become much more concrete nonsense.

Sorry for the overlong introduction, here the overlong middle part:

I suppose even if we accept standard natural numbers and standard math as our ambient context for setting up math (my understanding from what Todd tried to teach me/us is that, coming out of the void, we start doing math for instance by assuming that by grace we can all understand naive ETCS, and then the rest of math takes place formally inside that), then we are bound to eventually run into non-standard contexts as soon as we feel the need to set up certain toposes in which and with which to model whatever we feel the need to model, notably physical reality.

Now finally the overlong punchline:

So in particular I may want to set up a topos with which to model the notion of the collection of spaces that I am concerned with in physics. This is clearly a practical question. I am not sure how this will sound, but I am claiming that it helps, concretely and not very mysteriously, to think about such toposes for doing concrete, hands-on, even computational, physics.

Anyway, so there is a particular topos here that is very well suited for these modelling needs in physics. And the claim is that, more or less implicitly, it is actually being used a lot: it is the topos 𝒵 or its sibling ℬ here.

That topos naturally has a natural numbers object: it’s the natural numbers regarded as a “smooth space” in a certain way. And it turns out that this is a very non-standard natural numbers object. In fact it is non-standard in the very sense of non-standard analysis.

And this is not only not an esoteric mystery here, but actually quite reasonable and helpful precisely from a physicist’s point of view: due to the presence of this non-standard natural numbers object, for instance, distributions (see the last section here) do exist in this topos as ordinary naive functions, in exactly the way most physicists like to think about them.

So somehow, indeed, there is a “non-standard math” context (topos) here, which is in many respects very close to naive physical intuition and concretely very useful.

I have to admit that I am not entirely sure what all this discussion here about non-standard foundations and math implies about this particular topos. But I am now seriously interested in finding out and am feeling that learning about it won’t be entirely irrelevant for the physicist inside me.

If nothing else, this obvious easy-to-understand sheaf topos does provide me with a hands-on example to fall back on as a reassuringly sensible example whenever someone forces me to think about non-standard integers!

Re: A Dual for Set?

AB: …does that include the second order induction axiom?
——————————-

I don’t think so; rather, the Wiki says an _induction schema_:
“The second-order axiom of induction can be transformed into a weaker first-order induction schema; the first eight of Peano’s axioms together with the first-order induction schema form a first-order axiomatization of arithmetic called Peano arithmetic (PA). This schema avoids quantification over sets of natural numbers, which is impossible in first-order logic.
Although the usual natural numbers satisfy the axioms of PA, there are other non-standard models as well; the compactness theorem implies that the existence of nonstandard elements cannot be excluded in first-order logic. The upward Löwenheim–Skolem theorem shows that there are nonstandard models of PA of all infinite cardinalities. This is not the case for the original (second-order) Peano axioms, which have only one model, up to isomorphism. This illustrates one way the first-order system PA is weaker than the second-order Peano axioms.”
———————————————————————–
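The contrast the quotation describes can be written out explicitly; here is a sketch in standard notation:

```latex
% Second-order induction: a single axiom quantifying over all subsets P
\forall P \subseteq \mathbb{N}\;
  \Bigl[ \bigl( P(0) \wedge \forall n\,(P(n) \to P(n+1)) \bigr)
         \to \forall n\, P(n) \Bigr]

% First-order induction schema: one axiom for each formula \varphi(x)
% in the language of arithmetic -- only countably many instances,
% which is how quantification over sets of naturals is avoided
\bigl( \varphi(0) \wedge \forall n\,(\varphi(n) \to \varphi(n+1)) \bigr)
  \to \forall n\, \varphi(n)
```

Since there are only countably many formulas φ but uncountably many subsets of ℕ, the schema is genuinely weaker than the single second-order axiom.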

Roman Kossak and Jim Schmerl “The Structure of Models of Peano Arithmetic”
“It now seems almost obligatory to include nonstandard models in introductory courses on mathematical logic.
The point that they help to emphasize is the limited expressive power of first order logic: there are mathematical structures (one of them being the most classical of mathematical structures the standard model of arithmetic) which are indistinguishable with respect to their first-order properties but whose isomorphism types are dramatically different. The drama is personified by nonstandard elements. However, the discussion at the introductory level
usually ends here, leaving out the complex picture obtained by a closer scrutiny of the spectrum of isomorphism types of nonstandard models. Once we know that non-standard models exist, it is very natural to ask how different they are from the standard one and also from each other. In other words, we would like to know to what extent the first-order theory of a model of arithmetic determines properties that are not first-order expressible. A priori, there is no guarantee that the possible answers would be relevant to other developments in model theory.
…
Stanley Tennenbaum proved in a famous unpublished paper that in no nonstandard model can either addition or multiplication have a recursive presentation. The MacDowell-Specker Theorem involves a refinement of Skolem’s method to show that every model of Peano Arithmetic (PA) has an elementary end extension.

Rabin proved that every model M of PA has an elementarily equivalent extension which solves a Diophantine equation having coefficients in M but having no solutions in M. Rabin’s result predates Matiyasevich’s solution to Hilbert’s 10th problem from which it followed that an extension is Σ_1 iff it does not solve new Diophantine equations.”

Re: A Dual for Set?

Aaron wrote:

If I give you a solution to a Diophantine equation, I can phrase that solely using the Peano axioms (having established that every natural number is some successor of “zero”). Does it not therefore have to hold in all models thereof?

Note that one thing you can’t say in Peano arithmetic is that every natural number is some successor of zero. And this, in some sense, is the whole point.

You can try to say “For every n, either n = 0 or n = S0 or n = SS0 or …”, but this is an infinitely long sentence.

You can try to say “For every n there exists m such that n = S^m 0”. When you translate this into Peano arithmetic, you’ll wind up saying something like “For every n there exists m such that n = 0 + m”. And this is provable in Peano arithmetic: just take m = n. But the problem is: m could be nonstandard.

In short, there’s no way to add an axiom that says “Every natural number is standard” in a way that actually accomplishes anything. You can say it, but the evil powers of infinity can always interpret what you said in a way you didn’t ‘mean’. Which should make you wonder if you actually know what you mean.

Anyway:

If you give me a solution to a Diophantine equation that’s written out explicitly in terms of successors of zero, like how x^2 + y^2 = z^2 has the solution x = SSS0, y = SSSS0, z = SSSSS0, we can use Peano arithmetic to prove it’s a solution, so it will be a solution in every model.

That’s not the problem.

The problem is all the Diophantine equations where we can neither use Peano arithmetic to prove they have a solution, nor prove they don’t. Of course if you knew you were in this situation, you’d know the equation does not have a solution in the standard natural numbers. But when you’re in this situation you can never use Peano arithmetic to prove you’re in this situation!
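The point that explicitly-given solutions are mechanically checkable, and hence hold in every model, can be illustrated with a toy sketch; all the function names here are made up for illustration:

```python
def numeral(n):
    """Build the numeral S...S0 (n copies of S) for a standard number n."""
    return "S" * n + "0"

def value(term):
    """Read off the number a numeral denotes by counting its S's."""
    assert set(term) <= {"S", "0"} and term.endswith("0")
    return len(term) - 1

def check_pythagorean(x_term, y_term, z_term):
    """Check x^2 + y^2 = z^2 for explicit numerals. Any such check
    unwinds to a finite calculation, hence to a finite PA proof, hence
    to a fact holding in every model of PA, standard or not."""
    x, y, z = value(x_term), value(y_term), value(z_term)
    return x * x + y * y == z * z

print(check_pythagorean(numeral(3), numeral(4), numeral(5)))  # True
```

The check is a finite computation with concrete numerals, which is exactly the kind of statement PA settles outright; it is only the unbounded existential “some solution exists” that can vary between models.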

Re: A Dual for Set?

Note that one thing you can’t say in Peano arithmetic is that every natural number is some successor of zero.

I thought the whole point of the induction axiom was to do precisely that.

But beyond that, I (and I would think most people) would define a solution to a Diophantine equation to be something that can be written in terms of successors of zero. I’m perfectly willing to accept that one may never be able to prove whether or not such solutions exist, but surely if such a solution exists in one model, it exists in all of them.

Re: A Dual for Set?

Note that one thing you can’t say in Peano arithmetic is that every natural number is some successor of zero.

I thought the whole point of the induction axiom was to do precisely that.

Its purpose is to do that indirectly, but you can't say that in the language of PA. Whether it succeeds in doing this, even indirectly, is debatable.

I would define a solution to a Diophantine equation to be something that can be written in terms of successors of zero.

Then you're in luck (or not in luck, as the case may be), since what you have defined is what a logician would call a standard solution to the equation. And a standard solution is a solution in all models.

But I hope that you appreciate why logicians also consider nonstandard solutions in nonstandard models. There's no way to say, within the language of PA, that a number is standard. Relative to some metalanguage (which might actually be PA again, but let's assume that it's ZFC to keep from confusing the object language and the metalanguage), you can say that the number is standard, but that metalanguage (or rather, any consistent theory in that language whose theorems are recursively enumerable, such as ZFC if it's consistent) has its own nonstandard models in which ‘standard’ has a nonstandard meaning.

To apply an absolute meaning to words like ‘standard’ (and hence ‘solution’), you need to have some absolute perspective on mathematical truth. And that is what some of us don't believe exists.

This means that induction applies not just to the countable set of predicates that you can actually write down, but to ‘all’ predicates.

When I studied logic, I only learned about first-order logic and the first-order formulation of Peano arithmetic. I think a lot of logic textbooks focus on this approach,
probably because there are a lot of well-known powerful theorems that apply to first-order logic, like the compactness and completeness theorems.

Note that one thing you can’t say in Peano arithmetic is that every natural number is some successor of zero.

Aaron wrote:

I thought the whole point of the induction axiom was to do precisely that.

It tries its best, but all it says is that if a predicate P satisfies P(0) and P(n) ⇒ P(n+1), then it satisfies ∀n P(n). This can hold even if there are lots of natural numbers besides the ones you consider ‘standard’. The sneaky extra nonstandard ones just need to do whatever is necessary for this axiom to hold! And there are lots of ways for them to achieve this.

This problem applies, in somewhat different ways, to both the first-order and second-order formulations.

Here’s another way to put the problem:

We all agree that we’re mainly interested in the ‘standard’ natural numbers 0,1,2,…0,1,2,\dots, not any weird extra ‘nonstandard’ ones. The problem is explaining what we mean by this.

Suppose you try to say that a standard natural number is ‘some successor of 0’. But what do you mean by ‘some successor of 0’? Do you mean “the nth successor of 0 for some natural number n”? But that’s not good enough to rule out nonstandard natural numbers! After all, n is the nth successor of 0, even when n is nonstandard!

So, you might like to say “n is standard when it’s the nth successor of 0 for some standard n”. But that’s circular.
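For what it's worth, the induction principle under discussion can be stated and used in a proof assistant; here is a minimal sketch in Lean 4 syntax (note that Lean's `Nat` is an inductive type of the metatheory, so this doesn't escape the circularity; it just relocates it to the meta-level):

```lean
-- Induction as a theorem: if P holds at 0 and is preserved by successor,
-- then P holds of every n : Nat. The proof uses Lean's built-in recursor,
-- so "every n" means every element of the metatheory's Nat.
theorem myInduction (P : Nat → Prop)
    (h0 : P 0) (hs : ∀ n, P n → P (n + 1)) : ∀ n, P n := by
  intro n
  induction n with
  | zero => exact h0
  | succ k ih => exact hs k ih
```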

Re: A Dual for Set?

Some other people, like Mike Shulman, seem to mean a theory in second-order logic.

What did I say that made you think I meant that? (-: I’ve always thought that “Peano Arithmetic” was, by default, first-order, just like the replacement axiom. But some other people have brought up a second-order version, so I felt obliged to say something about it.

To be perfectly clear: I don’t think second-order logic has any foundational status, since for interpretation it depends on a background set theory. It has its uses, but I’d kind of rather it weren’t called “logic” at all but something different. In particular, it doesn’t do anything at all to avoid the philosophical implications of the Incompleteness Theorems; it just “pushes the problem to the meta-level.”

Re: A Dual for Set?

I don’t think second-order logic has any foundational status, since for interpretation it depends on a background set theory.

Agreement with Quine, then, who described second-order logic as “set theory in sheep’s clothing”:

Followers of Hilbert have continued to quantify predicate letters, obtaining what they call a higher-order predicate calculus. The values of these variables are in effect sets; and this way of presenting set theory gives it a deceptive resemblance to logic. One is apt to feel that no abrupt addition to the ordinary logic of quantification has been made; just some more quantifiers governing predicate letters already present. In order to appreciate how deceptive this line can be, consider the hypothesis ‘(∃y)(x)(x ∈ y . ↔ Fx)’. It assumes a set {x : Fx} determined by an open sentence in the role of ‘Fx’. This is the central hypothesis of set theory, and the one that has to be restrained in one way or another to avoid the paradoxes. This hypothesis itself falls dangerously out of sight in the so-called higher-order predicate calculus. It becomes ‘(∃G)(x)(Gx ↔ Fx)’, and thus evidently follows from the genuinely logical triviality ‘(x)(Fx ↔ Fx)’ by an elementary logical inference. Set theory’s staggering existential assumptions are cunningly hidden now in the tacit shift from schematic predicate letter to quantifiable set variable. (Quine, Philosophy of Logic 1970, 68)

Re: A Dual for Set?

Corfield appending a Quine-quotation:

Followers of Hilbert have continued to quantify predicate letters, obtaining what they call a higher-order predicate calculus. The values of these variables are in effect sets; … (Quine, Philosophy of Logic 1970, p. 68).

But predicates are not in effect sets — unless by effect one means extension — predicates are dual to sets.

First vs. second order logic

MS: I don’t think second-order logic has any foundational status, since for interpretation it depends on a background set theory.

First order logic also depends for its interpretation on a background set theory.
You cannot define its notions without the latter. For you need the notion of a sequence to define what it means to be a formula or a proof, and this notion depends on a background set theory.

Thus first order logic hasn’t any foundational status, according to your metalogic.

MS: all mathematics I’ve ever seen can be formalized in first-order logic (such as ZFC). In particular, the incompleteness theorem for first-order logic, and the existence of standard models for second-order theories (relative to a fixed background set theory), can all be proven in an ambient first-order logic.

Similarly, all mathematics I’ve ever seen can be formalized in second-order logic (such as ZF+global choice+an induction like axiom fixing a standard model up to isomorphism). In particular, the incompleteness theorem for first-order theories, and the existence of standard models for second-order theories (relative to a fixed background set theory), can all be proven in an ambient second-order logic.

The most powerful current automated theorem provers are based not on first-order logic but on higher-order logic.
And they can, in principle, give proofs for everything that can be proved at all, including for theories posed in first-order logic.

AN: You cannot discuss the limitations of first order logic by means of first order logic itself.

MS: I think this is basically false; all mathematics I’ve ever seen can be formalized in first-order logic.

That all mathematics can be formulated in first order logic (FOL) does not imply that the latter is the true foundation. By the same argument, one can make an equal case for higher order logic (HOL).

Now different foundations make different things true in metamathematical studies. FOL and HOL are not equivalent in this respect.

Thus it is a matter of one’s philosophy of real mathematics whether one considers FOL the holy grail of foundations, or something that lacks important features, such as the essential uniqueness of the natural number system.

TT: show me this model of ZF+choice you think exists quite trivially. You can’t.

The free ZF-algebra with 0 generators is
(given the consistency of ZF) the trivially existing model of ZF, and one for ZF+global choice can be found similarly.

At least, it is as good a model as any model of ZF+choice you can show me.

TT: But what we can agree on is a first-order theory to work within, such as ZFC (or something else, like ETCS).

We can agree (or not) as well on a higher order theory to work with. Whether we agree, and on what we agree, will depend a lot on our philosophy of real mathematics. Given that this is very different between us, it is unlikely that we’ll agree (except for the purpose of a particular discussion on technical issues).

TT: many and perhaps most model theorists and set theorists are platonists: they accept the objective existence of a world of sets.

You may include me and most mathematicians there as well.

One needs something objective to start with, otherwise one cannot get off the ground. FOL is no better starting point than naive set theory with quantification restricted to elements from a set. It is a matter of faith which one to choose, and the latter is the one easier to intuit, easier to defend, and (as shown by the majority of mathematicians) easier to believe.

MS: ever since I learned Gödel’s Incompleteness Theorem, I’ve never really understood why people believe that there exists a “standard” set of natural numbers. I’m not even sure what sort of “existence” is meant here.

AN: I guess the reason is that mathematicians do not think that first order logic (on which Goedel’s result are based) fully describes mathematical intuition.

First order logic does not correctly describe the essential uniqueness of the natural number system that “people” take for granted, while second order logic does. This can with full right be seen as an essential limitation of FOL.

It explains naturally what you cannot understand since you give FOL an elevated place it simply does not have in the eyes of “people”.

Peano Arithmetic & Peano Dots

MS: I don’t think second-order logic has any foundational status, since for interpretation it depends on a background set theory.

AN: First order logic also depends for its interpretation on a background set theory. You cannot define its notions without the latter. For you need the notion of a sequence to define what it means to be a formula or a proof, and this notion depends on a background set theory.

Back in the day when I was making my first foray into Principia Mathematica, one of the first things I noticed was Whitehead and Russell’s overly complex adaptation of what was originally a very simple idea for associating parts of formulas, namely, Peano Dots.

Here’s the catch, you can’t even parse the formulas unless you can count.

That won’t mean much to Peano players who imagine themselves graced with hands of infinite span, but it made a manifest impression on this programmer, when he sat down to play at his Peano, and it laughed right back at him.
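The counting point can be made concrete. In Peano–Russell dot notation, the connective flanked by the larger groups of dots has the wider scope, so even locating the main connective of a formula requires counting dots. A toy sketch, with a simplified notation and a made-up function name:

```python
import re

def main_split(formula):
    """Locate the main connective of a (simplified) dot-notation formula.

    The connective flanked by the most dots has the widest scope, so we
    must count the dots in each group just to parse the formula at all.
    Connectives here are limited to '>', 'v', '&', '=' for illustration.
    """
    best = None
    for m in re.finditer(r"(\.*)([>v&=])(\.*)", formula):
        rank = len(m.group(1)) + len(m.group(3))  # total flanking dots
        if best is None or rank > best[0]:
            best = (rank, m.start(), m.end(), m.group(2))
    if best is None:
        return formula.strip()  # no connective: an atomic formula
    _, start, end, op = best
    return (formula[:start].strip(), op, formula[end:].strip())

# 'p>q .>. p>r' splits at the dotted '>', not at either inner '>':
print(main_split("p>q .>. p>r"))  # ('p>q', '>', 'p>r')
```

Without the dot-counting (the `rank` comparison), the parser would grab the leftmost connective and misread the formula entirely.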

Re: A Dual for Set?

as best I can tell, it neglects the final Peano axiom (maybe I should mumble something about second order here?)

and you replied:

You shouldn’t mumble something about “second order” — you should shout it from rooftops!

It sounded like you were willing to stand on a tall building and loudly proclaim your support for a second-order version of the Peano axioms. But now I see you were just trying to make Aaron feel comfy. Now you say:

I’ve always thought that “Peano Arithmetic” was, by default, first-order, just like the replacement axiom. But some other people have brought up a second-order version, so I felt obliged to say something about it.

Okay. I’m relieved that for you, like me, the default modern meaning of “Peano Arithmetic” is first-order. I’m relieved because during this conversation I started worrying that maybe lots of logicians mean the second-order version. After all, it seems the version Peano himself espoused was second-order. The Wikipedia article seems to start with the second-order version and only later talk about the first-order one. But sometimes they’re too historically accurate for their own good: for example, they define ‘Planck’s constant’ to be h, which is what Planck said — even though every modern physicist says it’s ℏ = h/2π.

Re: A Dual for Set?

Wiki:
“The Peano axioms contain three types of statements. The first four statements are general statements about equality; in modern treatments these are often considered axioms of pure logic. The next four axioms are first-order statements about natural numbers expressing the fundamental properties of the successor operation. The ninth, final axiom is a second order statement of the principle of mathematical induction over the natural numbers.
A weaker first-order system called Peano arithmetic is obtained by replacing this second-order induction axiom with a first-order axiom schema.”

Re: A Dual for Set?

Peano Axioms = second-order

Peano Arithmetic = first-order

Yes, this is the standard usage.

Peano encoded the common intuition about the integers, which necessitated second-order logic, whereas treatments in logic books prefer a first-order treatment because of its nicer theory, and hence defined Peano Arithmetic (PA) as a first-order version preserving most properties of the natural numbers, but not their standardness.

In view of the Löwenheim–Skolem theorem, this is an unavoidable defect of any first-order treatment.

Re: A Dual for Set?

I have been going through some of Stephen Harris’ comments and fixing this mistake when I see it — and also removing the icky horizontal lines he uses to separate portions of text. Stephen: please know that you can create nice-looking comments like everyone else if you read this.

Re: A Dual for Set?

Yes, I will switch to blockquote. I will also try Wikipedia and see if it comes out blue. You think more like Penrose, in images, you like graphs. I don’t think like that so icky dotted lines just seem functional to me, not aesthetic.
But blockquotes are elegant from my pov also. Thanks for great explanations and the intriguing areas you bring up for all of us to explore.

Re: A Dual for Set?

Ok, will do. I guess I got into this lazy habit from seeing the top/first Google report:

Wiki - Wikipedia, the free encyclopedia

I practiced the blockquote advice too. I have a fundamental question, I haven’t seen it covered in a book, maybe it’s too minor and is mentioned in the classroom?

When does an axiomatic system (I mean quantifiers too etc.) cease to be a model of a named Arithmetic and become a newly named Arithmetic? For instance, when the induction schema is removed from Peano Arithmetic (PA) it becomes Robinson Arithmetic. Is it now no longer a model?

Suppose one of the axioms is removed from PA, so that I believe it is now deemed decidable. Is that still a model of PA, or is it mandatory to rename it to *_Arithmetic?

My intention is to find out if _only_ a nonstandard model of PA creates the situation where a Diophantine equation is solvable in one model of PA but not in others, as was explained to Aaron. I kept thinking that axiomatic structures in general were the most basic way (higher in the hierarchy) to explain differences in what models can do. This idea isn’t correct if only a nonstandard model of PA can produce the situation where a Diophantine equation is solvable in one model but not in another. The explanation provided in this thread only mentioned nonstandard models; is a nonstandard model the only example of a model which can bring about the Diophantine-equation situation, a difference in result or capability? Is that why you guys just talked about nonstandard ones? I’ve been thinking that one could disable or change some other aspect of the underlying PA axiomatic structure, other than nonstandardness, and still have a model which produced different behaviors regarding the solving of Diophantine equations. I guess I’ve dug myself a hole.

Re: Rooftops

Read the post Syntax, Semantics, and Structuralism, I
Weblog: The n-Category Café
Excerpt: A brief summary of syntax and semantics of type theory and logic, as a basis for continuing discussions about structural and material foundations.
Tracked: October 19, 2009 10:04 PM

Read the post Feferman Set Theory
Weblog: The n-Category Café
Excerpt: (Strong) Feferman set theory provides a set-theoretic foundation for category theory that avoids many of the problems with other approaches such as Grothendieck universes.
Tracked: November 30, 2009 6:01 AM