We showed that $\mathsf{AD}$ implies a weak version of choice, $\mathsf{AC}_\omega(\mathbb{R})$, namely, every countable family of non-empty sets of reals admits a choice function. This implies that $\omega_1$ is regular and suffices to develop classical analysis in a straightforward fashion (in particular, to construct Lebesgue measure and to prove its basic properties).

Coupled with the fact that all sets of reals have the perfect set property, this implies that $\omega_1^{L[x]}<\omega_1$ for any real $x$ and therefore $\omega_1$ is strongly inaccessible in $L[x]$ for any real $x$.

We closed the course by showing that, in fact, $\omega_1$ is a measurable cardinal. We proved this result of Solovay by showing Martin’s result that the “cone measure” is indeed a non-atomic measure on the structure of the Turing degrees and then “pulling back” this measure to $\omega_1$.

Finally, given any measurable cardinal $\kappa$, let $\mu$ be a ($\kappa$-complete, non-principal) measure on $\kappa$. Then $L[\mu]$ is a model of choice in which $\kappa$ is measurable. In particular, $\mathrm{Con}(\mathsf{ZF}+\mathsf{AD})$ implies $\mathrm{Con}(\mathsf{ZFC}+\text{“there is a measurable cardinal”})$.

Since, under choice, any measurable cardinal is strongly inaccessible and a limit of strongly inaccessible cardinals, this shows that $\mathsf{AD}$ has significant consistency strength.

As discussed during Lecture 13, for the theories one encounters when studying set theory, no absolute consistency results are possible, and we rather look for relative consistency statements. For example, the theories $T_1=\mathsf{ZFC}+$“There is a weakly inaccessible cardinal” and $T_2=\mathsf{ZFC}+$“There is a strongly inaccessible cardinal” are equiconsistent. This means that a weak theory (much less than $\mathsf{PA}$ suffices) can prove $\mathrm{Con}(T_1)\leftrightarrow\mathrm{Con}(T_2)$. Namely: since every strongly inaccessible cardinal is weakly inaccessible, $T_1$ is in effect a subtheory of $T_2$, so its inconsistency implies the inconsistency of $T_2$. Conversely, assume $T_2$ is inconsistent and fix a (say, Hilbert-style) proof $(\varphi_0,\dots,\varphi_n)$ of an inconsistency from $T_2$. Then a proof of an inconsistency from $T_1$ can be found by showing (by induction on $i$) that each relativization $\varphi_i^L$ is a theorem of $T_1$ (the point being that a weakly inaccessible cardinal is strongly inaccessible in $L$, since $L$ satisfies $\mathsf{GCH}$), and this argument can be carried out in a theory (such as $\mathsf{PA}$) where the syntactic manipulations of formulas that this involves are possible.
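The translation behind the nontrivial direction can be displayed schematically. This is only a sketch of the standard argument, writing $T_1=\mathsf{ZFC}+$“there is a weakly inaccessible cardinal”, $T_2=\mathsf{ZFC}+$“there is a strongly inaccessible cardinal”, and $\sigma^L$ for relativization to the constructible universe:

```latex
% Schema for Con(T_1) -> Con(T_2).
For every axiom $\sigma$ of $T_2$,
\[
  T_1 \vdash \sigma^L,
\]
since $L \models \mathsf{GCH}$, so a weakly inaccessible cardinal is strongly
inaccessible in $L$. Hence, if $(\varphi_0,\dots,\varphi_n)$ is a proof of a
contradiction from $T_2$, then by induction on $i$,
\[
  T_1 \vdash \varphi_i^L \qquad (i \le n),
\]
and since the relativization of a contradiction (say, $0 \neq 0$) is again a
contradiction, $T_1$ is inconsistent as well. The induction is purely
syntactic, so it can be verified in $\mathsf{PA}$, giving
\[
  \mathsf{PA} \vdash \mathrm{Con}(T_1) \rightarrow \mathrm{Con}(T_2).
\]
```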

It is a remarkable empirical fact that the combinatorial statements studied by set theorists can be measured against a linear scale of consistency, calibrated by the so-called large cardinal axioms, of which strongly inaccessible cardinals are perhaps the first natural example. Hypotheses as unrelated as the saturation of the nonstationary ideal or determinacy have been shown equiconsistent with extensions of $\mathsf{ZFC}$ by large cardinals. One direction (that models with large cardinals generate models of the hypothesis under study) typically involves the method of forcing and will not be further discussed here. The other direction, just as in the very simple example of weak vs strong inaccessibility, typically requires showing that certain transitive classes (such as $L$) must have large cardinals of the desired sort. We will illustrate these ideas by obtaining large cardinals from determinacy in the last lecture of the course.

We defined the axiom of determinacy $\mathsf{AD}$. It contradicts choice but it relativizes to the model $L(\mathbb{R})$. This is actually the natural model to study and, in fact, from large cardinals one can prove that $L(\mathbb{R})\models\mathsf{AD}$.

We illustrated basic consequences of $\mathsf{AD}$ for the theory of the reals by showing that it implies that every set of reals has the perfect set property (and therefore a version of $\mathsf{CH}$ is true under $\mathsf{AD}$). Similar arguments give that $\mathsf{AD}$ implies that all sets of reals have the Baire property and are Lebesgue measurable. In the last lecture of the course we will use the perfect set property of sets of reals to show that the consistency of $\mathsf{AD}$ implies the consistency of strongly inaccessible cardinals. (Though this is beyond the scope of this course, by using more sophisticated ideas, one can prove the optimal stronger result that the consistency of $\mathsf{AD}$ implies the consistency of “there exist infinitely many Woodin cardinals”.)

In “Relativizations of the $P=?NP$ question” by Theodore Baker, John Gill and Robert Solovay, SIAM J. Comput. 4 (1975), no. 4, 431–442 (available through JSTOR), it is shown that the question of whether $P$ equals $NP$ cannot be solved with the kind of arguments typical in computability theory, since these arguments relativize to Turing machines with oracles. Among the results shown there, two oracles $A$ and $B$ are found such that $P^A=NP^A$ and $P^B\neq NP^B$. I discussed these results during 117c, the course on decidability in computability theory.

There have been some recent attempts (not very successful, and not too serious in my opinion) to show that the P vs NP question is independent of $\mathsf{PA}$ or even stronger systems. Apparently, part of the motivation for trying to show independence comes from the results in the Baker-Gill-Solovay paper.

I reproduce below a posting by Timothy Chow to the Foundations of Mathematics list where this motivation is shown lacking.

Here is an amusing observation regarding the idea that the existence of contradictory relativizations of assertions such as $P=NP$ is evidence that said assertions are independent of some strong theory. I doubt this observation is new, but I haven’t seen it explicitly before.

We can write down (thanks to Levin, I think) an explicit machine $M$ with the property that, if $P=NP$, then $M$ solves SAT in polynomial time. (Essentially, $M$ multitasks over all polytime algorithms until it jackpots.)
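The dovetailing idea behind such a machine is easy to sketch. The following is only a toy illustration, not Levin's actual construction: the names `verify`, `universal_search`, and `brute_force` are mine, and real universal search interleaves all programs with exponentially weighted time shares. What the sketch does capture is the key point that correctness rests entirely on the (polynomial-time) verifier, so a bad candidate algorithm can never cause a wrong answer.

```python
def verify(clauses, assignment):
    """Check a candidate assignment against a CNF formula given in DIMACS
    style: the literal 3 means x3, and -3 means (not x3)."""
    return all(any((lit > 0) == assignment.get(abs(lit), False) for lit in clause)
               for clause in clauses)

def universal_search(clauses, strategies):
    """Dovetail over an enumeration of candidate 'algorithms' (here, generators
    of candidate assignments), advancing each live generator one step per round
    and returning the first candidate that verifies."""
    live = [make(clauses) for make in strategies]
    while live:
        for gen in list(live):
            try:
                candidate = next(gen)
            except StopIteration:
                live.remove(gen)
                continue
            if verify(clauses, candidate):
                return candidate
    return None  # every strategy ran out of candidates

def brute_force(clauses):
    """One sample 'strategy': enumerate all assignments of the variables
    mentioned in the formula."""
    names = sorted({abs(lit) for clause in clauses for lit in clause})
    for mask in range(2 ** len(names)):
        yield {v: bool((mask >> i) & 1) for i, v in enumerate(names)}

# (x1 or x2) and ((not x1) or x2) is satisfied by making x2 true.
print(universal_search([[1, 2], [-1, 2]], [brute_force]))
# → {1: False, 2: True}
```

If $P=NP$, some polytime algorithm in the enumeration succeeds on satisfiable inputs, and the verifier certifies its output; this is why the machine can be written down explicitly even though we cannot point to which strategy wins.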

Suppose now that a statement such as

$S$: “$M$ correctly solves SAT in at most $n^{100}+100$ steps on length-$n$ inputs”

is independent of $\mathsf{PA}$ (for example). The statement $S$ is stronger than the statement that $P=NP$, but we might imagine that if “$P=NP$” is independent of $\mathsf{PA}$ then something like $S$ will also be independent of $\mathsf{PA}$.

Now $S$ is $\Pi^0_1$, so if it is independent of $\mathsf{PA}$ then it is true, and therefore $P=NP$. It follows that $NP\neq EXP$, by the time hierarchy theorem for example.

This line of reasoning can itself be formalized, and this shows that in some system (call it $\mathsf{PA}^+$) that is slightly stronger than $\mathsf{PA}$, we can prove “if $S$ is independent of $\mathsf{PA}$, then $NP\neq EXP$.” This in turn means that if we can prove “$S$ is independent of $\mathsf{PA}$” in $\mathsf{PA}^+$, then we certainly can’t prove “$NP=EXP$ is independent of $\mathsf{PA}^+$.”

Informally speaking, the upshot is that since we know that $P\neq EXP$, it is probably too much to expect that both “$P=NP$” and “$NP=EXP$” are provably independent of strong systems. On the other hand, both “$P=NP$” and “$NP=EXP$” admit contradictory relativizations. So it seems we should be wary of drawing too tight a connection between contradictory relativizations and logical independence.

The only reference I know for precisely these matters is the handbook chapter MR2768702. Koellner, Peter; Woodin, W. Hugh. Large cardinals from determinacy. In Handbook of set theory. Vols. 1, 2, 3, 1951–2119, Springer, Dordrecht, 2010. (Particularly, section 7.) For closely related topics, see also the work of Yong Cheng (and of Cheng and Schindler) on Harr […]

As other answers point out, yes, one needs choice. The popular/natural examples of models of ZF+DC where all sets of reals are measurable are models of determinacy, and Solovay's model. They are related in deep ways, actually, through large cardinals. (Under enough large cardinals, $L({\mathbb R})$ of $V$ is a model of determinacy and (something stronge […]

Throughout the question, we only consider primes of the form $3k+1$. A reference for cubic reciprocity is Ireland & Rosen's A Classical Introduction to Modern Number Theory. How can I count the relative density of those $p$ (of the form $3k+1$) such that the equation $2=3x^3$ has no solutions modulo $p$? Really, even pointers on how to say anything […]
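One can at least gather data on the question numerically. Note that $2=3x^3$ has a solution mod $p$ exactly when $c=2\cdot 3^{-1}\bmod p$ is a cube mod $p$, and for $p\equiv 1\pmod 3$ the cubic analogue of Euler's criterion says that a nonzero $c$ is a cube iff $c^{(p-1)/3}\equiv 1\pmod p$. The function names below are mine, and the code only counts; it does not settle what density cubic reciprocity predicts.

```python
def is_prime(n):
    """Naive trial division; fine for the small ranges sampled here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_cube_mod(c, p):
    """For a prime p = 3k+1 and c not divisible by p, c is a cube mod p
    iff c^((p-1)/3) == 1 (mod p)."""
    return pow(c, (p - 1) // 3, p) == 1

def count_no_solution(bound):
    """Among primes p = 3k+1 below `bound`, count those for which
    2 = 3x^3 has no solution mod p, i.e. 2 * 3^(-1) mod p is a non-cube."""
    total = without = 0
    for p in range(7, bound):
        if p % 3 == 1 and is_prime(p):
            total += 1
            c = 2 * pow(3, -1, p) % p  # pow(3, -1, p): modular inverse (Python 3.8+)
            if not is_cube_mod(c, p):
                without += 1
    return without, total

without, total = count_no_solution(1000)
print(without, total)
```

For instance, $p=7$ is counted ($2\cdot 3^{-1}\equiv 3$ is not a cube mod $7$, the cubes being $\{1,6\}$) while $p=13$ is not ($2\cdot 3^{-1}\equiv 5\equiv 7^3$). Heuristically a "random" residue is a cube for a third of such primes, so comparing the computed fraction against $2/3$ is a natural first experiment.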

(1) Patrick Dehornoy gave a nice talk at the Séminaire Bourbaki explaining Hugh Woodin's approach. It omits many technical details, so you may want to look at it before looking again at the Notices papers. I think looking at those slides and then at the Notices articles gives a reasonable picture of what the approach is and what kind of problems remain […]

It is not possible to provide an explicit expression for a non-linear solution. The reason is that (it is a folklore result that) an additive $f:{\mathbb R}\to{\mathbb R}$ is linear iff it is measurable. (This result can be found in a variety of places, it is a standard exercise in measure theory books. As of this writing, there is a short proof here (Intern […]
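To see what a non-linear additive function looks like when choice is available, here is the standard Hamel-basis sketch (folklore, and compatible with the quoted result since the function produced is non-measurable):

```latex
% A non-linear additive f : \mathbb{R} \to \mathbb{R} from a Hamel basis.
Let $B$ be a Hamel basis of $\mathbb{R}$ as a vector space over $\mathbb{Q}$,
and fix $b_0 \in B$. Every real $x$ has a unique finite expansion
\[
  x = \sum_{b \in B} q_b(x)\, b, \qquad q_b(x) \in \mathbb{Q},
\]
and each coordinate map $q_b$ is $\mathbb{Q}$-linear. Set $f(x) = q_{b_0}(x)$.
Then
\[
  f(x + y) = q_{b_0}(x + y) = q_{b_0}(x) + q_{b_0}(y) = f(x) + f(y),
\]
but $f$ is not of the form $f(x) = cx$, since $f(b_0) = 1$ while $f(b) = 0$
for every $b \in B \setminus \{b_0\}$. By the result quoted above, such an
$f$ cannot be Lebesgue measurable.
```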

The usual definition of a series of nonnegative terms is as the supremum of the sums over finite subsets of the index set, $$\sum_{i\in I} x_i=\sup\biggl\{\sum_{j\in J}x_j:J\subseteq I\mbox{ is finite}\biggr\}.$$ (Note this definition does not quite work in general for series of positive and negative terms.) The point then is that if $a< x
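The finite-subset definition can be checked concretely for a small family; the helper name below is mine. For a nonnegative family the supremum is approached by taking ever larger finite subsets, while the parenthetical caveat about mixed signs shows up immediately: the supremum over finite subsets simply ignores the negative terms.

```python
from fractions import Fraction
from itertools import combinations

def finite_subset_sums(xs):
    """All sums over finite subsets J of the index set of `xs`
    (the family here is itself finite, so every subset occurs)."""
    n = len(xs)
    sums = []
    for k in range(n + 1):
        for J in combinations(range(n), k):
            sums.append(sum((xs[j] for j in J), Fraction(0)))
    return sums

# Nonnegative family x_i = 1/2^i, i = 0..9: the largest finite-subset sum
# takes every index, giving 2 - 2^(-9), close to the full series value 2.
xs = [Fraction(1, 2) ** i for i in range(10)]
print(max(finite_subset_sums(xs)))  # → 1023/512

# With mixed signs the recipe breaks: for the family (1, -1) the supremum
# over finite subsets is 1, not the intended value 0.
print(max(finite_subset_sums([Fraction(1), Fraction(-1)])))  # → 1
```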

The result was proved by Kenneth J. Falconer. The reference is MR0629593 (82m:05031). Falconer, K. J. The realization of distances in measurable subsets covering $R^n$. J. Combin. Theory Ser. A 31 (1981), no. 2, 184–189. The argument is relatively simple, you need a decent understanding of the Lebesgue density theorem, and some basic properties of Lebesgue m […]

Given a class $S$, to say that it can be proper means that it is consistent (with the axioms under consideration) that $S$ is a proper class, that is, there is a model $M$ of these axioms such that the interpretation $S^M$ of $S$ in $M$ is a proper class in the sense of $M$. It does not mean that $S$ is always a proper class. In fact, it could also be consis […]

As the other answers point out, the question is imprecise because of its use of the undefined notion of "the standard model" of set theory. Indeed, if I were to encounter this phrase, I would think of two possible interpretations: The author actually meant "the minimal standard model of set theory", that is, $L_\Omega$ where $\Omega$ is e […]