Background

The Continuum Hypothesis (CH), posed by Cantor in 1890, asserts that $ \aleph_1=2^{\aleph_0}$. In other words, it asserts that every infinite set of real numbers has either the cardinality of the natural numbers or the cardinality of the real numbers. It was the first problem on Hilbert's 1900 list of problems. The Generalized Continuum Hypothesis (GCH) asserts that for every infinite set $X$ there are no cardinals strictly between the cardinality of $X$ and the cardinality of its power set.
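
In symbols, the two hypotheses read
$$\mathrm{CH}:\ \ 2^{\aleph_0}=\aleph_1, \qquad \mathrm{GCH}:\ \ 2^{\aleph_\alpha}=\aleph_{\alpha+1}\ \text{ for every ordinal }\alpha;$$
equivalently, GCH says that for every infinite cardinal $\kappa$ there is no cardinal $\lambda$ with $\kappa<\lambda<2^{\kappa}$.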

Cohen proved that CH is independent of the axioms of set theory. (Earlier, Goedel had shown that a positive answer is consistent with the axioms.)

Several mathematicians have proposed definite answers, or approaches towards such answers, to the question of what the answer to CH (and GCH) should be.

The question

My question asks for a description and explanation of the various approaches to the continuum hypothesis, in a language which could be understood by non-professionals.

More background

I am aware of the existence of 2-3 approaches.

One is by Woodin, described in two 2001 Notices of the AMS papers (part 1, part 2).

The proposed answer $ 2^{\aleph_0}=\aleph_2$ goes back, according to François, to Goedel. It is (perhaps) mentioned in Foreman's presentation. (I have also heard from Menachem Magidor that this answer might have some advantages.)

There is also a very rich theory (pcf theory) of cardinal arithmetic which deals with what can be proved in ZFC.
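
A sample theorem of this kind, due to Shelah, is the ZFC bound
$$\aleph_\omega\ \text{a strong limit}\ \Longrightarrow\ 2^{\aleph_\omega}<\aleph_{\omega_4}.$$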

Remark:

I have included some information and links from the comments and answers in the body of the question. What I would hope for most from an answer is a friendly, elementary description of the proposed solutions.

There are by now a couple of long, detailed, excellent answers (that I still have to digest) by Joel David Hamkins and by Andres Caicedo, and several other useful answers. (Unfortunately, I can accept only one answer.)

Update (February 2011): A new detailed answer was contributed by Justin Moore.

Can you please edit your second sentence, starting with "In other words"? "containing of real numbers" does not seem very clear. Also, small typos: "now intermediate" -> "no intermediate", "Eralier" -> "Earlier". Another pedantic remark is that one does not "solve" a hypothesis. One adopts it or one rejects it, eventually replacing it by another. Perhaps you should slightly rephrase your question.
– ogerard, May 8 '10 at 12:40


Gil, you're absolutely right that using "solution" in this and related contexts is common usage, but IMHO it's bad English. One solves a problem, answers a question, and proves or disproves a hypothesis or conjecture. Whenever I hear that a "conjecture" has been "solved" I don't know whether the speaker means the conjecture is now known to be true, false, or undecidable (and I've seen examples in which each of the three was meant). But I'm grumpier about such things than most.
– Mark Meckes, May 10 '10 at 13:36

9 Answers

Since you have already linked to some of the contemporary
primary sources, where of course the full accounts of those
views can be found, let me interpret your question as a
request for summary accounts of the various views on CH.
I'll just describe in a few sentences each of what I find
to be the main issues surrounding CH, beginning with some
historical views. Please forgive the necessary simplifications.

Cantor. Cantor introduced the Continuum Hypothesis
when he discovered the transfinite numbers and proved that
the reals are uncountable. It was quite natural to inquire
whether the continuum was the same as the first uncountable
cardinal. He became obsessed with this question, working on
it from various angles and sometimes switching opinion as
to the likely outcome. Giving birth to the field of
descriptive set theory, he settled the CH question for
closed sets of reals, by proving (the Cantor-Bendixson
theorem) that every closed set is the union of a countable
set and a perfect set. Sets with this perfect set property
cannot be counterexamples to CH, and Cantor hoped to extend
this method to additional larger classes of sets.
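
In modern terms: every closed set $C\subseteq\mathbb{R}$ can be written as
$$C=P\cup S,\qquad P\ \text{perfect},\quad S\ \text{countable},$$
and every nonempty perfect set has cardinality $2^{\aleph_0}$. So every closed set is either countable or of the cardinality of the continuum, and hence no closed set can be a counterexample to CH.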

Hilbert. Hilbert thought the CH question so
important that he listed it as the first on his famous list
of problems at the opening of the 20th century.

Goedel. Goedel proved that CH holds in the
constructible universe $L$, and so is relatively consistent
with ZFC. Goedel viewed $L$ as a device for establishing
consistency, rather than as a description of our (Platonic)
mathematical world, and so he did not take this result to
settle CH. He hoped that the emerging large cardinal
concepts, such as measurable cardinals, would settle the CH
question, and as you mentioned, favored a solution of the form $2^\omega=\aleph_2$.

Cohen. Cohen introduced the method of forcing and
used it to prove that $\neg$CH is relatively consistent
with ZFC. Every model of ZFC has a forcing extension with
$\neg$CH. Thus, the CH question is independent of ZFC,
neither provable nor refutable. Solovay observed that CH
also is forceable over any model of ZFC.

Large cardinals. Goedel's expectation that large
cardinals might settle CH was decisively refuted by the
Levy-Solovay theorem, which showed that one can force
either CH or $\neg$CH while preserving all known large
cardinals. Thus, there can be no direct implication from
large cardinals to either CH or $\neg$CH. At the same
time, Solovay extended Cantor's original strategy by
proving that if there are large cardinals, then increasing
levels of the projective hierarchy have the perfect set
property, and therefore do not admit counterexamples to CH.
All of the strongest large cardinal axioms considered today
imply that there are no projective counterexamples to CH. This can be seen as a complete affirmation of Cantor's original strategy.

Basic Platonic position. This is the realist view
that there is a Platonic universe of sets that our axioms are
attempting to describe, in which every set-theoretic
question such as CH has a truth value. In my experience,
this is the most common or orthodox view in the
set-theoretic community. Several of the later more subtle
views rest solidly upon the idea that there is a fact of
the matter to be determined.

Old-school dream solution of CH. The hope was that
we might settle CH by finding a new set-theoretic principle
that we all agreed was obviously true for the intended
interpretation of sets (in the way that many find AC to be
obviously true, for example) and which also settled the CH
question. Then, we would extend ZFC to include this new
principle and thereby have an answer to CH. Unfortunately,
no such conclusive principles were found, although there
have been some proposals in this vein, such as Freiling's
axiom of symmetry.

Formalist view. Rarely held by mathematicians,
although occasionally held by philosophers, this is the
anti-realist view that there is no truth of the matter of
CH, and that mathematics consists of (perhaps meaningless)
manipulations of strings of symbols in a formal system. The
formalist view can be taken to hold that the independence
result itself settles CH, since CH is neither provable nor
refutable in ZFC. One can have either CH or $\neg$CH as
axioms and form the new formal systems ZFC+CH or
ZFC+$\neg$CH. This view is often mocked in straw-man form,
suggesting that the formalist can have no preference for CH
or $\neg$CH, but philosophers defend more subtle versions,
where there can be reason to prefer one formal system to
another.

Pragmatic view. This is the view one finds in
practice, where mathematicians do not take a position on
CH, but feel free to use CH or $\neg$CH if it helps their
argument, keeping careful track of where it is used.
Usually, when either CH or $\neg$CH is used, then one
naturally inquires about the situation under the
alternative hypothesis, and this leads to numerous consistency or independence results.

Cardinal invariants. Exemplifying the pragmatic view, this is a very rich subject
studying various cardinal characteristics of the continuum,
such as the size of the smallest unbounded family of
functions $f:\omega\to\omega$, the additivity of the ideal
of measure-zero sets, or the size of the smallest family of
functions $f:\omega\to\omega$ that dominate all other such
functions. Since these characteristics are all uncountable
and at most the continuum, the entire theory trivializes
under CH, but under $\neg$CH is a rich, fascinating
subject.
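
To make this concrete, the two characteristics built from functions $f:\omega\to\omega$ mentioned above are, in the usual notation,
$$\mathfrak{b}=\min\{|F| : F\subseteq\omega^\omega\text{ is unbounded in }(\omega^\omega,\le^*)\},\qquad \mathfrak{d}=\min\{|F| : F\subseteq\omega^\omega\text{ is dominating in }(\omega^\omega,\le^*)\},$$
where $f\le^* g$ means $f(n)\le g(n)$ for all but finitely many $n$. One always has $\aleph_1\le\mathfrak{b}\le\mathfrak{d}\le 2^{\aleph_0}$, so under CH everything collapses to $\aleph_1$, while under $\neg$CH the various characteristics can consistently differ.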

Canonical inner models. The paradigmatic canonical
inner model is Goedel's constructible universe $L$, which
satisfies CH and indeed, the Generalized Continuum
Hypothesis, as well as many other regularity properties.
Larger but still canonical inner models have been built by
Silver, Jensen, Mitchell, Steel and others that share the
GCH and these regularity properties, while also satisfying
larger large cardinal axioms than are possible in $L$. Most
set-theorists do not view these inner models as likely to
be the "real" universe, for similar reasons that they
reject $V=L$, but as the models accommodate larger and
larger large cardinals, it becomes increasingly difficult
to make this case. Even $V=L$ is compatible with the
existence of transitive set models of the very largest
large cardinals (since the assertion that such sets exist
is $\Sigma^1_2$ and hence absolute to $L$). In this sense,
the canonical inner models are fundamentally compatible
with whatever kind of set theory we are imagining.

Woodin. In contrast to the Old-School Dream
Solution, Woodin has advanced a more technical argument in
favor of $\neg$CH. The main concepts include $\Omega$-logic
and the $\Omega$-conjecture, concerning the limits of
forcing-invariant assertions, particularly those
expressible in the structure $H_{\omega_2}$, where CH is
expressible. Woodin's is a decidedly Platonist position,
but from what I have seen, he has remained guarded in his
presentations, describing the argument as a proposal or
possible solution, despite the fact that others sometimes
characterize his position as more definitive.

Foreman. Foreman, who also comes from a strong
Platonist position, argues against Woodin's view. He writes
supremely well, and I recommend following the links to his
articles.

Multiverse view. This is the view, offered in
opposition to the Basic Platonist Position above, that we
do not have just one concept of set leading to a unique
set-theoretic universe, but rather a complex variety of set
concepts leading to many different set-theoretic worlds.
Indeed, the view is that much of set-theoretic research in
the past half-century has been about constructing these
various alternative worlds. Many of the alternative set
concepts, such as those arising by forcing or by large
cardinal embeddings, are closely enough related to each
other that they can be compared from the perspective of
each other. The multiverse view of CH is that the CH
question is largely settled by the fact that we know
precisely how to build CH or $\neg$CH worlds close to any
given set-theoretic universe---the CH and $\neg$CH worlds
are in a sense dense among the set-theoretic universes. The
multiverse view is realist as opposed to formalist, since
it affirms the real nature of the set-theoretic worlds to
which the various set concepts give rise. On the Multiverse
view, the Old-School Dream Solution is impossible, since
our experience in the CH and $\neg$CH worlds will prevent
us from accepting any principle $\Phi$ that settles CH as
"obviously true". Rather, on the multiverse view we are to study all the possible set-theoretic worlds and especially how they relate to each other.

It seems that the multiverse view is the beginning of plurality in set theory. This is analogous to how there used to be only one geometry -- Euclidean -- but the investigation of the parallel postulate (PP) and not PP led to a multiverse of geometries.
– Colin Tan, May 19 '10 at 8:32


I agree, Colin; the analogy with geometry is very strong and extends to many facets of how we think about the various geometries.
– Joel David Hamkins, May 19 '10 at 16:18


Is there any name for something like "formalism with a preferred model", where one acknowledges that there is no reason apart from personal preference to study a given system of axioms, but where one also acknowledges that one has a preferred system? It seems somewhere in between the formalism and pragmatism and seems to be a view partially justified by advances in topos theory.
– Harry Gindi, May 20 '10 at 7:21


"Formalist view. Rarely held by mathematicians, although occasionally held by philosophers ..." I recently read the following, written by a distinguished philosopher of mathematics. "The formalist's central thought is that arithmetic is not ultimately concerned with an extralinguistic domain of things. Rather, insofar as arithmetic has a proper subject matter, it is the language of arithmetic itself and certain formal relations among its sentences." This was accompanied by a footnote: "The view has few contemporary adherents among philosophers, though mathematicians often find it congenial."
– gowers, Feb 6 '11 at 17:28


For what it's worth, I find it congenial myself. In particular, I incline to the view that there is no fact of the matter about whether CH is true.
– gowers, Feb 6 '11 at 17:29

(1) Patrick Dehornoy gave a nice talk at the Séminaire Bourbaki explaining Hugh Woodin's approach. It omits many technical details, so you may want to look at it before looking again at the Notices papers. I think looking at those slides and then at the Notices articles gives a reasonable picture of what the approach is and what kind of problems remain there.

You can find the slides here, under "Recent results about the Continuum Hypothesis, after Woodin". (In true Bourbaki fashion, I heard that the talk was not well received.)

Roughly, Woodin's approach shows that, in a sense, the theory of $H(\omega_2)$ as decided by the usual axioms, ZFC and large cardinals, can be "finitely completed" in a way that would make it reasonable to expect to settle all its properties. However, any such completion implies the negation of CH.

It is a conditional result, depending on a highly non-trivial problem, the $\Omega$-conjecture. If true, this conjecture gives us that Cohen's technique of forcing is, in a sense, the only method (in the presence of large cardinals) required to establish consistency. (The precise statement is more technical.)

$H(\omega_2)$, that Dehornoy calls $H_2$, is the structure obtained by considering only those sets $X$ such that $X\cup\bigcup X\cup\bigcup\bigcup X\cup\dots$ has size strictly less than $\aleph_2$, the second uncountable cardinal.
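
In general, $H(\kappa)$ denotes the collection of all sets of hereditary size less than $\kappa$,
$$H(\kappa)=\{X : |X\cup{\textstyle\bigcup}X\cup{\textstyle\bigcup}{\textstyle\bigcup}X\cup\dots|<\kappa\},$$
and CH is a statement about $H(\omega_2)$: it holds if and only if $H(\omega_2)$ satisfies "there is a function with domain $\omega_1$ whose range includes every real".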

Replacing $\aleph_2$ with $\aleph_1$, we have $H(\omega_1)$, whose theory is completely settled in a sense, in the presence of large cardinals. If nothing else, one can think of Woodin's approach as trying to build an analogy with this situation, but "one level up."

Whether or not one considers that settling the $\Omega$-conjecture in a positive fashion actually refutes CH in some sense is a delicate matter. In any case (and I was happy to see that Dehornoy emphasizes this), Woodin's approach gives strength to the position that the question of CH is meaningful (as opposed to simply saying that, since it is independent, there is nothing to decide).

(2) There is another approach to the problem, also pioneered by Hugh Woodin. It is the matter of "conditional absoluteness." CH is a $\Sigma^2_1$ statement. Roughly, this means that it has the form: "There is a set of reals such that $\phi$", where $\phi$ can be described by quantifying only over reals and natural numbers. In the presence of large cardinals, Woodin proved the following remarkable property: If $A$ is a $\Sigma^2_1$ statement, and we can force $A$, then $A$ holds in any model of CH obtained by forcing.
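
Schematically, and glossing over the coding, the $\Sigma^2_1$ form of CH is
$$\exists A\subseteq\mathbb{R}\ \bigl[\,A\text{ codes a well-ordering of }\mathbb{R}\text{ all of whose proper initial segments are countable}\,\bigr],$$
where the outer quantifier ranges over sets of reals and the clause in brackets can be expressed using only quantifiers over reals and natural numbers.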

Recall that forcing is essentially the only tool we have to establish consistency of statements. Also, there is a "trivial" forcing that does not do anything, so the result essentially says that if a statement of the same complexity as CH is consistent (with large cardinals), then it is actually a consequence of CH.

This would seem a highly desirable ''maximality'' property that would make CH a good candidate to be adopted.

However, recent results (by Aspero, Larson, and Moore) suggest that $\Sigma^2_1$ is close to being the highest complexity for which a result of this kind holds, which perhaps weakens the argument for CH that one could make based on Hugh's result.

A good presentation of this theorem is available in Larson's book "The stationary tower. Notes on a Course by W. Hugh Woodin." Unfortunately, the book is technical.

(3) Foreman's approach is perhaps the strongest rival to the approach suggested by Woodin in (1). Again, it is based on the technique of forcing, now looking at small-cardinal analogues of large cardinal properties.

Many large cardinal properties are expressed in terms of the existence of elementary embeddings of the universe of sets. These embeddings tend to be "based" at cardinals much, much larger than the size of the reals. With forcing, one can produce such embeddings "based" at the size of the reals, or nearby. Analyzing a large class of such forcing notions, Foreman shows that they must imply CH. If one were to adopt the consequences of performing these forcing constructions as additional axioms, one would then be required to also adopt CH.

I had to cut my answer short last time. I would like now to say a few details about a particular approach.

(4) Forcing axioms imply that $2^{\aleph_0}=\aleph_2$, and (it may be argued) strongly suggest that this should be the right answer.

Now, before I add anything, note that Woodin's approach (1) uses forcing axioms to prove that there are "finite completions" of the theory of $H(\omega_2)$ (in which the reals have size $\aleph_2$). However, this does not mean that all such completions would be compatible in any particular sense, or that all would decide the size of the reals. What Woodin proves is that all completions negate CH, and forcing axioms show that there is at least one such completion.

I believe there has been some explanation of forcing axioms in the answers to the related question on GCH. Briefly, the intuition is this: ZFC seems to capture the basic properties of the universe of sets, but fails to account for its width and its height. (What one means by this is: how big power sets should be, and how many ordinals there are.)

Our current understanding suggests that the universe should indeed be very tall, meaning there should be many many large cardinals. As Joel indicated, there was originally some hope that large cardinals would determine the size of the reals, but just about immediately after forcing was introduced, it was proved that this was not the case. (Technically, small forcing preserves large cardinals.)

However, large cardinals settle many questions about the structure of the reals (all first order, or projective, statements, in fact). CH, however, is "just" beyond what large cardinals can settle. One could say that, as far as large cardinals are concerned, CH is true. What I mean is that, in the presence of large cardinals, any definable set of reals (for any reasonable notion of definability) is either countable or contains a perfect subset. But this may simply mean that there is a certain intrinsic non-canonicity in the sets of reals that would disprove CH, if this is the case.

(A word of caution is in order here: there are candidates for large cardinal axioms [presented by Hugh Woodin in his work on suitable extender sequences] for which preservation under small forcing is not clear. Perhaps the solution to CH will actually come, unexpectedly, from studying these cardinals. But this is too speculative at the moment.)

I have avoided above saying much about forcing. It is a massive machinery, and any short description is bound to be very inaccurate, so I'll be more than brief.

An ordinary algebraic structure (a group, the universe of sets) can be seen as a two-valued model. Just as well, one can define, for any complete Boolean algebra ${\mathbb B}$, the notion of a structure being ${\mathbb B}$-valued. If you wish, "fuzzy set theory" is an approximation to this, as are many of the ways we model the world by using a probability density to decide the likelihood of events. For any complete Boolean algebra ${\mathbb B}$, we can define a ${\mathbb B}$-valued model $V^{\mathbb B}$ of set theory. In it, rather than having for sets $x$ and $y$ that either $x\in y$ holds or it doesn't, we assign to the statement $x\in y$ a value $[x\in y]\in{\mathbb B}$. The way the construction is performed, $[\phi]=1$ for each axiom $\phi$ of ZFC. Also, for each element $x$ of the actual universe of sets, there is a copy $\check x$ in the ${\mathbb B}$-valued model, so that the universe $V$ is a submodel of $V^{\mathbb B}$. If it happens that for some statement $\psi$ we have $[\psi]>0$, we have established that $\psi$ is consistent with ZFC. By carefully choosing ${\mathbb B}$, we can do this for many $\psi$. This is the technique of forcing, and one can add many wrinkles to the approach just outlined. One refers to ${\mathbb B}$ as a forcing notion.
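
For the record, the skeleton of the construction is the following: one defines, by recursion on ordinals,
$$V^{\mathbb B}_\alpha=\{\tau : \tau\text{ is a function},\ \mathrm{ran}(\tau)\subseteq{\mathbb B},\ \mathrm{dom}(\tau)\subseteq V^{\mathbb B}_\beta\text{ for some }\beta<\alpha\},\qquad V^{\mathbb B}=\bigcup_\alpha V^{\mathbb B}_\alpha,$$
and the Boolean values are then computed recursively; for instance, $[\tau\in\sigma]=\bigvee_{\pi\in\mathrm{dom}(\sigma)}\bigl(\sigma(\pi)\wedge[\tau=\pi]\bigr)$, and the copy $\check x$ of an actual set $x$ is the name $\{(\check y,1):y\in x\}$.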

Now, the intuition that the universe should be very fat is harder to capture than the idea of the largeness of the ordinals. One way of expressing it is that the universe is somehow "saturated": if the existence of some object is consistent in some sense, then in fact such an object should exist. Formalizing this, one is led to forcing axioms. A typical forcing axiom says that relatively simple statements (under some measure of complexity) that can be shown consistent using the technique of forcing via a Boolean algebra ${\mathbb B}$ that is not too pathological should in fact hold.

The seminal Martin's Maximum paper of Foreman-Magidor-Shelah identified the most generous notion of "not too pathological": it corresponds to the class of "stationary set preserving" forcing notions. The corresponding forcing axiom is Martin's Maximum, MM. In that paper, it was shown that MM implies that the size of the reals is $\aleph_2$.
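
In its usual formulation, MM asserts: whenever ${\mathbb P}$ is a stationary set preserving forcing notion and ${\mathcal D}$ is a family of at most $\aleph_1$ many dense subsets of ${\mathbb P}$, there is a filter $G\subseteq{\mathbb P}$ meeting every member of ${\mathcal D}$. The Foreman-Magidor-Shelah result just mentioned is then the implication
$$\mathrm{MM}\ \Longrightarrow\ 2^{\aleph_0}=\aleph_2.$$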

The hypothesis of MM has been significantly weakened, through a series of results by different people, culminating in the Mapping Reflection Principle paper of Justin Moore. Besides this line of work, many natural consequences of forcing axioms (commonly termed reflection principles) have been identified, and shown to be independent of one another. Remarkably, just about all these principles either imply that the size of the reals is $\aleph_2$, or give $\aleph_2$ as an upper bound.

Even if one finds that forcing axioms are too blunt a way of capturing the intuition that "the universe is wide", many of their consequences are considered very natural. (For example, the singular cardinal hypothesis, but this is another story.) Just as most of the set-theoretic community now understands that large cardinals are part of what we accept about the universe of sets (and therefore, so is determinacy of reasonably definable sets of reals, and its consequences such as the perfect set property), it is perhaps not completely off the mark to expect that as our understanding of reflection principles grows, we will adopt them (or a reasonable variant) as the right way of formulating "wideness". Once/if that happens, the size of the reals will be taken to be $\aleph_2$ and therefore CH will be settled as false.

The point here is that this would be a solution to the problem of CH that does not attack CH directly. Rather, it turns out that the negation of CH is a common consequence of many principles that it may be reasonable to adopt, in light of the naturalness of some of their best known consequences and of their intrinsic motivation coming from the "wide universe" picture.

(Apologies for the long post.)

Edit, Nov. 22/10: I have recently learned about Woodin's "Ultimate L" which, essentially, advances a view that theories are "equally good" if they are bi-interpretable, and identifies a theory ("Ultimate L") that, modulo large cardinals, would work as a universal theory from which to interpret all extensions. This theory postulates an $L$-like structure for the universe and in particular implies CH; see this answer. But, again, the theory is not advocated on the grounds that it ought to be true, whatever this means, but rather that it is the "richest" possible, in that it allows us to interpret all possible "natural" extensions of ZFC. In particular, under this approach, only large cardinals are relevant if we want to strengthen the theory, while "width" considerations, such as those supporting forcing axioms, are no longer relevant.

Since the approach I numbered (1) above implies the negation of CH, I feel I should add that one of the main reasons it was originally advanced was the fact that the set of $\Omega$-validities can be defined "locally", at the level of $H({\mathfrak c}^+)$, at least if the $\Omega$-conjecture holds.

However, recent results of Sargsyan uncovered a mistake in the argument giving this local definability. From what I understand, Woodin feels that this weakens the case he was making for not-CH significantly.

Thanks very much for this answer, Andres. I had heard that Woodin proved an extremal property of CH, but I didn't know what it was. It is presumably your item (2).
– John Stillwell, May 18 '10 at 23:47

Could you give some more details about the connection with the recent work of Sargsyan?
– Simon Thomas, Nov 23 '10 at 13:08

Hi Simon. I am not too certain of all the details; I believe that the issue is this: In the context of $AD^+$, say that $\alpha$ is a "local $\Theta$" if it is the $\Theta$ of a hod model. Woodin had an argument showing that no such $\alpha$ could be overlapped by a strong cardinal. This put serious limitations on the strength of the large cardinals that hod models could contain. In particular, this was the reason why "CH + there is an $\omega_1$-dense ideal on $\omega_1$" and "$AD_{\mathbb R}+\Theta$ regular" were expected to have really high consistency strength (continued)
– Andres Caicedo, Nov 23 '10 at 15:33

(2) Woodin's local definability argument depended on this limitation of hod models. (I am not sure of the details here.) Grigor's analysis in the context of the core model induction (there are slides of a talk at Boise on his website, and I can email you his thesis, let me know) showed that these "local overlaps" are possible, and deduced as a corollary that "$AD_{\mathbb R}+\Theta$ regular" has much lower consistency strength than expected. Grigor in fact has a very detailed analysis of hod models. Without the overlap limitation, the set of $\Omega$-validities ends up being harder to define.
– Andres Caicedo, Nov 23 '10 at 15:42

(3) It can be shown that it is $H(\delta_0^+)$-definable, where $\delta_0$ is the smallest Woodin of $V$. But the argument pro-not-CH went by a level by level analysis of the $H(\kappa)$-levels, and this jump (from ${\mathfrak c}$ to a Woodin) is too high to overlook. As far as I understand, this is the nature of the issue.
– Andres Caicedo, Nov 23 '10 at 15:45

First I'll say a few words about forcing axioms and then I'll answer your question. Forcing axioms were developed to provide a unified framework to establish the consistency of a number of combinatorial statements, particularly about the first uncountable cardinal. They began with Solovay and Tennenbaum's proof of the consistency of Souslin's Hypothesis (that every linear order in which there are no uncountable families of pairwise disjoint intervals is necessarily separable). Many of the consequences of forcing axioms, particularly the stronger Proper Forcing Axiom and Martin's Maximum, had the form of classification results: Baumgartner's classification of the isomorphism types of $\aleph_1$-dense sets of reals, Abraham and Shelah's classification of the Aronszajn trees up to club isomorphism, Todorcevic's classification of linear gaps in $\mathcal{P}(\mathbb{N})/\mathrm{fin}$, and Todorcevic's classification of transitive relations on $\omega_1$. A survey of these results (plus many references) can be found in both Stevo Todorcevic's ICM article and my own (the latter can be found here). These are accessible to a general audience.

What does all this have to do with the Continuum Problem? It was noticed early on that forcing axioms imply that the continuum is $\aleph_2$. The first proof of this is, I believe, in Foreman, Magidor, and Shelah's seminal paper on Martin's Maximum. Other quite different proofs were given by Caicedo, Todorcevic, Velickovic, and myself. An elementary proof which is purely Ramsey-theoretic in nature is in my article "Open colorings, the continuum, and the second uncountable cardinal" (PAMS, 2002).

Since the combinatorial essence of the proofs of these classification results is often similar to that of the proofs that the continuum is $\aleph_2$, one is left to speculate that perhaps there may some day be a classification result concerning structures of cardinality $\aleph_1$ which already implies that the continuum is $\aleph_2$. There is a candidate for such a classification: the assertion that there are five uncountable linear orders such that every uncountable linear order contains an isomorphic copy of one of these five. Another related candidate for such a classification is the assertion that the Aronszajn lines are well quasi-ordered by embeddability (if $L_i$ $(i < \infty)$ is a sequence of Aronszajn lines, then there are $i < j$ such that $L_i$ embeds into $L_j$). These are due to myself and Carlos Martinez, respectively. See a discussion of this (with references) in my ICM paper.

The question of whether or not $2^{\aleph_{0}} = \aleph_{1}$ is not even considered in Shelah's approach. In fact, this question is regarded as part of the "white noise" which has distracted the attention of set theorists from some striking ZFC results about cardinal exponentiation $\kappa^{\lambda}$ when one considers relatively small exponents $\lambda$ and relatively large bases $\kappa$.

Woodin's answer: Instead of looking at the statements of new axioms, look at the metamathematical properties of axiom candidates. There is an asymmetry between axioms that imply CH and those that imply $\neg$CH; this asymmetry is articulated through $\Omega$-logic and Woodin's $\Omega$-conjecture.

Shelah's approach, from his paper "The Generalized Continuum Hypothesis revisited", concerns mainly the generalized continuum hypothesis. His main theorem addresses an appealing variation of GCH based on a revised notion of "power." Let me explain this notion of $\lambda^{[\kappa]}$ (in words: $\lambda$ to the revised power of $\kappa$), which is central to his approach and is also of independent interest. $\lambda^{[\kappa]}$ is the minimum size of a family of subsets of size $\kappa$ of a set $X$ of cardinality $\lambda$ such that every subset of $X$ of cardinality $\kappa$ is covered by fewer than $\kappa$ members of the family. (Of course we need $\kappa < \lambda$, and also that $\kappa$ is a regular cardinal.)
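
Shelah's main theorem, the "revised GCH", can then be stated roughly as follows: for every cardinal $\lambda\ge\beth_\omega$ and every sufficiently large regular $\kappa<\beth_\omega$,
$$\lambda^{[\kappa]}=\lambda.$$
In other words, once ordinary powers are replaced by revised powers, a GCH-like statement becomes an outright theorem of ZFC above $\beth_\omega$.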