NZMRI 2017 Abstracts

Denis Hirschfeldt - Reverse Mathematics

Every mathematician knows that if 2+2=5 then Bertrand Russell is the pope. Russell is credited with having given a proof of that fact in a lecture, though from the point of view of classical logic, no such proof is needed, since a false statement implies every statement. Contrapositively, every statement implies a given true statement. But we are often interested in questions of implication and nonimplication between true statements. We have all heard and said things like "Theorems A and B are equivalent" or "Theorem C does not just follow from Theorem D." There is also a well-established practice of showing that a given theorem can be proved without using certain methods. These are often crucial things to understand about an area of mathematics, and they can also help us make connections between different areas.
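The two logical facts above can be written as propositional tautologies (a minimal sketch in standard notation):

```latex
% Ex falso quodlibet: a refutable statement \varphi implies any \psi;
% contrapositively, any \psi implies a provable statement \theta.
\[
\lnot\varphi \;\vdash\; \varphi \rightarrow \psi,
\qquad
\theta \;\vdash\; \psi \rightarrow \theta.
\]
```

Both implications are vacuously valid in classical logic, which is why implications between *true* statements are only interesting once one restricts the means of proof, as reverse mathematics does.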

Reverse mathematics is one way to approach such questions formally. It is an attempt to rescue some of the content of Hilbert's Program in the wake of Gödel's Second Incompleteness Theorem, by calibrating the strength of theorems of ordinary mathematics, often but not always in terms of a small number of natural axiomatic systems. Although its approach is proof-theoretic, it has deep connections with computability theory. I will give an introduction to reverse mathematics, drawing examples primarily from combinatorics. If there is enough time, I will try to explain some exciting recent results on the reverse-mathematical strength of Ramsey-theoretic principles.
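The "small number of natural axiomatic systems" usually refers to the Big Five subsystems of second-order arithmetic, which in the standard presentation form a strictly increasing chain:

```latex
\[
\mathsf{RCA}_0 \;\subsetneq\; \mathsf{WKL}_0 \;\subsetneq\; \mathsf{ACA}_0
\;\subsetneq\; \mathsf{ATR}_0 \;\subsetneq\; \Pi^1_1\text{-}\mathsf{CA}_0.
\]
```

Reverse mathematics typically works over the base theory $\mathsf{RCA}_0$ and shows a given theorem equivalent to one of the stronger systems; Ramsey-theoretic principles such as Ramsey's Theorem for pairs are notable for escaping this classification.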

Jack Lutz - Algorithmic Fractal Dimensions

Algorithmic versions of Hausdorff dimension and other fractal dimensions were developed at the beginning of this century. These algorithmic fractal dimensions, which are quantitative refinements of the theory of algorithmic randomness developed since the 1960s, combine computability theory with analytic methods to give measures of the density of information in infinite binary sequences, individual points in Euclidean space, and other data objects. Research by many investigators has shown that algorithmic fractal dimensions shed new light, not only on algorithmic information theory, computability theory, and computational complexity, but also on classical mathematical questions that make no mention of the theory of computing. This tutorial series will give a self-contained survey of these developments.
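One way to make "density of information in infinite binary sequences" precise is Mayordomo's characterization of effective Hausdorff dimension, sketched here in standard notation:

```latex
% Effective Hausdorff dimension of an infinite binary sequence S,
% where K is prefix-free Kolmogorov complexity and S \restriction n
% denotes the first n bits of S:
\[
\dim(S) \;=\; \liminf_{n \to \infty} \frac{K(S \restriction n)}{n}.
\]
```

A Martin-Löf random sequence has dimension 1, so dimension refines algorithmic randomness by measuring, in the limit, how compressible a sequence's prefixes are.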

Maryanthe Malliaris - Ultraproducts and complexity

The ultraproduct construction, developed in the 1960s, is a fundamental
mathematical tool that allows for averaging and amplification
across a sequence of mathematical objects (graphs, groups, fields...).
In the case where the sequence is constant, this construction
is called an ultrapower, and may help shed light on the complexity
of a given object: what in the nature of its basic combinatorial
structure is amenable to amplification and how? The problems,
questions, and theorems which arise from these ideas involve model
theory, set theory, and combinatorics. The tutorial will present
some landmark results and longstanding open questions in this area,
from the sixties to the present, with a focus on recent work.
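In symbols, the construction can be sketched as follows: given structures $M_i$ for $i \in I$ and an ultrafilter $\mathcal{U}$ on $I$, the ultraproduct is the quotient of the direct product by $\mathcal{U}$-almost-everywhere equality:

```latex
\[
\prod_{i \in I} M_i \Big/ \mathcal{U},
\qquad
(a_i)_{i \in I} \sim_{\mathcal{U}} (b_i)_{i \in I}
\iff \{\, i \in I : a_i = b_i \,\} \in \mathcal{U}.
\]
% Los's theorem makes the "averaging" precise: a first-order sentence
% holds in the ultraproduct iff it holds in U-almost every factor.
\[
\prod_{i \in I} M_i \Big/ \mathcal{U} \models \varphi
\;\iff\;
\{\, i \in I : M_i \models \varphi \,\} \in \mathcal{U}.
\]
```

When every $M_i$ is the same structure $M$, the diagonal map embeds $M$ elementarily into its ultrapower, which is what allows the ultrapower to shed light on the complexity of $M$ itself.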

Antonio Montalbán - Vaught's Conjecture

Vaught's Conjecture, a special case of the continuum hypothesis, was posed in 1961 and stands as one of the longest-standing basic open questions in logic. It is a model-theoretic statement, but it is connected to other areas of logic too. We will introduce Vaught's Conjecture, describe what we know about it, and explore its connections with other areas of logic.

Hugh Woodin - Ultimate L

The problem of the Continuum Hypothesis is arguably the best-known example of a (formally) unsolvable problem without an obvious answer. But does that mean there is no answer?

The projective sets are those sets of real numbers which can be generated from an open set in finitely many steps by taking images under continuous total functions and taking complements. The same definition, but using continuous functions on the plane, defines the projective subsets of the plane, and so on.
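This generation is usually organized into the projective hierarchy (a standard presentation, equivalent to the definition above):

```latex
% Boldface projective hierarchy: start from the analytic sets
% (continuous images of closed sets), then alternate complements
% and continuous images.
\[
\mathbf{\Sigma}^1_1 = \{\, f[C] : C \subseteq \mathbb{R}^m \text{ closed},\ f \text{ continuous} \,\},
\]
\[
\mathbf{\Pi}^1_n = \{\, \mathbb{R}^k \setminus A : A \in \mathbf{\Sigma}^1_n \,\},
\qquad
\mathbf{\Sigma}^1_{n+1} = \{\, f[A] : A \in \mathbf{\Pi}^1_n,\ f \text{ continuous} \,\}.
\]
```

The projective sets are then exactly those appearing at some finite level $\mathbf{\Sigma}^1_n$ of this hierarchy.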

The classical questions studied concerning the projective sets include both the Continuum Hypothesis and the Axiom of Choice, but localized to the projective sets. More precisely, suppose A is an uncountable projective set. Must A have the same cardinality as the set of all real numbers? This is the Continuum Hypothesis localized to the projective sets.
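Symbolically (a sketch):

```latex
% CH localized to the projective sets: no projective set of reals has
% cardinality strictly between countable and the continuum.
\[
A \subseteq \mathbb{R} \text{ projective and uncountable}
\;\Longrightarrow\;
|A| = 2^{\aleph_0}.
\]
```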

Suppose A is a projective subset of the plane. Is there a function with projective graph such that (x,f(x)) is in A whenever the vertical section of A given by x is nonempty? This is the Axiom of Choice localized to the projective sets.
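This is the classical uniformization problem; in symbols (a sketch, writing $A_x$ for the vertical section of $A$ given by $x$):

```latex
% AC localized to the projective sets: every projective A in the plane
% admits a choice function f whose graph is itself projective.
\[
A_x = \{\, y \in \mathbb{R} : (x, y) \in A \,\},
\qquad
\forall x \,\bigl( A_x \neq \emptyset \;\rightarrow\; (x, f(x)) \in A \bigr).
\]
```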

Both of these questions are formally unsolvable on the basis of the ZFC axioms for Set Theory. In fact, this is demonstrated by the same constructions of Gödel and Cohen which showed that the Continuum Hypothesis is unsolvable; no refinement of their constructions is required. However, unlike the problem of the Continuum Hypothesis, these questions about the projective sets have been answered, and the answer to both questions is yes.

Given this success (which spanned 70 years), one might hope that similar methods, which involve large cardinals, could lead to a resolution of the problem of the Continuum Hypothesis itself. Unfortunately, the Continuum Hypothesis is a much harder problem, and for a variety of reasons resolving it seemed completely hopeless using anything remotely related to what was used to resolve the indicated questions about the projective sets.

But that conviction, while “obviously correct”, is completely wrong. Now it seems extremely likely that there is a solution to the Continuum Hypothesis along similar lines (though the similarity is of a different kind). This solution involves exploiting a transfinite generation of the projective sets, and it resolves not only the Continuum Hypothesis but all the questions which have been shown to be unsolvable by Cohen’s method. It identifies a single additional axiom which, if added to the standard ZFC axioms, achieves that goal.