Friday, 28 March 2014

In his Two New
Sciences (1638), Galileo presents a puzzle about infinite collections of
numbers that became known as ‘Galileo’s paradox’. Written in the form of a
dialogue, the text has its interlocutors observe that there are many more positive
integers than there are perfect squares, and yet every positive integer is the
square root of exactly one perfect square. And so, there is a one-to-one correspondence between the
positive integers and the perfect squares, and thus we may conclude that there
are as many positive integers as there are perfect squares. And yet, the
initial assumption was that there are more positive integers than perfect
squares, as every perfect square is a positive integer but not vice-versa; in
other words, the collection of the perfect squares is strictly contained in the
collection of the positive integers. How can they be of the same size then?
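The one-to-one correspondence is easy to exhibit concretely. Here is a minimal sketch in Python (my illustration, of course, not Galileo’s) pairing each positive integer with its square, alongside the count that fuels the ‘many more integers than squares’ intuition:

```python
import math

# Pair each positive integer n with the perfect square n * n: a one-to-one
# correspondence between the positive integers and the perfect squares.
pairs = [(n, n * n) for n in range(1, 11)]
print(pairs)  # [(1, 1), (2, 4), (3, 9), ..., (10, 100)]

# Yet among the first N positive integers only about sqrt(N) are perfect
# squares -- the sense in which there are 'many more' integers than squares.
N = 1_000_000
squares = sum(1 for n in range(1, N + 1) if math.isqrt(n) ** 2 == n)
print(squares, "perfect squares among the first", N, "positive integers")  # 1000
```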

Galileo’s conclusion is that principles and concepts
pertaining to the size of finite
collections cannot be simply transposed, mutatis mutandis, to cases of
infinity: “the attributes ‘equal,’ ‘greater,’ and
‘less,’ are not applicable to infinite, but only to finite,
quantities.” With respect to finite collections, two uncontroversial principles
hold:

Part-whole: a
collection A that is strictly contained in a collection B has a strictly
smaller size than B.

One-to-one: two
collections for which there exists a one-to-one correspondence between their
elements are of the same size.
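Stated slightly more formally (my notation, not the post’s; size(·) stands for whatever measure of size is at issue):

```latex
% Part-whole: a proper subcollection is strictly smaller.
\[ A \subsetneq B \;\Longrightarrow\; \mathrm{size}(A) < \mathrm{size}(B) \]

% One-to-one: a bijection witnesses sameness of size.
\[ \exists\, f\colon A \to B \ \text{bijective} \;\Longrightarrow\; \mathrm{size}(A) = \mathrm{size}(B) \]
```

With B the positive integers and A the perfect squares, A is strictly contained in B while n ↦ n² is a bijection from B onto A, so the two principles deliver size(A) < size(B) and size(A) = size(B) at once.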

What Galileo’s paradox shows is that, when moving to
infinite cases, these two principles clash with each other, and thus that at least one
of them has to go. In other words, we simply cannot transpose these two basic
intuitions pertaining to counting finite collections to the case of infinite
collections. As is well known, Cantor chose to keep One-to-one at the expense of Part-whole,
famously concluding that all countably infinite collections are of the same
size (in his terms, have the same cardinality); this is still the reigning
orthodoxy.

In recent years, an alternative approach to measuring infinite sets has been developed by the mathematicians Vieri Benci (who initiated the project), Mauro Di Nasso, and Marco Forti. It is also being further explored by a number of people – including
logicians/philosophers such as Paolo Mancosu, Leon Horsten and my colleague
Sylvia Wenmackers. This framework is known as the theory of numerosities, and
it has a number of interesting theoretical as well as more practical features. The
basic idea is to prioritize Part-whole
over One-to-one; this is accomplished
in the following way (Mancosu 2009, p. 631):

Informally the
approach consists in finding a measure of size for countable sets (including
thus all subsets of the natural numbers) that satisfies [Part-whole]. The new ‘numbers’ will be called ‘numerosities’ and
will satisfy some intuitive principles such as the following: the numerosity of
the union of two disjoint sets is equal to the sum of the numerosities.

Basically, what the theory of numerosities does is to
introduce different units, so that, measured in these new units, infinite sets come out
as finite. (In other words, it is a clever way to turn infinite sets into
finite sets. Sounds suspicious? Hum…) In practice, the result is a very robust,
sophisticated mathematical theory, which turns the idea of measuring infinite
sets upside down.
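To give the flavor of how the new units work (a simplified sketch along the lines of the Benci–Di Nasso axioms, not a full statement of the theory), write num(A) for the numerosity of a set A; then, for instance:

```latex
% Numerosities of disjoint sets add, as in the quoted passage:
\[ A \cap B = \emptyset \;\Longrightarrow\; \mathrm{num}(A \cup B) = \mathrm{num}(A) + \mathrm{num}(B) \]

% And Part-whole is retained:
\[ A \subsetneq B \;\Longrightarrow\; \mathrm{num}(A) < \mathrm{num}(B) \]
```

So if α is the numerosity of the natural numbers, the naturals minus {0} get numerosity α − 1 (singletons have numerosity 1, and numerosities of disjoint sets add), whereas Cantorian cardinality assigns both sets the same size ℵ₀: one does arithmetic with infinite sizes much as with finite ones.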

The philosophical implications of the theory of numerosities
for the philosophy of mathematics are far-reaching, and some of them have been
discussed in detail in Mancosu (2009). Philosophically, the mere fact that there
is a coherent, theoretically robust alternative to Cantorian orthodoxy raises
all kinds of questions pertaining to our ability to ascertain what numbers
‘really’ are (that is, if there are such things indeed). It is not surprising
that Gödel, an avowed Platonist, considered the Cantorian notion of infinite
number to be inevitable: for the Platonist, there can be only one correct account of what infinite
numbers really are. As Mancosu points
out, now that there is a rigorously formulated mathematical theory that
forsakes One-to-one in favor of Part-whole, it is far from obvious that
the Cantorian road is the inevitable one.

As mathematical theories, Cantor’s theory of infinite
numbers and the theory of numerosities may co-exist in peace, just as Euclidean
and non-Euclidean geometries live peacefully together (admittedly, after a
rough start in the 19th century). But philosophically, we may well
see them as competitors, only one of which can be the ‘right’ theory about
infinite numbers. But what could possibly count as evidence to adjudicate the
dispute?

One motivation to abandon Cantorian orthodoxy might be that
it fails to provide a satisfactory framework to discuss certain issues. For
example, Wenmackers and Horsten (2013) adopt the alternative approach to treat
foundational issues that arise for probability distributions over
infinite domains, such as the problem of a fair lottery on the natural numbers. It is quite possible that other questions and areas where the
concept of infinity figures prominently can receive a more suitable treatment
within the theory of numerosities, in the sense that oddities that arise
under Cantorian orthodoxy can be dissolved.
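A classic example of such an oddity, spelled out for concreteness (a standard observation, not a claim specific to the papers cited here): there is no countably additive uniform probability distribution over the natural numbers. If every ticket n in a fair lottery on ℕ had the same probability c, countable additivity would give

```latex
\[ 1 = P(\mathbb{N}) = \sum_{n \in \mathbb{N}} P(\{n\}) = \sum_{n \in \mathbb{N}} c =
   \begin{cases} 0 & \text{if } c = 0, \\ \infty & \text{if } c > 0, \end{cases} \]
```

a contradiction either way. Assigning each ticket an infinitesimal probability, as Wenmackers and Horsten do, is one way out, and it meshes naturally with the numerosity framework.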

On a purely conceptual, foundational level, the dispute
might be viewed as one between Part-whole
and One-to-one, as to which of the
two is the more fundamental principle when it comes to counting finite collections – which would then be
generalized to the infinite cases. They are both eminently plausible, and this
is why Cantor’s solution, while now widely accepted, remains somewhat
counterintuitive (as anyone having taught this material to students surely
knows). Thus, it is hard to see what could possibly count as evidence against
one or the other.

Now, after having thought a bit about this material
(prompted by two wonderful talks by Wenmackers and Mancosu in Groningen
yesterday), and somewhat to my surprise, I find myself having a lot of sympathy for
Galileo’s original response. Maybe what holds for counting finite collections
simply does not hold for measuring infinite collections. And if this is the
case, our intuitions concerning the finite cases, and in particular the
plausibility of both Part-whole and One-to-one, simply have no bearing on
what a theory of counting infinite collections should be like. There may well
be other reasons to prefer the numerosities approach over Cantor’s approach (or vice-versa),
but I submit that turning to the idea of counting finite collections is not
going to provide relevant material for the dispute in the infinite cases. In fact, from this point
of view, an entirely different way of measuring infinite collections, where
neither Part-whole nor One-to-one holds, is at least in
principle conceivable. In what way the term ‘counting’ would then still apply
might be a matter of contention, but perhaps counting infinities is a totally
different ball game after all.

Call for Papers: We welcome submissions from scholars (in
particular, young scholars, i.e. early career researchers or
post-graduate students) on any area of the foundations of mathematics
(broadly construed). Particularly desired are submissions that address
the role of set theory in the foundations of mathematics, or the
foundations of set theory (universe/multiverse dichotomy, new axioms,
etc.) and related ontological and epistemological issues. Applicants
should prepare an extended abstract (maximum 1,500 words) for blind
review, and send it to sotfom [at] gmail [dot] com. The successful
applicants will be invited to give a talk at the conference and will be
refunded the cost of accommodation in Vienna for two days (7-8 July).

Set theory is taken to serve as a foundation for mathematics. But it is well known that there are set-theoretic statements that cannot be settled by the standard axioms of set theory: the Zermelo-Fraenkel axioms with the Axiom of Choice (ZFC) are incomplete. The primary goal of this symposium is to explore the different approaches that one can take to the phenomenon of incompleteness.

One option is to maintain the traditional “universe” view and hold that there is a single, objective, determinate domain of sets. Accordingly, there is a single correct conception of set, and mathematical statements have a determinate meaning and truth-value according to this conception. We should therefore seek new axioms of set theory to extend the ZFC axioms and minimize incompleteness. It is then crucial to determine what justifies some new axioms over others.

Alternatively, one can argue that there are multiple conceptions of set, depending on how one settles particular undecided statements. These different conceptions give rise to parallel set-theoretic universes, collectively known as the “multiverse”. Which mathematical statements are true can then shift from one universe to the next. From within the multiverse view, however, one could argue that some universes are preferable to others.

These different approaches to incompleteness have wider consequences for the concepts of meaning and truth in mathematics and beyond. The conference will address these foundational issues at the intersection of philosophy and mathematics. Its primary goal is to showcase contemporary philosophical research on the different approaches to the incompleteness phenomenon. To accomplish this, the conference has the following general aims and objectives:

(1) To bring to a wider philosophical audience the different approaches that one can take to the set-theoretic foundations of mathematics.

(2) To elucidate the pressing issues of meaning and truth that turn on these different approaches.

(3) To address philosophical questions concerning the need for a foundation of mathematics, and whether or not set theory can provide the necessary foundation.

Tuesday, 25 March 2014

In a previous post, I gave an overview of the alternative to expected utility theory that Lara Buchak formulates and defends in her excellent new book, Risk and Rationality (Buchak 2013). Buchak dubs the alternative risk-weighted expected utility theory. It permits agents to have risk-sensitive attitudes. In this post and the next one, I wish to argue that risk-weighted expected utility theory is right about the constraints that rationality places on our external attitudes, but wrong about the way our internal attitudes ought to combine to determine those external attitudes (for the internal/external attitude terminology, as well as other terminology in this post, please see the previous post): that is, I agree with the axioms Buchak demands our preferences must satisfy, but I disagree with the way she combines probabilities, utilities, and risk attitudes to determine those preferences. I wish to argue that, in fact, we ought to combine our internal attitudes in exactly the way that expected utility theory suggests. In order to maintain both of these positions, I will have to redescribe the outcomes to which we assign utilities. I do this in the next post. In this post, I want to argue that all the effort that we will go to in order to effect this redescription is worth it. That is, I want to argue that there are good reasons for thinking that an agent's internal attitudes ought to be combined to give her external attitudes in exactly the way prescribed by expected utility theory. (These three posts will together provide the basis for my commentary on Buchak's book at the Pacific APA this April.)
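For concreteness, here is a minimal sketch of the contrast (a toy example of my own, following the general shape of Buchak’s risk-weighted formula rather than any code or example from the book), with a convex risk function r(p) = p² modelling a risk-averse agent:

```python
def expected_utility(gamble):
    """gamble: list of (probability, utility) pairs, probabilities summing to 1."""
    return sum(p * u for p, u in gamble)

def risk_weighted_eu(gamble, r):
    """Risk-weighted expected utility, sketched after Buchak (2013).

    Order outcomes from worst to best: the agent gets the worst utility
    for sure, and each further increment in utility is weighted by
    r(probability of doing at least that well), not by the raw probability."""
    ranked = sorted(gamble, key=lambda pu: pu[1])
    probs = [p for p, _ in ranked]
    utils = [u for _, u in ranked]
    value = utils[0]
    for j in range(1, len(ranked)):
        p_at_least_j = sum(probs[j:])  # chance of getting outcome j or better
        value += r(p_at_least_j) * (utils[j] - utils[j - 1])
    return value

coin_flip = [(0.5, 0.0), (0.5, 100.0)]  # hypothetical gamble: 0 or 100 utils
print(expected_utility(coin_flip))                    # 50.0
print(risk_weighted_eu(coin_flip, lambda p: p ** 2))  # 25.0: risk-averse value
```

With r(p) = p the two functions coincide; a convex r discounts the chancy increments, which is how the theory lets risk attitudes show up in the evaluation of gambles rather than only in the utility function.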

Friday, 21 March 2014

(It took me much longer than I had anticipated to get back to this paper, but here is the final part of my paper on axiomatizations of arithmetic and the first-order/second-order divide. Part I is here; Part II is here; Part III is here. As always, comments are welcome!)

Given the (apparent) impossibility of
tackling the descriptive and deductive projects at once with one and the same
underlying logical system – what Tennant (2000) describes as ‘the impossibility
of monomathematics’ – what should we conclude about the general project of
using logic to investigate the foundations of mathematics? And what should we
conclude about the first-order vs. second-order divide? I will discuss each of
these two questions in turn.

If the picture sketched in the previous
sections is one of partial failure, it can equally well be seen as a picture of
partial success. Indeed, a number of first-order mathematical theories can be
made to be categorical with suitable second-order extensions (Read 1997). And
thus, as argued by Read, there is a sense in which the completeness project of
the early days of formal axiomatics has
been achieved (despite Gödel’s results), namely in the descriptive sense
countenanced by Dedekind and others.

Moreover, categoricity failure need not be
viewed as a complete disaster, if one bears in mind Shapiro’s (1997) useful
distinction between algebraic and nonalgebraic theories:

Roughly, non-algebraic theories are theories which appear at first sight to be about a unique model: the intended model of the theory. We have seen examples of such theories: arithmetic, mathematical analysis… Algebraic theories, in contrast, do not carry a prima facie claim to be about a unique model. Examples are group theory, topology, graph theory… (Horsten 2012, section 4.2)

In this vein, proofs of (non-)categoricity
can be viewed as a means of classifying algebraic and non-algebraic theories
(Meadows 2013). This means that the descriptive (non-algebraic) project of
picking out a previously chosen mathematical structure and describing it in
logical terms has developed into the more general descriptive project of
studying theories and groups of theories not only insofar as they instantiate
unique structures (i.e. non-algebraic as well as algebraic versions of the
descriptive project).

On the deductive side, things may seem less
rosy at first sight. In a sense, first-order logic is not only descriptively
inadequate: it is also deductively inadequate, given the impossibility of a
deductively complete first-order theory of the natural numbers, and the fact
that first-order logic itself is undecidable (though complete). It does have a
better behaved underlying notion of logical consequence when compared to
second-order logic, but it still falls short of delivering the deductive power
that e.g. Frege or Hilbert would have hoped for. In short, first-order logic
might be described as being ‘neither here nor there’.

However, if one looks beyond the confines of
first-order or second-order logic, developments in automated theorem proving
suggest that the deductive use as described by Hintikka is still alive and
kicking. Admittedly, there is always the question of whether a given
mathematical theorem, formulated in ‘ordinary’ mathematical language, is
properly ‘translated’ into the language used by the theorem-proving program.
But automated theorem proving is in many senses a compelling instantiation of
Frege’s idea of putting chains of reasoning to test.
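To give a minimal taste of what such testing of chains of reasoning looks like in practice, here is a toy sketch in Lean 4 (assuming a recent toolchain where the omega tactic is available; my illustration, not tied to any particular system discussed here):

```lean
-- A machine-checked chain of reasoning: the sum of two even natural
-- numbers is even, with evenness spelled out as 'twice something'.
theorem even_add_even (m n : Nat)
    (hm : ∃ k, m = 2 * k) (hn : ∃ k, n = 2 * k) :
    ∃ k, m + n = 2 * k := by
  cases hm with
  | intro a ha =>
    cases hn with
    | intro b hb =>
      -- Witness: a + b; the remaining equation is closed by linear arithmetic.
      exact ⟨a + b, by omega⟩
```

Every step is checked by the kernel, so the ‘translation’ worry mentioned above concerns only the statement of the theorem, not the correctness of the reasoning.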

Recently, the new research program of
homotopy type-theory promises to bring in a whole new perspective to the
foundations of mathematics. In particular, its base logic, Martin-Löf’s
constructive type-theory, is known to enjoy very favorable computational
properties, and the focus on homotopy theory brings in a clear descriptive
component. It is too early to tell whether homotopy type-theory will indeed
change the terms of the game (as its proponents claim), but it does seem to
offer new prospects for the possibility of unifying the descriptive perspective
and the deductive perspective.

In sum, what we observe currently is not a
complete demise of the original descriptive and deductive projects of pioneers
such as Frege and Dedekind, but rather a transformation of these projects into
more encompassing, more general projects.

As for the first-order vs. second-order
divide, it may be instructive to look in more detail into the idea of
second-order extensions of first-order theories, specifically with respect to
arithmetic. Some of these proposals can be described as ‘optimization projects’
that seek to incorporate the least amount of second-order vocabulary so as to
ensure categoricity, while producing a deductively well-behaved theory. In
other words, the goal of an optimal tradeoff between expressiveness and
tractability may not be entirely unreasonable after all.

One such example is the framework of ‘ancestral logic’ (Avron 2003, Cohen 2010). Smith (2008) argues on plausible
conceptual grounds that our basic intuitive grasp of arithmetic surely does not
require the whole second-order conceptual apparatus, but only the concept of
the ancestral of a relation, or the
idea of transitive closure under iterable operations (my parents had
parents, who in turn had parents, who themselves had parents, and so on). Another
way to arrive at a similar conclusion is to appreciate that what is needed to
establish categoricity by extending a first-order theory is nothing more than
the expressive power required to formulate the induction schema, or
equivalently the last, second-order axiom in the Dedekind/Peano axiomatization
(the one needed to exclude ‘alien intruders’). Here again, the concept of the
ancestral of a relation is a plausible candidate (Smith 2008, section 3; Cohen
2010, section 5.3).
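To fix ideas, the ancestral is a thoroughly concrete notion: it is just transitive closure, computable by iterating the relation until nothing new is reached. A minimal sketch in Python (my illustration, with a made-up family example):

```python
def ancestral(relation, start):
    """Return everything reachable from `start` by applying `relation`
    one or more times: the transitive closure, i.e. the 'ancestral' of
    the relation, evaluated at `start`.

    `relation` maps an element to the set of its immediate successors
    (e.g. a person to his or her parents)."""
    reached = set()
    frontier = set(relation.get(start, set()))
    while frontier:
        reached |= frontier
        frontier = {y for x in frontier
                      for y in relation.get(x, set())} - reached
    return reached

# Hypothetical example: 'parent of' has 'ancestor of' as its ancestral.
parent = {"me": {"mum", "dad"}, "mum": {"grandma"}, "grandma": {"great-grandma"}}
print(ancestral(parent, "me"))  # {'mum', 'dad', 'grandma', 'great-grandma'}
```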

Extensions of first-order logic with the concept
of the ancestral yield a number of interesting systems (Smith 2008, section 4;
Cohen 2010, chapter 5). These systems, while not being fully axiomatizable
(Smith 2008, section 4), enjoy a number of favorable proof-theoretical
properties (Cohen 2010, chapter 5). Indeed, they are vastly ‘better behaved’
from a deductive point of view than full-blown second-order logic – and of
course, they are categorical.

Significant for our purposes is the status of
the notion of the ancestral, which straddles first-order and second-order
logic. Smith argues that the fact that this notion can be defined in
second-order terms does not necessarily mean that it is an essentially
higher-order notion:

In sum, the claim is that the
child who moves from a grasp of a relation to a grasp of the ancestral of that
relation need not thereby manifest an understanding of second-order
quantification interpreted as quantification over arbitrary sets. It seems, rather,
that she has attained a distinct conceptual level here, something whose grasp requires
going beyond a grasp of the fundamental logical constructions regimented in first-order
logic, but which doesn’t take us as far as an understanding of full second-order quantification.
(Smith 2008)

What this suggests is that the first-order vs.
second-order divide itself may be too coarse to describe adequately the
conceptual building blocks of arithmetic. It is clear that purely first-order
vocabulary will not yield categoricity, but it would be misguided to view the
move to full-blown second-order logic as the next ‘natural’ step. In effect, as
argued by Smith, the concept of the ancestral of a relation is essentially
neither first-order nor second-order, properly speaking. So maybe the problem lies
precisely in the coarse first-order vs. second-order dichotomy when it comes to
the key concepts at the foundations of arithmetic (such as the concept of the
ancestral, or Dedekind’s notion of chains). We may need different, intermediate
categories to classify and analyze these concepts more accurately.

4. Conclusions

My starting point was the observation that
first-order Peano Arithmetic is non-categorical but deductively well-behaved,
while second-order Peano Arithmetic is categorical but deductively ill-behaved.
I then turned to Hintikka’s distinction between descriptive and deductive
approaches for the foundations of mathematics. Both approaches were represented
in the early days of formal axiomatics at the end of the 19th
century, but the descriptive approach was undoubtedly the predominant one; Frege
was then the sole representative of the deductive approach.

Given the (apparent?) impossibility of
combining both approaches in virtue of the orthogonal desiderata of
expressiveness and tractability, one might conclude (as Tennant (2000) seems to
argue) that the project of providing logical foundations for mathematics itself
is misguided from the start. But I have argued that a story of partial failure
is also a story of partial success, and that both projects (descriptive and
deductive) remain fruitful and vibrant. I have also argued that an
investigation of the conceptual foundations of arithmetic seems to suggest that
the first-order vs. second-order dichotomy is in fact too coarse, as some key
concepts (such as the concept of the ancestral of a relation) seem to inhabit a
‘limbo’ between the two realms.

One of the main conclusions I wish to draw
from these observations is that there is no such thing as a unique project for
the foundations of mathematics. Here we focused on two distinct projects,
descriptive and deductive, but there may well be others. While it may seem that
these two perspectives are incompatible, there is both the possibility of
‘optimization projects’, i.e. the search for the best trade-off between
expressive and deductive power (e.g. ancestral arithmetic), and the possibility
that an entirely new approach (maybe homotopy type-theory?) may even dissolve
the apparent impossibility of fully engaging in both projects at once. It is
perhaps due to an excessive focus on the first-order vs. second-order divide
that we came to think that the two projects are incompatible.

At any rate, the choice of formalism/logical framework
will depend on the exact goals of the formalization/axiomatization. Here, the
focus has been on the expressiveness-tractability axis, but there may well be other
relevant parameters. Now, if we acknowledge that there may be more than one
legitimate theoretical goal when approaching mathematics with logical tools
(and here we discussed two, prima facie equally legitimate approaches: descriptive and deductive), then there is no reason why there should be a unique, most
appropriate logical framework for the foundations of mathematics. The picture
that emerges is of a multifaceted, pluralistic enterprise, not of a uniquely
defined project, and thus one allowing for multiple, equally legitimate
perspectives and underlying theoretical frameworks. A plurality of goals suggests a form of logical pluralism, and thus, perhaps there is no real ‘dispute’ between first-order and second-order logic in this
domain.

This is to invite young researchers (PhD students and post-docs) to apply for the upcoming summer school

"Proof, Truth, Computation. Modern Foundations of Mathematics and Contemporary Philosophy".

Application by female scientists is particularly encouraged.

The event will take place from 21st to 25th July 2014 (arrival 20th July afternoon, departure 25th July after noon) in the Benedictine nunnery Frauenwoerth on the Fraueninsel in Chiemsee, between Munich and Salzburg.

To get an idea of this summer school, especially of its interdisciplinary character, please see the material provided at the end of this message. Junior participants will be particularly expected to contribute to the questions and answers sessions and to the round table discussions.

PhD students need to send a CV of at most 2 pages, a brief letter of motivation and one letter of reference. Postdocs only need to send a CV of at most 2 pages. All applicants need to state whether they are also applying for funding and, if so, to what extent. Only a limited amount of funding is available. Applicants for funding are expected to stay for the whole week, and to state the extent to which they can be funded from other sources.

If your application for funding is successful, then you will be offered reimbursement of the travel and lodging expenses that you cannot cover from other sources. This will require that you choose the cheapest travel option, and that you book your trip by 31st March 2014 in case of flights and by the earliest possible date in case of long-distance trains. We hope to be able to contribute partially to your subsistence expenses (meals).

Meals (breakfast, lunch and dinner - the latter two excluding drinks) for four and a half days will be EUR 180. PhD students and postdocs are expected to share double rooms, at EUR 125 per person for the whole week (5 nights).

Mathematical methods are about to shape some branches of contemporary philosophy just as they have formed most of the natural and many of the social sciences. The aim of the school is to mirror this development, known as mathematical philosophy or formal epistemology; to highlight the challenges that arise from it; and to display its repercussions in mathematics. As with theoretical computer science, a quite comparable spin-off of mathematics, the principal counterpart within mathematics is mathematical logic.

Since many of the objects of study lie beyond the typical commitments of contemporary mathematics, it is essential to include non-classical issues such as predicativity and constructivity. Proof theory does indeed play a pivotal role: as the area of mathematical logic that is closest to the understanding of logic as the science of formal languages and reasoning, it is predestined for interaction with both philosophical logic and computer-science logic.

A hot topic that crosses over wide ranges of the school, and is most prominently represented within it, is whether axiomatic theories of truth and of related notions, such as provability and knowledge, are possible at all in the tension between syntax and semantics. Rational belief and rational choice, epistemic issues of central philosophical relevance, are put under mathematical scrutiny by applying probabilism: that is, the thesis that a rational agent’s degrees of belief should conform to the axioms of probability theory.