Wednesday, 27 August 2014

The focus of this conference is on different approaches to the foundations of mathematics. The interaction between set-theoretic and category-theoretic foundations has had significant philosophical impact, and represents a shift in attitudes towards the philosophy of mathematics. This conference will bring together leading scholars in these areas to showcase contemporary philosophical research on different approaches to the foundations of mathematics. To accomplish this, the conference has the following general aims and objectives. First, to bring to a wider philosophical audience the different approaches that one can take to the foundations of mathematics. Second, to elucidate the pressing issues of meaning and truth that turn on these different approaches. And third, to address philosophical questions concerning the need for a foundation of mathematics, and whether or not either of these approaches can provide the necessary foundation.

Date and Venue: 12-13 January 2015 - Senate House, University of London.

Call for Papers: We welcome submissions from scholars (in particular, young scholars, i.e. early career researchers or post-graduate students) on any area of the foundations of mathematics (broadly construed). Particularly desired are submissions that address the role of and compare different foundational approaches. Applicants should prepare an extended abstract (maximum 1,500 words) for blind review, and send it to sotfom [at] gmail [dot] com, with subject 'SOTFOM II Submission'.

Monday, 25 August 2014

It is no news to anyone that the concept of consistency is a
hotly debated topic in philosophy of logic and epistemology (as well as
elsewhere). Indeed, a number of philosophers throughout history have defended
the view that consistency, in particular in the form of the principle of
non-contradiction (PNC), is the most fundamental principle governing human
rationality – so much so that rational debate about PNC itself wouldn’t even be
possible, as famously stated by David Lewis. It is also the presumed privileged
status of consistency that seems to motivate the philosophical obsession with
paradoxes across time; to be caught entertaining inconsistent beliefs/concepts
is really bad, so blocking the emergence of paradoxes is a top priority.
Moreover, in classical as well as other logical systems, inconsistency entails
triviality, and that of course amounts to complete disaster.

Since the advent of dialetheism, and in particular under the
powerful assaults of karateka Graham Priest, PNC has been under pressure.
Priest is right to point out that there are very few arguments in favor of the
principle of non-contradiction in the history of philosophy, and many of them
are in fact rather unconvincing. According to him, this holds in particular of
Aristotle’s elenctic argument in Metaphysics gamma. (I agree with him that the argument there does not go through, but we
disagree on its exact structure. At any rate, it is worth noticing that, unlike
David Lewis, Aristotle did think it was possible to debate with the opponent of
PNC about PNC itself.) But despite the best efforts of dialetheists, the principle of
non-contradiction and consistency are still widely viewed as cornerstones of the
very concept of rationality.

However, in the spirit of my genealogical approach to
philosophical issues, I believe that an important question to be asked is:
What’s the big deal with consistency in the first place? What does it do for
us? Why do we want consistency so badly to start with? When and why did we
start thinking that consistency was a good norm to be had for rational
discourse? And this of course takes me back to the Greeks, and in particular
the Greeks before Aristotle.

Variations of PNC can be found stated in a few authors
before Aristotle, Plato in particular, but also Gorgias (I owe these passages
to Benoît Castelnerac; emphasis mine in both):

You have accused me in the indictment we have heard of two most
contradictory things, wisdom and madness, things which cannot exist in the same
man. When you claim that I am artful and clever and resourceful, you are
accusing me of wisdom, while when you claim that I betrayed Greece, you accused
me of madness. For it is madness to attempt actions which are impossible,
disadvantageous and disgraceful, the results of which would be such as to harm
one’s friends, benefit one’s enemies and render one’s own life contemptible and
precarious. And yet how can one have confidence in a man who in the course
of the same speech to the same audience makes the most contradictory assertions
about the same subjects? (Gorgias, Defence of Palamedes)

You cannot be believed,
Meletus, even, I think, by yourself. The man appears to me, men of Athens, highly insolent and
uncontrolled. He seems to have made his deposition out of insolence, violence
and youthful zeal. He is like one who composed a riddle and is trying it out:
“Will the wise Socrates realize that I am jesting and contradicting myself, or
shall I deceive him and others?” I think he contradicts himself in the affidavit,
as if he said: “Socrates is guilty of not believing in gods but believing in
gods”, and surely that is the part of a jester. Examine with me, gentlemen, how
he appears to contradict himself, and you, Meletus, answer us. (Plato, Apology 26e-27b)

What is particularly important for my purposes here is that these are dialectical
contexts of debate; indeed, it seems that originally, PNC was to a great extent
a dialectical principle. To lure the opponent into granting contradictory
claims, and exposing him/her as such, is the very goal of dialectical
disputations; granting contradictory claims would entail the opponent being discredited
as a credible interlocutor. In this sense, consistency would be a derived
norm for discourse: the ultimate goal of discourse is persuasion; now, to be
able to persuade one must be credible; a person who makes inconsistent claims
is not credible, and thus not persuasive.

As argued in a recent draft paper by my post-doc Matthew Duncombe, this general
principle applies also to discursive thinking for Plato, not only for
situations of debates with actual opponents. Indeed, Plato’s model of
discursive thinking (dianoia) is of an internal dialogue with an
imaginary opponent, as it were (as to be found in the Theaetetus
and the Philebus). Here too, consistency will be related to
persuasion: the agent herself will not be persuaded to hold beliefs which turn
out to be contradictory, but realizing that they are contradictory may well
come about only as a result of the process of discursive thinking (much
as in the case of the actual refutations performed by Socrates on his
opponents).

Now, as also argued by Matt in his paper, the status of consistency and PNC for
Aristotle is very different: PNC is grounded ontologically, and then
generalizes to doxastic as well as dialogical/discursive cases (although one of
the main arguments offered by Aristotle in favor of PNC is essentially
dialectical in nature, namely the so-called elenctic argument). But because
Aristotle postulates the ontological version of PNC -- a thing a
cannot both be F and not be F at the same time, in the same way -- it is
difficult to see how a fruitful debate can be had between him and the modern
dialetheists, who maintain precisely that such a thing is after all possible in
reality.

Instead, I find Plato's motivation for adopting something like PNC much more plausible,
and philosophically interesting in that it provides an answer to the
genealogical questions I stated earlier on. What consistency does for us is to
serve the ultimate goal of persuasion: an inconsistent discourse is prima facie
implausible (or less plausible). And so, the idea that the importance of consistency is subordinated to
another, more primitive dialogical norm (the norm of persuasion) somehow
deflates the degree of importance typically attributed to consistency in the
philosophical literature, as a norm in itself.

Besides dialetheists, other contemporary philosophical theories might benefit from the
short ‘genealogy of consistency’ I’ve just outlined. I am now thinking in particular of
work done in formal epistemology by e.g. Branden Fitelson, Kenny Easwaran (e.g. here), among others,
contrasting the significance of consistency vs. accuracy. It seems to me that
much of what is going on there is also a deflation of the significance of
consistency as a norm for rational thought; their conclusion is thus quite
similar to the one of the historically-inspired analysis I’ve presented here,
namely: consistency is over-rated.

MCMP Workshop "Bridges 2014"

New York City, 2 and 3 Sept, 2014

The Munich Center for Mathematical Philosophy (MCMP) cordially invites you to "Bridges 2014" in the German House, New York City, on 2 and 3 September, 2014. The 2-day trans-continental meeting in mathematical philosophy will focus on inter-theoretical relations, thereby connecting the form and content of this philosophical exchange. The workshop will be accompanied by an open-to-public evening event with Stephan Hartmann and Branden Fitelson on 2 September, 2014 (6:30 pm).

We use theories to explain, to predict and to instruct, to talk about our world and order the objects therein. Different theories deliberately emphasize different aspects of an object, purposefully utilize different formal methods, and necessarily confine their attention to a distinct field of interest. The desire to enlarge knowledge by combining two theories presents a research community with the task of building bridges between the structures and theoretical entities on both sides. Especially if no background theory is available as yet, this becomes a question of principle and of philosophical groundwork: If there are any – what are the inter-theoretical relations to look like? Will a unified theory possibly adjudicate between monist and dualist positions? Under what circumstances will partial translations suffice? Can the ontological status of inter-theoretical relations inform us about inter-object relations in the world? Our spectrum of interest includes: reduction and emergence, mechanistic links between causal theories, belief vs. probability, mind and brain, relations between formal and informal accounts in the special sciences, cognition and the outer world.

Program and Registration

Due to security regulations at the German House, registration is required (separately for the workshop and the evening event). Details on how to register and the full schedule can be found on the official website.

Sunday, 24 August 2014

In this article, the relationship between second-order comprehension and unrestricted mereological fusion (over atoms) is clarified. An extension $\mathsf{PAF}$ of Peano arithmetic with a new binary mereological notion of "fusion", and a scheme of unrestricted fusion, is introduced. It is shown that $\mathsf{PAF}$ interprets full second-order arithmetic, $Z_2$.

Roughly this shows:

First-order arithmetic + mereology = second-order arithmetic.

This implies that adding the theory of mereological fusions can be a very powerful, non-conservative, addition to a theory, perhaps casting doubt on the philosophical idea that once you have some objects, then having their fusion also is somehow "redundant". The additional fusions can in some cases behave like additional "infinite objects"; positing their existence allows one to prove more about the original objects.
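The shape of the result can be sketched as follows (the notation here is illustrative, not necessarily the paper's). Unrestricted fusion says that every instantiated condition on atoms has a fusion:

```latex
\exists x\,\bigl(\mathrm{Atom}(x)\wedge\varphi(x)\bigr)\;\rightarrow\;
\exists y\,\forall x\,\Bigl(\mathrm{Atom}(x)\rightarrow\bigl(x\preceq y\leftrightarrow\varphi(x)\bigr)\Bigr)
```

Interpreting the second-order variables of $Z_2$ as fusions, and membership $n \in X$ as "the atom numbered $n$ is part of $X$", each instance of the comprehension scheme $\exists X\,\forall n\,(n\in X\leftrightarrow\varphi(n))$ is then witnessed by the corresponding fusion (modulo handling the empty case, since fusion requires at least one atom satisfying the condition). This is the rough sense in which the fusions behave like the "infinite objects" of second-order arithmetic.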

Friday, 22 August 2014

In the first part of this post, I considered the challenge to decision theory from what L. A. Paul calls epistemically transformative experiences. In this post, I'd like to turn to another challenge to standard decision theory that Paul considers. This is the challenge from what she calls personally transformative experiences. Unlike an epistemically transformative experience, a personally transformative experience need not teach you anything new, but it does change you in another way that is relevant to decision theory---it leads you to change your utility function. To see why this is a problem for standard decision theory, consider my presentation of naive, non-causal, non-evidential decision theory in the previous post.
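For readers who want the bare mechanics, here is a minimal sketch of the naive expected-utility rule, with entirely hypothetical numbers (not Paul's own examples), showing how a change of utility function flips the recommendation:

```python
# A minimal sketch of naive expected-utility choice, illustrating why a
# change of utility function is a problem for standard decision theory.
# All acts, states, and numbers are hypothetical illustrations.

def expected_utility(act, states, probs, utility):
    """Sum over states of P(state) * U(act, state)."""
    return sum(p * utility[(act, s)] for s, p in zip(states, probs))

states = ["child thrives", "child struggles"]
probs = [0.7, 0.3]
acts = ["adopt", "don't adopt"]

# The agent's utilities before the transformative experience...
u_before = {("adopt", "child thrives"): 10, ("adopt", "child struggles"): -30,
            ("don't adopt", "child thrives"): 0, ("don't adopt", "child struggles"): 0}
# ...and after it: the experience itself rewrites the utility function.
u_after = {("adopt", "child thrives"): 50, ("adopt", "child struggles"): 5,
           ("don't adopt", "child thrives"): 0, ("don't adopt", "child struggles"): 0}

# The same choice rule recommends different acts depending on which
# utility function it is fed -- and that is precisely the puzzle:
# which one should the pre-experience agent use?
for u in (u_before, u_after):
    best = max(acts, key=lambda a: expected_utility(a, states, probs, u))
    print(best)  # prints "don't adopt", then "adopt"
```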

Tuesday, 19 August 2014

Mathematics has been much in the news recently, especially with the announcement of the latest four Fields medalists (I am particularly pleased to see the first woman, and the first Latin-American, receiving the highest recognition in mathematics). But there was another remarkable recent event in the world of mathematics: Thomas Hales has announced the completion of the formalization of his proof of the Kepler conjecture. The conjecture: “what is the best way to stack a collection of spherical objects, such as a display of oranges for sale? In 1611 Johannes Kepler suggested that a pyramid arrangement was the most efficient, but couldn't prove it.” (New Scientist)

We are pleased to announce the completion of the Flyspeck project, which has constructed a formal proof of the Kepler conjecture. The Kepler conjecture asserts that no packing of congruent balls in Euclidean 3-space has density greater than the face-centered cubic packing. It is the oldest problem in discrete geometry. The proof of the Kepler conjecture was first obtained by Ferguson and Hales in 1998. The proof relies on about 300 pages of text and on a large number of computer calculations.
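For concreteness, the density in question is easy to compute: the face-centered cubic packing fills $\pi/\sqrt{18} \approx 74.05\%$ of space. A quick sketch (the simple-cubic comparison is just for illustration):

```python
import math

# Density of the face-centered cubic (FCC) packing: pi / sqrt(18),
# equivalently pi / (3 * sqrt(2)). The Kepler conjecture asserts that
# no packing of congruent balls in 3-space does better.
fcc_density = math.pi / math.sqrt(18)

# Compare with the simple cubic packing (one ball of radius 1/2 per
# unit cube): pi / 6.
simple_cubic = math.pi / 6

print(round(fcc_density, 5))   # prints 0.74048
print(round(simple_cubic, 5))  # prints 0.5236
```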

The formalization project covers both the text portion of the proof and the computer calculations. The blueprint for the project appears in the book "Dense Sphere Packings," published by Cambridge University Press. The formal proof takes the same general approach as the original proof, with modifications in the geometric partition of space that have been suggested by Marchal.

So far, nothing very new, philosophically speaking. Computer-assisted proofs (both at the level of formulation and at the level of verification) have attracted the interest of a number of philosophers in recent times (here’s a recent paper by John Symons and Jack Horner, and here is an older paper by Mark McEvoy, which I commented on at a conference back in 2005; there are many other papers on this topic by philosophers). More generally, the question of the extent to which mathematical reasoning can be purely ‘mechanical’ remains a lively topic of philosophical discussion (here’s a 1994 paper by Wilfried Sieg on this topic that I like a lot). Moreover, this particular proof of the Kepler conjecture does not add anything substantially new (philosophically) to the practice of computer-verifying proofs (while being quite a feat mathematically!). It is rather something Hales said to the New Scientist that caught my attention (against the background of the 4 years and 12 referees it took to human-check the proof for errors): "This technology cuts the mathematical referees out of the verification process," says Hales. "Their opinion about the correctness of the proof no longer matters."

Now, I’m with Hales that ‘software intensive mathematics’ (to borrow Symons and Horner’s terminology) is of great help to offload some of the more tedious parts of mathematical practice such as proof-checking. But there are a number of reasons that suggest to me that Hales’ ‘optimism’ is a bit excessive, in particular with respect to the allegedly expendable role of the human referee (broadly construed) in mathematical practice, even if only for the verification process.

Indeed, and as I’ve been arguing in a number of posts, proof-checking is a major aspect of mathematical practice, basically corresponding to the role I attribute to the fictitious character ‘opponent’ in my dialogical conception of proof (see here). The main point is the issue of epistemic trust and objectivity: to be valid, a proof has to be ‘replicable’ by anyone with the relevant level of competence. This is why probabilistic proofs are still thought to be philosophically suspicious (as argued for example by Kenny Easwaran in terms of the notion of ‘transferability’). And so, automated proof-checking will most likely never replace completely human proof-checking, if nothing else because the automated proof-checkers themselves must be kept ‘in check’ (lame pun, I know). (Though I am happy to grant that the role of opponent can be at least partially played by computers, and that our degree of certainty in the correctness of Hales’ proof has been increased by its computer-verification.)
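For readers unfamiliar with proof assistants, here is a toy illustration of what a machine-checked proof looks like, written in Lean (Flyspeck itself used HOL Light and Isabelle, but the idea is the same: the proof assistant's kernel, not a human referee, certifies each inference step):

```lean
-- A toy machine-checked proof (Lean 4): commutativity of addition on
-- the natural numbers, discharged by appeal to a library lemma. The
-- kernel verifies that the term really proves the stated proposition.
theorem add_comm_example (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n
```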

Moreover, mathematics remains a human activity, and mathematical proofs essentially involve epistemic and pragmatic notions such as explanation and persuasion, which cannot be taken over by purely automated proof-checking. (Which does not mean that the burden of verification cannot be at least partially transferred to automata!) In effect, a good proof is not only one that shows that the conclusion is true, but also why the conclusion is true, and this explanatory component is not obviously captured by automata. In other words, a proof may be deemed correct by computer-checking, and yet fail to be persuasive in the sense of having true explanatory value. (Recall that Smale’s proof of the possibility of sphere eversion was viewed with a certain amount of suspicion until models of actual processes of eversion were discovered.)

Finally, turning an ‘ordinary’ mathematical proof* into something that can be computer-checked is itself a highly theoretical, non-trivial, and essentially informal endeavor that must itself receive a ‘seal of approval’ from the mathematical community. While mathematicians hardly ever disagree on whether a given proof is or is not valid once it is properly scrutinized, there can be (and has been, as once vividly described to me by Jesse Alama) substantive disagreement on whether a given formalized version of a proof is indeed an adequate formalization of that particular proof. (This is also related to thorny issues in the metaphysics of proofs, e.g. criteria of individuation for proofs, which I will leave aside for now.)

A particular informal proof can only be said to have been computer-verified if the formal counterpart in question really is (deemed to be) sufficiently similar to the original proof. (Again, the formalized proof may have the same conclusion as the original informal proof, in which case we may agree that the theorem they both purport to prove is true, but this is no guarantee that the original informal proof itself is valid. There are many invalid proofs of true statements.) Now, evaluating whether a particular informal proof is accurately rendered in a given formalized form is not a task that can be delegated to a computer (precisely because one of the relata of the comparison is itself an informal construct), and for this task the human referee remains indispensable.

And so, I conclude that, pace Hales, the human mathematical referee is not going to be completely cut out of the verification process any time soon. Nevertheless, it is a welcome (though not entirely new) development that computers can increasingly share the burden of some of the more tedious aspects of mathematical practice: it’s a matter of teamwork rather than the total replacement of a particular approach to proof-verification by another (which may well be what Hales meant in the first place).

-----------------------------

* In some research programs, mathematical proofs are written directly in computer-verifiable form, such as in the newly created research program of homotopy type-theory.

Sunday, 17 August 2014

Tim Blais, a McGill University physics student, made this really great a cappella version of "Bohemian Rhapsody", called "Bohemian Gravity", with physics lyrics explaining superstring theory, like "Manifolds must be Kähler!" (lyrics here).

Thursday, 14 August 2014

I have never eaten Vegemite---should I try it? I currently have no children---should I apply to adopt a child? In each case, one might imagine, whichever choice I make, I can make it rationally by appealing to the principles of decision theory. Not according to L. A. Paul. In her rich and fascinating new book, Transformative Experience, Paul issues two challenges to orthodox decision theory---they are based upon examples such as these.

(In this post and the next, I'd like to try out some ideas concerning Paul's challenges to orthodox decision theory. The idea is that some of them will make it into my contribution to the Philosophy and Phenomenological Research book symposium on Transformative Experience.)

An article "Worlds Without Domain" arguing against the idea that possible worlds have domains. The abstract is: "A modal analogue to the 'hole argument' in the foundations of spacetime is given against the conception of possible worlds having their own special domains."