Abstract: We propose a critique of normativism, defined as the idea that human thinking reflects a normative system against which it should be measured and judged. We analyze the methodological problems associated with normativism, proposing that it invites the controversial “is-ought” inference, much contested in the philosophical literature. This problem is triggered when there are competing normative accounts (the arbitration problem), as empirical evidence can help arbitrate between prescriptive theories, but not between normative systems. Drawing on linguistics as a model, we propose that a clear distinction between normative systems and competence theories is essential, arguing that equating them invites an “is-ought” inference: to wit, supporting normative “ought” theories with empirical “is” evidence. We analyze in detail two research programmes with normativist features – Oaksford and Chater’s rational analysis and Stanovich and West’s individual differences approach – demonstrating how, in each case, equating norm and competence leads to an is-ought inference. Normativism triggers a host of research biases in the psychology of reasoning and decision making: focusing on untrained participants and novel problems, analyzing psychological processes in terms of their normative correlates, and neglecting philosophically significant paradigms when they do not supply clear standards for normative judgement. For example, in a dual-process framework, normativism can lead to a fallacious “ought-is” inference, in which normative responses are taken as diagnostic of analytic reasoning. We propose that little can be gained from normativism that cannot be achieved by descriptivist computational-level analysis, illustrating our position with Hypothetical Thinking Theory and the theory of the suppositional conditional. We conclude that descriptivism is a viable option, and that theories of higher mental processing would be better off freed from normative considerations.

It's a very interesting paper, which raises in particular the question of the status of formal models as normative models for human reasoning, and this is, I believe, highly relevant for the M-Phi community. The list of commentaries includes many philosophers and philosophically-inclined psychologists, such as: Dora Achourioti, Andy Fugard, Keith Stenning, Igor Douven, Stephen Stich, Niki Pfeifer, Nick Chater and Mike Oaksford, Joelle Proust, Gerhard Schurz, Keith Stanovich, and myself, among many others.

Monday, 24 October 2011

It works with English, German, and Spanish input, but details are only given in German: the index normally ranges from 0 to 1; it is unfit to assess the quality of arguments; but it is designed to spot buzzwords and verbose writing. As a note of caution, it is expected to be somewhat harsh on items from the contemporary scientific literature. That said, testing paper abstracts is quite fun, and maybe sobering. You might well get something like the following (disclosure: I did...):

Score: .51. Comment: "Something's fishy. Obviously you want to sell something, or you're trying to impress somebody. Are you sure that you have a real message, and if so: who would understand it?"

Monday, 17 October 2011

My name is David Corfield and I'm very grateful to have been invited to join this blog as a contributor. With five years of blogging behind me at the n-Category Café, I relish the opportunity to talk with a new audience. My rate of blogging may have slowed, especially as my administrative load has increased - I'm now Head of Philosophy at the University of Kent - but I'm looking forward to writing some posts here.

I have interests in a variety of approaches to mathematical philosophy, including the statistical learning theory I picked up from my time at the Max Planck Institute for Biological Cybernetics in Tübingen, but the main idea I would like to promote to the audience here is that category theory is worth exploring as a resource for the mathematical philosopher.

I have recently published a couple of articles which examine the light category theory can throw on familiar infinite structures. In Understanding the Infinite I: Niceness, Robustness, and Realism, I look at the phenomenon where an infinite entity is defined by a universal property, and through this inherits 'for free' a range of other nice properties. In Understanding the Infinite II: Coalgebra, I look at the duality between minimally and maximally defined entities in the context of the duality between 'algebra' and 'coalgebra'.

Perhaps had I known of Shaughan Lavine (1994) Understanding the Infinite, Harvard University Press, I might have opted for a different title.

There's much to do to understand the relationship between category theory and the traditional foundational branches, which have drawn most philosophical attention. Recently, I posed a question on MathOverflow concerning category theory and Joel Hamkins' set theoretic multiverse. The answer by Joel there shows just the sort of joint investigation needed. A few years ago at the Café, we had a discussion on the relationship between category theory and model theory.

Thursday, 13 October 2011

We are happy to announce that the Ninth Annual Formal Epistemology Workshop (FEW 2012) will be held in Munich, May 29 - June 1, 2012. This year's meeting is sponsored by the Munich Center for Mathematical Philosophy. The meeting will take place at the (stunningly beautiful) Nymphenburg Palace (compliments of the Carl Friedrich von Siemens Foundation).

We are accepting submissions for contributed papers. The deadline for submissions is January 15, 2012. Notifications will be sent out by March 15, 2012. Please send submissions to Branden Fitelson. A selection of papers presented at FEW 2012 will be published in a special issue of Erkenntnis.

Some funding will be available for graduate student participation. Please contact Hannes Leitgeb for more information.

There will be two special (afternoon) sessions at this year's FEW. The first will be a special session on Logic & Rationality, which will include talks by David Christensen and Robbie Williams, and the second will be a memorial session for Horacio Arló-Costa, which will include talks (pertaining to Horacio's various seminal philosophical contributions) by Cristina Bicchieri, Eric Pacuit, Rohit Parikh, and Paul Pedersen.

We will also have two (two part) tutorials, presented by Jeff Paris (inductive probability), and Charlotte Werndl (determinism, indeterminism, and underdetermination).

Friday, 7 October 2011

Unfortunately, I have not received any submissions for posts on inspiring female figures in the world of M-Phi. But somebody did send me the link to a cool Ada Lovelace t-shirt available at the ThinkGeek website (how fitting...).

Thursday, 6 October 2011

Ultra-finitism is not discussed much in the standard philosophy of mathematics literature, where what seems to amount to the same view is called "strict finitism". Here are four pieces on the topic:

Discussions of strict finitism have often attempted to refute the view by invoking Wang's paradox, noting that "being feasible" is a vague notion susceptible to a sorites paradox.

My own view is that strict finitism tends to be ignored for a much simpler reason: it seems confused - it conflates types and tokens (see premise (U) below). Tokens are, for example, physical inscriptions of linguistic expressions. A token $t$ of the canonical numeral "$SS......S0$" is a physical inscription of it. But numerals themselves are not tokens. Linguistic expressions are finite sequences of symbol types. So, the numeral $\underline{n}$ is a finite sequence of length $n+1$. This is an abstract object, not a physical one.
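To make the type/token distinction concrete, here is a minimal Python sketch (my own illustration, purely expository): the numeral for $n$ is the abstract string type built from $n$ occurrences of "S" followed by "0"; any particular printed or stored copy of that string would be a token of it.

```python
def numeral(n):
    """The canonical numeral for n: the symbol-type sequence "SS...S0"
    with n occurrences of "S" followed by one "0"."""
    return "S" * n + "0"

# The numeral for n is a sequence of n+1 symbol types:
assert numeral(0) == "0"
assert numeral(3) == "SSS0"
assert len(numeral(3)) == 3 + 1
```

The function returns the same abstract value however often it is called; tokens, by contrast, are the individual physical inscriptions of that value.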

Philosophers sceptical of abstract entities reject mathematical reality in its entirety; so the appropriate position for someone with such views is nominalism (or, as it has come to be known more recently, fictionalism): i.e., the view that there are no numbers, or sets, or functions, etc. (W.V. Quine & Nelson Goodman 1947; Hartry Field 1980). Of course, all will agree that there are tokens. But nominalists do not think numbers are tokens. There is a very large literature on nominalism/fictionalism.

So, if I understand ultra-finitism correctly, the argument for ultra-finitism is something like this:

(P) There is an upper limit on how many tokens there are, or even feasibly could be.
(U) All numbers are tokens.
------------------------------------------------------
(C) One must be dubious about large numbers.

The crucial assumption in ultra-finitism is the reductionist assumption (U): a reduction of numbers to (physical) tokens. Considerations about the non-feasibility of exponentiation justify physical premise (P), not reductionist premise (U); and everyone agrees with (P), unless they "go modal" (as Quine once put it, "the cure is worse than the disease").
Three questions:

1. Is this reconstruction right? If not, is there a better one?
2. If it is right, what is the justification for the ultra-finitist assumption (U)?
3. Perhaps ultra-finitists are nominalists/fictionalists, but unfamiliar with the philosophical literature on the topic?

Tuesday, 4 October 2011

This Ada Lovelace Day on October 7, share your story about a woman — whether an engineer, a scientist, a technologist or mathematician — who has inspired you to become who you are today. Write a blog post, record a podcast, film a video, draw a comic, or pick any other way to talk about the women who have been guiding lights in your life. Give your heroine the credit she deserves!

I was wondering if people would be interested in writing a post on an inspiring female figure in mathematical philosophy, philosophical logic etc.? I'm already going to be writing a post on a scientist for NewAPPS, and it would be great if M-Phi could also contribute to Ada Lovelace day. An obvious name that comes to mind is Ruth Barcan-Marcus, but there are surely other inspiring female figures in our field.

M-Phi bloggers can write a post directly, but non-bloggers are also welcome to contribute. If you would like to contribute, please send your submission (between 500 and 1000 words) to cdutilhnovaes [youknowwhat] yahoo [dot] com by Thursday evening. I look forward to many submissions! :)

Monday, 3 October 2011

Last week, the foundations of mathematics community was shaken by yet another claim of the inconsistency of Peano Arithmetic (PA). This time, it was put forward by Edward Nelson, professor of mathematics at Princeton, who claimed to have found a proof of the inconsistency of PA. A few months ago, quite some stir had been caused when Fields medalist V. Voevodsky seemed to be saying that the consistency of PA was an open question; but Nelson’s claim was much more radical: he claimed to have proved that PA was outright inconsistent! (Here is a great post by Jeff Ketland with a crash-course on PA and a discussion of ways in which it might be inconsistent.)

Nelson announced his results on the FOM mailing list on September 26th 2011, providing links to two formulations of the proof: one in book form and one in short-summary form. Very quickly, a few math-oriented blogs had posts up on the topic; we all wanted to understand the outlines of Nelson’s purported proof, and most of us would have bet that there was a mistake in it somewhere. External evidence strongly suggests that PA is consistent, in particular because so many robust mathematical results would have to be revised if PA were inconsistent (not to mention several proofs of the consistency of arithmetic in alternative systems, such as Gentzen’s proofs -- see here).

Indeed, it did not take long for someone to find an apparent loophole in Nelson’s purported proof, and not just someone: math prodigy and Fields medalist Terence Tao (UCLA), who is considered by many to be the most brilliant mathematician currently active. The venue in which Tao first formulated his reservations was somewhat original: on the G+ thread opened by John Baez on the topic. (So those who dismiss social networks as a pure waste of time have at least one occurrence of actual top-notch science being done in a social network to worry about!) At the same time, Daniel Tausk, a professor of mathematics at the University of São Paulo, had identified the same mistake in Nelson’s argument, and alerted Nelson of the problem in private communication. Judging from his replies (which can be read in the FOM archive as well as in The n-Category Café post), Nelson did not immediately understand the objection, but within a few days consensus had emerged that the mistake identified by Tao and Tausk was irreparable. Then, on October 1st, Nelson graciously acknowledged the correctness of the objections and withdrew the claim of having a proof of the inconsistency of PA. As has been remarked by many commentators, it is certainly a sign of mathematical greatness to acknowledge one’s own mistakes so promptly!

While it must have taken true mathematical insight to see what was wrong with the purported proof, once the mistake was spotted, it was actually not overly difficult to understand – if not in all its technical details, which presuppose a good understanding of Chaitin’s theorem, at least in its basic idea: it was based on the unwarranted assumption that the hierarchy of sub-theories built for the proof is going to map neatly into the hierarchy of the Kolmogorov complexity of each of these sub-theories. In the game of mathematics, tacitly relying on assumptions is not a good move, even less so when it turns out that the assumption is false.

From the point of view of the social practice of mathematics, this episode raises a number of interesting questions, which can be framed from the point of view of Jody Azzouni’s insightful analysis of how and why mathematics is unique as a social practice. Azzouni argues that mathematics is one of the most homogeneous groups of human practices around, and in particular one which seems particularly favorable for the emergence of consensus. Indeed, mathematicians virtually always agree on whether a purported proof is or is not a valid mathematical proof; it may take a while for enough members of the community to understand its details (as with Deolalikar’s purported proof of $P \not= NP$ last year, but this was mostly because the latter was rather poorly formulated), but past this initial period, consensus virtually always emerges. (To be clear, there can be disagreement on whether a proof is elegant, interesting etc., but not as to its correctness or incorrectness.) This remains a very puzzling feature of mathematical practice, which at least to some people suggests that Platonism is the only plausible philosophical account of mathematical truth.

The swiftness with which consensus emerged as to what was wrong with Nelson’s purported proof is yet another example of the strong tendency towards consensus in mathematical practice – and arguably, it cannot be fully explained as a merely socially imposed kind of consensus, due to homogeneous ‘indoctrination’ by means of mathematical education. That it only took a few days for consensus to emerge naturally also depends crucially on the technologies of information dissemination currently at our disposal – mailing lists, home-pages, blogs, social networks; but more generally, the episode has been a dramatic but classical occurrence of a well known pattern in mathematics as a social practice.

The comparison with another dramatic claim, put forward for the first time more than 80 years ago, may prove to be informative. Indeed, the reception of Gödel’s incompleteness results (which has been extensively studied by Dawson and Mancosu, among others) also illustrates the emergence of a consensus, but this time it favored the highly surprising claim being put forward; in last week’s episode, by contrast, the highly surprising claim has been refuted.

In September 1930, Gödel made the first public announcement of his incompleteness result. At first, he had proved only the first incompleteness theorem; as is well known, the second incompleteness theorem was then proved independently by Gödel himself and by John von Neumann. But von Neumann was one of the few who immediately understood the result (Bernays and Carnap seem to have struggled); in fact, he may have been the first to see the far-reaching consequences it had. In particular, while Gödel was still rather cautious regarding the impact of his results for Hilbert’s program, von Neumann immediately drew the right conclusions. However, quite a few mathematicians were so taken aback by Gödel’s results that they were convinced that there must have been a mistake somewhere in the proof; just as in last week’s episode, many of them set out to find the ‘loophole’ in Gödel’s proof, but on that occasion to no avail. As narrated in this wonderful paper by P. Mancosu, the news of Gödel’s results quickly spread, but in the absence of the WWW or similar platforms, many people did not have immediate access to off-prints of the paper where the results were presented. For example, before seeing the actual results in detail, mathematician Leon Chwistek suspected that Gödel’s proof would be based on misunderstandings, but once he read the actual paper, he stood corrected:

In my last letter I have raised some doubts concerning Dr Gödel’s work that have completely disappeared after a more attentive study of the problem. I had at first thought that there was a tacit introduction of non-predicative functions which makes the use of a rigorous symbolical procedure impossible and I even feared the possibility of an antinomy. Now I see that this is out of the question. The method is truly wonderful and it fits pure type theory thoroughly. (Letter to Kaufmann, in Mancosu 1999, 44)

Both the consensus that eventually emerged concerning the correctness of Gödel’s surprising results, and the consensus swiftly established as to what was wrong with Nelson’s purported proof last week, are examples of the amazing and quite unique tendency to converge towards consensus in mathematical practice. In many senses, the most pressing task for the philosophy of mathematics, practice-based or otherwise, is to formulate a satisfactory account of this almost eerie fact about mathematics.

Sunday, 2 October 2011

The recent discussion of Edward Nelson's claim to have found a proof that Peano arithmetic, $PA$, is inconsistent has been very interesting in many ways. The proof has turned out to contain a major flaw and Professor Nelson has very graciously withdrawn the claim. I was not able to follow the alleged proof because I don't know enough about Chaitin's Theorem, or about the properties of the system $Q_0^{\ast}$ being examined. But the episode set me thinking about what might lie behind an inconsistency in $PA$, despite the fact that we have many standard mathematical proofs that $PA$ is consistent (indeed, true).

For readers who are a bit rusty on the properties of first-order arithmetic, here is some background and attempted explanation. $PA$ is a basic mathematical theory which functions, for mathematical logicians, roughly as E. coli functions for microbiologists. $PA$ is a theory expressed in a first-order formalized language $L_A$ (with identity) with

four primitive non-logical symbols: $0$, $S$, $+$, $\times$

Each of these is, in effect, functional. Along with the variables, $x_1, x_2, \dots$, one can define the terms of the language by saying that $t$ is a term if $t$ is $0$, or a variable, or is obtained by applying $S$, $+$ or $\times$ to previous terms. (This is a recursive definition.) Because $L_A$ has only one primitive predicate symbol, namely $=$, the atomic formulas of $L_A$ are equations, of the form

$t = u$,

where $t$ and $u$ are terms. For reasons of metalogical simplicity, it is usual to assume that the only logical primitives, beyond $=$, are $\neg$, $\rightarrow$ and $\forall$. Other truth-functional connectives can be defined (e.g., $\phi \vee \theta$ is short for $\neg \phi \rightarrow \theta$) and $\exists x \phi$ is short for $\neg \forall x \neg \phi$.

The $L_A$-formulas are defined, again recursively, by saying that $\phi$ is a formula of $L_A$ just if either $\phi$ is an atomic formula (i.e., an equation) or is obtained from previous formulas by applying connectives or a quantifier (i.e., $\forall x$).
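These recursive definitions are easy to mechanize. Here is an illustrative Python sketch (the tuple encoding is mine, purely for exposition) of the term clause and the atomic-formula clause:

```python
# Terms encoded as: "0", ("x", i) for the variable x_i,
# ("S", t), ("+", t, u), ("*", t, u).

def is_term(t):
    """Recursive definition of L_A-terms: 0, a variable, or S, + or *
    applied to previously formed terms."""
    if t == "0":
        return True
    if isinstance(t, tuple):
        if t[0] == "x" and len(t) == 2 and isinstance(t[1], int):
            return True
        if t[0] == "S" and len(t) == 2:
            return is_term(t[1])
        if t[0] in ("+", "*") and len(t) == 3:
            return is_term(t[1]) and is_term(t[2])
    return False

def is_atomic(phi):
    """Atomic formulas of L_A are equations t = u between terms."""
    return isinstance(phi, tuple) and len(phi) == 3 \
        and phi[0] == "=" and is_term(phi[1]) and is_term(phi[2])

assert is_term(("+", ("S", "0"), ("x", 1)))              # S0 + x_1
assert is_atomic(("=", ("S", "0"), ("+", "0", ("S", "0"))))
assert not is_term(("S",))
```

The full formula clause would add cases for $\neg$, $\rightarrow$ and $\forall x$, exactly mirroring the recursion in the definition above.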

The axioms of $PA$ are then specified as the following six individual axioms, and one axiom scheme:

$PA1$: $\forall x(S(x) \neq 0)$
$PA2$: $\forall x \forall y(S(x) = S(y) \rightarrow x = y)$
$PA3$: $\forall x(x + 0 = x)$
$PA4$: $\forall x \forall y(x + S(y) = S(x + y))$
$PA5$: $\forall x(x \times 0 = 0)$
$PA6$: $\forall x \forall y(x \times S(y) = (x \times y) + x)$

Induction Scheme: $[\phi(0) \wedge \forall x(\phi(x) \rightarrow \phi(S(x)))] \rightarrow \forall x \phi(x)$

where $\phi(x)$ is a formula of $L_A$. The formula $\phi(x)$ may contain other free variables (called "parameters"). $\phi(0)$ means the result of substituting the term $0$ for all free occurrences of $x$ in $\phi$.

The axioms of $PA$ are then the above six and all instances of the Induction Scheme. In addition, there are underlying purely logical axioms, for example, $\phi \rightarrow (\theta \rightarrow \phi)$,

and, usually, one or two inference rules (e.g., Modus Ponens: from $\phi$ and $\phi \rightarrow \theta$, infer $\theta$). A derivation of $\phi$ in $PA$ is a finite sequence $(\theta_1, \dots, \theta_n)$ such that $\theta_n$ is $\phi$, and, for each $i \in \{1, \dots, n\}$, $\theta_i$ is either a logical axiom, or an axiom of $PA$, or is obtained by Modus Ponens on some previous $\theta_k$ and $\theta_p$ (with $k, p < i$). If there is a derivation of $\phi$ in $PA$, then $\phi$ is a theorem of $PA$, and we write:

$PA \vdash \phi$
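The definition of a derivation is purely combinatorial, so a checker for it is straightforward. Here is a toy Python sketch (my own encoding, purely illustrative: formulas as strings, conditionals as ("->", premise, conclusion) tuples):

```python
def is_derivation(seq, axioms):
    """Check that each formula in seq is an axiom, or follows from two
    earlier formulas in seq by Modus Ponens."""
    for i, phi in enumerate(seq):
        if phi in axioms:
            continue
        # Modus Ponens: some earlier theta together with an earlier
        # conditional ("->", theta, phi) licenses phi.
        earlier = seq[:i]
        if any(("->", theta, phi) in earlier for theta in earlier):
            continue
        return False
    return True

axioms = ["p", ("->", "p", "q")]
assert is_derivation(["p", ("->", "p", "q"), "q"], axioms)
assert not is_derivation(["q"], axioms)
```

In the real case the axiom set is infinite but decidable, so membership would be tested by a predicate rather than by a finite list.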

Our specification of the axioms of $PA$ is parasitic on our prior understanding of the set $N$ of natural numbers and their properties with respect to successor, addition and multiplication. So, one can define an $L_A$-interpretation $\mathbb{N}$ by specifying that $dom(\mathbb{N}) = N$, and that $0$ refers to $0$, $S$ refers to successor, and that $+$ and $\times$ refer to plus and times.

This gives the interpreted language $(L_A, \mathbb{N})$. It is precisely this interpreted language that we have in mind when writing the axioms formally as opposed to informally. The constraint is that we formalize the informally expressed truths into truths of this language, and keep the interpretation fixed. One can verify that each axiom of $PA$ is true in $\mathbb{N}$. (The verification is in a sense circular. For example, the meta-theoretic assumption required to verify that $\forall x(S(x) \neq 0)$ is true is precisely the fact that $0$ is not a successor of any number. But this is no different from showing that the truth of "snow is white" follows from the assumption that snow is white. To show each axiom true, we assume that very axiom.) Since the inference rules are sound, it follows that each theorem of $PA$ is true. Consequently, since $0 = 1$ is not true, it follows that $0=1$ is not a theorem of $PA$. So, $PA$ is consistent. This reasoning can itself be formalized by adding a new truth predicate symbol to the language $L_A$ and setting out the required properties of truth. This leads to the area of axiomatic truth theories.

How might $PA$ be inconsistent? Here are four guesses:

1. Perhaps there is something wrong with the successor axioms (i.e., $PA1$ and $PA2$).
2. Perhaps there is something wrong with the defining clauses for $+$ and $\times$ (i.e., $PA3-PA6$: these are examples of definition by primitive recursion).
3. Perhaps there is something wrong with the induction scheme.
4. Some combination of the above.

The successor axioms force any model of them to be infinite. If $X$ is a set containing say $a$, and $f : X \rightarrow X$ is injective and such that $f(x) \neq a$ (for all $x$), then $X$ must be infinite. From the point of view of the theory itself, the infinity is only "potential" (the axioms $PA1$ and $PA2$ themselves do not assert the existence of an infinite object: rather, the semantic meta-theory asserts the existence of an infinite object---i.e., a model---satisfying them). So, presumably, for an inconsistency to arise with the successor axioms, there must be some problem with potential infinity. However, I really cannot see a genuine argument here, other than a dogmatic rejection of even potential infinity. (The objects in question are mathematical abstract objects, not concrete tokens.)

Perhaps defining arithmetic operations by primitive recursion is problematic, and potentially inconsistent. (There is a standard mathematical proof that it isn't: Dedekind's recursion theorem.) Perhaps because primitive recursions exhibit a kind of "circularity"? (Roughly, $f(n+1)$ is defined in terms of $f(n)$.) But, again, I cannot see a genuine problem, as the circularity is only apparent: a primitive recursive definition of a function $f$ allows one to compute $f(n)$, for any argument $n$, requiring only the values for smaller numbers.
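To see why the circularity is only apparent, here is a Python sketch (illustrative only) mirroring the primitive-recursive defining clauses for $+$ and $\times$: each call recurses on a strictly smaller second argument, so the computation always bottoms out at $0$.

```python
def add(x, y):
    """Addition by primitive recursion on the second argument."""
    if y == 0:
        return x                  # x + 0 = x
    return add(x, y - 1) + 1      # x + S(y) = S(x + y)

def mul(x, y):
    """Multiplication by primitive recursion on the second argument."""
    if y == 0:
        return 0                  # x * 0 = 0
    return add(mul(x, y - 1), x)  # x * S(y) = (x * y) + x

assert add(3, 4) == 7
assert mul(3, 4) == 12
```

This is exactly the sense in which a primitive recursive definition "allows one to compute $f(n)$, requiring only the values for smaller numbers": the recursion tree is finite and well-founded.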

What has been most often suggested is that the induction scheme might lead to an inconsistency somehow. Aside from general ultra-finitist concerns (which I think are based merely on type/token confusion), it's unclear what exactly might happen to generate an inconsistency. What seems a likely proof idea is that one could find a formula $\phi(x)$ and a term $t$ (no matter how specified) such that one can show:

(i) $\phi(0)$;
(ii) $\forall x(\phi(x) \rightarrow \phi(S(x)))$;
(iii) $\neg \phi(t)$.

To explain why we have not "observed" an inconsistency, it might be the case that the shortest derivation $d$ of $0=1$ remains incredibly huge. The reason is that the derivation would have to generate a canonical term $SS \dots S0$ (say representing the number $n$), prove $t = SS \dots S0$, and apply Modus Ponens $n$ times to get $\phi(t)$, contradicting (iii). But the size of $n$ might be astronomical.

(where $2$ is defined to mean $SS0$)
I think a similar example was first given by Rohit Parikh in the 1970s, though I haven't seen the original. So, this example might be quite different from Parikh's. If I've worked this out right, then $T$ is inconsistent, but the shortest derivation of an inconsistency has length at least $2^{65,536}$ symbols. (On the other hand, I may be wrong in specifying this: there may be a clever trick which allows one to get round the specification of a canonical numeral $SS...S0$ with $2^{65,536}$ occurrences of $S$.)
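To get a feel for "astronomical": even saying how many symbols such a derivation contains is nontrivial. A quick illustrative computation (my own, just for scale) of the size of $n = 2^{65,536}$:

```python
# Illustrative only: the number 2**65536 is itself unwieldy. Its
# decimal expansion alone runs to roughly 20,000 digits, while the
# canonical numeral SS...S0 for it would contain 2**65536 + 1 symbol
# occurrences -- vastly more tokens than could ever be inscribed.

n = 2 ** 65536
digits = len(str(n))
print(digits)
```

So a derivation whose length is bounded below by $2^{65,536}$ is not merely unsurveyed but unsurveyable in principle.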

George Boolos, in a 1987 paper, "A Curious Inference", gave an example of a theory similar to the one above which is inconsistent, where the number of symbols in the shortest such derivation is an exponential stack of $65,536$ 2s. (Actually, Boolos gave a valid inference whose shortest derivation is of this length, but one can easily convert it to an example of such a theory by negating the conclusion formula.)

But anyway, until further notice, PA is still consistent; we can now go enjoy the weekend in peace.

(And thanks to John Baez for posting on Nelson's withdrawal at G+.)

UPDATE: Here is the message that Nelson just sent around at FOM:

Terrence Tao, at http://golem.ph.utexas.edu/category/2011/09/ and independently Daniel Tausk (private communication) have found an irreparable error in my outline. In the Kritchman-Raz proof, there is a low complexity proof of K(\bar\xi)>\ell if we assume \mu=1, but the Chaitin machine may find a shorter proof of high complexity, with no control over how high.

My thanks to Tao and Tausk for spotting this. I withdraw my claim.

The consistency of P remains an open problem.

Ed Nelson

This seems to be a good example of what J. Azzouni has described as the 'uniqueness' of mathematics as a social practice: in just a few days, a consensus emerged as to what was wrong with Nelson's purported proof, a consensus shared by Nelson himself. I cannot think of any other field of inquiry where consensus on a substantive, serious issue/challenge would emerge with the same swiftness. There really is something special about mathematics...