Wednesday, 26 February 2014

The Munich Center for Mathematical Philosophy (MCMP) and the Chair of Philosophy of Science at the Faculty of Philosophy, Philosophy of Science and Study of Religion at LMU Munich seek applications for the following positions:

The MCMP hosts a vibrant research community of faculty, postdoctoral fellows, doctoral fellows, master's students, and visiting fellows. It organizes at least two weekly colloquia and a weekly internal work-in-progress seminar, as well as various other activities such as workshops, conferences, summer schools, and reading groups. Several of our research projects are conducted in collaboration with scientists. The successful candidates will partake in all of the MCMP's academic activities and enjoy its administrative facilities and financial support. The official language at the MCMP is English; fluency in German is not mandatory.

We especially encourage female scholars to apply. The LMU in general, and the MCMP in particular, endeavor to raise the percentage of women among their academic personnel. Furthermore, given equal qualification, preference will be given to candidates with disabilities.

1. One Assistant Professorship (non-tenure-track)

Candidates are expected to conduct research in at least one of the following fields: modeling and simulation in philosophy, philosophy of statistics, philosophy of psychology, philosophy of the social sciences, philosophy of economics, formal epistemology, (formal or non-formal) philosophy of science, and (formal or non-formal) social epistemology. Applicants with a background in a formal, natural or social science are especially encouraged to apply.

The position is for three years with the possibility of extension for another three years. Note that there is no tenure-track option. The appointment will be made within the German A13 salary scheme (under the assumption that the civil service requirements are met), which means that one has the rights and perks of a civil servant. The starting date is October 1, 2014, but a later starting date is also possible. (Please let us know if you wish to start at a later date.)

The appointee will be expected (i) to do philosophical research in at least one of the specified fields, (ii) to teach five hours a week, and (iii) to take on management and supervision tasks. The successful candidate will have a PhD in philosophy and some teaching experience in philosophy.

Applications (including a cover letter that addresses, among other things, one's academic background and research interests; a CV; a list of publications; a list of taught courses; a sample of written work of no more than 5000 words; a description of planned research projects of 1000-1500 words; and concrete ideas for grant proposals that could be submitted to German or European funding agencies) should be sent by email (ideally everything requested in one PDF document) to office.hartmann@lrz.uni-muenchen.de by March 25, 2014. Hard copy applications are not accepted. Additionally, two confidential letters of reference addressing the applicant's qualifications for academic research should be sent directly by the referees to the same address. Please let us know in the cover letter whether or not you are also considering a Postdoctoral Fellowship (see below).

2. Three Postdoctoral Fellowships

In this round of advertisements, we are especially interested in candidates who work in the following areas: modeling and simulation in philosophy, philosophy of statistics, philosophy of psychology, philosophy of the social sciences, philosophy of economics, formal epistemology, (formal or non-formal) philosophy of science, and (formal or non-formal) social epistemology. However, we are also open to candidates who apply formal methods in metaphysics, moral and political philosophy, or any other part of philosophy. We would also like to encourage scholars who do experimental work (ideally in combination with formal work) to apply. Applicants with a background in a formal, natural or social science are especially encouraged to apply. We are also interested in scientists who plan to work on a project of philosophical relevance and whose work relates to the research interests of the MCMP.

The fellowships are open for candidates with a PhD in philosophy or a related science. The postdoctoral stipends are for two years, and they should be taken up by October 1, 2014, but a later starting date is also possible. (Please let us know if you wish to start at a later date.) Each stipend will amount to EUR 2400 of monthly salary (normally tax-free, but excluding insurance). Additionally, the MCMP helps its fellows with the costs that arise from attending conferences (fees, traveling, accommodation). There is the possibility, though no obligation, to do some teaching in either English or German.

Applications (including a cover letter that addresses, among other things, one's academic background and research interests; a CV; a list of publications; a list of taught courses; a sample of written work of no more than 5000 words; and a description of a planned research project of 1000-1500 words) should be sent by email (ideally everything requested in one PDF document) to office.hartmann@lrz.uni-muenchen.de by March 25, 2014. Hard copy applications are not accepted. Additionally, two confidential letters of reference addressing the applicant's qualifications for academic research should be sent directly by the referees to the same address.

Thursday, 13 February 2014

(This is the third part of the series of posts with sections of the paper on axiomatizations of arithmetic and the first-order/second-order divide that I am working on at the moment. Part I is here, and Part II is here.)
=============================

2. The deductive use

Hintikka describes the deductive use of logic for investigations in the foundations of mathematics in the following terms:

In order to facilitate, systematize, and criticize mathematicians’ reasoning about the structures they are interested in, logicians have isolated various valid inference patterns, systematized them, and even constructed various ways of mechanically generating an infinity of such inference patterns. I shall call this the deductive use of logic in mathematics. (Hintikka 1989, 64)

So the main difference between the descriptive and the deductive uses, as Hintikka conceives of them, seems to be that the objects of the descriptive use are the mathematical structures themselves, whereas the object of the deductive use is the mathematician’s reasoning about these very structures. This is an important distinction, but it would be a mistake to view the deductive use merely as seeking to emulate the actual reasoning practices of mathematicians. Typically, the idea is to produce a rational reconstruction that does not necessarily mirror the actual inferential steps of an ordinary mathematical proof, but which shows that the theorem in question indeed follows from the assumptions of the proof, through evidently valid inferential steps.

Frege’s Begriffsschrift project is arguably the first (and for a long time the only) example of the deductive use of logic in mathematics; one of his main goals was to create a tool to make explicit all presuppositions which would ‘sneak in unnoticed’ in ordinary mathematical proofs. Here is the famous passage from the preface of the Begriffsschrift where he presents this point:

To prevent anything intuitive from penetrating here unnoticed, I had to bend every effort to keep the chain of inferences free of gaps. In attempting to comply with this requirement in the strictest possible way I found the inadequacy of language to be an obstacle; no matter how unwieldy the expressions I was ready to accept, I was less and less able, as the relations became more and more complex, to attain the precision that my purpose required. This deficiency led me to the idea of the present ideography. Its first purpose, therefore, is to provide us with the most reliable test of the validity of a chain of inferences and to point out every presupposition that tries to sneak in unnoticed, so that its origin can be investigated. (Frege 1879/1977, 5-6, emphasis added)

Again, it is important to bear in mind that Frege’s project (and similar projects) is not that of describing the actual chains of inference of mathematicians in mathematical proofs. It is a normative project, even if he is not a revisionist who thinks that mathematicians make systematic mistakes in their practices (as Brouwer would later claim). He wants to formulate a tool that could put any given chain of inferences to test, and thus also to isolate presuppositions not made explicit in the proof. If these presuppositions happen to be true statements, then the proof is still valid, but we thereby become aware of all the premises that it in fact relies on.

For the success of this essentially epistemic project, the language in question should preferably operate on the basis of mechanical procedures, so that the test in question would always produce reliable results, i.e. ensuring that no hidden contentual considerations be incorporated into the application of rules (Sieg 1994, section 1.1). It is thus clear why Frege’s project required a deductively well-behaved system, one with a precisely formulated underlying notion of deductive consequence. Indeed, in the Grundgesetze Frege criticizes Dedekind’s lack of explicitness concerning inferential steps – incidentally, not an entirely fair criticism, given the different nature of Dedekind’s project.

It is well known that Frege’s deductive concerns were not particularly influential in the early days of formal axiomatics (and it is also well known that his own system in fact does not satisfy this desideratum entirely). In effect, in the works of pioneers such as Dedekind, Peano, Hilbert etc., a precise and purely formal notion of deductive consequence was still missing (Awodey & Reck 2002, section 3.1). It was only with Whitehead & Russell’s Principia Mathematica, published in the 1910s, that the importance of this notion started to be recognized (among other reasons, because they were the first to take Frege’s deductive project seriously). What this means for the present purposes is that Hintikka’s notion of the deductive use of logic in the foundations of mathematics is virtually absent in the early days of applications of logic to mathematics, i.e. the final decades of the 19th century and the first decade of the 20th century – with the very notable exception of Frege, that is.

However, with the ‘push’ generated by the publication of Principia Mathematica, the deductive approach became increasingly pervasive in the 1910s, reaching its pinnacle in Hilbert’s meta-mathematical program in the 1920s. Hilbert, whose earlier work in geometry represents a paradigmatic case of the descriptive use of logic, famously proposed a new approach to the foundations of mathematics in the 1920s, one in which meta-mathematical questions were to be treated as mathematical questions themselves.

Hilbert’s program was not a purely deductive program in the way Frege’s had been. Indeed, the general idea was to treat axiomatizations/theories as mathematical objects in themselves so as to address meta-mathematical questions, but this required that not only the axioms but also the rules of inference within the theories be fully specified. Moreover, one of the key questions motivating Hilbert’s program, the famous Entscheidungsproblem, and more generally the idea of a decision procedure for all of mathematics, has a very distinctive deductive flavor: is there a decision procedure which would allow us, for every mathematical statement, to ascertain whether or not it is a mathematical theorem?
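
In modern terms, the question can be stated schematically as follows (a rendering for reference, not Hilbert's own formulation):

```latex
% Is there an effective (mechanical) procedure D such that,
% for every sentence \varphi of the relevant formal system,
D(\varphi) \;=\;
\begin{cases}
  \text{yes} & \text{if } \vdash \varphi,\\
  \text{no}  & \text{otherwise?}
\end{cases}
```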

So the golden era of the deductive use of logic in the foundations of mathematics started in the 1910s, after the publication of Principia Mathematica, and culminated in the 1920s, with Hilbert’s program. Naturally, Gödel’s discovery in the early 1930s that there can be no complete and computable axiomatization of the first-order theory of the natural numbers (and later on, Turing’s and Church’s negative answers to the Entscheidungsproblem) was a real cold shower for such deductive aspirations. Indeed, the advent of model theory in the late 1930s and 1940s can be viewed as a return to the predominance of the descriptive project at the expense of the deductive project.

Currently, both projects survive in different guises, but it is fair to say that the general optimism regarding the reach of each of them in the early days of formal axiomatics, especially the deductive project, has somewhat diminished. Moreover, the extent to which expressiveness and tractability come apart has become even more conspicuous with the realization that decidable logical systems tend to be expressively very weak, even weaker than first-order logic (which is not decidable).

Wednesday, 12 February 2014

(This is the second part of the series of posts with sections of the paper on axiomatizations of arithmetic and the first-order/second-order divide that I am working on at the moment. Part I is here, and I pick up immediately where I left off yesterday.)
========================================

As mentioned above, what Hintikka presents as the descriptive use of logic in mathematics was the predominant approach in the early days of formal axiomatics, in the second half of the 19th century (the notable exception being Frege, more on whom shortly). And indeed, Dedekind’s famous letter to Keferstein (published in (Van Heijenoort 1967)) is quite possibly one of the most vivid illustrations of the descriptive approach in action; because this approach was still something of a novelty at the time, Dedekind carefully explains his procedure when formulating an axiomatization for arithmetic (basically what is now known, by a twist of fate, as Peano Arithmetic).

How did my essay come to be written? Certainly not in one day; rather, it is a synthesis constructed after protracted labor, based upon a prior analysis of the sequence of natural numbers just as it presents itself, in experience, so to speak, for our consideration. What are the mutually independent fundamental properties of the sequence N, that is, those properties that are not derivable from one another but from which all others follow? (Letter to Keferstein, p. 99/100)

Thus, a pre-existing structure, the sequence of natural numbers, presents itself ‘for our consideration’, so that we can attempt to determine what its basic properties are. Dedekind then lists what appear to be the key properties of this structure, such as that it is composed of individuals called numbers, which in turn stand in a very special kind of relation to one another (the successor relation). One might think that, by offering an (apparently) exhaustive list of properties which, taken together, seem to describe the basic facts about this structure, the axiomatization would be complete also in the sense of picking out a unique referent, namely the intended structure, the sequence of the natural numbers. But Dedekind quickly adds that this is (unfortunately) not the case:

I have shown in my reply, however, that these facts are still far from being adequate for completely characterising the nature of the number sequence N. All these facts would hold also for every system S that, besides the number sequence N, contained a system T, of arbitrary additional elements t [satisfying certain conditions previously stated]. But such a system S is obviously something quite different from our number sequence N, and I could so choose it that scarcely a single theorem of arithmetic would be preserved in it. What, then, must we add to the facts above in order to cleanse our system S again of such alien intruders t as disturb every vestige of order and to restrict it to N? (Letter to Keferstein, p. 100)

In other words: while the properties he had just listed are all indeed present in the sequence N, they do not seem to exhaust the relevant properties of this structure, because they are equally true of other structures which are demonstrably very different from N. So an axiomatization guided only by these properties is not categorical because it does not uniquely refer to the intended sequence N; indeed, it also refers to structures strictly containing N but also containing additional, disruptive elements. How do we get rid of these alien intruders?

For our purposes, it is important to notice that the properties listed by Dedekind up to this point have in common the fact that they can all be expressed by means of (what we now refer to as) purely first-order terminology. To be sure, that there may be an important distinction between first- and higher-order logics is something that became widely acknowledged only in the 1930s. Until then, first-order logic was not recognized as a privileged, particularly stable fragment of the logical system developed for the logicist project of Russell and Whitehead. So Dedekind had no reason to notice this peculiarity about these properties, or to make an effort to exclude higher-order terminology.

Dedekind then notes that solving the issue of the alien intruders t was the hardest part of his enterprise, as the problem is:

How can I, without presupposing any arithmetic knowledge, give an unambiguous conceptual foundation to the distinction between the elements n [the legitimate numbers] and the elements t [the alien intruders]? (Letter to Keferstein, p. 101)

The solution to this conundrum is offered by the technical notion of chains, which he had introduced in previous work. He explains this crucial notion in the following terms:

… an element n of S belongs to the sequence N if and only if n is an element of every part K of S that possesses the following two properties: (i) the element 1 belongs to K and (ii) the image φ(K) is a part of K. (Letter to Keferstein, p. 101, emphasis in the original)

(The function φ had been introduced previously.) Dedekind’s notion of chain can be glossed in modern terminology as “the minimal closure of a set A in a set B containing A under a function f on B (where being “minimal” is conceived of in terms of the general notion of intersection).” (Reck 2011, section 2.2) What matters for our purposes is that the notion of chain involves quantification over sets of elements n, and thus cannot be expressed solely with first-order terminology.
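
In modern set-theoretic notation, Dedekind's characterization can be glossed as follows (a schematic rendering of the quoted passage, with N, S, 1 and φ as above):

```latex
N \;=\; \bigcap \,\bigl\{\, K \subseteq S \;:\; 1 \in K \ \text{and}\ \varphi(K) \subseteq K \,\bigr\}
```

The intersection ranges over all parts K of S, i.e. over all subsets of S; this quantification over subsets is precisely what outstrips first-order resources.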

Dedekind correctly claims that the notion of chain offers a satisfactory solution to the problem of the alien intruders t. But as he had no reason to be parsimonious in his use of higher-order terminology, one might wonder whether there are other solutions to the problem of alien intruders, not requiring the move to second-order quantification, which he simply did not consider. However, it is now known that first-order axiomatizations of arithmetic are inherently non-categorical (as per the usual Löwenheim-Skolem considerations). Second-order terminology is indeed required for an axiomatization to describe only the ‘intended structures’ – the sequence of the natural numbers and structures isomorphic to it – and not what are known as the non-standard models of arithmetic.
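
The inherent non-categoricity of first-order arithmetic can be made vivid with the standard compactness argument (a textbook sketch, not Dedekind's own reasoning):

```latex
% Expand the language of first-order arithmetic with a fresh constant c,
% and let
T \;=\; \mathrm{PA1} \,\cup\, \{\, c \neq \underline{n} \;:\; n \in \mathbb{N} \,\}
% where \underline{n} abbreviates the numeral S S \cdots S 0 (n occurrences of S).
% Every finite subset of T holds in the standard model (interpret c as a
% sufficiently large numeral), so by compactness T has a model. In that
% model the interpretation of c differs from every standard numeral:
% an `alien intruder' that no set of first-order axioms can exclude.
```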

The need for second-order terminology to achieve categoricity in the case of axiomatizations of arithmetic is an illustration of the general point that the descriptive use of logic for mathematics, as defined by Hintikka, will generally require quite expressive logical languages. Arguably, first-order languages will systematically fail to deliver the expressive power required for the precise description of non-trivial mathematical structures, and this may be one of the causes for the (purported) inadequacy of first-order logic to account for ‘ordinary’ mathematical practice (Shapiro 1985).

Tuesday, 11 February 2014

I am currently working on a short paper on axiomatizations of arithmetic and the first-order/second-order divide. I usually don’t post (parts of) papers as blog posts, but it occurred to me that in this particular case, the different sections of the paper would make for more or less self-contained (hopefully interesting!) blog posts. So I’ll be posting them as I write, and I expect 5 blog posts to result (3 ready so far). As always, comments are more than welcome; in a sense, I’m hoping readers will help me improve this paper – I believe in collective, distributed cognition! So thanks in advance.

=================================

0. Introduction

It is a well-known fact that first-order Peano Arithmetic (PA1) is not categorical, i.e. it does not uniquely describe the sequence of the natural numbers that is typically viewed as the ‘intended model’ of arithmetic. Indeed, PA1 equally describes structures that strictly contain the sequence of the natural numbers but are not isomorphic to it; these are known as the non-standard models of arithmetic. It is equally well known that second-order Peano Arithmetic (PA2), in turn, is categorical, in that it is satisfied only by the intended model of arithmetic, the sequence of natural numbers, and by other models isomorphic to the intended one.
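
The contrast between PA1 and PA2 is concentrated in how induction is formulated: a first-order schema with one instance per formula, versus a single second-order axiom quantifying over all subsets of the domain (standard formulations, included here for reference):

```latex
% First-order induction schema (one axiom per formula \varphi):
\bigl(\varphi(0) \wedge \forall n\,(\varphi(n) \rightarrow \varphi(Sn))\bigr) \rightarrow \forall n\,\varphi(n)

% Second-order induction axiom (a single axiom, quantifying over sets X):
\forall X \,\bigl( (X(0) \wedge \forall n\,(X(n) \rightarrow X(Sn))) \rightarrow \forall n\,X(n) \bigr)
```

Since there are only countably many first-order formulas, the schema constrains only countably many subsets of the domain, which is why non-standard models can satisfy every one of its instances.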

However, what PA2 offers in terms of obtaining categoricity, it takes away in terms of deductive power. Because second-order logic has an ill-behaved (non-axiomatizable) underlying notion of logical consequence, any second-order theory will inherit its deductive limitations. Thus, apparently we cannot have our arithmetical cake and eat it: we can either have categoricity (with PA2) or a deductively stronger account of arithmetical theorems (with PA1), but not both. In fact, the conflict between the desiderata of expressive power and of deductive power with respect to axiomatizations of arithmetic is an instance of a more general phenomenon, namely the conflict between expressiveness and tractability (Levesque & Brachman 1987).

These facts have received a number of philosophical interpretations. Tennant (2000) describes this situation as a ‘pre-Gödelian predicament’, and argues that it represents the impossibility of the project of ‘monomathematics’. Read (1997) is less pessimistic and observes that, contrary to what many seem to think, Gödel’s incompleteness results do not represent the total failure of Frege’s logicist project, because categoricity for arithmetic can be obtained with a logical (albeit second-order) axiomatization of arithmetic (as had been shown already by Dedekind). Hintikka (1989) draws on these observations to distinguish two different uses of logic for the foundations of mathematics – the descriptive use and the deductive use – and three senses of completeness: semantic completeness, deductive completeness, and descriptive completeness (categoricity).

In what follows, I take Hintikka’s distinction between descriptive and deductive uses of logic in (the foundations of) mathematics as my starting point to discuss what the impossibility of having our arithmetical cake and eating it (i.e. of combining deductive power with expressive power to characterize arithmetic with logical tools) means for the first-order logic vs. second-order logic debate. It is often argued (as discussed in (Rossberg 2004)) that the problematic status of the second-order consequence relation is sufficient to exclude second-order logic from the realm of what counts as ‘logic’. However, this criticism presupposes that the deductive use must take precedence over the descriptive use, a claim that is both historically and philosophically contentious. I argue that, if logical systems are viewed first and foremost as tools to be applied for the investigation of different subject matters, and if different applications are prima facie equally legitimate (for example, Hintikka’s descriptive and deductive uses), then the descriptive incompleteness of first-order logic with respect to arithmetic is just as serious as the deductive limitations of second-order logic (in this case, not restricted to arithmetic). These observations support a form of instrumental logical pluralism: there is no such thing as the one true logic, but only different logics appropriate for different applications.

The paper proceeds as follows. In the first two sections, I discuss Hintikka’s descriptive and deductive uses of logic in mathematics: I illustrate the former with Dedekind’s search for a categorical characterization of arithmetic, and the latter with Frege’s search for a tool that would allow for gap-free proofs of mathematical theorems. I then elaborate on the first-order vs. second-order divide, and on the general project of using logical tools to investigate the foundations of mathematics. I conclude by offering some remarks on the so-called ‘dispute’ between first- and second-order logic and on the implications of the present analysis for debates on logical pluralism.

1. The descriptive use

Hintikka describes the descriptive use of logic for investigations on the foundations of mathematics in the following terms:

The uses of logical notions … for the purpose of capturing certain structures, viz., the different structures studied in various mathematical theories. The pursuit of this task typically leads to the formulation of axiom systems which use the logical concepts just mentioned for different mathematical theories. (Hintikka 1989, 64)

In this sense, logical notions have a descriptive use in mathematics insofar as they are used to describe certain mathematical structures such as the sequence of the natural numbers, geometric systems of points and lines etc. Indeed, most of the early uses of axiomatization in the foundations of mathematics aimed at (complete) descriptions of portions of mathematical theory by means of accurate descriptions of the underlying mathematical structures (Awodey & Reck 2002). Some examples are Dedekind and Peano on arithmetic, and Hilbert and Veblen on geometry.

In such cases, the axiomatizer starts with a specific, presumably unique mathematical structure in mind, and then attempts to describe this structure accurately and completely by means of logical notions. If such a description exhausts all of the relevant properties of the structure in question, then the axiomatization will not only describe the structure accurately, but also uniquely: it will be a description of nothing other than the intended mathematical structure in question (or structures identical to it according to the relevant parameters, e.g. isomorphism). An axiomatization that achieves this kind of completeness – which in turn yields uniqueness – is said to be categorical.
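
In modern model-theoretic terms (a standard definition, not Hintikka's wording), an axiomatization T is categorical just in case any two of its models are isomorphic:

```latex
T \ \text{is categorical} \quad\iff\quad \forall M, M' \,\bigl( M \models T \ \wedge\ M' \models T \;\Rightarrow\; M \cong M' \bigr)
```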

It is immediately apparent that the crucial feature of a system of logical notions for this descriptive use is expressive power: the more expressive the language is, the more fine-grained the description is likely to be, as the language will have the resources to express a greater number of the relevant properties of the structure. It is also clear that this descriptive approach goes well with (though it does not necessitate) a Platonic conception of mathematical structures, according to which they have some sort of antecedent, independent existence. The task of the mathematician is then to describe and investigate the pre-determined properties of these structures. (Naturally, how she has epistemic access to the properties of these abstract structures is a notoriously thorny problem in the epistemology of mathematics, the famous ‘Benacerraf challenge’.)

Set theory is taken to serve as a foundation for mathematics. But it is well-known that there are set-theoretic statements that cannot be settled by the standard axioms of set theory. The Zermelo-Fraenkel axioms, with the Axiom of Choice (ZFC), are incomplete. The primary goal of this symposium is to explore the different approaches that one can take to the phenomenon of incompleteness.

One option is to maintain the traditional “universe” view and hold that there is a single, objective, determinate domain of sets. Accordingly, there is a single correct conception of set, and mathematical statements have a determinate meaning and truth-value according to this conception. We should therefore seek new axioms of set theory to extend the ZFC axioms and minimize incompleteness. It is then crucial to determine what justifies some new axioms over others.

Alternatively, one can argue that there are multiple conceptions of set, depending on how one settles particular undecided statements. These different conceptions give rise to parallel set-theoretic universes, collectively known as the “multiverse”. What mathematical statements are true can then shift from one universe to the next. From within the multiverse view, however, one could argue that some universes are preferable to others.

These different approaches to incompleteness have wider consequences for the concepts of meaning and truth in mathematics and beyond. The conference will address these foundational issues at the intersection of philosophy and mathematics. The primary goal of the conference is to showcase contemporary philosophical research on different approaches to the incompleteness phenomenon.

To accomplish this, the conference has the following general aims and objectives:

To bring to a wider philosophical audience the different approaches that one can take to the set-theoretic foundations of mathematics.

To elucidate the pressing issues of meaning and truth that turn on these different approaches.

To address philosophical questions concerning the need for a foundation of mathematics, and whether or not set theory can provide the necessary foundation.

Call for Papers: We welcome submissions from scholars (in particular, young scholars, i.e. early career researchers or post-graduate students) on any area of the foundations of mathematics (broadly construed). Particularly desired are submissions that address the role of set theory in the foundations of mathematics, or the foundations of set theory (universe/multiverse dichotomy, new axioms, etc.) and related ontological and epistemological issues. Applicants should prepare an extended abstract (maximum 1,500 words) for blind review, and send it to sotfom [at] gmail [dot] com. The successful applicants will be invited to give a talk at the conference and will be refunded the cost of accommodation in Vienna for two days (7-8 July).

2nd International Summer School in Philosophy of Physics on PROBABILITIES IN PHYSICS

Some of the most fundamental results from physics come in a probabilistic guise. For instance, quantum mechanics can only provide probabilities for the outcomes of measurements; various ensembles in statistical mechanics are characterized by their probability distributions, and many other models in physics are random in character. But why are probabilities so ubiquitous in physics? How can we interpret the probabilities from physical theories? Are they just some “descriptive fluff” for conveniently representing certain patterns? Or are there ontic chances out there in the world? If so, can we think of them as dispositions or Popperian propensities? Are genuine chances compatible with determinism? And how do probabilities figure in the most prominent interpretations of quantum mechanics? These are some of the questions that we will discuss in this summer school, which addresses graduate students and postdocs from philosophy of physics and related fields.

Sunday, 9 February 2014

I'm very honoured to be part of the Author Meets Critics session on Lara Buchak's new book Risk and Rationality at the Pacific APA in San Diego this April (Buchak 2014). It is a brilliantly insightful and closely argued book that I feel sure will be at the centre of a number of debates in decision theory over the next few years. At the heart of the book is an alternative to expected utility theory that is intended to incorporate an agent's attitude to risk as a component of rational practical decision making, along with the utilities and credences on which orthodox expected utility theory is based. I'd like to write a series of posts about this alternative to orthodox expected utility theory that Buchak proposes. So I'll start in this post with an overview of that alternative. Needless to say, this is no substitute for reading the book, which is absolutely full of rich explanation and philosophical insights that demand a lot of reflection.