Thursday, 21 February 2013

LMU Munich explores new forms of collaborative learning – by making use
of Massive Open Online Courses (MOOCs). The MCMP takes part in creating
the first LMU online courses offered on Coursera, a leading platform for
online education. Coursera's classes may be accessed by anyone
anywhere, at no charge and without having to meet any special entrance
requirements. The MOOCs will consist of video lectures, machine-graded
quizzes, collaborative online learning forums, reading lists and seminar
assignments.

The European Research Council (ERC) project Philosophy of Canonical Quantum Gravity announces one-year post-doctoral positions, with the possibility of an extension depending on the final evaluation. The ideal candidate has a strong background in pure mathematics and mathematical physics, together with an interest in the conceptual and mathematical foundations of theoretical physics.

Profile of the position: The chosen candidate is expected to work in one of the following areas:

Philosophy of spacetime and gravitational theories.

Philosophy of quantum mechanics in the light of symplectic geometry,
geometric quantization and the theory of constrained Hamiltonian
systems.

Application Procedure:
Applicants are invited to submit a dossier by
email (in English or French) including:

• a cover letter explaining the candidate’s research interests and
her/his skills for contributing to an interdisciplinary project at the
intersection between philosophy, physics, and mathematics.

• a curriculum vitae including a publication list;

• university certification of the candidate’s most recent degree;

• a copy of PhD thesis and a sample of other relevant writings,
published or not;

• the name and electronic address of one knowledgeable scholar who
could provide a recommendation letter if necessary.

Scholars of all nationalities are welcome to apply.

For further information, or to submit the dossier, please contact Gabriel Catren at the following electronic address: gabrielcatren@gmail.com.

Application deadline: open until filled.

Thursday, 14 February 2013

The workshop is the mid-term event within a three-year research project funded by the DFG, "Models of information search: A theoretical and empirical synthesis". The project is part of the wider DFG Priority Program "New frameworks of rationality" (SPP 1516), launched in 2010 to promote collaboration between psychologists, philosophers and related research communities towards an integrated approach to the study of human rationality. The meeting is free to attend and will take place in the Statistics Seminar Room, Ludwigstrasse 31/33, first floor.

I've been asked to review Paolo Mancosu's book The Adventure of Reason: Interplay between Philosophy of Mathematics and Mathematical Logic, 1900-1940 (OUP, 2010) for Mind. What follows is a draft of my review, so comments and suggestions are most welcome!
------------------------------------------------------

The first half of the 20th century witnessed the birth
and flourishing of a radically novel subdiscipline: mathematical logic. While
mathematics and logic had been on friendly terms at least since the 17th
century (Mugnai 2011), and while there were 19th century precursors
to the idea of applying mathematics to logic (e.g. Boole) and logic to
mathematics (e.g. Frege), it is only in the first half of the 20th
century that mathematical logic became a fully mature subdiscipline/research
program.

The Adventure of Reason offers a compelling narrative of this exciting chapter of the
history of logic and mathematics. Paolo Mancosu is one of the world’s leading
experts on these developments, and while many others have made important
contributions to the topic, it is fair to say that Mancosu has gone a step
beyond with his painstaking work on sources other than the canonical, printed
versions of articles and books. He has done extensive research at a number of
archives both in Europe and North America, examining documents such as letters,
minutes of meetings, informal reports, unpublished transcriptions of lectures –
in short, unpublished material of various sorts. This approach has enabled him
to produce new insights and at times to provide novel answers to interpretive
open questions concerning some of the towering figures in this tradition (such
as the controversy on whether Tarski did or did not accept domain variation in
his definition of logical consequence – chapter 16).

The book essentially consists of reprints of several of Mancosu’s
articles (some of them co-authored pieces) on a wide range of topics and authors, but all within the framework of
the emergence of mathematical logic in the 20th century and its
tight connections with philosophical discussions on the nature of mathematics
(as the subtitle of the book indicates). It is divided into five parts: Part I
is composed of a long chapter surveying the developments from 1900 to 1935;
Part II focuses on the foundations of mathematics program, in particular
Russell, Hilbert, Bernays and Gödel; Part III is less
‘mainstream’ and is dedicated to the contacts between phenomenology and the philosophy
of the exact sciences, pioneered by Husserl and pursued by less well-known
figures such as Weyl, Becker and Mahnke; Part IV is rather short and concerns
Tarski and Quine on nominalism; Part V focuses on Tarski and the Vienna Circle
on truth and logical consequence, and includes a hitherto unpublished lecture
by Tarski on completeness and categoricity (more on which below).

Among the many innovations introduced by mathematical logicians in
the first half of the 20th century, the emergence of formal systems stands out: these are collections
of axioms and rules of transformation, typically formulated in a specially
designed language (a formal language), which would allow for the precise
investigation of whole portions of mathematics by means of
axiomatizations/formalizations. Starting with the system presented in Whitehead
and Russell’s 1910 Principia Mathematica (which in turn was heavily
inspired by Frege’s Begriffsschrift
and by the notation introduced by Peano), there was a crescendo of enthusiasm
and optimism concerning what could be accomplished with this novel tool, which then
culminated in Hilbert’s program in the 1920s (the topic of Part II). The basic
idea was that meta-mathematical
questions such as the consistency of arithmetic would become mathematical questions once adequately
formulated within a convenient formal system, and could thus be investigated
and settled by purely mathematical means.
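To put the idea in modern notation (a gloss, not the book's own text): via Gödel numbering, the metamathematical claim that a theory $T$ is consistent becomes an arithmetical sentence,

$$\mathrm{Con}(T) \;:\equiv\; \neg \exists p \, \mathrm{Prf}_T(p, \ulcorner 0=1 \urcorner),$$

where $\mathrm{Prf}_T(p, x)$ is the primitive recursive (hence arithmetically expressible) relation "$p$ codes a $T$-proof of the sentence coded by $x$". Consistency thus becomes a claim about natural numbers, amenable in principle to mathematical proof.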

At first, it seemed that these powerful tools, formal systems,
offered almost limitless possibilities for the investigation of such
foundational issues. (Hilbert: “We must know. We will know.”) But it also
became increasingly clear that the desired meta-properties of these systems, in
particular in view of the goal of describing portions of mathematics completely (in different senses of
‘completely’ – see chapter 1 of the book and Awodey and Reck 2002), could not
be taken for granted. A seemingly well-designed, plausible collection of
axioms, say the axioms of Peano arithmetic, could still fail to deliver a
complete description of the mathematical theory in question. So the properties
of these very systems also had to be investigated in a systematic way – the
meta-meta-level of investigation, so to speak. Now, once thoroughly formalized,
formal systems become mathematical objects themselves, thus allowing for the
application of the same general methodology. So one remarkable feature of
mathematical logic in this period is that it set out to establish its own
limits by means of the very methodology it had inaugurated.

As is well known, the most astonishing limitative results were
proved by Gödel, who in the early
1930s showed (the First Incompleteness Theorem) that any consistent system containing
arithmetic (in a suitable sense of ‘containing’) is inherently incomplete in
that there is a sentence which can neither be proved nor refuted by the system,
though true according to the standard model of arithmetic. The Second
Incompleteness Theorem established that Hilbert’s dream of proving the
consistency of arithmetic within arithmetic itself could not be realized. And
while it became generally accepted that Gödel’s
results represented a fatal blow to Hilbert’s program, this by no means meant
the end of mathematical logic as a research program. In fact, it seemed to
count as a victory rather than as a failure, as the methodology was thus shown
to be able to investigate and delineate its own limits. The emergence of
model theory with Tarski’s work on truth and logical consequence (Part V) was
also a response to Gödel’s results and
inaugurated a new approach in mathematical logic, alongside the
proof-theoretical approach originally envisaged by Hilbert and collaborators.

Ultimately (and as noted by M. Potter in his review of the book in Philosophia Mathematica, 2012), Mancosu
does not (and does not intend to) offer a radically novel picture of the
development of mathematical logic in the first half of the 20th
century. The broad lines of his narrative are very much the ones largely agreed-upon
in the literature. But starting from the general picture, he fills in the gaps,
thus offering a very detailed account of these developments. Moreover, Mancosu
is to be praised for bringing to the fore the contribution of less well-known
figures such as Becker, Behmann and others, and for his keen eye for how these
developments unfolded historically. His is a narrative focusing on actors and
processes, not only on the results of their endeavors (theories, theorems etc.).
(One of my favorite chapters in the volume is his piece on the immediate
reception of Gödel’s incompleteness
theorems (chapter 7), which describes how, after initial shock and some
skepticism, consensus emerged concerning the correctness of the results and
their wide-ranging implications.)

One may wonder what justifies the publication of a collection of
previously published papers, which moreover have not been substantially altered
for their inclusion in this volume. Now, the book does contain updated
bibliographical lists of works on the topics that have appeared since the
original publication of the articles. It also includes helpful summaries for
each of its five parts, which thus make manifest that the different individual
articles are all part of a larger, general project. At any rate, the fact that
these articles are all conveniently assembled in just one volume represents a useful
resource for all those interested in these topics.

It must be said, however, that the book is not exactly a
page-turner, which may well be due to the fact that, in the end, it remains a
collection of articles rather than a book conceived as such. In fact, it could be (but need not be) viewed as a reference work, with its impressive collection of
‘facts and figures’ (accompanied by a rather detailed index of names, but alas
no index of terms). It is for the most part also a very useful pedagogical
resource, where some difficult concepts and results are explained in an
accessible way (especially in the long survey in chapter 1).

To close this review, let me briefly discuss the two thus far
unpublished chapters of the book, namely a transcription of Tarski’s lecture
“On the Completeness and Categoricity of Deductive Systems”, and Mancosu’s own
discussion of the lecture and its content (chapters 18 and 17, respectively,
the last two chapters of the book). Completeness and categoricity are both
crucial desiderata for formal systems, and are in a sense each other’s duals;
while completeness ensures that all
the relevant facts about a given portion of mathematics can be captured by a
deductive system, categoricity ensures that only
these relevant facts are captured by the system, i.e. that it is not also a
description of structures falling outside the targeted mathematical
domain/theory (‘alien intruders’, in Dedekind’s terms). When completeness
fails, not enough fish are caught; when categoricity fails, too many fish are
caught.
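In modern terms (a gloss, not the book's own formulation): a theory $T$ is deductively complete if for every sentence $\varphi$ of its language, either $T \vdash \varphi$ or $T \vdash \neg\varphi$; and $T$ is categorical if any two models of $T$ are isomorphic. Completeness ensures that $T$ decides every question statable in its language; categoricity ensures that $T$ pins down its intended structure up to isomorphism, excluding Dedekind's 'alien intruders'.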

In his lecture, Tarski remarks that, in view of Gödel’s results, absolute completeness is a rare phenomenon; few deductive theories can be complete (namely, only those not expressive enough to allow for a formalization of arithmetic). He then
introduces two weaker concepts of completeness, namely relative completeness
and semantic completeness (the interested reader will have to consult the text
itself for details, as limitations of space prevent me from offering further
technical elaborations), and asks himself what methods could be used to show a
given system to be relatively/semantically complete. Lest one should despair,
Tarski announces the grand news: “We shall see, namely, that the concepts of
relative and semantical completeness are closely related to the concept of
categoricity (due to Veblen) and the investigation of the latter concept does
not require in general any special and subtle methodological investigations.”
(p. 490) Tarski then proves two theorems relating (relative and semantic)
categoricity to (relative and semantic) completeness, and in view of the
abundance of categorical systems, he concludes: “in opposition to absolute
completeness, relative or semantical completeness occurs as a common
phenomenon.” (p. 492) We thus have yet another display of Tarski’s uncanny
ability to convey difficult technical concepts in an accessible way, while at
the same time drawing sophisticated general conclusions from the results he
discusses. A great way to end the book, and accordingly an appropriate way to
end this review.

Thursday, 7 February 2013

The Amsterdam Workshop on Truth is organised by the Institute for Logic, Language, and Computation of the University of Amsterdam.

The workshop will take place from Wednesday 13th to Friday 15th of March 2013.

The workshop is intended to serve as a meeting point for researchers working on the philosophy of truth in order to discuss latest results and work in progress.
It will address a wide range of truth-related topics and it is open to more formal or less formal approaches.

Monday, 4 February 2013

We just relaunched our MCMP website with lots of little improvements and new content! You might already have checked out our new front page or had a look at everybody's updated profile pages. We also added a video search function to the media page - still an experiment, but one more way to explore the MCMP's topics.

As always, we are looking forward to getting your feedback on the new page. And of course we are working on implementing further improvements; stay tuned! Thanks for linking to www.lmu.de/mcmp from your own page and for your "like" on Facebook here: www.facebook.com/lmu.mcmp!

Sunday, 3 February 2013

A rather fashionable "nominalistic" view in the last decade in relation to mathematics is a view, or perhaps bundle of views (also sometimes called "fictionalism"), that mathematical reasoning is a kind of "pretense". Rather than thinking it to be the case that 3 + 5 = 8 or that there are infinitely many prime numbers, we somehow merely play a game of pretense with the sentences "3 + 5 = 8" and "there are infinitely many prime numbers". There are echoes here of old-fashioned game formalism, if-thenism and deductivism. And so it is that some philosophers have come to compare mathematics---a fundamental component of human knowledge and science---with Santa Claus.

My response to this kind of pretense theory is usually along the following lines. At the moment, no actual scientist has used the Santa Claus story to compute the Lamb shift, analyse quark confinement or perturbations in binary star systems and so on. So, I would be very interested to know why not!

The basic claims of such pretense theories are in some sense connected to the meanings of mathematical sentences; but they are neither abstract metaphysical claims (e.g., "mathematical facts are necessities which trivially supervene on all contingencies") nor abstract epistemological claims (e.g., normative claims concerning rationality, "oughts", and so on). What is interesting about such claims is that they seem to have some empirical content at the level of cognitive psychology, and may therefore be subjected to empirical investigation.

Abstract
A pretense theory of a given discourse is a theory that claims that we do not believe or assert the propositions expressed by the sentences we token (speak, write, and so on) when taking part in that discourse. Instead, according to pretense theory, we are speaking from within a pretense. According to pretense theories of mathematics, we engage with mathematics as we do a pretense. We do not use mathematical language to make claims that express propositions and, thus, we do not use mathematical discourse to make claims that are either true or false. In this paper I make use of recent findings from cognitive neuroscience and developmental science to suggest that pretense theories of mathematics fail.

In physics, one considers various quantities and functions, usually defined on spacetime, or on some assembly of particles. The quantities and functions are (usually) mixed mathematical entities, with abstract values. Laws of physics are then propositions which have semantic content: they say that these functions are related in some way -- usually via a differential equation. An important kind of value is a physical state. We have some sort of system $S$ (say a point particle, or a rigid body, or an assembly of particles, or a region of gas, etc.), and we think that it can "be" in some range of possible physical states.

I mention this only because it seems to me that there is a persistent mistake in some recent literature about applied mathematics and indispensability, in assuming that physical states are "concrete" entities, presumably of a spectacularly peculiar sort. But they are not "concrete". "Physical" does not imply "concrete". For example, a wavefunction $\Psi$ is physical, but is not a concrete entity: it's a function to $\mathbb{C}$. The wavefunction is the physical state.

For example, for an $N$-particle system in classical mechanics, the physical states are $N$-tuples of ordered pairs,

$((\mathbf{r}_i, \mathbf{v}_i) \mid i \in \{1,\dots, N\})$,

where $\mathbf{r}_i$ is the location of particle $i$, and $\mathbf{v}_i$ is the instantaneous velocity of particle $i$. Although certain kinds of equivalences may have to be taken into account, the points in the state space are the physical states. Taken together, they form a structure, which in a sense is "like" the manifold $\mathbb{R}^{6N}$, because the locations and velocities can be co-ordinatized as triples of reals (i.e., given a co-ordinate chart $\phi$ on space, and a point $\mathbf{r}$ in space, we have $\phi(\mathbf{r}) = (x, y, z) \in \mathbb{R}^3$). In classical mechanics, when one moves to the phase space, the structure is called a symplectic manifold.

This state $\alpha$ is not a concrete thing. What, for example, is the speed of this state $\alpha$? Its location? Such questions are absurd. A sequence $\alpha$ of positions and velocities is not a concrete entity (unlike, perhaps, the point particle itself). And despite the claims of some fictionalists, a physical state is also not like Santa Claus or Jane Eyre, because physical states are what our scientific theories are about, quite unlike Santa and Jane Eyre.
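The point can be made concrete with a minimal sketch in Python (all names here are illustrative, not drawn from any library): a classical $N$-particle state is simply structured mathematical data, an element of something like $\mathbb{R}^{6N}$, and the questions one can sensibly ask of a particle differ from those one can ask of a state.

```python
# A classical N-particle state sketched as structured data.
# Positions r_i and velocities v_i are triples of reals, matching the
# co-ordinatized description phi(r) = (x, y, z) in R^3.
from typing import Tuple

Vec3 = Tuple[float, float, float]        # a point of R^3
Particle = Tuple[Vec3, Vec3]             # a pair (r_i, v_i)
State = Tuple[Particle, ...]             # ((r_1, v_1), ..., (r_N, v_N))

# A 2-particle state: a point of (something like) R^{6N} with N = 2.
alpha: State = (
    ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)),   # particle 1: at the origin, moving along x
    ((1.0, 2.0, 0.0), (0.0, -1.0, 0.0)),  # particle 2
)

# Each *particle* in the state has a speed; the *state* alpha does not.
def speed(p: Particle) -> float:
    (_, (vx, vy, vz)) = p
    return (vx**2 + vy**2 + vz**2) ** 0.5

assert speed(alpha[0]) == 1.0
# speed(alpha) would be a type error: a state is a 6N-dimensional point,
# not the sort of thing that has a velocity or a location of its own.
```

Nothing here turns on Python specifically; the sketch merely displays the state as the $N$-tuple described above.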