Thursday, 28 March 2013

Let's suppose, with Chomsky, that the basic explanatory entities of scientific linguistics are individual languages spoken by individual speakers. These are idiolects, or micro-idiolects, or Chomskyan I-languages, and are somehow "cognized" or implemented by the cognitive systems of individuals.

Then one wonders how to make sense of the collective notions of:

"speech community"

"natural language".

It seems that these notions are intimately connected: for a natural language, such as Punjabi, French, Middle English, Basque, etc., is usually identified in terms of some speech community of its speakers.

Here are two attempted explications:

A set $C$ of agents is a speech community if and only if, for any pair $x,y \in C$, the speech behaviour of $x$ and $y$ is mutually interpretable.

$L$ is a natural language if and only if there is a speech community $C$ each member of which uses/cognizes $L$.

The first uses the notion of "speech behaviour being pairwise mutually interpretable", which has genuine empirical content, in terms of the observable overall ease of social co-operation between agents. The second uses the notion of "speech community" and the hard-to-define, but crucial, notion of "using/cognizing a language".

Both notions are vague. Mutual interpretability is a rather vague and context-dependent matter; consequently, which sets count as speech communities will inherit this vagueness. And even when there is a speech community, it is usually somewhat heterogeneous, and therefore for no idiolect pair $L_1, L_2$ do we have $L_1 = L_2$, strictly speaking. So, only at some idealized level is there a single "external" language $L$ that all members of $C$ speak. Rather, this $L$ is an idealization that somehow approximates the varying idiolects $L_1, L_2$, etc., spoken within the speech community.

There are some refinements that might be introduced to the above rough ideas (see the comments below). The most obvious would be to treat the "mutual interpretability" relation as a matter of degree. Instead of being modelled by a graph (nodes representing speakers; edges representing mutual interpretability), in which a speech community is a maximal clique, a collection of speakers might be modelled by a weighted graph; a speech community is then something like a weighted clique.
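To make the weighted-graph idea concrete, here is a toy sketch (the speaker names and interpretability scores are invented for illustration): speakers are nodes, pairwise interpretability scores are edge weights, and a speech community at a given threshold is a maximal set of speakers who are pairwise interpretable at that threshold.

```python
from itertools import combinations

# Toy data: pairwise interpretability scores in [0, 1] between five speakers.
# Speaker names and scores are invented for illustration.
weights = {
    frozenset({"a", "b"}): 0.90,
    frozenset({"a", "c"}): 0.80,
    frozenset({"b", "c"}): 0.85,
    frozenset({"c", "d"}): 0.40,
    frozenset({"d", "e"}): 0.95,
}
speakers = {s for pair in weights for s in pair}

def interpretable(x, y, threshold):
    """x and y count as mutually interpretable if their score meets the threshold."""
    return weights.get(frozenset({x, y}), 0.0) >= threshold

def speech_communities(threshold):
    """Maximal sets of pairwise-interpretable speakers: the maximal cliques of
    the graph obtained by keeping edges at or above the threshold.
    (Brute force over subsets; fine for toy sizes only.)"""
    cliques = [
        set(group)
        for r in range(1, len(speakers) + 1)
        for group in combinations(sorted(speakers), r)
        if all(interpretable(x, y, threshold) for x, y in combinations(group, 2))
    ]
    # Keep only cliques not properly contained in another clique.
    return [c for c in cliques if not any(c < d for d in cliques)]

# Lowering the threshold coarsens the communities; raising it fragments them.
print(speech_communities(0.75))
```

Varying the threshold makes vividly the point about vagueness above: there is no privileged cut-off, so there is no privileged partition of speakers into speech communities.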

Finally, probably one ought to be sceptical about attempts to make precise the notions of "natural language" or "speech community". A similar view can be found in Chomsky's work. For example, in his "Knowledge of Language" (1975):

The notion of ‘language’ as a common property of a speech community is a construct, perfectly legitimate, but a higher-order construct. In the real world, there are no homogeneous speech communities, and no doubt every speaker controls several grammars, in the strict sense in which a grammar is a formal system meeting certain fixed conditions.

Compare David Lewis, in "General Semantics" (1970):

My proposals will also not conform to the expectations of those who, in analyzing meaning, turn immediately to the psychology and sociology of language users: to intentions, sense-experience, and mental ideas, or to social rules, conventions, and regularities. I distinguish two topics: first, the description of possible languages or grammars as abstract semantic systems whereby symbols are associated with aspects of the world; and second, the description of the psychological and sociological facts whereby a particular one of these abstract semantic systems is the one used by a person or population. Only confusion comes of mixing these two topics.

The idea is that there is a distinction between:

the description of possible languages or grammars as abstract semantic systems whereby symbols are associated with aspects of the world.

the description of the psychological and sociological facts whereby a particular one of these abstract semantic systems is the one used by a person or population.

For example, in semantic theory, when one describes an interpreted language $L$, one studies a semantic function $\| . \|_{L}$ from $L$-strings to semantic values (meanings, things, extensions, etc.). This is not the same topic as that of studying how an agent "grasps", or "implements", that meaning function; or how an agent "uses" or "speaks" or "cognizes" the language $L$.

One might believe that these are the same topic. But they aren't. Or one might believe that, though separate, one can relate them. Or one might believe that the second is, in some sense, "primary", and the first is "derivative". In each case, the view involved is one that needs to be established, not merely asserted, in order to avoid Lewis's accusation that one is simply confusing these two topics.

Wednesday, 27 March 2013

On the view I like (which is similar to Lewis's view, in "Languages and Language" (1975)), languages are individuated very finely: so, they're different when there is even the tiniest difference in lexicon, phonology, semantics, etc. The languages spoken by individual speakers are idiolects, or better, micro-idiolects.

This is rather counter-intuitive, of course, because we seem, at least prima facie, to speak the same language as those in our speech community. But closer inspection suggests that we belong to heterogeneous speech communities, and we all speak different, but strongly overlapping, and largely mutually interpretable languages. This leads to something like an illusion of a common, shared, language. While we may share fragments (overlapping parts), we generally never share exactly the same language with any other speaker.

Normativity concerns "oughts". The abstract view of languages here leads to two notions of normativity related to language.

Idiolectic Normativity

For my own idiolect, there are norms specified by the idiolect I do in fact cognize: these dictate phonology, syntax and use. Let's call this idiolectic normativity. If I mispronounce one of my own words, or garble some syntax, or produce a spelling error (my idiolect is equipped with an orthography too), then I have made a linguistic error relative to my own idiolect. These would usually count as performance errors.

Since these are deviations from the pronunciation, syntax, etc., norms of my idiolect, they are errors. They're not sins or anything like that. They're, for the most part, harmless errors.

Collective Normativity

On the other hand, I belong to various speech communities, and there is no shared common language. For speakers $s_1, s_2, \dots$, there are idiolects $L_1, L_2, \dots$. These are, to some degree, overlapping and mutually interpretable. However, clearly there are further norms in play in conversation with other speakers of language in my speech community. For example, perhaps one might argue that one "ought" to obey the principle

"same words; same meaning"

in conversation with others. Perhaps one ought to match a certain dominant form of pronunciation of other speakers. Perhaps one ought to agree with one's interlocutor on "temporary baptisms" of new words. Perhaps one ought, if need be, to accept and revise meanings assigned to technical vocabulary by relevant experts. (That is, treat certain word-meaning pairs as authoritative.)

How should we understand these "collective" norms? These norms (better: proposed norms) are the topic of heated arguments about "prescriptivism" in linguistics. Are they primarily linguistic norms, as idiolectic norms are? Or are they social co-ordination norms? It seems to me that these norms are social co-ordination norms, rather than linguistic norms per se.

Accents

An example would be accents or dialects.

My accent when I was very young was a working-class Birmingham accent, "Brummie" (for those who don't know the UK). But this accent gradually faded when I attended secondary school, from age 11 to 16, and by age 16 my accent had become similar to what it is now (roughly, grammar-school, "BBC English"). Certainly I had no intention of my accent undergoing this change. But it certainly occurred for social reasons; for Birmingham accents are regarded as extremely ugly in the UK (along with West Country accents), and there was occasional negative pressure on me because of this. And, in fact, such a shift improves social co-ordination, in a place like the UK, because it hides one's class background.

The normative question is this: ought one to do this? Are there good normative reasons for shifting one's idiolect like this?

First, when I used to say, in my younger self's Brummie accent,

Oimgooing out

instead of,

I'm going out

then I wasn't making a mistake. That was my idiolect then, and "Oim gooing out" is precisely what was specified as the correct pronunciation! Still, if I were now to say,

Oim gooing out

at, say, a Governing Body meeting at Pembroke College, then my UK colleagues would laugh at me (US and other non-UK colleagues would likely not understand me). Of course, I might also say this jokingly (as I have done with a colleague also from the Midlands). But, overall, it would reduce social co-ordination to return to my youthful idiolect.

Collective Norms Concerning Language are Social Co-operation Norms

So, it seems that such norms are social ones, rather than linguistic ones. They concern social co-operation and social conformity. In sociolinguistics, one can study such norms and how they interact with the linguistic norms of speakers' idiolects. But it doesn't seem to be a specifically linguistic norm to speak one language rather than another.

This isn't to deny that there are such norms, some defensible, some not. For example, the norm of revising, if necessary, one's idiolect to accept an authoritative sound-meaning pair from an expert seems very reasonable. In such a case, one is not linguistically misusing a word; rather, one is, in some sense, socially misusing the word. (Perhaps even "misusing" is not right here.) The linguistically correct meaning of "disinterested" can be anything you like: you can use it to mean "bored" or "hungry". But the socially correct meaning of "disinterested" is "unbiased" or "not influenced by considerations of personal advantage", because this is the meaning assigned to it by relevant experts (that is, the legal and journalistic professions) within the large, and heterogeneous, speech community of English speakers.

I'm interested in formulating identity, or individuation, conditions for languages.
In this post, I'll reason to the (absurd) conclusion that English = German by using the following assumption:

Assumption:
Given a speech community, as the language behaviour evolves, the language spoken retains its identity.

Thought Experiment: let $C$ be a speech community, all speaking English (as currently understood, in terms of phonology, syntax, semantics and pragmatics). Let $C$ evolve forwards in time, with small shifts in language behaviour until, at a later time, say 100 years later, all speakers in $C$ speak German (as currently understood, in terms of phonology, etc.).
By the Assumption above, we conclude,

English = German.

The moral of this reductio ad absurdum is, I think, that the Assumption above is not true. Languages should be individuated very finely (both temporally and modally). Differences---even very small ones---in lexicon, phonology, etc., must count as different languages.

Fine-Grained Language Individuation:
$L_1 = L_2$ if and only if $L_1$ and $L_2$ have identical phonology, syntax, semantics and pragmatics.

Then the analysis of the Thought Experiment above is that the language the speakers speak, or "cognize", shifts over time.
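The criterion can be pictured by treating a language as a tuple of its components, with identity holding just when every component is identical. A toy sketch (the component sets are mere stand-ins for full specifications):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Language:
    """A language individuated by its components. The frozensets here are
    mere stand-ins for full phonological, syntactic, semantic and
    pragmatic specifications."""
    phonology: frozenset
    syntax: frozenset
    semantics: frozenset
    pragmatics: frozenset

L1 = Language(frozenset({"p1"}), frozenset({"s1"}), frozenset({"m1"}), frozenset({"u1"}))
L2 = Language(frozenset({"p1"}), frozenset({"s1"}), frozenset({"m1"}), frozenset({"u1"}))
# One extra item in the phonology yields a distinct language:
L3 = Language(frozenset({"p1", "p2"}), frozenset({"s1"}), frozenset({"m1"}), frozenset({"u1"}))

print(L1 == L2)  # True: identical in every component
print(L1 == L3)  # False: even the tiniest difference counts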

Monday, 25 March 2013

Reduction and emergence play a central role in the relations of scientific theories and disciplines. For instance, a reducible theory is in some sense replaceable, but also supported, by its reducing theory. In contrast, a theory that describes emergent phenomena arguably stands alone in both respects. Unfortunately, the discussion about reduction and emergence suffers from two uncertainties at once. On the one hand, the concepts of reduction and especially emergence are not precisely defined; on the other hand, there are few if any uncontentious cases of reduction or emergence in the sciences. This stalemate can be overcome by a thorough analysis of relations between and within scientific theories. These relations can then serve as a basis for explications of reduction and emergence that are applicable in the sciences. In this vein, we invite proposals for talks that address the inter- or intratheoretic relations of specific theories or provide precise notions of such relations for application in the sciences.

We invite submissions of extended abstracts of 1000 words by 15 May 2013.
Decisions will be made by 15 June 2013.

Sunday, 24 March 2013

A non-academic friend, Michael Ezra, asked me what mathematical philosophy is, and so I said I'd try and explain; or, at least, explain how I think of it. This is the first post. In the second, I will try and give some examples to illustrate.
----------------------

1. Explaining what mathematical philosophy is

First, I see it as analogous to mathematical physics or mathematical economics. In physics, one wants to understand how physical processes---things moving around, heating up and cooling, etc.---work, and in economics one wants to understand how economies, trade, firms, etc., tick.

Mathematics is introduced in these domains, obviously. For example, we formulate laws of nature as field equations, such as (to take a familiar example) Maxwell's equations in covariant form, $\partial_a F^{ab} = J^b$.

Here, the physical quantities are mathematical fields. (Functions from spacetime to some abstract space, such as $\mathbb{R}^n$ or a Hilbert space. In a fancier geometric setting, physical fields are sections of a fibre bundle.) What the exact role of mathematics here is, is controversial. Clarifying its role is intimately tied up with debates about the Indispensability Argument and the nature of applied mathematics.

Second, in philosophy one wants to do something, but what this something is is pretty controversial. Well, look at some philosophical problems or puzzles: these can usually be expressed in a way that seems very intuitive, and non-mathematical. For example,

How do I know I'm not a brain in a vat, and what I take to be the case, isn't?

Why are some patterns of reasoning valid, and others invalid?

Why should I think the future will resemble the past?

If I say "My current statement now is false", is my statement true or false?

Are moral statements like descriptions of facts, or more like expressions of my tastes and attitudes?

We learn about the numbers, 0, 1, 2, and so on, as children. How to add them and multiply them and apply them to counting things around us. Are these numbers entities of some kind? Or just marks on paper?

If I could have worn a different jumper today, does that mean there is another possible world in which I am wearing that jumper?

Suppose, when you were asleep, God picked up all the matter in the universe and moved it 1 metre in some direction relative to space. There is no noticeable difference. Does this imply that space doesn't exist?

Captain Kirk is beamed down to the planet. But the transporter malfunctions, and two copies of Captain Kirk materialize on the surface. Which one is the real Captain Kirk?

2. "Über-theory" and "Meta-theory"

On one view, which I call the über-theoretic view, this something that philosophy does is concerned, in a very general way, with:

how everything hangs together.

Quoting Wilfrid Sellars,

The aim of philosophy, abstractly formulated, is to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term. (Sellars, 1962, "Philosophy and the Scientific Image of Man").

What are facts, and states of affairs? Are facts "composed" of constituents?

Is reality organized into levels of dependence? Does it, or could it, have a "bottom level"?

Are there moral facts and properties?

On another view, which I call the meta-theoretic view, philosophy is concerned with,

understanding theories, representations and concepts.

How do theories, representations and concepts relate to each other and to the things of which they are theories and which they represent? How should we analyse the concept of representation itself? We are concerned with concepts such as:

existence

identity

abstractness

structure

possibility

necessity

meaning

reference

truth

consequence

infinity

part-of.

We may attempt to analyse such concepts (i.e., provide "if and only if" definitions, which are analytic, and avoid counterexamples); we may attempt to "explicate" such concepts; we may attempt merely to relate such concepts to others, emphasizing their conceptual interdependence.

I make no argument here as to whether über-theory and meta-theory are exhaustive classifications, or non-overlapping. (But I think they are overlapping.) Here is an earlier post of mine on über-theory and meta-theory.

I would identify Bertrand Russell as the classic figure here, particularly his Principles of Mathematics (1903), which I mentioned also a few months ago shortly after defending the achievements of analytic metaphysics. There were antecedents, of course - e.g., Frege, Bolzano, Leibniz. But Russell has a special significance. Perhaps Russell is to modern mathematical philosophy roughly what Albert Einstein is to modern mathematical physics. (This is not, of course, to diminish the significance of, say, Newton, Maxwell, Lorentz and Poincaré!)

After Russell, Rudolf Carnap was the practitioner par excellence of the second, meta-theoretic, approach, while David Lewis was the practitioner par excellence of the first, über-theoretic approach. (In addition, W.V. Quine, Hilary Putnam and Saul Kripke are very important, for both approaches.)

Here is Hannes Leitgeb, from a recent interview:

I just realized I had never considered before whether there was any common thread that runs through the whole of my work. If there is one, then it is on the more methodological side really: I like to apply mathematical methods in order to solve philosophical problems. I call this ‘mathematical philosophy’. Very occasionally one has some cool mathematical theorem, and one then looks for the right sort of problem to which it could be applied. But in the great majority of cases one simply comes across a philosophical theory or argument or thesis or maybe even just a clever example, and some mathematical structure presents itself—well, ‘presents itself’ after a lot of work!

I was lucky enough to be able to work alongside Hannes and others for a year and a half at MCMP before moving to Oxford at the end of 2012. It really is a very intellectually stimulating environment (and also extremely welcoming and friendly, because of Hannes's incredible levels of goodwill, hard work and decency).

4. Why should mathematics play any role at all?

One might wonder where all the mathematics comes in!

In the case of über-theory, one might initially wonder why mathematics might be relevant at all. Well, as it turns out, explaining what properties and relations are does immediately relate to mathematics, because the extension of a property is a set, and the extension of a relation is a set of ordered tuples. And the theory of sets and ordered tuples is a part of mathematics---some would say, the foundational branch. If one is interested in space and time, then our best theories of space and time are highly mathematicized theories: to understand such things, one needs to know about manifolds, co-ordinate charts, tensor fields, fibre bundles, topology, and so on. Similar points can be made in connection with causation, modality and other topics.

In the case of meta-theory, it is clearer, because meta-theory and logic are so intimately related; and logic and mathematics are intimately related. Meta-theory relates closely to semantic theory (broadly understood), and in semantics one is concerned with all kinds of semantic relationships between syntactical entities (for example, connectives, names, predicates, variables, intensional operators) and what they denote, or refer to, or mean, etc. Probably the most important mathematical tool in meta-theory is the notion of a model, and the methods of model theory. And model theory is a branch of mathematics.

5. There is no mathematical substitute for philosophy

Surely one can't definitively solve philosophical problems using mathematics. Isn't that some kind of cheating? Or trick?

On this matter, Saul Kripke once wrote,

There is no mathematical substitute for philosophy.

I think Kripke is right. The only way to clarify Kripke's aphorism is to look at some examples. Fortunately, I will have some examples to show you!

But, briefly, consider a slightly different approach: this might be called the applied logic approach to philosophy. Although the philosopher cannot solve their problems outright, they can relate certain doctrines---e.g., metaphysical doctrines---to others, by relations of implication, consistency, inconsistency, and so on. Roughly, things like:

doctrines D1 and D2 jointly imply doctrine D3; doctrine D1 is consistent with doctrine D2; and so on.

A number of possibilities arise which bring in mathematics. First, even the proper formulation of doctrines D1, D2 and D3 may require mathematical language. Second, establishing connections like this may, in practice, require more than merely reasoning from, say, D1 and D2 to D3: in a sense, it is D1 and D2 plus mathematics which implies D3. Third, even the logical relationships that one eventually arrives at may themselves sometimes be contested, and one might be prepared to consider non-classical logics: understanding these---particularly their semantics---brings in more mathematics.

So, even if the optimum output of some philosophical inquiry is to have clarified logical relationships between certain metaphysical doctrines, mathematics intrudes in a number of ways.

In the second post, I intend to give some examples from my own work.

[UPDATES (24th/25th March): I have updated this to include mention of an interview with Hannes Leitgeb. I have moved some bits of text around, and added some more explanation, to make the organization clearer.]

Thursday, 21 March 2013

The previous post discussed the Newman-style objection to a certain way of thinking about "structural representation claims". Beginning with a certain understanding of what "represents" means and a certain "amorphous glub" conception of how the world is, one gets the conclusion that

a model represents the world if and only if the world has large enough cardinality.

I believe that the only way to begin resolving this is to focus again on "represents", and to try to clarify its meaning. The previous definition, of "represents*", was that,

a model $\mathcal{A}$ represents* the world if and only if there are relations $R_i$ such that $\mathcal{A}$ is isomorphic to these.

Notice that this definition contains a "ramsification" in the definiens, "there are relations ...".
If we adopt an extensional view of relations and suppose that there is some determinacy in the domain of things that are values of first-order variables, and permit the quantifiers to range over all relations whatsoever on this supposed domain of things, it is then not hard to show, using a Newman-style argument, the Glub Lemma:

$\mathcal{A}$ represents* the world if and only if the cardinality of the world is sufficiently large.

This implies that the only thing we can be either right or wrong about is the cardinality of the world. I take this to tell us that we have simply explained "represents" incorrectly. Of course, one may, or may not, accept this Newman-style conclusion. Like Ted Sider, I take the conclusion to be incredible. On this doctrine, for example, physics is nothing more than counting. For all there is to the world, all there is to the target domain, representation-independently, is its cardinality. If we think of the elements of the domain of a model $\mathcal{A}$ as representing mind-independent worldly "noumena", then the conclusion is that only the number of "noumena" counts.

It is not absolutely clear to me that Kant's theory of representation faces this Newman-style objection, but I suspect that it does. For I think that Kant's theory of representation, with its "external intuitions" and "a priori categories" and so on, is more-or-less the same theory as Russell's theory of representation, set out in his Analysis of Matter (1927), and was precisely the theory that Newman objected to in his 1928 Mind review.

So, how to respond. Suppose instead that we think of the notion of "represents" as being relative to some sequence of relations (matching the signature of the model $\mathcal{A}$). The notion being used is then

$\mathcal{A}$ represents the world relative to relations $R_1, \dots$.

This is not being defined. It is being adopted as a primitive, at least for the time being. One obtains two derived notions, by ramsifying with respect to relations. First, quantifying over all relations, one defines "r-represents",

$\mathcal{A}$ r-represents the world if and only if there are relations $R_1, \dots$ such that $\mathcal{A}$ represents the world relative to $R_1, \dots$.

It is this notion that is likely to lead to trivialization (because one is likely to define relative representation in terms of there being an isomorphism from $\mathcal{A}$ to the $R_i$). And one obtains a Lewisian notion of representation by ramsifying with respect to all natural relations:

$\mathcal{A}$ Lewis-represents the world if and only if there are natural relations $R_1, \dots$ such that $\mathcal{A}$ represents the world relative to $R_1, \dots$.

This does not face a Newman-objection, at least not prima facie, for the class of natural relations is going to be much more sparse than the class of all relations whatsoever (which, e.g., includes all the scientifically unnatural, gerrymandered, disjunctive ones too: such as the property of being an electron or a cheese sandwich).

This is the fairly standardly-held resolution (it is sketched by Carlos A. Romero C in a comment to the previous post). There's a lot more to say, of course, but I leave that to some other time!

[UPDATE: 22 March. I'm grateful to a comment from Sara Uckelman, who pointed out the phrasing "just if" is ambiguous in the indented statements. So I have updated the phrasing, replacing "just if" by "if and only if".]

a. Assuming special relativity, a model of the form $(\mathbb{R}^4, \eta_{ab}, F_{ab})$ represents the world.

b. Spacetime models $(M, g)$ and $(M^{\prime}, g^{\prime})$ represent different worlds.

A Newman-style objection arises in connection with such claims. Recall that in a 1928 Mind review of Russell's Analysis of Matter (1927), M.H.A. Newman made the following point in criticism of Russell's structuralist view of representation:

[A]ll we can say is, "There is a relation $R$ such that the structure of the external world with reference to $R$ is $\mathcal{A}$'". Now I have already pointed out that such a statement expresses only a trivial property of the world. Any collection of things can be organised so as to have the structure $\mathcal{A}$, provided there are the right number of them. Hence the doctrine that only structure is known involves the doctrine that nothing can be known that is not logically deducible from the mere fact of existence, except ("theoretically") the number of constituting objects. (Newman 1928, p. 144; slight change in notation.)

The problem with (R) is that the notion of "structural representation" is not adequately defined. Suppose that we define a (wrong!) notion "represents*" as follows:

Definition:
A model $\mathcal{A}$ represents* the world if and only if there are relations $R_i$ on (possibly a subset of) the set $W$ of things in the world such that $\mathcal{A}$ is isomorphic to these.

There is a certain kind of conceptual nominalism or anti-essentialism, which conceives of the world as structureless---as a kind of "amorphous glub", and which aims to explain the appearance of mind-independent structure as a projection of some feature, such as the "form", of inner mental representations.

[So, for example, as I understand it, in Kant, our "external intuitions" carry a certain "form", which is actually---according to Kant---what space is. Thus, space is the form of our external intuitions; and, consequently, since intuitions cannot exist independently of thought, it follows that space does not exist independently of thought. Kant's error seems to be his assumption that space is the form of these intuitions, whereas the correct view is presumably that the form of these intuitions is representational form, associated with the inner mechanisms of perception; there seems little reason to suppose that what space itself is like should be the same as what its cognitive representation, particularly in perception, might be like. For our representations can be mistaken. Space might, for all we know, be 10-dimensional; or discrete in some way; etc. Thus, Kant seems to be conflating the perceptual representation of space with space.]

Now, if a view like this is right, then there are no "external" constraints for the representation to be wrong about. More specifically, here is what I call the Glub Lemma:

Consequently, given a model $\mathcal{A}$ and a sufficiently large set $B$, one can "project" the "structure" of this model $\mathcal{A}$ onto the set $B$. This implies that one can place any mathematical structure on the world one likes, so long as the world has sufficiently large cardinality:

$\mathcal{A}$ represents* the world if and only if the cardinality of the world is at least $\kappa$.
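Why cardinality is the only constraint can be seen from the standard push-forward construction (a sketch, with $W$ the set of worldly things as in the definition above):

```latex
Let $\mathcal{A} = (A, R^{\mathcal{A}}_1, \dots, R^{\mathcal{A}}_n)$ with $|A| = \kappa$,
and suppose $|W| \geq \kappa$. Choose any injection $f : A \to W$ and define, for each
$i$ (with $k$ the arity of $R_i$):
\[
S_i(w_1, \dots, w_k) \;\text{iff}\; w_1, \dots, w_k \in \mathrm{ran}(f)
\;\text{and}\; R^{\mathcal{A}}_i(f^{-1}(w_1), \dots, f^{-1}(w_k)).
\]
Then $f$ is an isomorphism from $\mathcal{A}$ to $(\mathrm{ran}(f), S_1, \dots, S_n)$.
So, if ``there are relations $R_i$'' quantifies over all relations on $W$ whatsoever,
the right-to-left direction holds; the left-to-right direction is immediate, since an
isomorphism requires an injection of $A$ into $W$.
```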

The Glub Lemma may be regarded as a dramatic trivialization result, implying that a certain conception of structural representation reduces to near triviality: i.e., the correctness of a structural representation is determined solely by cardinality.

(As Roy Cook nicely put it in conversation, physics has been reduced to "counting"!)

Sunday, 17 March 2013

Initially, the discovery that some respectable axiomatic truth theories were conservative over their base theories was viewed as an argument in favor of deflationist conceptions of truth, in the sense that adding a truth predicate did not add anything of ‘substance’ to a theory. But it seems that the demand that a theory of truth be conservative goes well beyond deflationist concerns, and is sufficiently neutral so as to count as a condition of material adequacy for any theory of truth about a given topic/domain S, on a par with Tarski’s T-schema.

[The result that certain systems of truth axioms -- e.g., the restricted T-sentences, of the form

$T \ulcorner \phi \urcorner \leftrightarrow \phi$,

where $\phi$ is a sentence of the object language -- are conservative can be found in Tarski's original 1936 monograph on truth, Der Wahrheitsbegriff. In general, the set of all instances of the T-scheme, where $\phi$ may itself contain the truth predicate, will be inconsistent if the overall metatheory implements self-reference or diagonalization, because one can define a sentence $\lambda$ equivalent (in the theory of syntax) to $\neg T \ulcorner \lambda \urcorner$. In his 1936 monograph and his 1944 article, "The Semantic Conception of Truth and the Foundations of Semantics", Tarski argues against the redundancy theory of truth, which was an earlier incarnation of deflationism. The result that under certain circumstances the axioms will be non-conservative can also be found in the Postscript to Tarski 1936.]
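The inconsistency mentioned in the bracketed note can be derived in a couple of lines, given diagonalization:

```latex
Suppose the unrestricted T-scheme holds, and diagonalization yields a sentence
$\lambda$ such that $\vdash \lambda \leftrightarrow \neg T \ulcorner \lambda \urcorner$.
\begin{align*}
& T \ulcorner \lambda \urcorner \leftrightarrow \lambda
  && \text{(T-scheme instance for } \lambda \text{)} \\
& \lambda \leftrightarrow \neg T \ulcorner \lambda \urcorner
  && \text{(diagonalization)} \\
& T \ulcorner \lambda \urcorner \leftrightarrow \neg T \ulcorner \lambda \urcorner
  && \text{(chaining the two biconditionals)}
\end{align*}
The last line is a contradiction in classical logic.
```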

However, the point I would emphasize is that hoping for conservativeness is not really neutral: it is specifically a deflationary demand: more exactly, a necessary condition for such a theory to count as deflationary:

Conservativeness Condition
A truth theory $TR_L$ in metalanguage $ML$ for object language $L$ is deflationary only if its "combination" $TR_L(B)$ with suitably axiomatized object language theories $B$ in $L$ is conservative.

One might accept or reject this conceptual analysis of deflationism. Stewart Shapiro (1998) and I (1999) proposed it, and think it is one way of explicating the demand that truth be "non-substantial" -- the basic instrumentalist constraint. Others have suggested that the condition needn't be accepted: for example, Halbach 2001, "How Innocent is Deflationism" (Synthese).
However, there is a second adequacy condition worth considering - namely Reflective Adequacy.

Reflective Adequacy
A truth theory $TR_L$ in metalanguage $ML$ for object language $L$ is reflectively adequate only if its "combination" $TR_L(B)$ with suitably axiomatized object language theories $B$ in $L$ proves the reflection principle "all $L$-theorems of $B$ are true".

This corresponds to Leitgeb's adequacy condition (b) in his 2007 Philosophy Compass paper, "What Theories of Truth Should be Like (but Cannot be)", on adequacy conditions for truth theories.
The problem is that it is not difficult to show that, in a fairly general setting:

Reflective adequacy is inconsistent with conservativeness.

For example, let $L$ be the first-order language of arithmetic, and let $ML$ be $L_T$: that is, $L$ extended with a primitive predicate $Tx$ intended to express "$x$ is true in $L$". Let $TR_L$ consist of Tarski's compositional axioms for truth, assuming that the syntax of $L$ has been coded into $L$. Let $B$ be $PA$, Peano arithmetic. Then the result of "combining" $TR_L$ with $PA$, with full induction for all $L_T$-formulas, does prove "All $L$-theorems of $PA$ are true". Let me indicate this, somewhat loosely:

(a) $TR_L(PA) \vdash \forall x(Sent_L(x) \wedge Prov_{PA}(x) \to Tx)$

It follows that this combined theory proves $Con(PA)$. That is,

(b) $TR_L(PA) \vdash Con(PA)$

It follows that this combined theory $TR_L(PA)$ is non-conservative with respect to $PA$.
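The step from (a) to (b) is short. Instantiating the reflection principle at the code of $0=1$ gives (a sketch):

```latex
\begin{array}{ll}
TR_L(PA) \vdash Prov_{PA}(\ulcorner 0=1 \urcorner) \to T\ulcorner 0=1 \urcorner & \text{(instance of (a))}\\
TR_L(PA) \vdash \neg T\ulcorner 0=1 \urcorner & \text{(compositional axioms, since } 0 \neq 1)\\
TR_L(PA) \vdash \neg Prov_{PA}(\ulcorner 0=1 \urcorner) & \text{(modus tollens)}
\end{array}
```

The last line is (one standard formulation of) $Con(PA)$. By Gödel's second incompleteness theorem, $PA$ does not prove $Con(PA)$ if consistent; so $TR_L(PA)$ proves an $L$-sentence that $PA$ does not, and non-conservativeness follows.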

This argument against deflationism is sometimes called the Conservativeness Argument, and was given in 1998 and 1999 by Shapiro and yours truly. A version of it had, however, been given earlier by Leon Horsten, in:

Horsten, Leon. 1995: "The Semantical Paradoxes, the Neutrality of Truth and the Neutrality of the Minimalist Theory of Truth", in P. Cortois (ed.) 1995, The Many Problems of Realism.

(This article is difficult to locate if one's library does not have this book, and it is not online.)

In the Shapiro and Ketland articles, it is noted that certain axioms for truth are, under certain circumstances, conservative; and that certain axioms for truth are, under other circumstances (usually connected to induction), non-conservative. In particular, if one wishes to reason from a theory $B$ to its reflection principle "all theorems of $B$ are true", the result is bound to be non-conservative if $B$ is a theory to which Gödel's incompleteness theorems apply. Both of us also argue that the deflationist ought to accept the conservativeness condition; and that, in general, an adequate theory of truth ought to be reflectively adequate. But, of course, these desiderata are incompatible.

One can give a semi-regimented formulation of this philosophical argument as follows:

(P1) A truth theory is deflationary only if conservative over suitably axiomatized theories $B$.
(P2) A truth theory is reflectively adequate only if it combines with $B$ to prove "all theorems of $B$ are true".
(P3) For many cases of $B$, reflective adequacy implies non-conservativeness.
-----------------------------------------------------------------------
(C) So, deflationary truth theories are reflectively inadequate.

The technical result here is (P3). Premises (P1) and (P2) are philosophical explications of the concepts of "deflationism" and "adequacy". There is still a certain amount of imprecision in this, and some caveats in its formulation. (I gloss over these, but they include clarifying what "combining" means precisely when theories contain axiom schemes; what happens when one considers object theories which are infinitarily axiomatized; and issues to do with "disentangling" the syntactical entities from the objects of the object language theory, usually numbers and sequences.)

However, caveats aside, it is close to being a valid philosophical argument, and one that requires the deflationist either to reject the conservativeness condition (P1) or to reject the reflective adequacy condition (P2).

As is well known (but sometimes forgotten), in his seminal work on truth, Tarski proposed his condition T, now known as the T-schema, as a condition of material adequacy for any formal theory of truth. By this he meant that the T-schema was intended to capture the conceptual core of the antecedent, informal notion of truth as correspondence.

Since Tarski, and especially in recent decades (roughly
since the 1970s), there has been an explosion of formal, axiomatic theories of
truth proposed in the literature. Now, whenever there are multiple contestants,
the question arises as to how one could determine which of the candidates is
the correct theory of truth. (If truth is a uniquely determined concept,
arguably there can only be one true theory of truth!) These theories can be
compared with respect to their formal properties; but most of them were
proposed by very able logicians, and are thus all formally/technically adequate
and sophisticated. So the debates have to revolve around the extent to which
the different theories capture the antecedent conceptual core of the notion of
truth, which in turn requires the informal discussion of which properties a
formal theory of truth should display in order to be ‘materially adequate’.

In an influential 2007 article, ‘What theories of truth should be like (but cannot be)’, H. Leitgeb discusses eight plausible
desiderata for theories of truth, but notes that they cannot be jointly
satisfied (taken together, they lead to inconsistency and triviality, and this
happens even with some subsets of these eight conditions). Now, if the ideal of
satisfying all these conditions at once cannot be realized, what can we do?

Just as in the case of moral dilemmas, if a set of prima
facie norms is not satisfiable simultaneously, the next best option is to
search for maximal subsets that can be satisfied. (Leitgeb 2007, 284)

He then identifies different subsets of these eight
desiderata that can or cannot be satisfied by specific theories. However, there
does not seem to be a unique 'peak of maximization' among the different candidate
subsets, so the discussion will have to revolve around how to weight and
compare the different informal desiderata – once again, by and large a conceptual
affair.

Last Friday, I attended some of the talks of the second Amsterdam workshop on truth (here is my blog post prompted by the first installment, two years
ago), and again many of these issues resurfaced, reminding me of Leitgeb’s
article and of the whole debate on desiderata for theories of truth. For
example, Philip Welch argued that revision theories of truth cannot be said to
be about truth, properly speaking, in
particular because the phenomenon they identify – ‘jump operators’ that are
built up by quantifying over fixed points – is not unique to truth as such (it
underpins other widely dissimilar hierarchies such as infinite-time Turing machines).
Welch was relying on the idea that it is not sufficient for a theory of truth
to satisfy some conceptual desiderata; it should also not count as a plausible account of something other than truth.

In the last talk of the workshop, Leon Horsten explicitly raised the question of what counts as a ‘good’ theory of truth, and
discussed some desiderata other than those discussed in Leitgeb’s paper.
Horsten noted that the answer depends on what the proposed theory of truth is a
theory of (of the truth predicate in
natural language? of the philosophical concept of truth?), and adopted the
perspective of theories of truth specifically for meta-mathematics. He then went on to propose the following set of
desiderata:

Non-interpretability

Conservativeness

Speed-up

This was all stage-setting to argue that there is at least
one theory of truth on the market, namely a theory proposed by Martin Fischer
(not coincidentally, his co-author of the paper!) in 2009, which satisfies all
these constraints. Now, here again a discussion needs to be had on why these
are indeed adequate desiderata for a formal theory of truth, and at Q&A I
suggested to Horsten that this is essentially a conceptual matter. He replied
that he and his co-author Fischer had debated the exact status of these
desiderata, and that they were not sure that the issue belonged to a conceptual level.

So here follows a modest attempt to argue that the items on
Horsten’s list are indeed conceptual in the sense that they are in the ballpark
of Tarski’s notion of material conditions of adequacy. Naturally, whether a
given formal theory does or does not satisfy these desiderata will be a purely
technical, formal matter, but the criteria as such must be plausible at an
informal, conceptual level. They represent bridges between the prior, informal
realm and the formal realm of the theory.

As Horsten presented it, non-interpretability
is a matter of expressiveness. The idea is that a truth theory T for meta-mathematics
must be non-interpretable in the sense that there is no other theory, itself
not about truth, in which T can be interpreted (in the technical sense
investigated by e.g. Albert Visser and others). If there were such a theory, then T
would not be specifically about truth, and this seems to me to be in the spirit of Welch’s objection to revision theories
of truth. Although it is based on a highly technical notion of
interpretability, the demand here seems to be that a theory of truth should be
about truth and not something else.
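For readers unfamiliar with the technical notion, here is the rough shape of a relative interpretation (a sketch only; precise definitions vary, cf. Visser's work):

```latex
% A relative interpretation \tau of T in S is a translation of L_T-formulas
% into L_S-formulas that maps atomic formulas to L_S-formulas, commutes with
% the connectives, and relativizes quantifiers to a domain formula \delta(x):
(\forall x\, \phi)^{\tau} := \forall x\, (\delta(x) \to \phi^{\tau})
% such that S proves the translation of every theorem of T:
\text{if } T \vdash \phi, \text{ then } S \vdash \phi^{\tau}.
```

The desideratum then says: no such $\tau$ exists taking the truth theory into a theory that is not about truth.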

Conservativeness
is a property of axiomatic theories of truth which is usually formulated in
proof-theoretical terms: a theory T which is obtained by adding a truth
predicate to a theory S is conservative over S iff T cannot prove anything with
the vocabulary of S (i.e. statements not involving the truth predicate) that
could not be proved in S already. But Horsten proposes to think of
conservativeness in model-theoretic terms: T is a (semantically) conservative extension of S if and only if all the
models of S can be expanded to models of T. In other words, no models are
‘lost’ in the transition from S to T: every model of S can be expanded, by
interpreting the new truth vocabulary suitably, to a model of T. The conceptual rationale for this desideratum is that a theory
of truth for S should still be about the exact same ‘things’ that S itself is
about – no more, no less.
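The model-theoretic reading can be illustrated with a deliberately tiny propositional toy (the theories and atom names here are entirely hypothetical; real cases involve first-order theories and a truth predicate). S has one axiom over atoms p and q; T adds a fresh atom t with a definitional axiom. Every model of S then expands to a model of T:

```python
from itertools import product

def models(axiom, atoms):
    """All valuations of the given atoms that satisfy the axiom."""
    return [dict(zip(atoms, vals))
            for vals in product([False, True], repeat=len(atoms))
            if axiom(dict(zip(atoms, vals)))]

S_atoms = ["p", "q"]
T_atoms = ["p", "q", "t"]
S_axiom = lambda v: (not v["p"]) or v["q"]               # axiom of S: p -> q
T_axiom = lambda v: S_axiom(v) and (v["t"] == v["p"])    # S's axiom plus t <-> p

S_models = models(S_axiom, S_atoms)
T_models = models(T_axiom, T_atoms)

# Semantic conservativeness: every model of S expands to a model of T,
# i.e. some T-model agrees with it on S's vocabulary.
expands = all(any(all(m[a] == v[a] for a in S_atoms) for m in T_models)
              for v in S_models)
print(expands)  # True
```

Here "expanding" a model means adding an interpretation of the new vocabulary while leaving the old atoms untouched, which mirrors the requirement that no models of S are 'lost'.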

Initially,
the discovery that some respectable axiomatic truth theories were conservative over their base theories was viewed as an
argument in favor of deflationist conceptions of truth, in the sense that adding
a truth predicate did not add anything of ‘substance’ to a theory. But it seems
that the demand that a theory of truth be conservative goes well beyond deflationist
concerns, and is sufficiently neutral so as to count as a condition of material
adequacy for any theory of truth about a given topic/domain S, on a par with
Tarski’s T-schema.

But
conservativeness is not the whole story. It has also been known for a while
that some theories of truth also display the phenomenon of speed-up, which is best understood as the idea that theorems which
could be proved in S can then receive much shorter proofs in T. So while
conservativeness seems to suggest that the truth predicate does not add
anything of substance to the base theory, speed-up goes in the opposite
direction: a truth predicate can make proofs significantly shorter.
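One common way to make speed-up precise (formulations vary across the literature; this is a sketch) is to demand that no recursive function bounds the blow-up in proof length when moving from T-proofs back to S-proofs:

```latex
% T has speed-up over S iff for every recursive function f there is an
% S-theorem \phi such that
\min\{\,|d| : d \text{ an } S\text{-proof of } \phi\,\} \;>\;
  f\big(\min\{\,|d| : d \text{ a } T\text{-proof of } \phi\,\}\big)
% where |d| measures the length of the proof d (e.g., number of symbols).
```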

Now, in what
sense can we say that speed-up is a desideratum for a theory of truth? Again,
it seems to me that there is a plausible conceptual story here: a truth
predicate functions very much like a second-order quantifier, and so it seems
that an increase in expressive power should be expected when a truth predicate
is added to a theory S. In turn, increase in expressive power should allow for
derivational ‘shortcuts’ in the new theory T with respect to S. So arguably, the
more speed-up one observes, the more successful the theory has been in
capturing the expressive role of the truth predicate (again, something that
deflationists and non-deflationists alike agree on).

To conclude,
let me just reiterate my initial suggestion that Horsten’s proposed desiderata
for theories of truth (for meta-mathematics) are very much in the spirit of
Tarski’s notion of material conditions of adequacy, and thus must be (and
indeed are) backed by compelling conceptual arguments.