Friday, September 28, 2007

Near the start of chapter 3 of MIE, Brandom tells us that the primary normative concept for inferential articulation is commitment. When we move to the social picture involving more than one agent, there is a shift to multiple primary concepts. They are commitment and entitlement. These are two sides of one coin, to use a phrase that Brandom likes a lot. Surprisingly, he thinks that commitment can be understood entirely in terms of entitlement. In what sense is commitment needed then? Commitments have a sort of double life. Not only are they undertaken, but they are also what one is entitled to. To put it awkwardly, one can be committed and entitled to commitments. There is a status sense and a content sense of commitment it seems. (I think this point is one that MacFarlane hammers on in his excellent "Inferentialism and Pragmatism", available on his homepage.)

If commitment, in the status sense, is fully understandable in terms of entitlement, then it would stand to reason that incompatibility should be too. Incompatibility is defined as commitment that precludes entitlement to something else. This would go something like: p is incompatible with q just in case p authorizes the removal, or the preclusion, of entitlement to q. That doesn't sound that bad. Brandom should probably have said that the fundamental normative status for the game of giving and asking for reasons is entitlement, with commitment taking on its content role. There are two problems with this that I don't have responses to. The first is that I'm not sure he's allowed to appeal to the idea of content at this point in the book. The second is that if commitment isn't a fundamental normative status, then it is difficult to see why inference must be the bridge to semantics. Any sort of doing should work to connect the praxis to the semantics.

(The people that are in the Brandom seminar are probably tired of the joke in the title in its various incarnations, but I find it funny nonetheless.)

[Edit: I'm trying out using HTML for the math symbols and Greek letters since Blogger won't do LaTeX markup. Let me know if they are not rendering correctly in your browser.]

Sometimes you read that natural deduction systems are more appropriate for intuitionistic logic while sequent calculi are more appropriate for classical logic, e.g. in Hacking's article "What Is Logic?", whence the title. While reading a defense by Peregrin of the claim that intuitionistic logic best characterizes the logic of inference, I came across a line that caught my eye. Peregrin comes to the conclusion that intuitionistic logic is the logic of inference as long as we restrict ourselves to single conclusion inferences. He goes on to say that he has written elsewhere about why we should restrict ourselves in that way. The interesting bit is the restriction to single conclusion inference.

One of the first lessons learned when studying the sequent calculus is that you get classical logic from the intuitionistic rules by allowing multiple conclusions. Going back over my notes from proof theory last year, it seems that you can also get classical logic while keeping single conclusions by adding a reductio rule: from Γ, ∼φ ⇒ ⊥, infer Γ ⇒ φ. This requires giving up the subformula property, so the multiple conclusion formulation is usually opted for.
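To illustrate the multiple conclusion point (a textbook example, not from Peregrin): the intuitionistically underivable law of excluded middle comes out provable once more than one formula is allowed on the right.

```latex
% Excluded middle with multiple conclusions.
% The middle sequent carries two formulas on the right,
% which is exactly what the intuitionistic calculus forbids.
\varphi \Rightarrow \varphi
\qquad\leadsto\qquad
{} \Rightarrow \varphi, \lnot\varphi \quad (\lnot\text{R})
\qquad\leadsto\qquad
{} \Rightarrow \varphi \lor \lnot\varphi \quad (\lor\text{R twice, then contraction})
```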

Natural deduction systems only allow one conclusion for each inference. This doesn't mean we don't have natural deduction systems for classical logic. We do, but they involve a reductio rule, from ∼∼φ to φ. How would this square with natural deduction being more appropriate to intuitionistic logic? There seems to be a sense in which the reductio rule is really a multiple conclusion rule. In a standard multiple conclusion sequent calculus formulation, proving the reductio rule requires using multiple conclusions. Alternatively, we can prove that a formula is provable in a classical natural deduction system with reductio iff it is provable in the multiple conclusion sequent calculus. Given that this is "iff", why should the sequent calculus version be privileged? The sequent calculus version has the subformula property and cut elimination. These put really strong restrictions on the structure of proofs. In particular, the subformula property requires that only subformulas of the formulas in the bottom line of the proof appear in the preceding steps of the proof. This restriction is strong enough to simulate, in a way, semantic effects (see Jeremy Avigad's work for more on this). Put in a more philosophical way, it requires us to make explicit everything that goes directly into the proof.
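For instance, the usual sequent calculus proof of double negation elimination passes through a stage with two formulas on the right (a standard derivation, sketched from memory):

```latex
% Deriving the reductio rule's characteristic sequent.
% The middle sequent is essentially multiple conclusion.
\varphi \Rightarrow \varphi
\qquad\leadsto\qquad
{} \Rightarrow \lnot\varphi, \varphi \quad (\lnot\text{R})
\qquad\leadsto\qquad
\lnot\lnot\varphi \Rightarrow \varphi \quad (\lnot\text{L})
```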

This explicitness requirement does the work. The sequent calculus version shows us that the reductio rule is really a multiple conclusion wolf masquerading in single conclusion sheep's clothing. It is apparent that none of the other rules require multiple conclusions. These correspond to the intuitionistic sequent calculus rules and they go over directly to natural deduction systems. Classical logic with its reductio rule can go over too, but along the way the reductio rule assumes the appearance of a single conclusion rule. Natural deduction is natural for single conclusion inference and the sequent calculus is natural for multiple conclusion inference (I haven't really defended the latter claim here). There is a sense, then, in which natural deduction is more natural for intuitionistic logic.

Sunday, September 23, 2007

There is what looks to be a good entry on the Stanford Encyclopedia of Philosophy on the Frege-Hilbert correspondence and disagreement. It is interesting because it makes Frege seem slightly reasonable even though he ultimately loses. When I read the correspondence a few years ago (I think I only read Frege's contributions), Frege seemed rather unreasonable. It is nice to know I was incorrect.

[Edit: There is a new article on the philosophy of math up on the SEP. It looks like a nice little overview, especially the computation section at the bottom.]

I've discovered that it is difficult to juggle teaching with doing your own work (shocking, I'm sure). I have a paper due in a few days that is eating my free time. I hope to put up a new post either tonight or tomorrow. I keep meaning to write something on a topic other than Wittgenstein or Making It Explicit, but I have done an exceptionally poor job of thinking about other things.

Friday, September 21, 2007

Apparently the first use of schematic letters in the history of logic was in Aristotle's Prior Analytics, book 1, chapter 2. Even though I recently read that chapter several times, it didn't even occur to me that it was the first such usage. I gather Quine was quite fond of schematic letters in his teaching of logic. A tip of the hat to James Allen for pointing this fact out to me.

Sunday, September 16, 2007

I didn't realize how odd the preface of the Tractatus is until last week during the Pitt reading group. In particular, the following: "If this work has a value it consists in two things. First that in it thoughts are expressed, and this value will be the greater the better the thoughts are expressed. … On the other hand the truth of the thoughts communicated here seems to me unassailable and definitive." I think the latter bit, after the ellipsis, usually gets the most focus. The truth of the thoughts is important. But what struck me in a way it hadn't before was the first bit. The work has value insofar as it expresses thoughts? That seems to set the bar low, as it isn't that hard to express a thought. The interesting thing is combining this with 6.54, which says that anyone who has understood the propositions of the book will recognize them as nonsense. This would make it a little harder to take Wittgenstein as expressing thoughts. In the preface he says that he doubts he has done it well. This might be literary self-deprecation, but that seems a bit unlike Wittgenstein.

We should note that Wittgenstein doesn't say what thoughts are, or are supposed to be, expressed in the book, just that some thoughts are. The truth of these thoughts he thinks is clear. Michael Kremer says some interesting things about different senses of truth in his paper on solipsism that I suspect are important for understanding Wittgenstein here. In short, he thinks there is a non-propositional sense of truth. This is not the ineffable sort of truth that some realists ascribe to the TLP. It is more like the sense of truth expressed when people say things like "the truth in beauty" or "the truth in solipsism" (to use Kremer's title). Taking the preface to mean truth in this sense would go some way towards making it consistent with the end of the book. The problem would probably come from the expression of thoughts. These would, it seems, have to be thoughts in a sense distinct from the Tractarian view of them, i.e. as significant propositions. Otherwise, taking a non-propositional view of truth would be a non-starter. There isn't a corresponding idea of thought developed in Kremer's paper. He says some possibly relevant things about solipsism among other "ways of thinking" which might be fleshed out appropriately, but it would result in interpreting the preface in such a way that it resembles the body of the book very little. That might not be a bad thing though. I don't have a well-developed idea here, but I think there is some promise to making sense of the preface along these lines.

(Running a quick search on Kremer's article, it seems that he talks about the preface. However, he doesn't talk about the part I am talking about. He concentrates on the early part of the preface which discusses drawing a limit to thought.)

One of the novel features of MIE is Brandom's philosophy of logic. He calls this the expressive theory of logic. On this view, the primary purpose of logic is to express certain things. It privileges the conditional and negation. The conditional expresses the acceptance of an inference from the premises, which form the antecedent, to the conclusion, which forms the consequent. The conditional lets you say that a certain inference is acceptable. Of course, conditionals in different logical systems express different sorts of acceptance. Classical conditionals express a weak form of acceptance; intuitionistic conditionals a stronger acceptance, in the form of saying there is a general method of transforming justification for the premises into justification for the consequent; and so on. Negation expresses incompatibilities, generally in the presence of a conditional, so as to allow one to say that certain inferences are not kosher. Incompatibility can be used to create an entailment relation defined as inclusion on sets of incompatibles. Brandom suggests taking conjunction and disjunction as set operations on those sets of incompatibilities. This would work for languages with a conditional and negation. If a language lacks negation, then, I suppose, one could still come up with the sets of incompatibilities, although one couldn't say that the incompatibilities were such. Barring defining conjunction and disjunction in terms of incompatibilities, I'm not sure exactly what they would express. Conjunction might express the acceptance of both conjuncts. Intuitionistic disjunction might express the acceptance of, or the ability to demonstrate, one of the disjuncts, where you know which one. I'm not sure what classical disjunction would express exactly; possibly that one accepts one of the disjuncts, although no further information is given about which.
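As a toy illustration of entailment as inclusion on sets of incompatibles (my own sketch, not Brandom's formalism; the propositions and their incompatibility sets are invented for the example):

```python
# Toy model: each proposition is assigned the set of claims
# incompatible with it (invented examples, purely illustrative).
inc = {
    "raining": {"dry", "cloudless"},
    "pouring": {"dry", "cloudless", "drizzling"},
}

def entails(p, q):
    # p entails q iff everything incompatible with q
    # is already incompatible with p.
    return inc[q] <= inc[p]

def inc_and(p, q):
    # Incompatibles of a conjunction: union of the conjuncts' sets.
    return inc[p] | inc[q]

def inc_or(p, q):
    # Incompatibles of a disjunction: intersection of the disjuncts' sets.
    return inc[p] & inc[q]

print(entails("pouring", "raining"))  # True
print(entails("raining", "pouring"))  # False
```

The direction of the inclusion matters: the stronger claim has the larger incompatibility set, so entailment runs from larger set to smaller.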

There is nothing in Ch. 2, where logical expressivism is introduced, about quantifiers. At this point in the book, nothing has been said about objects, so I'm not sure what the quantifiers express since it is likely to be tied in with theoretical claims about objects. Modal operators aren't addressed in MIE, although they are tackled in the Locke Lectures. I don't remember that terribly well, although I think for the most part Brandom sticks to alethic modalities in S4 and S5. I could be wrong on this. I am almost positive he doesn't get to non-normal modal operators (to use Restall's terminology) such as the Kleene star. I currently have no idea what the Kleene star would express. I'm similarly unsure about intensional connectives as in relevant logic's fusion. It might be an interpretation similar to Restall's of application of data in the form of propositions to other data, although this is really speculative. There might be something in this.

Something that I'm a little more immediately curious about is the status of translations of formulas. Certain systems of logic can be translated into others, e.g. intuitionistic logic into S4. The conditional in S4 is classical, but the translation (in Gödel's translation at least) of the intuitionistic A → B looks like □(A′ → B′), where A′ and B′ are the translations of A and B. The classical conditional then will express a weaker acceptance of an inference, but it will be modified in some way by the necessity operator in front. If this view of logic is right, one would expect the translation to preserve what is expressed in some form. I will have to track down the relevant part of the Locke Lectures in order to further test this idea.
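For reference, one common presentation of the translation clauses (writing □ for necessity and A′ for the translation of A; details vary slightly across presentations, so treat this as a sketch):

```latex
p' = \Box p \quad (p \text{ atomic}) \\
(A \land B)' = A' \land B' \\
(A \lor B)' = A' \lor B' \\
(A \to B)' = \Box(A' \to B') \\
(\lnot A)' = \Box\lnot A'
```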

Thursday, September 13, 2007

The comments on the last post were helpful, so I'm going to take another stab at figuring out how implicit norms are supposed to get around the rule-following problems that are supposed to undermine explicit rules. I think I was wrong to say that implicit norms are too much like explicit rules. Looking back at Ch. 1, a difference emerges. Implicit norms are supposed to be exemplifications of a practical ability, of applying practical rules and standards. They are a form of know-how. Explicit rules are linguistic, propositional things. They are a form of know-that.

Brandom denies that know-how is reducible to know-that. Instead, I think he holds the converse: know-that is reducible to, or at least depends on, know-how. Consequently, I'm doubtful that it is correct to say that implicit norms can be made explicit without remainder. The reason is that there is a change in kind, from know-how to know-that. Since implicit norms are exemplified in the normative attitudes held, and sanctions performed, by the critters in question, there is not the threat of a regress developing. This is because there is nowhere for the interpretive regress to get started.

This is, at least, the start of the answer. It is much like the previously suggested Kantian strategy of using the faculty of judgment. Something different in kind than the explicit rules is brought in to ground the explicit rules and prevent the regress. More details need to be supplied, but I think that is roughly how the start of the story goes. The rest of Ch. 1 supplies some of the details. Brandom leans on the idea of sanctions quite a bit and more needs to be said about them. They are important and in some cases non-normative, but I don't have much to say about them at this point.

Wednesday, September 12, 2007

Chapter 1 of Making It Explicit (MIE) contains a long discussion of the rule-following considerations in the Philosophical Investigations. There is one response to the rule-following considerations that generates a regress, namely taking rules to be explicit things. This is regulism. The regress comes from interpreting the rules: each interpretation sets up an explicit rule to be followed, which in turn can be, and requires being, interpreted. The next option, regularism, takes actual patterns of behavior and finds a regularity in them. Unfortunately, actual behavior won't nail down a unique set of behaviors that constitutes a rule, since you can gerrymander all sorts of crazy sets of behavior. Regularism is also nonnormative, since the "rules" it yields are just patterns of behavior in a descriptive sense. There is no prescribing. Regulism at least is normative. I took these considerations to be what motivates the introduction of the idea of norms implicit in practice, which are a key feature of the rest of MIE. The implicit norms seem to emerge from the wreckage of the explicit norms on the rocks of rule-following.

The question is: how do implicit norms meet the rule-following challenge? They aren't explicit, but they are capable of being made explicit. There isn't quite enough of a difference in kind there to put the issue to rest. They seem to be sort of mysterious things that are quite like explicit rules, since they can be made explicit without remainder. Brandom mentions Kant's suggestion of the faculty of judgment to end the regress, but that is a non-starter without a story about why the faculty of judgment would not flounder on the same problem. Another possibility is that one can make some implicit norms explicit, but not all at any one time. The implicit ones then ground the explicit ones somehow. One might think this relies on a sort of superficial feature about us: we can't do an infinite number of things, which presumably is what would be required to make all our rules explicit. This is a meager rod on which to rest the weight of MIE. Alternately, one might think that it is like the Tortoise and Achilles in that we can make some instances of our inference rules into formulas, but not all on pain of not being able to infer anything. I'm not sure if that is satisfying either.

After discussing this with some people, I've managed to become completely confused about the structure of the first chapter. How do the early considerations of rule-following motivate implicit norms if those don't get around rule-following problems? If they do, how do they? Is there another argument to more directly justify or motivate implicit norms? My understanding of the first chapter depended on implicit norms fitting into the dialectic as sketched above, so I clearly must rethink things.

Monday, September 10, 2007

Just a short note directing readers to some recent comments on things in the archive that would likely be missed. There is a new comment on "Why do semantics?" and one on "Dummett and Davidson on translation", both by Brad. Interesting reading for those who are interested in Davidson. New content from the author to follow soon, I promise.

Sunday, September 09, 2007

The theorem that isomorphic structures agree on the truth values of all sentences is one of those theorems that cuts across the usual logical boundaries. It is true for first-order logic, second-order logic, and higher-order logics. It is also true in infinitary logics. Certain monotonicity properties of quantifiers are like this as well, as pointed out by van Benthem. The medieval logicians knew about the latter; they probably were not aware of the former. Is there an "order-free" framework in which to state and prove these theorems? Is category theory capable of that? I'm curious, since this makes two theorems whose generality modern logical theorizing blinds us to.
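A tiny finite instance of the theorem can even be checked mechanically (my own illustration, using the sentence ∃x∀y R(x,y) on a made-up three-element structure):

```python
# Two isomorphic finite structures agree on the sentence
# "there is an x related to every y" (illustrative example).
def sentence_holds(domain, R):
    # Evaluates ∃x∀y R(x, y) by brute force over the finite domain.
    return any(all((x, y) in R for y in domain) for x in domain)

A_dom = {0, 1, 2}
A_R = {(0, 0), (0, 1), (0, 2)}  # 0 is related to everything

iso = {0: "a", 1: "b", 2: "c"}  # a bijection between the domains
B_dom = set(iso.values())
B_R = {(iso[x], iso[y]) for (x, y) in A_R}  # transport R along iso

# The isomorphic copy agrees on the truth value of the sentence.
print(sentence_holds(A_dom, A_R), sentence_holds(B_dom, B_R))  # True True
```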

Tuesday, September 04, 2007

I'm rather pleased to offer my readers a link to the freshly unveiled recordings of Bob Brandom's Locke Lectures, "Between Saying and Doing," as presented in Prague in April 2007. The original lectures are available here, including an updated technical appendix to lecture 5. The program is available here. There was quite a lineup of commentators. The real treat is the audio of the lectures, comments, replies, and Q&A. The Locke Lectures didn't seem to get much attention online for some reason, so it is nice that they might get a bit more with these resources going up. I haven't gotten to listen to the comments yet, but I've heard that they were insightful. The audio is being hosted at the U. of Chicago and might be available via their podcast service in the near future. Thanks to Jason Voigt for the links.

Monday, September 03, 2007

Today is Labor Day in the US, which means a day off from work. I'm spending the day reading and going to a barbecue. One of the things I'm reading is Brandom's Making It Explicit. I just came across one of my favorite lines from the book: "[O]ntological self-indulgence is a comparatively harmless vice." Delightful.