Thursday, August 30, 2007

Since I seem to like finding out about how other people work and think, I was delighted to come across Henri Poincare's description of how discoveries are made in mathematics in a large number of cases. He says it is a three-stage process, which he illustrates with some of his own discoveries. The first stage is to work on a problem, attempting proofs and playing with definitions. This might not result in anything concrete, but it will get your mind working. The next step is to take a break and do something unrelated. Often, he says, the subconscious will continue working on the problem and you will get a flash of insight. This flash is, apparently, often correct. The final step is to sit down and check the insight to make sure it works and to draw out a few consequences while it is still fresh and active in your mind. Something like this was also described by George Polya in his How to Solve It, though Polya didn't endorse Poincare's crazy analogy for why it works. Poincare described the mind as being like a great chamber full of Lucretius's hooked atoms. The atoms are ideas and are stuck to the wall of the chamber. The initial phase of work shakes some free to float around and crash into each other. When two ideas fit together (hook each other), they produce insight. Further work verifies this and pulls more atoms from the wall.

Somewhere I read that Hilbert had a garden near his office. He set up a blackboard near the garden and would garden for a while, then walk over to write things on the board, then go back to gardening. He would mix this up by riding around on a bike occasionally. That sounds like he was engaging in a cycle like the one Poincare described. I wonder if there is anything more than scattered anecdotal evidence that this is a way to produce insight.

I think something like this works for me. Working hard on a problem for a class but not solving it before going to bed tends to result in figuring it out over breakfast or lunch. (Although occasionally it results in bad dreams. Raise your hand if you've ever dreamed about modal logic.) The question, I suppose, is: does insight come according to a pattern like the above in all disciplines, or are there some where insight only really comes by slogging along at something for a long, long time?

I just submitted my odd little Wittgenstein paper on the nature of language and logical truth in the Tractatus. That means I have completed the last outstanding class from my spring term and my first year. Woo! After I celebrate I can get used to this idea of being in class again.

Monday, August 27, 2007

I saw a book on a professor's shelf today whose spine said "Ideal Code, Real World". For a brief second I thought, "Is that about programming or ethics?" Then I realized I was in the office of an ethicist. I just found out that the complete title (not appearing on the spine) is: Ideal Code, Real World: A Rule-Consequentialist Theory of Morality. Alas.

Sunday, August 26, 2007

Tomorrow begins the new fall term. I'm hoping to have my Wittgenstein paper finished up by the end of the first week. My class lineup looks pretty good. I'll be taking three classes, trying to audit one, and TAing for one. The three I'm taking are: Aristotle's logic and argumentation, model theory, and the philosophy of science core. The Aristotle class is taught by James Allen, and a previous iteration of that class was the impetus for John MacFarlane's dissertation. Model theory is taught by Ken Manders. I know very little about model theory, so it will be good to fill in this gap some. I think the class is focused on definability and interpolation. The phil of science core is taught by Gordon Belot, and I don't know much about it yet. I'm going to try to sit in on Bob Brandom's seminar on his Making It Explicit. I expect that will be good after the Articulating Reasons reading group this summer. I'm TAing for Kieran Setiya's intro to ethics class. Kieran is a good teacher, so that should, hopefully, go well.

Monday, August 20, 2007

This is not news by any means, but there are a few good entries in the Stanford Encyclopedia that are worth bringing to everyone's attention. The first is the algebra entry. It is written by Vaughan Pratt. Both Restall's Introduction to Substructural Logics and Dunn and Hardegree's Algebraic Methods in Philosophical Logic make it clear that there are important links between algebra and various areas of logic. Both books are also very well written, although Restall's has an unfortunate number of typos. There is also an entire entry on the mathematics of boolean algebra, which at first blush seems to be the least well written of the three. Finally, there is an article on category theory, which I gather isn't that popular as a subject except at Carnegie Mellon and among some physicists. Eventually they will teach it again at CMU and I will take it; then I will be able to comment on the SEP article. It looks good to my untutored eye though.

Sunday, August 19, 2007

I seem to be having more trouble writing up posts while on vacation than I originally expected. Why I expected it to be easy to write posts on vacation is anybody's guess.

I wanted to write up a few things about Mark Wilson's masterful Wandering Significance, but I'm very slow at figuring out how to integrate it with the ideas that I think it will fit best. One aspect of the book that I found intriguing is Mark's insistence on the importance of concrete application and example. Logic and philosophy of language tend to like abstraction, but there are parts of me that much prefer the concrete as well. One of the problems with abundant abstraction that Mark points to is something he calls the Theory T syndrome. He doesn't seem to define it anywhere that I could find, but he does describe it and give some examples. It seems to be the often-seen and overused move of considering only an abstract or schematized notion instead of a concrete instance. This in itself is not bad, and Mark describes many instances of abstraction, schematization, axiomatization, etc. that have proved fruitful. What the syndrome consists in is the move to a schema or abstraction that excessively rarefies the notion in question. Considering only the abstractions provides too much opportunity to miss important details or to just abstract them away. For example, when one considers an arbitrary theory T, one usually takes this on the logical model of a set of sentences closed under some form of logical consequence. This means that T is an infinite collection of sentences. One of Mark's complaints is that many things that get brought under this rubric are not fruitfully viewed as collections of sentences with that much structure. Rather, they present themselves as, in his words, intriguing proposals or guides about what to do. I think there is an example closer to home than any physical theory that he talks about. This is Russell's theory of descriptions. This theory isn't viewed as a set of sentences closed under consequence. It is, in various forms, directions about what to do when one encounters a particular string of words.
There isn't a first-order theory of descriptions T that we compare to an equivalent theory T'. I think Mark puts his point entertainingly and well, so I will quote him at length (this also makes it look like I wrote more; theft 1, honest toil 0):
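To make the "directions about what to do" character vivid: Russell's analysis, in its familiar contextual form (using Peano's iota notation for "the x such that F(x)"), tells you how to rewrite any sentence in which a description occurs, rather than assigning the description a denotation of its own:

```latex
% Russell's contextual elimination of a definite description:
% "The F is G" is analyzed away as a single quantified sentence,
% asserting existence, uniqueness, and the predication.
G(\iota x\, F(x)) \;\equiv\; \exists x \bigl( F(x) \wedge \forall y \,( F(y) \rightarrow y = x ) \wedge G(x) \bigr)
```

What gets stated is a transformation rule on strings of words; nothing like a deductively closed, infinite set of sentences is in play.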

"While on this topic, there is a related misconception that merits deflationary comment. In rendering "theories" into schematic Ts and T's, our syndrome puffs the humble word "theory" into something quite grand, without it being exactly clear in what its grandeur consists (it reminds me of the log that was mistaken for a god in Aesop). Mild-mannered "theory," in its vernacular and scientific employments, often connotes little more than "an intriguing proposal," but it serves us well in that lowly capacity. For example, a "mean field theory" in solid state physics represents a suggestion as to how key quantities in the subject might be profitably approximated—that is, the "theory" properly qualifies as a mathematical guess that anticipates that the values of relevant physical variables will stay fairly closely to certain easy-to-calculate patterns. Such guesswork presently "belongs to physics" only because mathematicians haven’t been able to verify, by their own stricter standards of proof, that the technique actually works (a quite large portion of so-called "physical theorizing" partakes of this "mathematical guess" status). When we prattle philosophically about "theory," however, we commonly imagine that it represents some utterly freewheeling set of doctrines dreamed up by the creativity of man and is then submitted to verification or rejection at the hands of Nature. But this picture can be quite misleading. We don’t normally consider the response "about 10,000" to the question "what is 328 times 316?" qualifies as a theory, but the logical status of what are frequently called "theories" in real life physics is approximately that. To be sure, the employment of mean field averaging does represent an "intriguing proposal" and that is why we call it a "theory.""

Saturday, August 11, 2007

I'm reading Poincare's Science and Method as part of my philosophy of science mini-kick. Poincare is fun to read. I'm not sure what exactly I think of his views on logic, but I appreciate the Kantian line he's pushing. I hope to write up something on his work, but I think I have a few other posts on the backburner. In the interim, to try to make up for my lack of posting last week, here is a quote from Poincare on Hilbert's view of math and logic from his "Mathematics and Logic":

"Thus, [according to Hilbert's view] it will be readily understood that, in order to demonstrate a theorem, it is not necessary or even useful to know what it means. We might replace geometry by the reasoning piano imagined by Stanley Jevons; or, if we prefer, we might imagine a machine where we should put in axioms at one end and take out theorems at the other, like that legendary machine in Chicago where pigs go in alive and come out transformed into hams and sausages. It is no more necessary for the mathematician than it is for these machines to know what he is doing."

Axioms in one end, theorems out the other. What would Poincare think of automated theorem proving?
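Just for fun, here is a minimal sketch of the sausage machine Poincare imagines: axioms go in one end, theorems come out the other, and the machine has no idea what any of it means. This is my own toy illustration (not any historical system), doing forward chaining over propositional Horn clauses by repeated modus ponens.

```python
# A toy "axioms in one end, theorems out the other" machine:
# forward chaining over propositional Horn rules by modus ponens.
# Atoms are strings; a rule is a (set-of-premises, conclusion) pair.

def close_under_modus_ponens(axioms, rules):
    """Return every atom derivable from the axioms via the rules.

    Repeatedly fires any rule whose premises are all already derived,
    until nothing new can be added (a fixed point is reached).
    """
    derived = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and premises <= derived:
                derived.add(conclusion)
                changed = True
    return derived

# Feed in axioms, take out theorems:
axioms = {"p"}
rules = [({"p"}, "q"), ({"p", "q"}, "r"), ({"s"}, "t")]
print(sorted(close_under_modus_ponens(axioms, rules)))  # ['p', 'q', 'r']
```

The machine never "knows what it is doing": it just matches symbols. Note that "t" never comes out, since "s" was never put in.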

Thursday, August 02, 2007

I don't expect that this idea is original to me, but I was thinking today about Hacking's view of logic in his "What is logic?" There are some conditions that sequent calculus introduction rules must satisfy in order to be properly logical, including being conservative over the base language. There are also some conditions that the deducibility relation must satisfy, including admitting cut elimination and elimination of the structural rules. In particular, the system must admit of dilution elimination. This is why modal logic doesn't count as logic for Hacking. It (S4 in particular, since there are lots of modal logics) doesn't admit of general dilution (weakening) elimination. Hacking sees this as a feature, not a bug. What I am now wondering about are substructural logics, like relevance logic. Relevance logic (if I am remembering this rightly) doesn't have weakening in it to begin with, so one can't prove an elimination theorem for it. Requiring the deducibility relation to admit weakening seems to put a strong constraint on what counts as logic at the outset. What is so special about weakening that it gets picked out? The other rules that get singled out as important are cut and the basic axiom or identity rule. It is hard to argue with those. Granted, Hacking says those are sufficient conditions and makes no effort to give necessary ones, but I am puzzled all the same.
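For reference, the rule at issue looks like this in sequent form (standard notation, not Hacking's own typography): weakening, or dilution, lets you pad a derivable sequent with arbitrary extra formulas on either side.

```latex
% Weakening (dilution): from a derivable sequent, add side formulas freely.
\frac{\Gamma \vdash \Delta}{\Gamma, A \vdash \Delta}
\qquad
\frac{\Gamma \vdash \Delta}{\Gamma \vdash \Delta, A}
```

This is exactly what relevance logicians reject, since from the axiom $A \vdash A$ it yields $A, B \vdash A$, letting a premise sit idle in a proof. So a criterion for logicality that builds in admissibility of weakening rules out relevance logic from the start.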

Wednesday, August 01, 2007

The title contains a variation on what is, I think, one of my favorite phrases. Thank you, Bertrand Russell (it is enough to excuse you for the history of philosophy book you wrote). On to the post...

The collection in honor of John Perry, Situating Semantics, is now out. Here is the contributor list: Robert Audi, Kent Bach, Patricia Blanchette, Herman Cappelen, Eros Corazza, Ernie Lepore, Brian Loar, Peter Ludlow, Genoveva Marti, Michael McKinsey, Stephen Neale, Michael O'Rourke, John Perry, Francois Recanati, Cara Spencer, Kenneth A. Taylor, and Corey Washington. I haven't looked at it in depth, but there are several articles on unarticulated constituents, which cover ground in the semantics/pragmatics debate, including a delightful but excessive article by Stephen Neale (although I must admit, I enjoyed the stories about John from Neale's grad student days). As per usual, there is a reply by Perry at the end. It is one long essay replying to everyone. It makes clear why certain theoretical moves were made, such as the focus on utterances and their relation to information and action. It looks pretty slick overall. Also, in the picture on the cover John looks like he's trying not to make a wisecrack about something happening to his left.