What I'm saying is that Less Wrong shouldn't ignore mainstream philosophy.

What I demonstrated above is that, directly or indirectly, Less Wrong has already drawn heavily from mainstream philosophy. It would be odd to suggest that the progress in mainstream philosophy that Less Wrong has already made use of would suddenly stop, justifying a choice to ignore mainstream philosophy in the future.

As for naturalistic philosophy's insights relevant to LW, they are forthcoming. I'll be writing some more philosophical posts in the future.

And actually, my statistical prediction rules post came mostly from me reading a philosophy book (Epistemology and the Psychology of Human Judgment), not from reading psychology books.

I'll await your next post, but in retrospect you should have started with the big concrete example of mainstream philosophy doing an LW-style dissolution-to-algorithm not already covered on LW, and then told us that the moral was that we shouldn't ignore mainstream philosophy.

I did the whole sequence on QM to make the final point that people shouldn't trust physicists to get elementary Bayesian problems right. I didn't just walk in and tell them that physicists were untrustworthy.

If you want to make a point about medicine, you start by showing people a Bayesian problem that doctors get wrong; you don't start by telling them that doctors are untrustworthy.

If you want me to believe that philosophy isn't a terribly sick field, devoted to arguing instead of facing real-world tests and admiring problems instead of solving them and moving on, whose poison a novice should avoid in favor of eating healthy fields like settled physics (not string theory) or mainstream AI (not AGI), you're probably better off starting with the specific example first. "I disagree with your decision not to cover terminal vs. instrumental in CEV" doesn't cover it, and neither does "Quineans agree the world is made of atoms". Show me this field's power!

When I wrote the post I didn't know that what you meant by "reductionist-grade naturalistic cognitive philosophy" was only the very narrow thing of dissolving philosophical problems to cognitive algorithms. After all, most of the useful philosophy you've done on Less Wrong is not specifically related to that very particular thing... which again supports my point that mainstream philosophy has more to offer than dissolution-to-algorithm. (Unless you think most of your philosophical writing on Less Wrong is useless.)

Also, I don't disagree with your decision not to cover means and ends in CEV.

Tarski on language and truth. One of Tarski's papers on truth was recently ranked the 4th most important philosophy paper of the 20th century in a survey of philosophers. Philosophers have developed Tarski's account considerably since then, of course.

Chalmers' formalization of Good's intelligence explosion argument. Good's 1965 paper was important, but it presented no systematic argument; only hand-waving. Chalmers breaks down Good's argument into parts and examines the plausibility of each part in turn, considers the plausibility of various defeaters and possible paths, and makes a more organized and compelling case for Good's intelligence explosion than anybody at SIAI has.

Bratman on intention. Bratman's 1987 book on intention has been a major inspiration to AI researchers working on belief-desire-intention models of intelligent behavior. See, for example, pages 60-61 and 1041 of AIMA (3rd ed.).

Functionalism and multiple realizability. The philosophy of mind most natural to AI was introduced and developed by Putnam and Lewis in the 1960s, and more recently by Dennett.

Explaining the cognitive processes that generate our intuitions. Both Shafir (1998) and Talbot (2009) summarize and discuss what cognitive scientists know about the cognitive mechanisms that produce our intuitions, and use that data to explore which few intuitions can be trusted and which cannot - a conclusion that of course dissolves many philosophical problems generated from conflicts between intuitions. (This is the post I'm drafting, BTW.) Talbot describes the project of his philosophy dissertation for USC this way: "...where psychological research indicates that certain intuitions are likely to be inaccurate, or that whole categories of intuitions are not good evidence, this will overall benefit philosophy. This has the potential to resolve some problems due to conflicting intuitions, since some of the conflicting intuitions may be shown to be unreliable and not to be taken seriously; it also has the potential to free some domains of philosophy from the burden of having to conform to our intuitions, a burden that has been too heavy to bear in many cases..." Sound familiar?

Pearl on causality. You acknowledge the breakthrough. While you're right that this is mostly a case of an AI researcher coming in from the outside to solve philosophical problems, Pearl did indeed make use of the existing research in mainstream philosophy (and AI, and statistics) in his book on causality.

Drescher's Good and Real. You've praised this book as well, which is the result of Drescher's studies under Dan Dennett at Tufts. And the final chapter is a formal defense of something like Kant's categorical imperative.

Dennett's "intentional stance." A useful concept in many contexts, for example here.

Bostrom on anthropic reasoning. And global catastrophic risks. And Pascal's mugging. And the doomsday argument. And the simulation argument.

Deontic logic. The logic of actions that are permissible, forbidden, obligatory, etc. Not your approach to FAI, but will be useful in constraining the behavior of partially autonomous machines prior to superintelligence, for example in the world's first battlefield robots.
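As a concrete illustration of the kind of formalism involved, here is a minimal sketch of Standard Deontic Logic's possible-worlds semantics. The worlds, the accessibility relation, and the battlefield-robot propositions are all invented for the example; SDL itself only requires that each world see some set of "deontically ideal" alternatives:

```python
# Toy Standard Deontic Logic (SDL) semantics. All names below are
# illustrative assumptions, not from any particular paper.

# Which propositions hold at each world.
valuation = {
    "w0": {"robot_fires"},                       # the actual world
    "w1": {"robot_holds_fire"},                  # an ideal alternative
    "w2": {"robot_holds_fire", "robot_reports"}, # another ideal alternative
}

# Accessibility: from each world, its deontically ideal worlds.
ideal = {
    "w0": {"w1", "w2"},
    "w1": {"w1"},
    "w2": {"w2"},
}

def holds(p, w):
    return p in valuation[w]

def obligatory(p, w):
    # O(p): p holds in every ideal alternative of w.
    return all(holds(p, v) for v in ideal[w])

def permissible(p, w):
    # P(p) = not O(not p): p holds in some ideal alternative of w.
    return any(holds(p, v) for v in ideal[w])

def forbidden(p, w):
    # F(p) = O(not p): p fails in every ideal alternative of w.
    return all(not holds(p, v) for v in ideal[w])

print(obligatory("robot_holds_fire", "w0"))  # True
print(forbidden("robot_fires", "w0"))        # True
print(permissible("robot_reports", "w0"))    # True
```

The interdefinability of the three operators is the whole trick: one box-style operator over ideal worlds gives you obligation, permission, and prohibition for free.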

Reflective equilibrium. Reflective equilibrium is used in CEV. It was first articulated by Goodman (1965), then by Rawls (1971), and in more detail by Daniels (1996). See also the more computational discussion in Thagard (1988), ch. 7.

Experimental philosophy on the biases that infect our moral judgments. Experimental philosophers are now doing Kahneman & Tversky -ish work specific to biases that infect our moral judgments. Knobe, Nichols, Haidt, etc. See an overview in Experiments in Ethics.

Greene's work on moral judgment. Joshua Greene is a philosopher and neuroscientist at Harvard whose work using brain scanners and trolley problems (since 2001) is quite literally decoding the algorithms we use to arrive at moral judgments, and helping to dissolve the debate between deontologists and utilitarians (in his view, in favor of utilitarianism).

Dennett's Freedom Evolves. The entire book is devoted to explaining the evolutionary processes that produced the cognitive algorithms that produce the experience of free will and the actual kind of free will we do have.

Quinean naturalists showing intuitionist philosophers that they are full of shit. See for example, Schwitzgebel and Cushman demonstrating experimentally that moral philosophers have no special expertise in avoiding known biases. This is the kind of thing that brings people around to accepting those very basic starting points of Quinean naturalism as a first step toward doing useful work in philosophy.

Bishop & Trout on ameliorative psychology. Much of Less Wrong's writing is about how to use our awareness of cognitive biases to make better decisions and have a higher proportion of beliefs that are true. That is the exact subject of Bishop & Trout (2004), which they call "ameliorative psychology." The book reads like a long sequence of Less Wrong posts, and was the main source of my post on statistical prediction rules, which many people found valuable. And it came about two years before the first Eliezer post on Overcoming Bias. If you think that isn't useful stuff coming from mainstream philosophy, then you're saying a huge chunk of Less Wrong isn't useful.
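The core SPR result Bishop & Trout lean on - Dawes's "improper" unit-weighted models predicting almost as well as optimally weighted regression - can be sketched in a few lines. The data and weights below are synthetic and purely illustrative:

```python
# Illustration of Dawes-style improper linear models: a unit-weighted
# sum of predictors tracks the criterion nearly as well as the "proper"
# weights. Synthetic data; numbers are not from Bishop & Trout.
import random
import statistics

random.seed(0)

def corr(xs, ys):
    """Pearson correlation (population form)."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

n = 2000
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
x3 = [random.gauss(0, 1) for _ in range(n)]
# True criterion: unequal "proper" weights plus noise.
y = [1.0 * a + 0.7 * b + 0.4 * c + random.gauss(0, 1)
     for a, b, c in zip(x1, x2, x3)]

proper = [1.0 * a + 0.7 * b + 0.4 * c for a, b, c in zip(x1, x2, x3)]
unit = [a + b + c for a, b, c in zip(x1, x2, x3)]  # improper: all weights 1

print(round(corr(proper, y), 2))  # validity of the optimal weights
print(round(corr(unit, y), 2))    # only slightly lower with unit weights
```

The gap between the two correlations stays small across a wide range of setups, which is Dawes's point: you don't need the optimal weights, just the right variables and the right signs.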

Talbot on intuitionism about consciousness. Talbot (here) argues that intuitionist arguments about consciousness are illegitimate because of the cognitive process that produces them: "Recently, a number of philosophers have turned to folk intuitions about mental states for data about whether or not humans have qualia or phenomenal consciousness. [But] this is inappropriate. Folk judgments studied by these researchers are most likely generated by a certain cognitive system - System One - that will ignore qualia when making these judgments, even if qualia exist."

"The mechanism behind Gettier intuitions." This upcoming project of the Boulder philosophy department aims to unravel a central (misguided) topic of 20th century epistemology by examining the cognitive mechanisms that produce the debate. Dissolution to algorithm yet again. They have other similar projects ongoing, too.

Computational meta-ethics. I don't know if Lokhorst's paper in particular is useful to you, but I suspect that kind of thing will be, and Lokhorst's paper is only the beginning. Lokhorst is trying to implement a meta-ethical system computationally, and then actually testing what the results are.

Note that useful insights come from unexpected places. Rawls was not a Quinean naturalist, but his concept of reflective equilibrium plays a central role in your plan for Friendly AI to save the world.

P.S. Predicate logic was removed from the original list for these reasons.

Quine's naturalized epistemology. Epistemology is a branch of cognitive science.

Saying this may count as staking an exciting position in philosophy, already right there; but merely saying this doesn't shape my expectations about how people think, or tell me how to build an AI, or how to expect or do anything concrete that I couldn't do before, so from an LW perspective this isn't yet a move on the gameboard. At best it introduces a move on the gameboard.

Tarski on language and truth.

I know Tarski as a mathematician and have acknowledged my debt to him as a mathematician. Perhaps you can learn about him in philosophy, but that doesn't imply people should study philosophy if they will also run into Tarski by doing mathematics.

Chalmers' formalization of Good's intelligence explosion argument...

...was great for introducing mainstream academia to Good, but if you compare it to http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate then you'll see that most of the issues raised didn't fit into Chalmers's decomposition at all. Not suggesting that he should've done it differently in a first paper, but still, Chalmers's formalization doesn't yet represent most of the debates that have been done in this community. It's more an illustration of how far you have to simplify things down for the sake of getting published in the mainstream, than an argument that you ought to be learning this sort of thing from the mainstream.

Dennett on belief in belief.

Acknowledged and credited. Like Drescher, Dennett is one of the known exceptions.

Bratman on intention. Bratman's 1987 book on intention has been a major inspiration to AI researchers working on belief-desire-intention models of intelligent behavior...

Appears as a citation only in AIMA 2nd edition, described as a philosopher who approves of GOFAI: "Not all philosophers are critical of GOFAI, however; some are, in fact, ardent advocates and even practitioners... Michael Bratman has applied his "belief-desire-intention" model of human psychology (Bratman, 1987) to AI research on planning (Bratman, 1992)." This is the only mention in the 2nd edition. Perhaps by the time they wrote the third edition they had read more Bratman and figured that he could be used to describe work they had already done? Not exactly a "major inspiration", if so...

Functionalism and multiple realizability.

This comes under the heading of "things that rather a lot of computer programmers, though not all of them, can see as immediately obvious even if philosophers argue it afterward". I really don't think that computer programmers would be at a loss to understand that different systems can implement the same algorithm if not for Putnam and Lewis.

Explaining the cognitive processes that generate our intuitions... Talbot describes the project of his philosophy dissertation for USC this way: "...where psychological research indicates that certain intuitions are likely to be inaccurate, or that whole categories of intuitions are not good evidence, this will overall benefit philosophy."...

Same comment as for Quine: This might introduce interesting work, but while saying just this may count as an exciting philosophical position, it's not a move on the LW gameboard until you get to specifics. Then it's not a very impressive move unless it involves doing nonobvious reductionism, not just "Bias X might make philosophers want to believe in position Y". You're not being held to a special standard here, Luke; a friend named Kip Werking once did some work arguing that we have lots of cognitive biases pushing us to believe in libertarian free will, which I thought made a nice illustration of the difference between LW-style decomposition of a cognitive algorithm and treating biases as an argument in the war of surface intuitions.

Pearl on causality.

Mathematician and AI researcher. He may have mentioned the philosophical literature in his book. It's what academics do. He may even have read the philosophers before he worked out the answer for himself. He may even have found that reading philosophers getting it wrong helped spur him to think about the problem and deduce the right answer by contrast - I've done some of that over the course of my career, though more in the early phases than the later phases. Can you really describe Pearl's work as "building" on philosophy, when IIRC, most of the philosophers were claiming at this point that causality was a mere illusion of correlation? Has Pearl named a previous philosopher, who was not a mathematician, who Pearl thought was getting it right?

Drescher's Good and Real.

Previously named by me as good philosophy, as done by an AI researcher coming in from outside for some odd reason. Not exactly a good sign for philosophy when you think about it.

Dennett's "intentional stance."

For a change I actually did read about this before forming my own AI theories. I can't recall ever actually using it, though. It's for helping people who are confused in a way that I wasn't confused to begin with. Dennett is in any case a widely known and named exception.

Bostrom on anthropic reasoning. And global catastrophic risks. And Pascal's mugging. And the doomsday argument. And the simulation argument.

A friend and colleague who was part of the transhumanist community and a founder of the World Transhumanist Association long before he was the Director of the Oxford Future of Humanity Institute, and who's done a great deal to precisionize transhumanist ideas about global catastrophic risks and inform academia about them, as well as excellent original work on anthropic reasoning and the simulation argument. Bostrom is familiar with Less Wrong and has even tried to bring some of the work done here into mainstream academia, such as Pascal's Mugging, which was invented right here on Less Wrong by none other than yours truly - although of course, owing to the constraints of academia and their prior unfamiliarity with elementary probability theory and decision theory, Bostrom was unable to convey the most exciting part of Pascal's Mugging in his academic writeup, namely the idea that Solomonoff-induction-style reasoning will explode the size of remote possibilities much faster than their Kolmogorov complexity diminishes their probability.
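That "exploding faster" claim is easy to see with back-of-the-envelope arithmetic. The sketch below is my own illustration, not Bostrom's or anyone's published formulation: the crude prior of 2^-(8 bits per character of the mugger's claim) and the specific payoff schedule are invented for the example. The point is that a claim of 10^(10^k) lives costs only a few extra characters as k grows, while the payoff it names grows double-exponentially, so the log of (prior x payoff) blows up:

```python
# Back-of-the-envelope Pascal's Mugging arithmetic (illustrative only).
# Prior penalty grows linearly in the claim's description length;
# the claimed payoff grows double-exponentially in k.
import math

for k in range(1, 7):
    description = f"I will create 10^(10^{k}) happy lives"
    # Crude complexity prior: 2^-(8 bits per character), in log10 units.
    prior_log10 = -8 * len(description) * math.log10(2)
    payoff_log10 = 10 ** k  # log10 of the claimed 10^(10^k) lives
    ev_log10 = prior_log10 + payoff_log10
    print(k, round(ev_log10, 1))
```

By k = 2 the (log) expected value is already positive, and it keeps exploding; no honest complexity-based discount keeps pace with how cheaply big numbers can be named.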

Reading Bostrom is a triumph of the rule "Read the most famous transhumanists" not "Read the most famous philosophers".

The doomsday argument, which was not invented by Bostrom, is a rare case of genuinely interesting work done in mainstream philosophy - anthropic issues are genuinely not obvious, genuinely worth arguing about and philosophers have done genuinely interesting work on it. Similarly, although LW has gotten further, there has been genuinely interesting work in philosophy on the genuinely interesting problems of Newcomblike dilemmas. There are people in the field who can do good work on the rather rare occasions when there is something worth arguing about that is still classed as "philosophy" rather than as a separate science, although they cannot actually solve those problems (as very clearly illustrated by the Newcomblike case) and the field as a whole is not capable of distinguishing good work from bad work on even the genuinely interesting subjects.

Ord on risks with low probabilities and high stakes.

Argued it on Less Wrong before he wrote the mainstream paper. The LW discussion got further, IMO. (And AFAIK, since I don't know if there was any academic debate or if the paper just dropped into the void.)

Deontic logic

Is not useful for anything in real life / AI. This is instantly obvious to any sufficiently competent AI researcher. See e.g. http://norvig.com/design-patterns/img070.htm, a mention that turned up in passing back when I was doing my own search for prior work on Friendly AI.

...I'll stop there, but do want to note, even if it's out-of-order, that the work you glowingly cite on statistical prediction rules is familiar to me from having read the famous edited volume "Judgment Under Uncertainty: Heuristics and Biases" where it appears as a lovely chapter by Robyn Dawes on "The robust beauty of improper linear models", which quite stuck in my mind (citation from memory). You may have learned about this from philosophy, and I can see how you would credit that as a use of reading philosophy, but it's not work done in philosophy and, well, I didn't learn about it there so this particular citation feels a bit odd to me.

when IIRC, most of the philosophers were claiming at this point that causality was a mere illusion of correlation?

That this isn't at all the case should be obvious even if the only thing you've read on the subject is Pearl's book. The entire counterfactual approach is due to Lewis and Stalnaker. Salmon's theory isn't about correlation either. Also, see James Woodward who has done very similar work to Pearl but from a philosophy department. Pearl cites all of them if I recall.

I know Tarski as a mathematician and have acknowledged my debt to him as a mathematician.

As I pointed out before, the same is true for me of Quine. I don't know if lukeprog means to include Mathematical Logic when he keeps saying not to read Quine, but that book was effectively my introduction to the subject, and I still hold it in high regard. It's an elegant system with some important innovations, and features a particularly nice treatment of Gödel's incompleteness theorem (one of his main objectives in writing the book). I don't know if it's the best book on mathematical logic there is (I doubt it), but it appeals to a certain kind of personality, and I would certainly recommend it to a young high-schooler over reading Principia Mathematica, for example.

Tarski: But I thought you said you were not only influenced by Tarski's mathematics but also his philosophical work on truth?

Chalmers' paper: Yeah, it's mostly useful as an overview. I should have clarified that I meant that Chalmers' paper makes a more organized and compelling case for Good's intelligence explosion than anybody at SIAI has in one place. Obviously, your work (and your debate with Robin) goes far beyond Chalmers' introductory paper, but it's scattered all over the place and takes a lot of reading to track down and understand.

And this would be the main reason to learn something from the mainstream: If it takes way less time than tracking down the same arguments and answers through hundreds of Less Wrong posts and other articles, and does a better job of pointing you to other discussions of the relevant ideas.

Talbot: I guess I'll have to read more about what you mean by dissolution to cognitive algorithm. I thought the point was that even if you can solve the problem, there's that lingering wonder about why people believe in free will, and once you explain why it is that humans believe in free will, not even a hint of the problem remains. The difference being that your dissolution of free will to cognitive algorithm didn't (as I recall) cite any of the relevant science, whereas Talbot's (and others') dissolutions to cognitive algorithms do cite the relevant science.

Is there somewhere where you explain the difference between what Talbot, and also Kip Werking, have done versus what you think is so special and important about LW-style philosophy?

As for the others: Yeah, we seem to agree that useful work does sometimes come from philosophy, but that it mostly doesn't, and people are better off reading statistics and AI and cognitive science, like I said. So I'm not sure there's anything left to argue.

The one major thing I'd like clarification on (if you can find the time) is the difference between what experimental philosophers are doing (or what Joshua Greene is doing) and the dissolution-to-algorithm that you consider so central to LW-style philosophy.

I'd like to emphasize, to no one in particular, that the evaluation that seems to be going on here is about whether or not reading these philosophers is useful for building a Friendly recursively self-improving artificial intelligence. While that's a good criterion for whether or not Eliezer should read them, failure to meet it doesn't render the work of the philosopher valueless (really! it doesn't!). The question "is philosophy helpful for researching AI" is not the same as the question "is philosophy helpful for a rational person trying to better understand the world".

Tarski did philosophical work on truth? Apart from his mathematical logic work on truth? Haven't read it if so.

What does Talbot say about a cognitive algorithm generating the appearance of free will? Is it one of the cognitive algorithms referenced in the LW dissolution or a different one? Does Talbot talk about labeling possibilities as reachable? About causal models with separate nodes for self and physics? Can you please take a moment to be specific about this?

Tarski did philosophical work on truth? Apart from his mathematical logic work on truth?

Okay, now you're just drawing lines around what you don't like and calling everything in that box philosophy.

Should we just hold a draft? With the first pick the philosophers select... Judea Pearl! What? What's that? The mathematicians have just grabbed Alfred Tarski from right under the noses of the philosophers!

To philosophers, Tarski's work on truth is considered one of the triumphs of 20th century philosophy. But that sort of thing is typical of analytic and especially naturalistic philosophy (including your own philosophy): the lines between mathematics and science and philosophy are pretty fuzzy.

Talbot's paper isn't about free will (though others in experimental philosophy are); it's about the cognitive mechanisms that produce intuitions in general. But anyway this is the post I'm drafting right now, so I'll be happy to pick up the conversation once I've posted it. I might do a post on experimental philosophy and free will, too.

To philosophers, Tarski's work on truth is considered one of the triumphs of 20th century philosophy.

Yet to Wikipedia, Tarski is a mathematician. Period. Philosophy is not mentioned.

It is true that mathematical logic can be considered as a joint construction by philosophers and mathematicians. Frege, Russell, and Gödel are all listed in Wikipedia as both mathematicians and philosophers. So are a couple of modern contributors to logic - Dana Scott and Per Martin-Löf. But just about everyone else who made major contributions to mathematical logic - Peano, Cantor, Hilbert, Zermelo, Skolem, von Neumann, Gentzen, Church, Turing, Kolmogorov, Kleene, Robinson, Curry, Cohen, Lawvere, and Girard - is listed as a mathematician, not a philosopher. To my knowledge, the only pure philosopher who has made a contribution to logic at the level of these people is Kripke, and I'm not sure that should count (because the bulk of his contribution was done before he got to college and picked philosophy as a major :).

Quine, incidentally, made a minor contribution to mathematical logic with his idea of 'stratified' formulas in his 'New Foundations' version of set theory. Unfortunately, the extension of that theory in Quine's Mathematical Logic was found to be inconsistent (NF itself has never been shown inconsistent). But a fix was later discovered, and today some of the most interesting Computer Science work on higher-order logic uses a variant of Quine's idea to avoid Girard's paradox.
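For the curious, the stratification idea is simple enough to sketch. The encoding below is my own toy version, not Quine's notation: a formula is stratified iff its variables can be assigned integer "types" so that every subformula "x = y" gives type(x) == type(y) and every "x ∈ y" gives type(y) == type(x) + 1:

```python
# Toy stratification checker for NF-style formulas (my own encoding).
# Each atom is ('eq', x, y) or ('in', x, y) over variable names.
from collections import defaultdict, deque

def stratified(atoms):
    # Edge x -> y with weight w encodes the constraint type(y) = type(x) + w.
    graph = defaultdict(list)
    for kind, x, y in atoms:
        w = 1 if kind == "in" else 0
        graph[x].append((y, w))
        graph[y].append((x, -w))
    types = {}
    for start in list(graph):
        if start in types:
            continue
        types[start] = 0
        queue = deque([start])
        while queue:  # propagate type offsets; any clash => unstratifiable
            u = queue.popleft()
            for v, w in graph[u]:
                if v not in types:
                    types[v] = types[u] + w
                    queue.append(v)
                elif types[v] != types[u] + w:
                    return False
    return True

# "x ∈ y and y ∈ z" is stratified; "x ∈ x" (Russell-style) is not.
print(stratified([("in", "x", "y"), ("in", "y", "z")]))  # True
print(stratified([("in", "x", "x")]))                    # False
```

Blocking unstratified comprehension is exactly how NF dodges Russell's paradox: the set {x : x ∉ x} can't even be written down.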

To the scientists and mathematicians I know personally and have discussed this with, the lines between science and philosophy and between mathematics and philosophy are not fuzzy at all. Mostly I have only heard philosophers talk about the line being fuzzy, or claim that philosophy encompasses mathematics and science. The philosophers I have seen do this seem to do it because they desire the prestige that comes along with science and math's success at changing the world.

Chalmers' formalization of Good's intelligence explosion argument. Good's 1965 paper was important, but it presented no systematic argument; only hand-waving. Chalmers breaks down Good's argument into parts and examines the plausibility of each part in turn, considers the plausibility of various defeaters and possible paths, and makes a more organized and compelling case for Good's intelligence explosion than anybody at SIAI has.

I thought Chalmers was a newbie to all this - and showed it quite a bit. However, a definite step forward from zombies. Next, see if Penrose or Searle can be recruited.

When I wrote the post I didn't know that what you meant by "reductionist-grade naturalistic cognitive philosophy" was only the very narrow thing of dissolving philosophical problems to cognitive algorithms.

No, it's more than that, but only things at that level are useful philosophy. Other things either are not philosophy, or are more like background intros.

Amy just arrived and I've got to start book-writing, but I'll take one example from this list, the first one, so that I'm not picking and choosing; later if I've got a moment I'll do some others, in the order listed.

Predicate logic.

Funny you should mention that.

There is this incredibly toxic view of predicate logic that I first encountered in Good Old-Fashioned AI. And then this entirely different, highly useful and precise view of the uses and bounds of logic that I encountered when I started studying mathematical logic and learned about things like model theory.

Now considering that philosophers of the sort I inveighed against in "against modal logic" seem to talk and think like the GOFAI people and not like the model-theoretic people, I'm guessing that the GOFAI people made the terrible, horrible, no good, very bad mistake of getting their views of logic from the descendants of Bertrand Russell who still called themselves "philosophers" instead of those descendants who considered themselves part of the thriving edifice of mathematics.

Anyway. If you and I agree that philosophy is an extremely sick field, that there is no standardized repository of the good stuff, that it would be a desperate and terrible mistake for anyone to start their life studying philosophy before they had learned a lot of cognitive science and math and AI algorithms and plain old material science as explained by non-philosophers, and that it's not worth my time to read through philosophy to pick out the good stuff even if there are a few small nuggets of goodness or competent people buried here and there, then I'm not sure we disagree on much - except this post sort of did seem to suggest that people ought to run out and read philosophy-qua-philosophy as written by professional philosophers, rather than this being a terrible mistake.

You may enjoy the following exchange between two philosophers and one mathematician.

Bertrand Russell, speaking of Godel's incompleteness theorem, wrote:

It made me glad that I was no longer working at mathematical logic. If a given set of axioms leads to a contradiction, it is clear that at least one of the axioms must be false.

Wittgenstein dismissed the theorem as trickery:

Mathematics cannot be incomplete; any more than a sense can be incomplete. Whatever I can understand, I must completely understand.

Godel replied:

Russell evidently misinterprets my result; however, he does so in a very interesting manner... In contradistinction Wittgenstein... advances a completely trivial and uninteresting misinterpretation.

According to Gleick (in The Information), the only person who understood Godel's theorem when Godel first presented it was another mathematician, Neumann Janos, who moved to the USA and began presenting it wherever he went, by then calling himself John von Neumann.

The soundtrack for Godel's incompleteness theorem should be, I think, the last couple minutes of 'Ludus' from Tabula Rasa by Arvo Pärt.

I've been wondering why von Neumann didn't do much work in the foundations of mathematics. (It seems like something he should have been very interested in.) Your comment made me do some searching. It turns out:

John von Neumann was a vain and brilliant man, well used to putting his stamp on a mathematical subject by sheer force of intellect. He had devoted considerable effort to the problem of the consistency of arithmetic, and in his presentation at the Konigsberg symposium, had even come forward as an advocate for Hilbert's program. Seeing at once the profound implications of Godel's achievement, he had taken it one step further—proving the unprovability of consistency, only to find that Godel had anticipated him. That was enough. Although full of admiration for Godel—he'd even lectured on his work—von Neumann vowed never to have anything more to do with logic. He is said to have boasted that after Godel, he simply never read another paper on logic. Logic had humiliated him, and von Neumann was not used to being humiliated. Even so, the vow proved impossible to keep, for von Neumann's need for powerful computational machinery eventually forced him to return to logic.

ETA: Am I the only one who fantasizes about cloning a few dozen individuals from von Neumann's DNA, teaching them rationality, and setting them to work on FAI? There must be some Everett branches where that is being done, right?

Of course, since this is a community blog, we can have it both ways. Those of us interested in philosophy can go out and read (and/or write) lots of it, and we'll chuck the good stuff this way. No need for anyone to miss out.

I didn't say in my original post that people should run out and start reading mainstream philosophy. If that's what people got from it, then I'll add some clarifications to my original post.

Instead, I said that mainstream philosophy has some useful things to offer, and shouldn't be ignored. Which I think you agree with if you've benefited from the work of Bostrom and Dennett (including, via Drescher) and so on. But maybe you still disagree with it, for reasons that are forthcoming in your response to my other examples of mainstream philosophy contributions useful to Less Wrong.

But yeah, don't let me keep you from your book!

As for predicate logic, I'll have to take your word on that. I'll 'downgrade it' in my list above.

If that's what people got from it, then I'll add some clarifications to my original.

FWIW, what I got from your original post was not "LW readers should all go out and start reading mainstream philosophy," but rather "LW is part of a mainstream philosophical lineage, whether its members want to acknowledge that or not."

Meh. Historical context can help put things in perspective. You've done that plenty of times in your own posts on Less Wrong. Again, you seem to be holding my post to a different standard of usefulness than your own posts. But like I said, I don't recommend anybody actually read Quine.

Oftentimes you simply can't understand what some theorem or experiment was for without at least knowing its historical context. Take something as basic as calculus: if you've never heard the slightest thing about classical mechanics, what possible meaning could a derivative, integral, or differential equation have for you?
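To make the mechanics connection concrete with a toy sketch (the falling-body example and numbers here are my own illustration, not from the discussion above): the derivative of a position function is velocity, which is exactly the physical question calculus was built to answer.

```python
# Position of a falling body in classical mechanics: s(t) = 0.5 * g * t^2.
# Its derivative ds/dt is velocity -- the physical meaning that motivated calculus.

g = 9.8  # gravitational acceleration, m/s^2

def position(t):
    return 0.5 * g * t ** 2

def derivative(f, t, h=1e-6):
    # Central-difference approximation of f'(t).
    return (f(t + h) - f(t - h)) / (2 * h)

# At t = 3 s, velocity is g * t = 29.4 m/s.
print(round(derivative(position, 3.0), 3))  # prints 29.4
```

Without the mechanics story, the `derivative` function above is just symbol-shuffling; with it, the number 29.4 is the speed of a falling rock.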

There is this incredibly toxic view of predicate logic that I first encountered in Good Old-Fashioned AI.

I'd be curious to know what that "toxic view" was. My GOFAI academic advisor back in grad school swore by predicate logic. The only argument against it I ever heard was that proving or disproving a statement is undecidable in theory and frequently intractable in practice.
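To illustrate the intractability half of that complaint (a toy example of my own, not from the thread): even for plain propositional logic, the naive way to decide whether a formula is a tautology is to enumerate all 2^n truth assignments, so the cost doubles with every variable added; full first-order proof search is worse still.

```python
from itertools import product

def is_tautology(formula, variables):
    # Brute-force check: try all 2^n assignments -- exponential in n.
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if not formula(env):
            return False
    return True

# Modus ponens, ((p -> q) and p) -> q, encoded with not/or: a tautology.
modus_ponens = lambda e: (not (((not e["p"]) or e["q"]) and e["p"])) or e["q"]

print(is_tautology(modus_ponens, ["p", "q"]))                  # True
print(is_tautology(lambda e: e["p"] and e["q"], ["p", "q"]))   # False
```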

And then this entirely different, highly useful and precise view of the uses and bounds of logic that I encountered when I started studying mathematical logic and learned about things like model theory.

Model theory as opposed to proof theory? What is it you think is great about model theory?

Now considering that philosophers of the sort I inveighed against in "against modal logic" seem to talk and think like the GOFAI people and not like the model-theoretic people, I'm guessing that the GOFAI people made the terrible, horrible, no good, very bad mistake of getting their views of logic from the descendants of Bertrand Russell who still called themselves "philosophers" instead of those descendants who considered themselves part of the thriving edifice of mathematics.

I have no idea what you are saying here. That "Against Modal Logic" post, together with some of your commentary following it, strikes me as one of your most bizarre and incomprehensible pieces of writing at OB. Looking at the karma and comments suggests that I am not alone in this assessment.

Somehow, you have picked up a very strange notion of what modal logic is all about. The whole field of hardware and software verification is based on modal logics. Modal logics largely solve the undecidability and intractability problems that bedeviled GOFAI approaches to these problems using predicate logic. Temporal logics are modal. Epistemic and game-theoretic logics are modal.
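As a concrete illustration of how mechanical modal semantics is (a toy Kripke model of my own devising, not drawn from the thread): a world satisfies □p when p holds in every accessible world, and ◇p when p holds in some accessible world. Temporal operators like "always" and "eventually" in verification are the same idea, with program states as worlds.

```python
# Toy Kripke model: worlds, an accessibility relation, and a valuation for p.
access = {"w1": {"w2", "w3"}, "w2": {"w3"}, "w3": set()}
holds_p = {"w2", "w3"}  # worlds where proposition p is true

def box_p(world):
    # Necessity: p holds in every world accessible from `world`.
    return all(w in holds_p for w in access[world])

def diamond_p(world):
    # Possibility: p holds in some world accessible from `world`.
    return any(w in holds_p for w in access[world])

print(box_p("w1"))      # True: both w2 and w3 satisfy p
print(diamond_p("w3"))  # False: w3 has no accessible worlds
```

Model checkers for hardware and software evaluate formulas like these over state graphs with millions of states, which is why the decidability story is so much better than unrestricted first-order proof search.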

Or maybe it is just the philosophical approaches to modal logic that offended you. The classical modal logic of necessity and possibility. The puzzles over the Barcan formulas when you try to combine modality and quantification. Or maybe something bizarre involving zombies or Goedel/Anselm ontological proofs.

That is entirely possible. A five-star review at the Amazon link you provided calls this "The classic work on the metaphysics of modality". Another review there says:

Plantinga's Nature of Necessity is a philosophical masterpiece. Although there are a number of good books in analytic philosophy dealing with modality (the concepts of necessity and possibility), this one is of sufficient clarity and breadth that even non-philosophers will benefit from it. Modal logic may seem like a fairly arcane subject to outsiders, but this book exhibits both its intrinsic interest and its general importance.

Yet among the literally thousands of references in the three books I linked, Plantinga is not even mentioned, a fact which pretty much demonstrates that modal logic has left mainstream philosophy behind. Modal logic (in the sense I am promoting) is a branch of logic, not a branch of metaphysics.
It's more than that, but only things of that level are useful philosophy. Other things are not philosophy, or are more like background intros.

I'm not sure what "of that level" (of dissolving-to-algorithm) means, but I think I've demonstrated that quite a lot of useful stuff comes from mainstream philosophy, and indeed that a lot of mainstream philosophy is already being used by yourself and Less Wrong.