I've been thinking about Roger White's essay `Epistemic Permissiveness' (available on his website), and I have an argument that I want to try out.

Permissive cases, in White's jargon, are ones in which it would be possible for two agents with the same evidence and background knowledge to disagree about the matter at hand; i.e., in which it is compatible with rationality to believe P and equally compatible with rationality to believe not-P instead. Epistemic permissiveness is the doctrine that there are some permissive cases.

White's arguments aim to uncover a kind of deliberative irrationality in permissive cases. Consider a schematic example: Before I collect evidence about some contingent matter P, I do not have a belief that P or that not-P. I make some observations, consult some experts, and so on. The evidence that I collect leads me to believe P. If I know that this case is a permissive case, however, I know that I might rationally have come to believe not-P on the basis of the same evidence. Whether I believe P or not-P depends on the way in which I decide to be rational, not on the force of the evidence. If the difference between believing P and believing not-P just depends on choice or contingency in this way, I might as well have decided which to believe before collecting any evidence.

I have rendered the argument in a ham-handed way, making it look too much like the problem in my Peirce paper. However, I think that what I say below goes through even for White's more subtle formulation of the problem.

White admits that the deliberative incoherence evaporates if no agent can judge-- except perhaps retrospectively-- that a situation they are in is a permissive case. It would then be impossible for my belief that P to be undermined by rumination about the fact that I might rationally have believed not-P instead. He dismisses this approach:

...while this position may be coherent and escape the objections thus far, I doubt that anyone holds such a view, as it is hard to see what could motivate it. (p. 10)

Many in philosophy of science have been tempted to say that rationality is a feature of epistemic communities and not of isolated individuals. I will sketch a mild version of this claim, as advanced by Philip Kitcher in the 90s and Helen Longino in her more staid moments; I do not need the more revolutionary versions advocated by Lynn Hankinson-Nelson, Kitcher after the millennium, and Longino in her wilder moments.

In discovering what the world is like, there are pressures to reason in different ways. Some discoveries would never be made if we did not leap to bold new hypotheses, but sometimes leaping would lead us down blind alleys and into theoretical box canyons. The rational thing to do is to spread the epistemic risk: Have some scientists pursue wild theories while others defend orthodoxy. Promising new leads will be followed up by someone, and the community will follow along only once a critical mass of evidence has been gathered in their favor. Call this the collective strategy for scientific development.

A further fact about human agents is that we are better at exploring new theories if we believe they might be true and we are better at defending orthodoxy if we believe that the challenging view is false. This means that the collective strategy requires some people (the pioneers) to believe P and others (the old guard) to believe not-P, even when confronted by the same evidence and arguments. The collective strategy yields permissive cases.

Permissive cases occur only around legitimate scientific controversies, so they are not ubiquitous. Moreover, it will only be clear in retrospect whether the pioneers were heading to a new frontier or down a dead end. Deliberation at the time cannot be undermined by considering that this is a permissive case. This seems to be the kind of view that White considers coherent but unoccupied. And someone holds such a view-- namely me, at least some of the time.

I just posted a draft of a brief paper discussing Paul Teller's article `How we dapple the world'. His title riffs off of Larry Sklar's `Dappled theories in a uniform world', which itself riffs off of Nancy Cartwright's The Dappled World.

Following the flurry of `dappled' titles, I originally thought to title this paper `Worldish dappled worldly dappling world dapples.' I quickly shortened this monstrosity, and the draft that I circulated to a few grad students bore the title `WorldBob DapplePants.' I really wish I could publish a paper with that title, just because it would look sweet on my CV.* However, the word `dapple' suggests that we are talking about Nancy's metaphysical picture. Sklar only deals with Nancy's views en passant, Teller only deals with Sklar, and I am only dealing with Teller. WorldBob DapplePants, although a cool title, would be a bit dishonest.**

* For similar reasons, I would like to publish a paper in Noûs. It is a good journal, of course, but it also has a circumflex in its name and that would look cool on my CV.

I just posted a new version of forall x, my introductory logic text. I have been using it as the text in my 130-member intro logic class this term, and I have been fairly satisfied. The process has allowed me to catch a bevy of typos and little slips. The new version corrects those, adds practice exercises, and includes a new appendix discussing alternate symbol systems.

Commercial logic textbooks are, to be blunt, a scam. Every publisher has got one. I know, because they have sent me unsolicited desk copies. They know that I teach logic regularly and that I could turn my courses into a captive market for them. Textbook authors profit by complicity in this, but the lion's share goes to companies that are really not adding any value to the educational system. It is that state of affairs that led me to write forall x. The text is available under a Creative Commons license, which means that it is free for noncommercial use. I had it printed as a course reader for my students, and any other instructor is free to do the same.

The downside of not having a publisher is that no one is out there marketing forall x, sending desk copies to every professor anywhere who has taught logic. One kind Canadian has shown interest in using it in an abstract mathematics course. I am not sure how else to proceed by way of promoting it.

In a paper for MacHack several years ago, I tried to sort out the possible methods for evaluating claims found on the internet. (`Reliability on a Crowded Net' -- The conference still hosts a PDF of it.) I was primarily interested in claims made on web pages and in chat rooms, and I think the analysis extends pretty well to blogs.

Many people now use Wikipedia as their reference of first resort, however, and I wonder whether my previous analysis applies to it. In the paper, I identify four basic methods for evaluating a claim found on the internet and claim that they are exhaustive. Consider each in relation to the Wikipedia:

Appeal to Reliability involves relying on the reputation of the source. If I read an article on the New York Times website, I give it roughly the same weight as I would an article in the print edition.

It is not entirely clear how reliable the Wikipedia really is. It does have a good reputation. There is also a story to be told about the self-correcting nature of collaborative work. Nevertheless, a wiki will only be as reliable as its most persuasive members. I suspect that its reliability depends on the topic area-- because different topic areas will have different contributors.

Appeal to Plausibility involves assessing whether the general claims even sound like they are in the right ballpark. This can be done both in terms of content and in terms of style. I think there is some reason to think that the Wikipedia could be deceptive in this regard. Even where it contains false information, contributors may have preened it to make it sound more plausible.

Suppose an entry contains incorrect information. If people wander through the site and make the entry sound better, even though they do not actually have any special expertise on the subject of the entry, then the entry will be written in a more plausible way than it would be if it were just the original falsities on somebody's personal webpage.

Calibration involves checking the facts where you can and extrapolating: If the source gets things right on matters you can check, then that is some reason to believe that it gets things right on matters you can't. Again, the collaborative nature of the Wikipedia makes this harder. If the things that you can check independently are the things that other people could check, then those things will probably be correct-- someone will have corrected any mistakes. The correctness on those points will fail to be evidence for the correctness on the remainder, if the background knowledge of honest and conscientious contributors runs out where yours does.

Sampling involves checking multiple sources and comparing them against one another. Insofar as one just does a quick Wikipedia lookup, one avoids sampling.

So the basic methods for evaluating credibility all get harder with the Wikipedia. The central issue is the degree to which the collective nature of the Wikipedia can be relied on to be self-correcting. How much is this a reasonable expectation, and how much is this an article of faith?

I've been reading Lauren Slater's Opening Skinner's Box, a popularized discussion of significant experiments in 20th-century psychology. The book is best when it presents facts and background, and worst when it tries to pose philosophical questions. One chapter is about Elizabeth Loftus' work to debunk repressed memories. Loftus points out that there is no plausible mechanism whereby repressed memories would be stored in the brain.

Slater believes in repressed memories nonetheless, and mentions Van der Kolk with approval. Van der Kolk's view is that "the body keeps score." Anything that is too traumatic to be remembered is stored nonnarratively, to return later as muscle aches or panic attacks. The solution to such problematic, visceral memories is to recover them narratively. Fess up to the traumatic event, and the aches will go away.

So this has me thinking: Suppose that Van der Kolk is right, that there are separate centers of episodic and visceral memory, and that eliminating bad visceral memories is just a matter of thinking through the story of the traumatic event. There is nothing in such an approach that requires the story you think through to be true. Since there is no episodic memory of the event to compare to the recollection, your body has no way of checking.

I suggest the obvious therapeutic approach: Tell yourself a story about any bad visceral memories that your body has stored up. The beauty is that any story will do, as long as you meditate on it and process it through the narrative parts of your brain. I dub this innovation Bogus Scenario Therapy. A session might go like this:

Me: [In a silly mock-Austrian accent, because it enhances the therapeutic value of balderdash.] Zo, you haff been havink these panik attacks. Tell me about ze aliens from your memories.

Patient: [Lying back on a couch, like in a New Yorker cartoon.] What aliens?

Me: Ze traumatic ones. Ze ones from your memory. Vork mit me.

Patient: Oh. They were grey, I guess.

Me: Excellent. And zeir heads? Big or zmall?

Patient: ...um... big.

Me: [writing down 'big' in my notebook.] Vider zan zeir shoulders?

Patient: ...yes.

Me: And zeir arms: articulated like a human's, or did zey bend backvards?

Patient: [more confident now.] Backwards.

Me: And vat sounds did zey make?

And so it would go, with me billing hourly for little games of make believe. There would be money in it, if only I could keep a straight face.