Ah, Monday morning, the start of another week. That means: time to mock a GMU economics professor!

This installment of (what I’ve come to realize is) my mission in life concerns Bryan Caplan’s post on the Paulson bailout, made the day after the House voted it down:

We’re still likely to see a bail-out in the near future. So here’s a question: If the bail-out happens, how will we know if it prevented disaster? If unemployment stays under 8%, proponents will say that the bail-out prevented a recession. But if unemployment hits 8% or higher, proponents will say that things would have been even worse without the bail-out. Opponents, of course, will flip things: Good conditions mean the bail-out wasn’t necessary, bad conditions mean that the bail-out made things worse.

Ex ante, though, basic Bayesian reasoning doesn’t allow us to claim that whatever happens confirms our position. If the occurrence of X raises the probability of A, then the occurrence of not-X must reduce the probability of A….

Unfortunately, if you let people see whether X happens before they update, you can’t show that they’re being bad Bayesians. You only see how they updated, not how they would have updated if the news had been different.

So here’s my request: Tell us how you’re going to update in advance. Assume the bail-out happens. If unemployment stays below 8%, does that make you more or less confident that the bail-out prevented disaster? If unemployment rises to 8%+, does that make you more or less confident that the bail-out helped prevent disaster? If you’re a good Bayesian, you must give opposite answers to these two questions.
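The constraint Bryan is invoking falls out of the law of total probability: the prior P(A) is a weighted average of P(A|X) and P(A|not-X), so if one conditional sits above the prior, the other must sit below it. A minimal sketch, with every number invented purely for illustration:

```python
# Law of total probability: P(A) = P(A|X)P(X) + P(A|not-X)P(not-X).
# So if observing X raises the probability of A above the prior,
# observing not-X must push it below the prior.

p_x = 0.6          # P(X): chance unemployment stays below 8% (made up)
p_a_given_x = 0.9  # P(A|X): belief the bailout helped, given low unemployment (made up)
p_a = 0.7          # prior P(A): belief the bailout will help (made up)

# Solve the total-probability identity for the remaining conditional
p_a_given_not_x = (p_a - p_a_given_x * p_x) / (1 - p_x)

# The two conditionals straddle the prior, as Bayes requires
assert p_a_given_x > p_a > p_a_given_not_x
```

With these particular numbers, P(A|not-X) works out to 0.4, below the 0.7 prior: a good Bayesian who treats low unemployment as vindication must treat high unemployment as disconfirmation.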

Now this is fine as far as it goes, except for one little problem: There are some things that you know from aprioristic reasoning. And in this set of truths are economic laws. Now it’s true, a good economist has to be able to sift evidence and exercise judgment when applying various economic insights. But those insights are derived from introspection, not from looking at a million different “economic” statistics (and you would only know how to use this adjective if you had an antecedent theoretical framework!) and then drawing generalizations from them.

Anyway, the specific problem with Bryan’s post is that he’s not allowing for the possibility that your “prior” doesn’t budge. For example, will your belief that 2+2=4 go up if there is rising unemployment next year? No? Oh, so it will go down then?
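The unmovable-prior point can be made mechanically: Bayes’ rule leaves a prior of exactly 1 (or 0) untouched no matter what the evidence looks like. A toy sketch, with all likelihood numbers invented for illustration:

```python
def posterior(prior, likelihood_given_a, likelihood_given_not_a):
    """Bayes' rule: P(A|E) = P(E|A)P(A) / [P(E|A)P(A) + P(E|not-A)P(not-A)]."""
    numerator = likelihood_given_a * prior
    denominator = numerator + likelihood_given_not_a * (1 - prior)
    return numerator / denominator

# A belief held with certainty never budges, whatever the evidence:
print(posterior(1.0, 0.2, 0.8))  # 1.0 -- "2+2=4" survives rising unemployment
# An ordinary prior, by contrast, moves with the evidence:
print(posterior(0.5, 0.2, 0.8))  # 0.2
```

Degenerate priors of 0 or 1 are fixed points of the updating rule, which is exactly why asking how a belief in arithmetic "updates" on unemployment data is a non-question.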

Even if we’re not talking about things literally deduced from self-evident axioms, Caplan’s post is still silly. Consider a more relevant analogy: Suppose I say that sacrificing virginal GMU professors will appease the unemployment gods. Caplan goes this week, then Cowen, then Tabarrok (I really hope it doesn’t last more than two weeks). Now let’s be scientific about it, guys, no cheating! Tell me beforehand how your belief in the unemployment gods will change, depending on what we observe after each execution.

As with the putative unemployment gods, so too with the Paulson bailout: I don’t care what happens to unemployment, you are not going to convince me that taking $850 billion from the public at gunpoint, and giving it to reckless bankers who thought home prices could only go up, will make our economy recover more quickly.

To avoid misunderstanding, let’s try one more: If your initial probability for Bryan Caplan finding me funny is p1, and your estimate of this probability after he reads the present post is p2, is p1 greater than or less than p2?