I know how desperately bored the 14 billion regular subscribers to this blog can get on weekends, and the resulting toll this can exact on the mental health of many times that number of people due to the contagious nature of affective funks. So one of my NY's resolutions is to try to supply subscribers with things to read that can distract them from the frustration of being momentarily shielded from the relentless onslaught of real-world obligation they happily confront during the workweek.

So I thought, hey, maybe it would be fun for us to take a look at other efforts that try to "expose" non-randomness of events that smart people might be inclined to think are random.

Here's one:

Actually, I'm not sure this is really a paper about the randomness-detection blindspots of people who are really good at detecting probability blindspots in ordinary folks.

It's more in the nature of how expert judgment can be subverted by a run-of-the-mill (run-of-the-"mine"?) cognitive bias involving randomness--here the "gambler's fallacy": the expectation that independent random events will occur interdependently, in a manner consistent with their relative frequency; or more plainly, that an outcome like "heads" in the flipping of a coin can become "due" as a string of alternative outcomes in independent events--"tails" in previous tosses--increases in length.

CMS present data suggesting that behavior of immigration judges, loan officers, and baseball umpires all display this pattern. That is, all of these professional decisionmakers become more likely than one would expect by chance to make a particular determination--grant an asylum petition; disapprove a loan application; call a "strike"--after a series of previous opposing determinations ("deny," "approve," "ball" etc.).

If you liked puzzling over the M&S paper, I predict you'll like puzzling through this one.

In figuring out the null, CMS recognize that it is a mistake, actually, to model the outcomes in question as reflecting a binomial distribution if one is sampling from a finite sequence of past events. Binary outcomes that occur independently across an indefinite series of trials (i.e., outcomes generated by a Bernoulli process) are not independent when one samples from a finite sequence of past trials.
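To see why, here's a quick simulation sketch (mine, not CMS's) of the finite-sequence effect: if you take many short sequences of fair-coin flips and, within each sequence, compute the proportion of heads among flips that immediately follow a head, the average of those within-sequence proportions comes out below 0.5 -- even though every flip is an independent Bernoulli(0.5) draw. The sequence length of 8 is an arbitrary choice for illustration.

```python
import random

random.seed(1)

def prop_heads_after_heads(seq):
    """Within one finite sequence, the proportion of heads (1) among
    flips that immediately follow a head."""
    follows = [seq[i + 1] for i in range(len(seq) - 1) if seq[i] == 1]
    return sum(follows) / len(follows) if follows else None

# Average the within-sequence proportion over many short sequences
# of fair-coin flips (p = 0.5, n = 8 flips per sequence); sequences
# with no heads before the last flip are skipped.
vals = []
for _ in range(100_000):
    seq = [random.randint(0, 1) for _ in range(8)]
    p = prop_heads_after_heads(seq)
    if p is not None:
        vals.append(p)

print(sum(vals) / len(vals))  # noticeably below 0.5, not equal to it
```

The intuition: conditioning on "the previous flip was heads" inside a fixed, finite sequence is a form of sampling without replacement from the arrangement of outcomes, which induces exactly the kind of negative dependence the naive binomial model assumes away.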

But figuring out how to do the analysis in a way that avoids this mistake is damn tricky.

If one samples from a finite sequence of events generated by a Bernoulli process, what should the null be for determining whether the probability of a particular outcome following a string of opposing outcomes was "higher" than what could have been expected to occur by chance?
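One answer--again my sketch, not CMS's actual procedure--is to construct the null by Monte Carlo: simulate many Bernoulli sequences of the same length and base rate as the observed decision sequence, compute within each the rate of "reversals" following a streak of k identical outcomes, and compare the observed rate to that simulated distribution rather than to a naive 0.5. The parameters (n = 50 decisions, p = 0.5, streak length k = 3) are hypothetical.

```python
import random

random.seed(2)

def reversal_rate_after_streak(seq, k):
    """Proportion of outcomes that reverse the previous outcome, among
    outcomes immediately following a streak of k identical outcomes,
    within one finite sequence."""
    num = den = 0
    for i in range(k, len(seq)):
        if len(set(seq[i - k:i])) == 1:  # streak of k identical outcomes
            den += 1
            num += seq[i] != seq[i - 1]
    return num / den if den else None

def simulated_null(n, p, k, sims=20_000):
    """Monte Carlo null: the distribution of the within-sequence reversal
    rate in Bernoulli(p) sequences of length n; sequences containing no
    streak of length k are skipped."""
    draws = []
    for _ in range(sims):
        seq = [random.random() < p for _ in range(n)]
        r = reversal_rate_after_streak(seq, k)
        if r is not None:
            draws.append(r)
    return draws

# Hypothetical setting: 50 binary decisions (e.g., grant = 1 / deny = 0),
# asking how often chance alone produces a reversal after 3 in a row.
null = simulated_null(n=50, p=0.5, k=3)
mean_null = sum(null) / len(null)
print(round(mean_null, 3))  # exceeds 0.5 under the null
```

The point of the exercise: the chance-expected reversal rate in a finite sequence is itself above 0.5, so a decisionmaker who reverses slightly more than half the time after a streak is not necessarily exhibiting the gambler's fallacy--the observed rate has to beat this simulated benchmark, not the naive one.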

Another tricky thing here is whether the types of events decisionmakers are evaluating here--the merit of immigration petitions, the creditworthiness of loan applicants, and the location of baseball pitches--really are i.i.d. ("independent and identically distributed").

Actually, no one could plausibly think "balls" and "strikes" in baseball are.

A pitcher's decision to throw a "strike" (or attempt to throw one) will be influenced by myriad factors, including the pitch count--i.e., the running tally of "balls" and "strikes" for the current batter, a figure that determines how likely the batter is to "walk" (be allowed to advance to "first base"; shit, do I really need to try to define this stuff? Who the hell doesn't understand baseball?!) or "strike out" on the next pitch.

CMS diligently try to "take account" of the "non-independence" of "balls" and "strikes" in baseball, and like potential influences in the context of judicial decisionmaking and loan applications, in their statistical models.

But whether they have done so correctly--or done so with the degree of precision necessary to disentangle the impact of those influences from the hypothesized tendency of these decisionmakers to impose on outcomes the sort of constrained variance that would be the signature of the "gambler's fallacy"--is definitely open to reasonable debate.

