Bayes' rule in Haskell, or why drug tests don't work

A very senior Microsoft developer who moved to Google told
me that Google works and thinks at a higher level of abstraction than
Microsoft. "Google uses Bayesian filtering the way Microsoft uses the if
statement," he said. -Joel Spolsky

I really love this quote, because it's insanely provocative
to any language designer. What would a programming language look
like if Bayes' rule were as simple as an if statement?

Let's start with a toy problem, and refactor it until Bayes' rule is baked
right into our programming language.

Imagine, for a moment, that we're in charge of administering drug tests for
a small business. We'll represent each employee's test results (and drug use) as follows:
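Two simple enumerations along these lines will do (reconstructed here from the constructors the code below relies on):

    -- Deriving Eq so we can compare values, and Show so we can print them.
    data HeroinStatus = User | Clean
      deriving (Show, Eq)

    data Test = Pos | Neg
      deriving (Show, Eq)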

Assuming that 0.1% of our employees have used heroin recently, and that our test is 99%
accurate, we can model the testing process as follows:

    drugTest1 :: Dist d => d (HeroinStatus, Test)
    drugTest1 = do
      heroinStatus <- percentUser 0.1
      testResult <- if heroinStatus == User
                      then percentPos 99
                      else percentPos 1
      return (heroinStatus, testResult)

    -- Some handy distributions.
    percentUser p = percent p User Clean
    percentPos p = percent p Pos Neg

    -- A weighted distribution with two elements.
    percent p x1 x2 = weighted [(x1, p), (x2, 100 - p)]

This code is based on our FDist monad, which is in turn based on
PFP. Don't worry if it seems slightly mysterious; you can think of the
"<-" operator as choosing an element from a probability
distribution.

Running our drug test shows every possible combination of the two
variables:
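The exact output format depends on the FDist implementation, but with these numbers the four probabilities work out to:

    (User,  Pos)   0.099%
    (User,  Neg)   0.001%
    (Clean, Pos)   0.999%
    (Clean, Neg)  98.901%

Notice that the genuine drug users who test positive (0.099%) are vastly outnumbered by the clean employees who test positive by mistake (0.999%).

Only the positive test results interest us, though. Using condition, we can throw away every world in which the test came back negative, and ask what fraction of the remaining worlds contain actual users. drugTest3 also takes its prior distribution as an argument, so we can plug in different assumptions: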

    drugTest3 :: FDist' HeroinStatus -> FDist' HeroinStatus
    drugTest3 prior = do
      heroinStatus <- prior
      testResult <- if heroinStatus == User
                      then percentPos 99
                      else percentPos 1
      -- As easy as an 'if' statement:
      condition (testResult == Pos)
      return heroinStatus

With our original 0.1% prior, the numbers are as dismal as ever:

    > bayes (drugTest3 (percentUser 0.1))
    [Perhaps User 9.0%,
     Perhaps Clean 91.0%]
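(Quick sanity check: 0.099% of all employees are users who test positive, 0.999% are clean employees who nonetheless test positive, and 0.099 / (0.099 + 0.999) ≈ 9%.)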

So testing all of our employees is still hopeless. But what if we only
tested employees with clear signs of heroin abuse? In that case, there's
probably a 50/50 chance of drug use.

And that gives us remarkably better results. Out of the people who test
positive, 99% will be using drugs:

    > bayes (drugTest3 (percentUser 50))
    [Perhaps User 99.0%,
     Perhaps Clean 1.0%]
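(The arithmetic: 50% × 99% = 49.5% of those tested are users who test positive, 50% × 1% = 0.5% are clean people who test positive anyway, and 49.5 / (49.5 + 0.5) = 99%.)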

The moral of this story: No matter how accurate our drug test, we shouldn't
bother to run it unless we have probable cause.

Similar constraints apply to any population-wide surveillance: If you're
searching for something sufficiently rare (criminals, terrorists, strange
diseases), it doesn't matter how good your tests are. If you test
everyone, you'll drown under thousands of false positives.

Extreme Haskell geeking

If we collapse MaybeT into PerhapsT, we can
work with probability distributions that don't sum to 1, where the
"missing" probability represents an impossible world.

We can add condition to Rand (part 2)
using MaybeT Rand. Bayes' rule is basically the
combination of MaybeT and a suitable catMaybes function
applied to any probability distribution monad.
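Here's a minimal sketch of that idea, using a bare weighted-list distribution instead of the real FDist/PerhapsT machinery (the names below are illustrative only):

    -- A finite distribution as a weighted list.
    type Weighted a = [(a, Double)]

    -- Monadic bind for weighted lists: multiply probabilities along each path.
    bind :: Weighted a -> (a -> Weighted b) -> Weighted b
    bind d f = [ (y, p * q) | (x, p) <- d, (y, q) <- f x ]

    -- The MaybeT idea: a world that fails the test becomes Nothing
    -- instead of silently disappearing.
    conditionOn :: (a -> Bool) -> Weighted a -> Weighted (Maybe a)
    conditionOn ok d = [ (if ok x then Just x else Nothing, p) | (x, p) <- d ]

    -- The catMaybes step: drop the impossible worlds and renormalize
    -- what's left. This is where Bayes' rule happens.
    normalize :: Weighted (Maybe a) -> Weighted a
    normalize d = [ (x, p / total) | (Just x, p) <- d ]
      where total = sum [ p | (Just _, p) <- d ]

Roughly speaking, condition plays the role of conditionOn above, and bayes plays the role of normalize.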

Also worth noting: Popular theories of natural language semantics are based
on the λ-calculus. Chung-chieh Shan has a fascinating
paper showing how to incorporate monads and monad transformers
into this model. If we replaced Chung-chieh Shan's Set monad with one of
our Bayesian monads, what would we get? (Currently, I have no idea.)

Ah, thanks for the link to the Shan paper — I had not seen it before, and it’s a very interesting read.

As to what would come of using a Bayesian monad in place of Set, I cannot say, though it sounds to me like it might lead to a good model for a semantics including fuzzy categories (in the natural language sense, rather than the category-theoretic one, even assuming category theory has a notion of "fuzzy category").

Eric
wrote on Feb 23, 2007:

Interesting! Is there a good introduction to fuzzy categories for non-linguists?

IIRC, Shan uses the Set monad to represent ambiguous referents. The idea is that if the pronoun “he” might represent one of two people, you can do the calculation either way. (You can see the connection to logic programming here.)

Using a probability distribution monad, you could say, “We’re talking about Frank with 90% probability, and Mike with 10% probability.”
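Using the weighted helper from above, that might look something like this (a toy sketch, nothing to do with Shan's actual machinery):

    -- "he" as a weighted distribution over possible referents.
    he :: Dist d => d String
    he = weighted [("Frank", 90), ("Mike", 10)]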

Of course, it’s not clear (to me, at least) how this relates to probabilistic parsing, or what the ability to use Bayes’ rule actually buys us.

And as for fuzzy categories, well, I really shouldn’t have looked, but here you go:

> Chapter 15 introduces toposes. A topos is a kind of generalized set theory in which the logic is intuitionistic instead of classical… Categories of fuzzy sets are recognized as almost toposes, and modest sets, which are thought by many to be the best semantic model of polymorphic lambda calculus, live in a specific topos.

I recently read Shan’s paper. Mind-blowingly awesome. But the Set and Pointed Set monads aren’t used there for fuzzy categories or for ambiguous referents. (He actually uses the reader/environment monad to deal with different variable assignments, like with “he” having multiple possible referents.)

In the paper, Sets and Pointed Sets are used for the semantics of questions and focus, respectively. Consider a sentence like “Who ordered a tuna sandwich?” The idea is that the semantic interpretation of a question like this would be a set of interpretations something like ordered(x,tuna sandwich) for every x in some contextually given set of alternatives. It might be broad – the “who” could be any person or even any animate – but more typically it would be more restricted – the people in a restaurant, the friends you picked up lunch for, etc.

Shan then uses pointed sets to deal with what could be answers to such questions: “John was the one that had the tuna sandwich.” This is like picking one of the alternatives out of that context set. But you still need to care about the rest of the set of alternatives. Consider “Only John ordered a tuna sandwich”. The truth of such a sentence depends on the set of options: it’s more likely to be true if only your friends are under consideration than if every living human being is.

So, in this context, I don’t think Bayesianifying the Set/Pointed Set monads buys you anything. (Not to say there might not be other linguistic uses of Bayesian monads.)

Eric
wrote on Mar 13, 2007:

(Re-reads the paper.)

Yeah, it looks as though I had generalized accidentally from Shan’s treatment of interrogative pronouns (“who”, etc.) to pronouns in general.

And I don’t pretend to understand the linguistic implications of focus, so I should probably refrain from commenting on Pointed Set monads until I read more papers. :-)

But my larger question involved the semantics of ambiguous sentences. Specifically, I was interested in the relationship between natural language parsing and the resulting semantics in such sentences as:

(There’s also a bunch of horribly bad parses which treat “fruit” as a transitive verb. Hey, it’s in the dictionary.)

But these sentences aren’t that different from:

Frank called Mark, and he got pretty upset.

…where “he” could refer to either Frank or Mark. This sentence would become much less ambiguous if we could estimate the following probabilities from the surrounding context:

P(Frank got upset|context)
P(Mark got upset|context)

So, my question: Given Chung-chieh Shan’s framework, and the various probability monads (with or without Bayesian conditioning), can we assign reasonable semantics to ambiguous sentences?

As I said earlier, I don’t have the foggiest idea of how to answer this question. :-)

Max Lybbert
wrote on May 07, 2007:

Unfortunately I just barely ran across this blog today. I like it, and will be coming back. So, although this is very late, you may consider looking at the CRM114 Discriminator (http://crm114.sourceforge.net/ ), which is supposed to be a language with Bayesian filtering (and Markov chaining, and …) built in. But its design looks a little more like Perl than Lisp or Haskell.

Allan E
wrote on Jun 14, 2007:

Regarding David’s comment “condition is just guard”, this doesn’t work unless PerhapsT is a MonadPlus instance… but in the darcs implementation there’s a note on how this leads to ambiguous semantics.
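For reference, the observation amounts to this, assuming the distribution monad really did have a MonadPlus instance:

    import Control.Monad (MonadPlus, guard)

    -- Only sensible if the distribution monad is a lawful MonadPlus,
    -- which is exactly the sticking point.
    condition :: MonadPlus m => Bool -> m ()
    condition = guard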

“The moral of this story: No matter how accurate our drug test, we shouldn’t bother to run it unless we have probable cause.”

This can be related to the current security strategy at airports. Look for ‘carnival booth algorithm’ for a description of the strategy and for criticism of it.

At first sight, I thought that your remark above led to the conclusion that the airport security strategy is right: that is, select people for extra screening based on their ethnic background.

But in fact, it does the opposite: one should only select people for extra screening based on whether there is ‘probable cause’, i.e. on careful, human surveillance.