We're still in the Enlightenment, only now reason has shown us that we are not reasonable - and a more empirical study of man helps us remember the point of the whole program

Monday, August 2, 2010

How We Filter Arguments: Valid, Relevant, and Ridiculous

The Doomsday Argument as put forth by Nick Bostrom (and others) is a form of the self-sampling assumption as applied to the continued existence of humans. In short: the Doomsday Argument states that we can reasonably assume we are substantially closer to the end of the existence of the human species than we are to the beginning of it. Hence the sunny name. Bostrom has complained that on hearing this argument, most people dismiss it outright as ridiculous, but do so without a real counterargument, or even an honest attempt at one. That is to say, the argument is not really rejected; it's filtered without being evaluated. Whether or not this kind of filtering is ever legitimate or safe, we have no choice but to have some criteria for doing it.
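The probabilistic core of the argument can be sketched with a toy Bayesian calculation (my own illustration with made-up numbers, not Bostrom's formal treatment): if you assume you are equally likely to be any human who will ever exist, then learning your own birth rank shifts credence toward hypotheses on which fewer humans ever live.

```python
def posterior_small(rank, n_small, n_large, prior_small=0.5):
    """P(total = n_small | observed birth rank), by Bayes' rule,
    assuming you are equally likely to be any human who ever lives:
    P(rank | total = N) = 1/N for rank <= N, else 0."""
    like_small = 1.0 / n_small if rank <= n_small else 0.0
    like_large = 1.0 / n_large if rank <= n_large else 0.0
    prior_large = 1.0 - prior_small
    numerator = like_small * prior_small
    return numerator / (numerator + like_large * prior_large)

# Made-up numbers: "doom soon" = 200 billion humans total,
# "doom late" = 200 trillion, and your birth rank is ~100 billion.
p = posterior_small(rank=100e9, n_small=200e9, n_large=200e12)
print(round(p, 3))  # → 0.999: the 50/50 prior shifts heavily toward "doom soon"
```

The shift happens because a rank of 100 billion is a thousand times more "typical" if only 200 billion humans ever exist than if 200 trillion do; this counterintuitive update is exactly what hearers dismiss as ridiculous.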

I think adherents of the Doomsday Argument would agree that this particular chain of reasoning runs counter to our desires and intuitions, and that the argument's degree of abstraction is one factor that aids rejectors in calling it "ridiculous". To be clear, I'm not implying anything about the validity or lack thereof of Doomsday; but because of these characteristics, it is a good example of an argument that many readers will react to similarly and consider "ridiculous". This raises questions about what we mean when we call an argument ridiculous, and how and why we filter arguments without actually evaluating them.

There are multiple ways that humans attempt to influence other humans, and outside of force, most of them involve language. These attempts to manipulate each other, via valid arguments or otherwise, do not occur in a vacuum. We literate people in industrialized societies are bombarded every waking minute with statements by other agents, with their own interests, who intend to change our behavior. Most of these statements don't bother with any semblance of logical coherence.

Of the attempts to influence that do at least look like arguments (whether or not they really are coherent and valid), a large portion (no doubt the majority) are invalid, advanced either in earnest by claimants unable to see the faults in their own arguments, or by claimants who are at best indifferent to the validity of their own arguments as long as they create the desired change in the behavior of their audience. The problem is that there are only so many hours in a day; it takes time and effort to evaluate arguments, and we don't know which are coherent and valid until we evaluate them. Therefore we end up rejecting most arguments without actually evaluating them. This should be no cause for guilt. I strongly suspect that the average blog reader encounters far more arguments per day than Aristotle or Descartes did in their prime. In modernity the possible substrates for arguments are much greater, as are the channels by which we can receive them. This is why instead of evaluating and rejecting arguments, we filter them, i.e. ignore them. Sometimes we do this by calling them "ridiculous".

This might not be so dangerous if we were able to keep accurate labels in our mental catalog, but chances are, if you filtered an argument last month and it comes up again, you won't remember that you provisionally rejected it without actually evaluating it; you'll remember that you thought it was "ridiculous". It would be nice if we at least had a cognitive junk mail pile for those arguments. Therefore it is very important to know whether you've filtered or legitimately rejected an argument, and what your filtering system is.

To help, we use heuristics, which are usually social-network-influenced approximations of truth values. "I don't have time or background to think through argument ABC I just encountered. But this is the first time I'm hearing it, and if such a profound argument were true, I would have heard of it already, or experts would be discussing it prominently in the media." Or, "A moral authority I respect has not heard of this or actively rejects it. Therefore, it is probably wrong." Or even, "I mistrust this person, or this person is trying to get me to buy something/vote for them/sleep with them; therefore this thing they told me is likely false." These are not bad ways to make your argument filtration more accurate, but again, we forget which are provisional rejections based on "My brother told me it's hogwash" and which are full critical rejections.

Furthermore, if we use these social weighting methods, then the population dynamics of the spread of an argument become very important, and of course in most cases the spread is related much more to claimed relevance than to the argument's merits. Some of these heuristics do seem to improve our chances of "buying" valid arguments: whether the source is secure enough to welcome critical approaches to the argument; the other positions the source holds, especially if they are normally hostile to the position to which the argument leads; and repeated exposure to the argument. That is, "Everyone's talking about it, so it must be meaningful or useful, and besides, I don't want to look stupid by not having considered it."

Talk is cheap, and arguments are vulnerable to a cheap-signaling-type exploitation, namely the argument's superficial relevance to the argument-hearer. If we want our arguments heard, we don't work on the logic; we work on the apparent relevance. (In most circles. Most humans are not graduate students in philosophy.) You're not very likely to spend your finite efforts parsing arguments that don't relate with high probability to anything in your current or future experience. But when someone tells you, a twenty-first century technology user, that "cell phone use causes brain cancer", it might be a good idea to actively pursue that line of reasoning and find out whether it's true. But we muddy the waters because we all put relevant premises or conclusions in our arguments to get attention for them. Absent any source-weighting, and as long as the argument isn't "ridiculous" (more on what this means later), you're inclined to listen.

This gives us a two-by-two table: whether an argument claims relevance to us (rows), and whether it is actually valid (columns).

                       Invalid     Valid
    Claims relevance      A          B
    Not relevant          C          D

This is all to say: we want to spend as large a fraction of our attention as possible evaluating arguments in category B, but until we spend the time we don't know whether those arguments actually belong to A (and most probably will). We don't care enough about arguments in C or D to decide on their validity, because they don't seem to relate to anything that makes a difference to us, so we throw them into the argument spam folder. That is, even if we can't tell without deliberation whether arguments are valid (the right column), we can usually tell at first glance whether they claim relevance (the upper row).

A "ridiculous" argument is therefore one which a) claims to be relevant, b) makes an argument which, if accepted, would require the audience to substantially update their model of the world, and c) which the audience therefore rejects without evaluating. An argument that is irrelevant can't be ridiculous: you might hear an airtight, clearly communicated argument that Genghis Khan was ambidextrous, and though it sounds reasonable, you probably don't care enough to worry about it or actively call it "ridiculous" unless you're a historian of medieval Asia. Of course we do save a lot of time here, because the majority of arguments making demands on our attention by claiming relevance are neither relevant nor valid.

So what properties make us likely to call an argument ridiculous, i.e. subject to dismissal despite claimed relevance and despite not being evaluated for validity? For now I'll stick to reasons that those who reject "ridiculous" arguments would themselves report.

Extreme implications - any argument involving a conclusion that an object can exist, or an event can occur, of a magnitude or quality unobserved in the hearer's life or the hearer's account of history. This is especially true for outcomes that are very pleasant, very unpleasant (as with the Doomsday Argument), or very strange.

Novel or strange relationships - arguments that entities in what seem to the audience to be completely separate categories are in fact related.

Arguments that require a substantial change in behavior - unsurprisingly.

Contradiction of currently held beliefs - perhaps most unsurprisingly.

Note that argument structure or source are not on this list. Once we deem an argument ridiculous we may resort to picking on the structure or source for further validation, but this isn't critical thinking, and these characteristics do not trigger the initial labeling of ridiculous.

Readers may notice the similarity of the relevance vs. validity table to the urgent/not urgent, important/unimportant productivity table. We tend to spend too much time doing urgent but unimportant things (in corporatese, "putting out fires"), and not enough doing not-urgent but important things. The equivalent mistakes in argument filtering: we spend too much time thinking about A-arguments (relevant but invalid), then overcompensate by throwing out any argument that would make a dent in our belief network (some of which are certainly true!); and we ignore D-arguments (not clearly relevant, but valid) that might actually affect us. These are the rhetoric-parsing consequences of our limits in correlating beliefs, as well as of the human tendency toward epistemological homeostasis: a strong drive to preserve the status quo in our worldview and avoid updating our beliefs.

* * *

Language allows us to adopt beliefs about phenomena and relationships that we have not directly observed. As more humans communicate with each other through more channels, the number of propositions we are exposed to will increase, but our cognitive bandwidth will not. The need for some argument filtration is unavoidable. Consequently, we use shortcuts to avoid fully evaluating every argument we hear: without evaluating them for coherence and validity, we reject arguments that are not relevant, and we reject arguments that conflict with what we believe we know to be true (these we call "ridiculous"). The danger is that in both cases we do not cognitively categorize these beliefs as provisionally rejected, but instead feel that they were positively refuted.