Stultitia Delenda Est

It is an unchallengeable orthodoxy that you should wear a coat if it is cold out. Day after day we hear shrill warnings from the high priests of this new religion practically seething with hatred for anyone who might possibly dare to go out without a winter coat on. But these ideologues don’t realize that just wearing more jackets can’t solve all of our society’s problems. Here’s a reality check – no one is under any obligation to put on any clothing they don’t want to, and North Face and REI are not entitled to your hard-earned money. All that these increasingly strident claims about jackets do is shame underprivileged people who can’t afford jackets, suggesting them as legitimate targets for violence. In conclusion, do we really want to say that people should be judged by the clothes they wear? Or can we accept the unjacketed human body to be potentially just as beautiful as someone bundled beneath ten layers of coats?

All grass is green.
This emerald is not grass.
Therefore this emerald is not green.

Most people will immediately recognize that the above syllogism is flawed. Clearly, there are other qualities besides “being grass” that might cause an object to be green. The syllogism is incorrect because it commits the formal logical fallacy of “Denying the Antecedent.” More formally, this fallacy looks like:

If A, then B
Not A
Therefore not B

Denying the Antecedent occurs when one attempts to prove the inverse of a conditional by negating its antecedent. The problem with this approach is that such conditionals typically state only sufficient conditions, not necessary ones. In other words, the premise shows that if A is true, then B follows. It doesn’t show that A is the only way that B could possibly be true. So a thing being grass is enough to show that it is green, but it doesn’t exclude other categories or qualities that might also prove “greenness”.
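The gap between sufficient and necessary can be checked mechanically. Here’s a minimal Python sketch (my own illustration, not anything from formal logic libraries) that enumerates every truth assignment and finds the one where both premises of Denying the Antecedent hold, yet the conclusion fails:

```python
from itertools import product

def implies(p, q):
    """Material conditional: 'if p then q' is false only when p is true and q is false."""
    return (not p) or q

# Find assignments where the premises "if A then B" and "not A" both hold,
# but the conclusion "not B" fails (i.e. B is still true).
counterexamples = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if implies(a, b) and not a  # both premises hold...
    and b                        # ...yet B is true anyway
]

print(counterexamples)  # [(False, True)]: not grass, but green all the same
```

The single counterexample, A false and B true, is exactly the non-grass emerald: everything the premises assert is satisfied, and the emerald is green anyway.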

Like all fallacies, Denying the Antecedent is easy to see in formal examples, but much trickier to spot in the real world. One place this fallacy is employed quite a bit is in attempts to treat one common condition as the only possible reason for a particular outcome. For instance, a common response to increased police powers or broad government surveillance is something along the lines of: “if you’re not a criminal, then why are you worried? You have nothing to hide.”

All criminals should fear surveillance.
But you’re not a criminal.
Therefore you shouldn’t fear surveillance.

Where this argument breaks down is that there are plenty of other reasons why one might reasonably fear increased surveillance or police powers. Being a criminal is a sufficient condition to fear government surveillance, but it is not necessary. That is to say, being a criminal is enough to cause a fear of surveillance, but there might be other reasons not admitted by the original syllogism.

I should note that there are instances in which denying the antecedent results in a valid logical argument. This is true only in the case where the condition stated is both necessary and sufficient. This condition is usually described in syllogisms using the phrase “if and only if”, often shortened to “iff”. For example:

Iff you are Ada Lovelace, then you invented computer programming.
You are not Ada Lovelace.
Therefore you didn’t invent computer programming.

This syllogism works, because there is exactly one condition under which you could have invented computer programming: you have to be Ada Lovelace. Since you’re not, you can’t possibly be the inventor of computer programming.
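A brute-force truth-table check (a minimal sketch of my own, in the same spirit as checking the fallacious form) confirms this: when the premise is a biconditional rather than a plain conditional, no assignment satisfies the premises while falsifying the conclusion.

```python
from itertools import product

def iff(p, q):
    """Biconditional: 'p if and only if q' holds exactly when p and q agree."""
    return p == q

# Look for assignments where "A iff B" and "not A" both hold,
# but the conclusion "not B" fails.
counterexamples = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if iff(a, b) and not a  # both premises hold...
    and b                    # ...yet B is true
]

print(counterexamples)  # []: denying the antecedent of an 'iff' is valid
```

The empty list is the whole point: with a necessary-and-sufficient condition, negating the antecedent really does negate the consequent.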

A trick for spotting this fallacy in the wild is to look for instances in which people portray some quality or condition as the only way a certain outcome can occur. When you notice someone making such an argument, take a moment to think of other conditions that might lead to that consequence, or to carefully write out the argument they’re making in a more formal way. Odds are, you’ll find that they’ve relied on a denied antecedent somewhere in their reasoning.

I’ve been meaning to post this for a while, but my friend Heather linked me to the excellent Your Logical Fallacy Is site. It provides clear, useful descriptions of most of the common informal fallacies. Useful for pointing people to in various online discussions.

My only real gripe is that it doesn’t cover formal fallacies (e.g. Affirming the Consequent), but it’s still a useful source for concise descriptions of whatever fallacy you catch yourself or an interlocutor making.

Two muffins are sitting in an oven, baking. One muffin turns to the other and says: “Is it just me, or is it getting really hot in here?” The second muffin turns to the first and says: “Holy crap, a talking muffin!”

The above joke is funny, largely because it commits a logical fallacy called a category error, and then immediately turns around and calls itself on it. It’s generally accepted that muffins can’t talk and so to ascribe them that ability is ridiculous. Ascribing a quality or set of qualities to an object that can’t possibly possess them is called a category error.

Another, less obvious example: many people say that anyone who plays the lottery is a fool. After all, the odds of winning are slim and the expected return is decidedly negative. It seems to me, however, that for most purchasers of lottery tickets, that argument is fallacious. After all, most of the people buying lottery tickets don’t actually expect to win. And vanishingly few of them actually expect to make money on the prospect in the long run. Most of them are playing the lottery in order to day-dream for a few days about what they would do if they won. (My personal fantasy is to pay for Logic 101, Computer Science 101, and Economics 101 classes for every person in the country.)

So to tell a lottery player that they’re making a bad gamble is a category error. You assume that lottery playing is a form of gambling, when really it’s a form of assisted daydreaming. The anti-lottery killjoy is ascribing to the lottery player motives they don’t actually possess.

Here’s a less abstract example. When the first photocopy machines came on the market, some of the early adopting executives took the time to proofread every copy the machine made, just in case it made transcription errors. They assumed (reasonably) that a photocopy machine was the same category of object as a human transcriptionist, and so ascribed it the ability to make typographical or transcription errors. This was a category mistake because photocopy machines use captured images to make copies, and don’t actually transcribe documents at all.

In day-to-day life, category errors occur frequently in fields that are poorly understood by the general populace, like science or technology. Many of the technological problems that people encounter come from bad analogies leading them to think that, for instance, a computer and a human brain are the same category of thing. The answer to “why did my computer do something so stupid” is, invariably, “because someone told it to.” Computers only ever do exactly what they’re told, and this leads to bad behavior at times. Brains, on the other hand, have no such constraints and, indeed, can’t have them because they aren’t artifacts like computers are.

Almost all category errors rely on explicit or implicit analogies. This is because all analogies are imperfect, even though we tend to treat them as flawless identities. Joel Spolsky calls this the “Law of Leaky Abstractions”. We assume our computer is like a brain, because that’s an analogy that serves us well some high percentage of the time. We end up expecting our computer to behave “intelligently” (meaning roughly: however I really wish it would behave) and get angry when it’s “dumb” enough to do something like download a virus or delete our files. Ascribing intelligence or lack thereof to a modern computer is a category error based on the leaky abstraction that computers are “sort of like” brains.1

Category errors contain a couple of important subspecies that I’ll probably talk more about in a future post. These are the Fallacy of Composition, and the Fallacy of Division. Basically these are category errors in which qualities held by a part are ascribed to the whole, or vice versa.

Like all informal fallacies, avoiding category errors can be difficult. The best way to avoid them is often to be rigorous about examining your suppositions. Whenever you ascribe qualities to a person or thing without direct evidence, or you find yourself making assumptions based on analogy, it’s a good time to step back and ask if those assumptions are warranted.

1 It’s actually an open question whether that will always be a category error. The question of general artificial intelligence has a number of unresolved technical and philosophical pieces. For the pro-AI side, see Turing, Minsky, Kurzweil, et al. For the anti-AI side see Searle, Lanier, Penrose, et al. For the dismissive “that’s kind of a stupid question” side of the argument, see the wonderful E. W. Dijkstra.

The politician’s syllogism, also known as the politician’s logic or the politician’s fallacy, is a logical fallacy of the form:

We must do something
This is something
Therefore, we must do this.

Allow me to present a similar argument:

“A cat has one more tail than no cat.
No cat has eight tails.
Therefore a cat has nine tails.”

These are both classic examples of the fallacy of Equivocation. Equivocation is when a speaker uses a word with multiple meanings without making clear which meaning is intended. Generally, the speaker will tend to use two different meanings interchangeably.

Let me show you what I mean. First, let’s turn our attention to my proof that all cats have nine tails. In this instance, the equivocated term is the word ‘no’. In the first premise it indicates “zero cats” and in the second premise it indicates “there does not exist a cat”. By using the same word for both concepts we cover up the fact that we’re making a giant leap between our two premises. This allows us to smuggle two disjoint premises in the side door before anyone can stop us.

In the Politician’s Syllogism, the equivocated term is much more subtle. The word “something” is used in the first premise to indicate “a particular, but unknown course of action” and, in the second premise, to mean “any course of action at all”.

To be more formal, the first premise asserts “there exists a course of action that we must take.” The second premise asserts that the course of action at hand is “a member of the set of all possible courses of action”. When rewritten this way, the syllogism becomes a non sequitur and it becomes obvious that the politician’s conclusion is far from proven:

“There exists a course of action that we must take.
This is a course of action.
Therefore we must do this.”
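A small sketch, using hypothetical action names of my own invention, of why mere membership in the set of possible actions doesn’t make a proposal the required one:

```python
# The set of all possible courses of action, and the one we actually must take.
# (The arguer knows only that a required action exists, not which one it is.)
actions = {"do_nothing", "pass_bill_x", "pass_bill_y"}
required = "pass_bill_y"

proposal = "pass_bill_x"     # "This is something": merely a member of the set
print(proposal in actions)   # True  -- premise two holds...
print(proposal == required)  # False -- ...yet "we must do this" doesn't follow
```

Being “something” puts the proposal in a set with every other conceivable action, including doing nothing at all; the equivocation hides the missing step that would link the proposal to the action we actually need.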

This lack of linkage between the proposed course of action in the second premise and the correct course of action in the first is hidden by the fact that the word “something” is used in each case. In this way, Equivocation is often used to hide particular logical leaps in invalid arguments. One last example demonstrates this, stolen straight from the Wikipedias:

A feather is light.
What is light cannot be dark.
Therefore, a feather cannot be dark.

In the case of the cats and the feathers, this little bit of sophistry results only in freakish nine-tailed felines and a suspicious lack of swans. In the case of the politicians, however, it results in horrible, knee-jerk legislation.

An ad hominem fallacy occurs when someone attempts to refute an argument by attacking the person making the argument, rather than the argument itself. For instance:

“My opponent claims that the Earth is round, but he’s also a convicted Horse-Shaver, so we can’t very well trust anything he has to say. Therefore, the Earth is flat.”

Or, more generally:

“My opponent says that A is the case.
But my opponent has unpleasant quality Y.
Therefore A is false.”

This fallacy is extremely popular in politics and frat house arguments. (Which might be a distinction without a difference.)

For more excellent information about what is or isn’t an ad hominem, check out this article by Stephen Bond. Bond makes the excellent point (which I will now forever onward call Bond’s Law) that a mere insult is not sufficient for an ad hominem fallacy. Rather, the insult must be used as evidence that a particular argument is incorrect.

This is a textbook case of “Affirming the Consequent“. This occurs when an inference is improperly inverted. Saying that A implies B asserts the fact that A is a sufficient condition of B. That is to say when A is true, B must also always be true. It is not, however, a necessary condition. In other words, there are other conditions that might make B true.

More simply put: A implies B doesn’t mean that B implies A.
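That one-way-street quality is easy to verify by brute force. This sketch (again my own illustration, nothing canonical) finds the truth assignment where “A implies B” holds but its converse fails:

```python
from itertools import product

def implies(p, q):
    """Material conditional: 'if p then q' is false only when p is true and q is false."""
    return (not p) or q

# Assignments where the forward inference holds but the reversed one does not.
counterexamples = [
    (a, b)
    for a, b in product([True, False], repeat=2)
    if implies(a, b) and not implies(b, a)
]

print(counterexamples)  # [(False, True)]: green, but not grass
```

The surviving assignment, A false and B true, is every green thing that isn’t grass, shaved horses included.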

To demonstrate more clearly, here’s another example of the fallacy:

If it is grass, then it is green.
I have shaved this horse and painted him green.
Therefore, this horse must be grass.

Moral of the story? Simple inference is a one way street. If you try to go the wrong way down an inference, you’ll end up in a horrible flaming wreck of bad logic.

Also: knowing logic is important, even if it does occasionally damage your sense of humor.
