Suppose I find a barrel, sealed at the top, but with a hole large enough for a hand. I reach in, and feel a small, curved object. I pull the object out, and it's blue—a bluish egg. Next I reach in and feel something hard and flat, with edges—which, when I extract it, proves to be a red cube. I pull out 11 eggs and 8 cubes, and every egg is blue, and every cube is red.

Now I reach in and I feel another egg-shaped object. Before I pull it out and look, I have to guess: What will it look like?

The evidence doesn't prove that every egg in the barrel is blue, and every cube is red. The evidence doesn't even argue this all that strongly: 19 is not a large sample size. Nonetheless, I'll guess that this egg-shaped object is blue—or as a runner-up guess, red. If I guess anything else, there are as many possibilities as distinguishable colors—and for that matter, who says the egg has to be a single shade? Maybe it has a picture of a horse painted on.
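As a side illustration (my own, not part of the essay's argument): one simple way to put a number on this sort of guess is Laplace's rule of succession, which estimates the chance of another success after observing `s` successes in `n` trials as `(s + 1) / (n + 2)`. A minimal sketch:

```python
# Laplace's rule of succession: after s successes in n trials, estimate
# the probability of another success as (s + 1) / (n + 2).
# Illustrative only -- the essay's point is that the guess is a guess,
# not that this is the "right" model of the barrel.

def rule_of_succession(successes: int, trials: int) -> float:
    """Posterior mean of a Bernoulli rate under a uniform prior."""
    return (successes + 1) / (trials + 2)

# 11 egg-shaped objects drawn so far, all of them blue:
p_blue = rule_of_succession(11, 11)
print(round(p_blue, 3))  # prints 0.923, i.e. 12/13
```

So even this toy model says "probably blue" while leaving real room for surprise, which matches the hedged guess in the text.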

So I say "blue", with a dutiful patina of humility. For I am a sophisticated rationalist-type person, and I keep track of my assumptions and dependencies—I guess, but I'm aware that I'm guessing... right?

But when a large yellow striped feline-shaped object leaps out at me from the shadows, I think, "Yikes! A tiger!" Not, "Hm... objects with the properties of largeness, yellowness, stripedness, and feline shape, have previously often possessed the properties 'hungry' and 'dangerous', and thus, although it is not logically necessary, it may be an empirically good guess that aaauuughhhh CRUNCH CRUNCH GULP."

The human brain, for some odd reason, seems to have been adapted to make this inference quickly, automatically, and without keeping explicit track of its assumptions.

And if I name the egg-shaped objects "bleggs" (for blue eggs) and the red cubes "rubes", then, when I reach in and feel another egg-shaped object, I may think: Oh, it's a blegg, rather than considering all that problem-of-induction stuff.
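A toy sketch of what the word "blegg" buys you (my own illustration, with hypothetical names; nothing here is from the essay): once an object is filed into a category from partial information, the category's other properties come along for free, without re-running the induction.

```python
# Hypothetical category table: each label bundles observed and inferred
# properties together, standing in for the brain's silent categorization.
CATEGORY_PROPERTIES = {
    "blegg": {"shape": "egg", "color": "blue"},
    "rube": {"shape": "cube", "color": "red"},
}

def categorize_by_feel(felt_shape: str) -> str:
    # Categorize from partial information: only the shape is felt by hand.
    return "blegg" if felt_shape == "egg" else "rube"

def predict_color(felt_shape: str) -> str:
    # The hidden inference: category membership predicts unseen properties.
    return CATEGORY_PROPERTIES[categorize_by_feel(felt_shape)]["color"]

print(predict_color("egg"))   # prints blue
print(predict_color("cube"))  # prints red
```

The lookup is the point: the "problem-of-induction stuff" happened once, when the table was built, and the word now hides it.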

It is a common misconception that you can define a word any way you like.

Yet the brain goes on about its work of categorization, whether or not we consciously approve. "All humans are mortal, Socrates is a human, therefore Socrates is mortal"—thus spake the ancient Greek philosophers. Well, if mortality is part of your logical definition of "human", you can't logically classify Socrates as human until you observe him to be mortal. But—this is the problem—Aristotle knew perfectly well that Socrates was a human. Aristotle's brain placed Socrates in the "human" category as efficiently as your own brain categorizes tigers, apples, and everything else in its environment: Swiftly, silently, and without conscious approval.

Aristotle laid down rules under which no one could conclude Socrates was "human" until after he died. Nonetheless, Aristotle and his students went on concluding that living people were humans and therefore mortal; they saw distinguishing properties such as human faces and human bodies, and their brains made the leap to inferred properties such as mortality.

Misunderstanding the working of your own mind does not, thankfully, prevent the mind from doing its work. Otherwise Aristotelians would have starved, unable to conclude that an object was edible merely because it looked and felt like a banana.

So the Aristotelians went on classifying environmental objects on the basis of partial information, the way people had always done. Students of Aristotelian logic went on thinking exactly the same way, but they had acquired an erroneous picture of what they were doing.

If you asked an Aristotelian philosopher whether Carol the grocer was mortal, they would say "Yes." If you asked them how they knew, they would say "All humans are mortal, Carol is human, therefore Carol is mortal." Ask them whether it was a guess or a certainty, and they would say it was a certainty (if you asked before the sixteenth century, at least). Ask them how they knew that humans were mortal, and they would say it was established by definition.

The Aristotelians were still the same people, they retained their original natures, but they had acquired incorrect beliefs about their own functioning. They looked into the mirror of self-awareness, and saw something unlike their true selves: they reflected incorrectly.

Your brain doesn't treat words as logical definitions with no empirical consequences, and so neither should you. The mere act of creating a word can cause your mind to allocate a category, and thereby trigger unconscious inferences of similarity. Or block inferences of similarity: if I create two labels, I can get your mind to allocate two categories. Notice how I said "you" and "your brain" as if they were different things?

Making errors about the inside of your head doesn't change what's there; otherwise Aristotle would have died when he concluded that the brain was an organ for cooling the blood. Philosophical mistakes usually don't interfere with blink-of-an-eye perceptual inferences.

But philosophical mistakes can severely mess up the deliberate thinking processes that we use to try to correct our first impressions. If you believe that you can "define a word any way you like", without realizing that your brain goes on categorizing without your conscious oversight, then you won't make the effort to choose your definitions wisely.

Comments (21)

It is a common misconception that you can define a word any way you like.

Incorrect. It is not a misconception. There are consequences of choosing to define a word that can lead to error if they are ignored, but that does not constrain the definition.

you can't logically classify Socrates as human until you observe him to be mortal.

Also incorrect. Mortality can be a trait possessed by all humans, yet not be needed to identify something as human. If Socrates meets all the necessary criteria for identification as human, we do not need to observe his mortality to conclude that he is mortal.

It is a trivial objection to say that the definition of human might not reflect the nature of the world. That is the case with all definitions: we can label concepts as we please, but it requires justification to assert that the concepts are present in reality.

Eliezer said: "Your brain doesn't treat words as logical definitions with no empirical consequences, and so neither should you. The mere act of creating a word can cause your mind to allocate a category, and thereby trigger unconscious inferences of similarity."

What alternative model would you propose? I'm not quite ready yet to stop using words that imperfectly place objects into categories. I'll keep the fact that categories are imperfect in mind.

I really don't mean this in a condescending way. I'm just not sure what new belief this line of reasoning is supposed to convey.

I think I would agree with Charlie Munger that more mistakes have been made from inferential ("run from the tiger") shortcuts than from the use of logic. Such shortcuts as proximity bias, following perceived leaders, doing things because people around us are doing them, loving similar-looking people and hating different-looking people, and similar errors are most likely caused by evolutionary hard-wiring, not by philosophical ponderings. I have dedicated a section of my blog to Munger here:
http://www.blogger.com/posts.g?blogID=36218793&searchType=ALL&txtKeywords=&label

Reactions to 500lb stripy feline things jumping unexpectedly come from pre-verbal categorisations (the 'low road', in Daniel Goleman's terms), so have nothing to do with word definitions.
The same is true for many highly emotionally charged categorisations (e.g. for a previous generation, person with skin colour different from mine....).
Words themselves do get their meanings from networks of associations. The content of these networks can drift over time, for an individual as for a culture. Words change their meanings.
A deliberate attempt to change the meaning of a word by introducing new associations (e.g. via the media) can be successful. Changes in the meanings of political labels, or the associations with a person's name, are good examples.
Whether the direct amygdala circuit can be reprogrammed is a different matter. Certainly not as easily as the neocortex.
If you lived in the world of Calvin and Hobbes for six months, would you start to instinctively see large stripy feline things jumping out at you unexpectedly as an invitation to play?

I suppose I should add, for those who are really stuck in maths or formal logic, that changing the definition of a symbol in a formal system is not the same thing as changing the meaning of a word in a language. In fact you can't, individually and as a decision of will, change the meaning of a word in a language. It either changes, as per my previous comment, or it doesn't.

In fact you can't, individually and as a decision of will, change the meaning of a word in a language.

New phrases are coined constantly, and people change the meanings of existing words also: 'gay' being a good example, as it's changed twice in recent history. Presumably there was some person who started that particular definition-shift; does that not count as "individually and as a decision of will"?

We are very imprecise in this way because it is very rare that we split the sign into signified and signifier. If you know that a 'Tiger' thing can kill, it is perhaps best not to worry about the signification of the form and the entropy of its relations; it's best to run.

I was reading Nietzsche and found something striking. Compare this, from Eliezer:

But when a large yellow striped feline-shaped object leaps out at me from the shadows, I think, "Yikes! A tiger!" Not, "Hm... objects with the properties of largeness, yellowness, stripedness, and feline shape, have previously often possessed the properties 'hungry' and 'dangerous', and thus, although it is not logically necessary, it may be an empirically good guess that aaauuughhhh CRUNCH CRUNCH GULP."

and this, from Nietzsche:

Innumerable beings who made inferences in a way different from ours perished; for all that, their ways might have been truer. Those, for example, who did not know how to find often enough what is "equal" as regards both nourishment and hostile animals—those, in other words, who subsumed things too slowly and cautiously—were favored with a lesser probability of survival than those who guessed immediately upon encountering similar instances that they must be equal. [ . . . ] The course of logical ideas and inferences in our brain today corresponds to a process and a struggle among impulses that are, taken singly, very illogical and unjust. We generally experience only the result of this struggle because this primeval mechanism now runs its course so quickly and is so well concealed. (The Gay Science, Section 111)

Nietzsche doesn't have a modern grasp of how evolution works, but his intuitions on cognition were far sharper than those of any of his contemporaries. That's partially why I think he still has something to offer.

I think this is exciting. I'm going to start making my own words for groups of things. I'm a Java/.NET programmer, so I'm used to object-oriented thinking, and it's natural for me to group things that may be used again!