The War on Mediocrity

Rationality

April 19, 2010

I've long sought to be rational, but I've recently found that I also
want to be irrational. Rationality has its benefits — you get to know
the truest possible nature of reality, you get to be right about stuff
— but it seems to take you only so far. And, yeah, I know what you're
thinking: "Woah, slow your roll, new ager." But hear me out. I'd agree
that rationality is infinitely good at accomplishing what rationality
can accomplish, which I suppose is a tautology. But the more I consider
irrationality, the more it, too, seems to offer.

When I think
about people I admire, whose lives and careers I respect like whoa —
and I think about them a lot — I face the unavoidable conclusion that
almost all of them are, in some respect, irrational. Worse, the very actions I admire them for taking
are irrational. I thought rationality was pretty sweet until I realized
that all my heroes, and many of my friends' as well, got famous by
being irrational.

August 26, 2009

When Less Wrong's call to compile a reading list for new rationalists went out, contributor djcb responded by suggesting The Mind's I: Fantasies and Reflections on Self and Soul,
a compilation of essays, fictions and excerpts "composed and arranged"
by Douglas Hofstadter and Daniel Dennett. Cut to me peering guiltily
over my shoulder, my own copy sitting unread on the shelf, peering back.

The
book presents Hofstadter and Dennett's co-curation of 27 pieces, some
penned by the curators themselves, meant to "reveal" and "make vivid" a
set of "perplexities," to wit: "What is the mind?" "Who am I?" "Can
mere matter think or feel?" "Where is the soul?" Two immediate concerns
arise. First, The Mind's I's 1981 publication date gives it
access to the vast majority of what's been thought and said about these
questions, but robs it of any intellectual progress toward the
answers made in the nearly three decades since. (This turns out not to
be an issue, as most of the answers seem to have drawn no closer in the
1980s, 1990s or 2000s.) Second, those sound suspiciously similar to
questions hazily articulated by college freshmen, less amenable to
"rational inquiry" than to "dorm furniture and bad weed." They don't
quite pass the "man test," an reversal of the fortune cookie "in bed" game: simply tack "man" onto the beginning of each question and see who laughs. "Man, who am I?" "Man, where is the soul?" "Man, can matter think or feel?"

Hofstadter
and Dennett's fans know, however, that their analyses rise a cut above,
engaged as they are in the admirable struggle to excise the
navel-gazing from traditionally navel-gazey topics. The beauty is that
they've always accomplished this, together and separately, not by
making these issues less exciting but by making them more so.
Their clear, stimulating exegeses, explorations and speculations brim
with both the enthusiasm of the thrilled neophyte and the
levelheadedness of the seasoned surveyor. They even do it humorously,
Hofstadter with his zig-zaggy punniness and Dennett with his wit that
somehow stays just north of goofy. Thus armed, they've taken on such
potentially dangerous topics as whether words and thoughts follow
rules, how the animate emerges from the inanimate (Hofstadter's rightly
celebrated Gödel, Escher, Bach: An Eternal Golden Braid) and consciousness (most of Dennett's career), on the whole safely.

But
obviously this is not a "pure" (whatever that might mean)
Hofstadter-Dennett joint; rather, their editorial choices compose one
half and their personal commentaries — "reflections," they banner them
— on the fruits of those choices compose the other. Nearly every
selection, whether a short story, article, novel segment or dialogue,
leads into an original discussion and evaluation by, as they sign them,
D.R.H. and/or D.C.D. They affirm, they contradict, they expand, they
question, they veer off in their own directions; the reflections would
make a neat little book on the topics at hand by themselves.

Terribly inelegant though this strategy is, perhaps I'll cover the pieces one by one:

The first section, on self and identity, opens strong with Jorge Luis
Borges, for my money the finest short fictionalist of ideas... ever,
probably. His well-known "Borges and I" plays with the distinction
between Borges the man and Borges the public author, treating the two
as ontologically distinct. Even if that idea has passed into the realm
of old hat, the story containing it holds up by the razor-sharpness of
its language, even in translation: "It would be an exaggeration to say
that ours is a hostile relationship: I live, let myself go on living,
so that Borges may contrive his literature, and this literature
justifies me."

The mystic Douglas Harding, in "On
Having No Head", recounts the moment he discovered he had no head. As
he describes the various consequences of this realization, the essay
becomes essentially a riff on the fact that it's impossible for anybody
to directly see their own, physical head and thus that they know of its
existence that much less definitively. At some point, this wears out
its welcome; Harding stretches an intellectual snack into a dinner,
following the meal with a coda about how, aw, it's all just semantic
confusion over the verb to see.

Harold
Morowitz's "Rediscovering the Mind" has not, it must be said, stuck
deeply in my own. My forgetfulness may be due in part to the fact that
reductionist examination of the mind and the challenges such an
approach faces have entered, and remained in, common discourse since
the article saw Psychology Today publication in 1980, so its
ideas couldn't strike me with what I assume to be the intended force of
novelty. As a brief introduction to the problems of reductionism and
the mind, though, I imagine it's pretty effective.

Kicking off the section on the concept of the soul, Alan Turing's groundbreaking 1950 Mind
article "Computing Machinery and Intelligence" proposes his
now-eponymous test for machine intelligence. One might assume the years
have been especially unkind to Turing's (at least nominally)
technology-minded essay and Dennett and Hofstadter's accompanying
commentary, but no, machine intelligence remains elusive, and thus both
texts merit continued digestion.

Hofstadter extends
the Turing talk with "The Turing Test: A Coffeehouse Conversation",
setting up an intellectual triangle between "Chris, a physics student;
Pat, a biology student; and Sandy, a philosophy student." (The unisex
names turn out to fold into one of the discussion's main points, though
I found keeping everyone straight a tad difficult.) The three throw
down their collective six cents on the possibilities, implications and
validity of the famous test. While illuminating, the piece spreads its
content way too thin, its 23 pages littered with conversational
detritus: "That's a sad story." "Good question." "How so?" But to be
fair, these problems hamper most written dialogues, as does the
reader's sneaking suspicion that they're being somehow led down the
garden path. As dialogues — trialogues? — go, though, this one serves a
nutritional portion.

"The Princess Ineffabelle", the
first of the collection's three imaginings by Polish science fiction
writer Stanislaw Lem, envisions a sort of proto-virtual-reality device
that can load up an entire era and its people on punch cards (!) and
simulate it with fine-grained precision. A king, seeking a princess
extant only within the machine's world, inquires as to how he might go
about having himself digitized and inserted into said world. But the
digital king wouldn't really be the king king, right? Or would that matter?

Terrell Miedaner's eerie "The Soul of Martha, A Beast" envisions a
courtroom demonstration wherein a chimpanzee is wired to a device that
translates its brain's neural patterns into a simple English
vocabulary. A discussion ensues about whether the animal, "uttering"
strings like "Hello! Hello! I Martha Happy Happy Chimp," truly merits
the designation "intelligent," after which the researcher puts his
charge to death:

As the unsuspecting chimpanzee placed the
poisoned gift into her mouth and bit, Belinsky conceived of an
experiment he had never before considered. He turned on the switch.
"Candy Candy Thank You Belinsky Happy Happy Martha."

Then her voice stopped of its own accord. She stiffened, then relaxed in her master's arms, dead.

But
brain death is not immediate. The final sensory discharge of some
circuit within her inert body triggered a brief burst of neural
pulsations decoded as "Hurt Martha Hurt Martha."

Nothing
happened for another two seconds. Then randomly triggered neural
discharges no longer having anything to do with the animal's lifeless
body sent one last pulsating signal to the world of men.

"Why Why Why Why —"

A soft electrical click stopped the testimony.

The
operative concept, discussed in Hofstadter's reflection, emerges as the
determination of what degree of linguistic evidence, if any, indicates
the presence of "intelligence," "consciousness," a "soul" — pick one or
more of your favorite fuzzily-defined concepts and attempt to determine
what separates them. All the book's pieces present more questions than
answers, and Miedaner's first especially so. Still, it stays with you,
as does his next piece...

"The Soul of the Mark III
Beast", in which a lawyer invites a timid woman to "kill" a robot. The
mechanical creature, a steely cross between a mouse and a beetle,
"eats" electrical current from the wall, "flees" its pursuer's hammer
blows and "bleeds" oil when damages. These points of superficial
congruence with the animal kingdom seriously freak the woman out, and
she's really got to maintain to finish the job. In short: the
fuzzy-to-nonexistent boundary between the sentient and the nonsentient,
illustrated (in prose).

Allen Wheelis' "Spirit", which
heads the section on the mind's physical foundation (also known as the
brain), comes off as relatively insubstantial but addresses concerns
certain readers may harbor. To wit: it feels as if we humans possess
some ineffable "spirit." But it's tough to pin down, though it may
animate the rest of the natural world as well. Hofstadter boils it down
skillfully in the reflection: "Wheelis portrays the eerie, disorienting
view that modern science has given us of our place in the scheme of
things. Many scientists, not to mention humanists, find this a very
difficult view to swallow and look for some kind of spiritual essence,
perhaps intangible, that would distinguish living beings, particularly
humans, from the inanimate rest of the universe. How does anima come
from atoms?" A Big Question indeed.

"Selfish Genes and Selfish Memes" is a selection from Richard Dawkins' The Selfish Gene. If you have not read this book, minimize your browser and do so. I'll wait.

"Prelude... Ant Fugue" is a selection from Douglas Hofstadter's Gödel, Escher, Bach.
If you have not read this book, minimize your browser and do so. I'll
wait. (It's the dialogue comparing the human brain to an ant farm,
which I still find ever-so-slightly mindblowing to this day.)

Our mental hardware undergoes the severest possible parting-out in Arnold Zuboff's "The Story of a Brain",
a fiction and thought experiment — in several senses of the term —
where a group of scientists remove the healthy brain from a young man's
otherwise abnormally decaying body, stick it in a vat and give it
"experiences" by way of electrical stimulation. But then a drunken
night watchman accidentally separates the brain's hemispheres, damage
the scientists attempt to repair with remote communication devices
allowing neurons from one half to stimulate the other's. Over the next
thousand years, thanks to widespread scientific-community interest,
fiddly readjustment of the apparatus and a general shortage of brains
in vats, each of this brain's individual neurons finds its way, step by
logical step, to a separate laboratory, all supposedly linked together.
And the labs occasionally replace their neurons. It's the brain as
Abraham Lincoln's proverbial original axe: the blade's been replaced
once and the handle twice. At what exact point can we no longer call it
a brain, as we normally understand the concept? As with many of the
other concepts on which the book touches, discrete boundaries remain
elusive.

Daniel Dennett's "Where Am I?" leads into the section on mind-as-software. (See also the video dramatization!)
The story follows a fictionalized version of Dennett himself as he's
hired on to a secret government project to dig up a brain-destroying
underground warhead. Specifically, Dennett's meant to go down there and
dig it up by hand. Removing his own brain and installing it safely in a
vat, the government dudes set it up so Dennett can remotely control his
own body, in a way, but feel, more or less — he compellingly describes
the newly-introduced little technical quirks — as if he's still a brain
and body organically united. But who's the "real" Dennett? Shades of
first-year philosophy classes' rhetorical questions about who you'd be
if your divided brain was split between two bodies, I know, but Dennett
presents it in a delightfully entertaining way, as is his wont.

With "Where Was I?",
David Hawley Sanford takes another angle on Dennett's concept, positing
a different government operation — again, top-secret — to develop
devices that transfer remotely-gathered sense experiences so accurately
to the local user's body that the meaning of reference to his actual
location — and how one goes about determining his actual location —
grows muddled, questionable, a matter of unsettlable debate.

The next chapter excerpts Justin Leiber's Beyond Rejection,
a sci-fi novel about a murdered man who wakes up to find his brain
loaded — via brain-backup tapes, a standard piece of personal
technology in Leiber's imagined future — into a new body:
specifically, a woman's. (More specifically, a woman with a tail's. The
tail is not explained, at least in the reprinted segment.) The ten
pages include a suitably creepy sequence wherein the protagonist wakes
up, disoriented due to incomplete brain-body synchronization and
disturbed by the two new "dead cancerous mounds" of "disconnected,
nerveless jelly" — breasts, in other words — he'll have to learn to
live with. While not especially striking technically or biologically,
the passage definitely evokes the right set of feelings.

A selection from Rudy Rucker's slightly goofy-sounding novel Software illustrates, after a fashion, the questions of what specific component or components, if any, drive consciousness, and what self-consciousness
has to do with that consciousness. And, as Dennett's reflection
clarifies, if a supposedly conscious entity's consciousness were to
cease existing, how would we know?

Christopher
Cherniak's short story "The Riddle of the Universe and Its Solution"
posits a computer program whose output, when viewed in full by a human,
forces that human's brain into an infinite loop — "perhaps even the
ultimate Zen state of satori," Hofstadter reflects — "locking
it up" for good. Before slipping into this coma, each victim utters the
word "Aha!" This analogizes the human brain — and only the human brain,
since the program, "the Gödel sentence for human Turing machine," is
shown not to induce the coma in apes — to an actual computer in terms
of operating with enough logical strictness to willfully — loaded word,
I know, and so do all the authors involved — incapacitate itself.
Hofstadter ties this into the broader topic of self-referential loops
and what they might already have to do with the mind.

The book's second Stanislaw Lem selection, "The Seventh Sally or How
Trurl's Own Perfection Led to No Good", opens the section on created
selves and free will. I found it just slightly too weirdly-written to
draw much from directly, but Dennett and Hofstadter's much clearer
reflection — no pun intended — drops a few intriguing thoughts about
looking for "souls" inhabiting simulated worlds.

The
third Lem piece, "Non Serviam", comes immediately after. Though
thematically similar to its predecessor — the nature of simulation, the
parallels between simulated world and the non-simulated world — it's
also slightly less opaque. (Slightly less.)

Raymond Smullyan's dialogue "Is God a Taoist?" has a mortal pleading with his creator to strip him of free will:

GOD: Why would you wish not to have free will?

MORTAL: Because free will means moral responsibility, and moral responsibility is more than I can bear.

GOD: Why do you find moral responsibility so unbearable?

MORTAL: Why? I honestly can’t analyze why; all I know is that I do.

GOD:
All right, in that case suppose I absolve you from all moral
responsibility, but still leave you with free will. Will this be
satisfactory?

MORTAL (after a pause): No, I am afraid not.

And
it goes on like this, the mortal desperately trying to reason with the
god and find a means of being freed from what's bothering him about
morality, goodness, responsibility and choice. Eventually, matters
either evolve or devolve, depending upon how you look at it, to whether
the god or the mortal exists, how one can know the other exists,
whether the god is the mortal or the mortal the god, who's on first,
what's on second and so on and so forth. In his reflection, Hofstadter
references an apropos Marvin Minsky quote: "Logic doesn’t apply to the
real world."

A second dose of Borges comes in "The
Circular Ruins", the story of an isolated wizard who dreams up an
actual human being. When he's imagined this potential boy's every
possible detail, he requests that the god Fire create him. Fire
complies, incarnating the wizard's vision, but in such a way that he's
still not quite real enough to be burned by fire (the element). When the
wizard walks into a fire, he finds that he doesn't burn — and thus is,
himself, someone else's dream. We're back in Intro to Philosophy's
territory, in a way: are you dreaming right now, or are you not? How do
you know? "Is this philosophical play with the ideas of dreaming and
reality just idle?" Dennett asks. "Isn't there a no-nonsense
'scientific' stance from which we objectively distinguish between the
things that are really there and mere fictions? Perhaps there is, but
then on which side of the divide do we put ourselves? Not our physical
bodies, but our selves?" The answers appear to be "nah" and "we don't
know," or maybe "mu."

John Searle's "Minds,
Brains, and Programs" searches for the seat of intelligence with what's
now called the "Chinese room" thought experiment, in which one imagines
a human sealed in a room under whose door an unseen interlocutor passes
slips of paper with sentences written in Chinese. With no understanding
of the Chinese language, the man in the room follows a series of
mechanistic procedures to write out a reply on another slip and pass it
back under the door. Repeat. If the fellow on the door's other side
believes he's conducting a conversation in writing with a genuine
Chinese speaker — the rule-following scribbler inside having thus
passed a sort of Turing test — who's to say that somewhere in the man,
the rules and the slips of paper, there is not a genuine
understanding of Chinese? But of course we find that ridiculous, so
there's got to be something within the brain that we can use as a line
of demarcation. Nothing we've identified yet or that may be
identifiable at all — but something. Dennett and Hofstadter
don't find this line of thought convincing, identifying a few
sleight-of-hand points in their reflection, but I didn't feel it a
waste of time to hear the notion proposed. Proposed rather
unconvincingly, sure, but quite articulately! (More so than my summary
gives it credit for, certainly.)

The brief but piquant
"An Unfortunate Dualist" by Raymond Smullyan envisions a devout dualist
in great pain. Though he'd like to kill himself, he fears hurting
others, committing moral crime and/or enduring punishment in the
afterlife. Fortunately, he finds a drug that destroys only the soul,
leaving the body intact and operational as before. A friend secretly
injects him with the drug the night before he goes out to pick up a
dosage himself. Upon ingesting it of his own volition, the dualist, of
course, feels no different: disappointed, he believes himself to still
possess a soul and endure suffering. "Doesn't all this suggest,"
Smullyan asks, "that perhaps there might be something just a little
wrong with dualism?" Indeed, but who's really a dualist anymore?

Thomas Nagel answers his essay's title question "What Is It Like to Be
a Bat?" with the argument that we can't know, because we're humans,
inescapably, and they're bats. So we could well ask what it would be
like for a human to be a bat — what it would be like to have
our human senses and perceptions transformed into human senses and
perceptions that more closely resemble what we think bats have — but
not what it's like to simply be a bat. Hofstadter takes this
pretty far in his reflection, asking such questions as "What is it like
to hear one's native language without understanding it?" and "What is
it like to hate chocolate (or your personal favorite flavor)?" Fans
will enjoy his punning of Nagel's title, "What is it like to bat a bee?
What is it like to be a bee being batted? What is it like to be a
batted bee?" (Illustration of baseball player and bee included.)

Completing the Smullyan hat trick, "An Epistemological Nightmare"
depicts a man's consultations with an "experimental epistemologist."
Infatuated with his latest piece of in-office gear, a "cerebroscope"
that supposedly reads the patient's every neuron, the epistemologist
puts the poor fellow through the wringer by using the device to reject
his every statement about his beliefs, his beliefs about his beliefs,
and his beliefs about his beliefs about his beliefs. Like the book's
first Smullyan selection, this dialogue isn't without its
Abbott-and-Costello elements: the absurdity reaches such a height that
the epistemologist must eventually forsake the machine in order to
break the loop he's created, by virtue of the very trust he's placed in
it, between the cerebroscope and his own brain.

"A Conversation With Einstein's Brain" is a selection from Douglas Hofstadter's Gödel, Escher, Bach.
If you have not read this book, minimize your browser and do so. I'll
wait. (It's the dialogue, even better than the one about the ant farm,
that proposes the "copying" of a brain into book form, and then letting
the books "interact.")

The reflection-less "Fiction"
by Robert Nozick wraps the book. The piece at first seems to be
narrated by a fictional character: indeed, its first sentence is "I am
a fictional character." But this character goes on to assert that the
reader, too, is a fictional character, and that this piece one
fictional character reads and another narrates is, in fact, a work of
non-fiction, as are all works — works within this fictional world in
which we live, that is. But who, then, wrote (or currently writes) our
world?

As a book for new rationalists, The Mind's I
would be best offered as a jolt, a set of mind-stretching exercises
that clear the road for the long, incompletable journey to rationality.
A reader expecting any sort of instruction on how to think
rationally will find a dry well, but that's not the point; these 27
pieces and their commentaries illustrate that it's possible in the
first place to do some thinking in the borderlands of such everyday
concepts as brain, mind, soul, self, I, you, intelligence,
sentience, etc. Perhaps the same explanation justifies
low-level philosophy courses and the bull sessions students hold in the
wee hours after them, but Hofstadter and Dennett manage to use material
a great deal more entertaining, more exotic and altogether smarter.
Would that we could get a revised and expanded update.

August 05, 2009

Health care has been a big debating issue in the United States for much
longer than I can remember, and this year it's consumed even more
public discourse bandwidth than usual. Rare is the day I hear or read
no arguments about health care, and I don't even seek them out.
Observing these discussions, I've learned nothing about health care but
quite a bit about debate.

There's very little rationality to be
found in these conflicts, which tend to devolve into emotional,
ideological shouting matches. Rather, there's typically a couple grains
of rationality, but no more; participants seem to refuse to think
critically about every aspect of the situation. As a result, there's a
lot of talking past one another, and even less gets accomplished than
in regular political arguments, which don't accomplish much.

My
low-res description of these health care debates breaks the debaters
into two sides: "pro-market" and "anti-market," depending upon their
preferred means of health care delivery. (Others argue on behalf of
whatever configuration happens to obtain in their own country, but that
appears to be little more than sublimated nationalism.) The labels
represent not extreme ends of a spectrum — practically nobody in the
mainstream plumps for thoroughgoing privatization or nationalization —
but humps of a bimodal graph. Here are the lessons I've taken from
their debates:

If you profess to be for or against a concept, know more about that concept than its name.
There are sound reasons to condemn socialism if you know what socialism
is. If you don't, there aren't. The same goes for free markets, where
capitalism's war of all against all and redness in tooth and claw have
sunk into the realm of desperate rhetorical cliché. And don't forget
the important question of whether what you're talking about actually is socialism or a free market, which brings me to implore you...

Do not confuse abstractions with realities, and vice versa.
Neither straight-on socialism nor pure free markets really exist in the
United States, so it's not exactly a rich field for examples of either.
Some anti-market debaters hold up the U.S.'s ad hoc health care
structure as an example of what's wrong with free health care markets,
but few of its observable mechanisms suggest anything like a truly
libertarian free market. It's similarly misleading to invoke the
"socialism" of a Sweden or a Finland or what have you while leaving
"socialism" undefined. I tend to agree with Paul Graham on this count:

I
think the fundamental question is not whether the government pays for
schools or medicine, but whether you allow people to get rich.

[ ... ]

Any
country that makes this choice ends up losing net, because new
technology tends to be developed by people trying to make their
fortunes. It's too much work for anyone to do for ordinary wages. Smart
people might work on sexy projects like fighter planes and space
rockets for ordinary wages, but semiconductors or light bulbs or the
plumbing of e-commerce probably have to be developed by entrepreneurs.
Life in the Soviet Union would have been even poorer if they hadn't had
American technologies to copy.

Finland is sometimes given as an
example of a prosperous socialist country, but apparently the combined
top tax rate is 55%, only 5% higher than in California. So if they seem
that much more socialist than the US, it is probably simply because
they don't spend so much on their military.

Americans on the anti-market side also display a persistent drift between, say, Sweden itself and their idea
of Sweden. A professor of mine once said to his class that Sweden
"offers free medical care to its citizens, of the highest quality." I
think he made more empirical (and empirically testable) claims in that
statement than he realized. We have much to learn from northern Europe,
but an unacceptable amount of anti-market people's claims about the
region have passed, unchecked, into cant.

Watch your analogies. Sweden is often the x in a common anti-market lament: "x developed country has universal health care, so why doesn't the United States?" (France also pops up as x,
though its system isn't quite what's normally thought of as "universal
health care.") As any analogy, this one presumes that its subjects are
comparable. But are the tiny, near-homogeneous nation and the gigantic,
ultradiverse, relatively ahistorical superstate really alike on any
meaningful dimension? A U.S.-France comparison is vaguely ludicrous; a
U.S.-Sweden comparison becomes highly abstract almost immediately.

The idea of "American exceptionalism" gets dismissed as jingoistic nonsense, but it's not the idea that America is exceptionally awesome;
it's the idea that America is, by the standards of the world's
nation-states, starkly, observably different. (Which could mean worse,
if you're into that.) You should take as much care recommending the
transplantation of one country's institutions to America as you would
recommending the transplantation of one species' organs into another.

Your great data are not convincing.
If you find yourself winging histograms at your interlocutor, the
campaign for heart and mind is probably lost, no matter how much they
support your point. I realize, data wonks, that this isn't how you
think it should be, but cast back in your memory to the no doubt
countless times you've seen someone — anyone — convinced that way. Your
only hope is to perform your conversions in an extra-debate setting,
where all participants aren't primed to shield their ideological
identities as they would their scrota.

That said, most of the
data I see wielded doesn't actually address opponents' concerns.
Numbers about spending won't matter to someone who cares only about
quality, numbers about insurance coverage won't matter to someone who
cares only about delivery speed and numbers about R&D won't matter
to someone who cares only about universality.

We do not live in a morality play.
The useless concept of desert sees a lot of action in health care
debates. Depending upon which accusatory screeds you read,
anti-marketers believe that everyone everywhere deserves health care
and pro-marketers believe that people only deserve the health care they
can pay for. While I doubt that many on either side actually hold those
explicit beliefs, that's not my point: desert is so objectively
unrootable that discussions about it don't devolve into shouting
matches, they effectively begin as shouting matches. Claim that
a group does or does not deserve health care and your evidence
necessarily amounts to nothing more than "says me." The rebuttal will
be no better.

Your tragedies are anecdotes. It's hardly rare for a health care debate to feature sob stories presented as support. Hell, even Malcolm Gladwell did it in the New Yorker.
But are someone's uninsured grandmother's death from an untreated gum
infection or a freeloader cousin's overuse of the emergency room
relevant data points to a discussion of public policy? Maybe if the
public consists of five people. Otherwise this anecdotal strategy is a
naked stab at emotion, terribly intellectually insulting to those you
try to use it on. If the debate is about your cousin or grandmother, fine. If it's not, leave them out of it.

You do not have the hand of god. The first of the four horsemen of futility
frequents health care debates like they were singles bars. "Should the
United States have single-payer health care?" and "Should the United
States have a totally free health care market?" are questions at such a
distance from reality as to not be worth asking. If two omnipotent
superbeings who actually possess the power to instantiate these
conditions by pure will are the ones debating, different story. But
brother, we ain't omnipotent superbeings. (My recommendation, not that
you asked for one, is to discuss possible changes at the margin
instead.)

Don't appeal to your imagination. The second
horseman has a taste for the health care speakeasy as well. He goes in
for more pedestrian stuff, though: made-up disaster scenarios,
libertopias, the aforementioned fantasy Swedens.

Begin debating at step one, not step ten.
The most offputting characteristic of health care debates is their
tendency to skip important establishing steps. As a result,
foundational questions such as "Why do you regard health care as
different from other consumer goods like food or water?" are never
addressed, rendering the conflict insoluble. The key in any
discussion of this kind is to begin from a common set of premises,
however basic; otherwise, you'll never identify the real nature of the
disagreement.

July 10, 2009

Though it's aesthetically horrendous, as all PowerPoint presentations
are, there's something terrifically valuable to be taken from Liron
Shapira's "You Are a Brain". The slides point the way to a more elegant restatement of what I said in a past post to which I often refer:
roughly, that you shouldn't confuse your imagination's
information-starved projections for reality. Shapira displays a map
within a brain and states that understanding consists in "having an
accurate map of the territory inside you. Yes, physically inside you."
That our brains operate from something like a map of reality comes as
no surprise, but the upshot a few slides later is what struck me as so
incisive: "You can only see your map. But you feel like your map of
reality is reality. This is what it feels like to be a brain."

I'm
no big transhumanist; I assume none of us will live to see our own
mental capacity jacked up to the degree that our brain's maps can enter
shouting distance of a resemblance to reality. The only hope I can see
is to recognize, recognize hard and never for one second forget that
the world in all its intricate, variegated complexity is always and
everywhere mediated by our own shoddily-constructed inner
representation of it. We can try to compensate for our biases and
actively seek information to incorporate into our maps, but the first
and most necessary step would seem to be conceding their inherent
sketchiness. Think of the junior high kids whose maps display little
more than school, home and the mall. No wonder each and every bit of
minuscule nonsense that goes on in their classrooms, families or tiny
social circles resonates so earthshakingly with them; the maps lack a
realistic scale. (Hell, think of the salarymen of every nationality
whose inner worlds comprise home, the office, the commute and the
mistress' apartment. Same deal.)

Your own mental map, no matter
how detailed, fails to capture a literally unimaginable amount of
stuff. It doesn't even comprehend the sheer variety of the stuff it's
missing. It can't even grok the second-order variety, the variety of
the variety of the stuff outside its range. An environment — natural,
architectural, social, or some combination thereof — spectacularly
conducive to your current manner of thought and living in which you
could perform your most interesting, unexpected work exists somewhere
off your map. A book that, read, will substantially and irrevocably
alter your mindset for the better sits on a shelf somewhere off your
map. The models you can observe and combine to create a unique career
and/or life of which you don't yet have the slightest inkling is
possible live somewhere off your map. A work of art that, viewed, will
forever expand your idea of the possibilities of its medium and the
possibilities of all other media to which its concepts are transposed
is on display somewhere off your map. A foreign culture that reboots
the components of your brain that have lain dormant due to the
deadening, by-now-even-undetected repetition of the norms, mores and
clichés of the one you happen to have been born in is native to a land
somewhere off your map. Music that thrills you both aesthetically and
intellectually, stoking within you an insatiable drive to create, is
somewhere off your map. For that matter, a person who thrills
you both aesthetically and intellectually, stoking within you an
entirely different sort of drive, is somewhere off your map. And if you
already have any of the foregoing, they were once off your map, so imagine what else lies out there in terra incognita.

The
map, as we live in perpetual danger of forgetting — if indeed we ever
realized it — is not the land. Perhaps you can't much increase its
resolution, but you can expand it to cover more territory. Why reach
the end of your life with vast swathes of it left blank but for the
dire words "Here be dragons"?

Most disagreements are dishonest. That's what I tentatively believed and what I started believing even more after being pointed toward the paper "Are Disagreements Honest?" from the megacool Tyler Cowen. The gist: yes. Yes they are. A taste:

Are typical human disagreements rational? Unfortunately, to answer this question we would have to settle this controversial question of which prior differences are rational. So in this paper, we consider an easier question: are typical human disagreements honest? To consider this question, we do not need to know what sorts of differing priors are actually rational, but only what sorts of differences people seem to think are rational. If people mostly disagree because they systematically violate the rationality standards that they profess, and hold up for others, then we will say that their disagreements are dishonest.

After reviewing some stylized facts of disagreement, the basic theory of disagreement, how it has been generalized, and suggestions for the ways in which priors can rationally disagree, we will consider this key question of whether, in typical disagreements, people meet the standards of rationality that they seem to uphold. We will tentatively conclude that typical disagreements are best explained by postulating that people have self-favoring priors, even though they disapprove of such priors, and that self-deception usually prevents them from seeing this fact.

Years ago, I placed upon myself a tentative injunction against arguing over the internet — and that only after a particularly unproductive few days spent in one of those "rate my playlist" communities. I can't quite tell whether I've broken it. Since vowing to put no more precious sand from the hourglass down the bottomless well of e-debate, I've engaged in discussions that one might or might not call arguments. I've avoided name-calling, to be sure, and insult-flinging in all its forms, though I wasn't much for flamewars to begin with. My "arguments", such as they are, take a different route.

It now seems to me that, when both participants lay out their premises and define their terms as clearly as possible, the substance of the argument, already hazy, evaporates. Without quite knowing how I've gotten here, I've reached the conclusion — a correct one, I believe, or at least a less wrong one than I've ever held — that everyday arguments are mostly, to re-contextualize the wise words of Paul Graham, "artifacts induced by sampling at too low a resolution." Explain your positions in reasonably fine detail. Get clear on your word meanings. Break the issues, where possible, into sub-issues. Reveal subjective probabilities. Poof, it's over: either the arguers find that they actually do agree, or they conflict in some philosophical bedrock issue rendered inconsequential by its sheer broadness.

When I'm talking with a guy or gal and my Spidey-sense detects an oncoming internet argument — or, what the hell, even real-life argument — I, apparently being both Spider-Man and RoboCop, shift to one prime directive: dig deeper and deeper until I discover the core of the conflict. Nine times out of ten, there's no (real) conflict, and I've saved us both hours of dicking around over some semantic mismatch. One time out of ten, the conflict is real-ish, but whatever issue it's about is sure to be a damn fascinating one, and if my initial position turns out to be wrong, it's a golden opportunity to correct it.

That said, there are four angles in arguments, or just plain ol' discussions, that I cannot abide. If my interlocutor takes them, I will begin frantically searching for the escape hatch. They are as follows:

The hand of god. This is when one makes a normative argument about the actions of an omnipotent being. Sounds wild, but it happens all the time, and not just in religious contexts. When someone goes on about how if only "we" (or "they") just incentivized this, disincentivized that, punished this, rewarded that, built this here and built that there then our problems would be solved, they can only be making recommendations to the hand of god. Hard as this is to internalize, society is nobody's engineering project; it's not just that a given individual or group shouldn't, to use a phrase that refers to no conceivable action, "run the world", it's that nobody can. Absent awesome supernatural powers, it's impossible. If you find yourself hearing someone's grand plans to reform society, ask them if they're a god. ("Ray, when someone asks you if you're a god, you say yes!") If not, ask if they're a close advisor to a god. If not, shut 'em down, because their argument is pointless. They might as well muse on how best to harvest the moon's green cheese.

People raise the hand of god all the time in political arguments. Oh, so you think everything would be just fine if the country switched to your tax program or your health care system, do you? Are you going to implement them yourself? Can't be done, I'm afraid. Are you going to run for public office and propose them? No? Well, are you at least going to suggest them to an elected official? No? Then shuduppa you mouth. I don't mean to give the impression that this is solely the domain of politics-watchers, either; just listen to a few presidential candidates' speeches. They just spout off tidal waves of B.S. about the supposed end results of the choices they would make if elected, as if they could control those. Sometimes I think I'd give up a limb if it meant hearing candidates, if only for one election, talk only about the actual actions a president can take.

Appeal to imagination. My opponent believes that this country is A-OK as it is. Well, I'm here to tell you that while he's standing around asking "Why?", I'm asking "Why not?" Look around you; look at all the filth, the deaths, the poverty. ("Crime is everywhere, crime, crime!") I don't think that's A-OK at all. My friends, I envision a world where everything — the cars, the buildings, the animals — is made of candy. Chew on that when you're about to punch the ballot.

The Economist summed up this maneuver nicely in its profile of Naomi Klein, entitled "Why Naomi Klein Needs to Grow Up":

She gives capitalism no credit for the extraordinary progress seen in recent decades in reducing poverty and other measures of deprivation (notably child mortality) in the world's poor countries. She measures the growing-pains of capitalist development not against real-world alternatives but against a Disneyesque utopia in which no poor person ever loses his job or chooses to work in a multinational factory at low wages (by rich-world standards).

Criticizing the state of the nation, world or what have you is perfectly fine — beneficial, even. Doing so by conjuring a paradise in your head and then comparing its conditions with those that actually exist, which — shock, horror — don't measure up to the chimera, is not. All you will have proven is that you can have one hell of a pleasant daydream. My job's pretty good, but you know what would be even better? A job that paid a hundred dollars an hour and only required me to drive a t-topped 300ZX and listen to old school all day. Never mind that such a job does not exist, nor has it ever; it's on the same plane as a world without disease, poverty and crime. Depending on the definitions we're using, those things theoretically could be eliminated, but what's the point of trying to imagine your way there with no reference whatsoever to the world as it is, other than to bitch about it?

Invocation of desert. I am pretty much done with desert as a concept. No argument involving desert has ever been satisfactorily resolved, because desert can't be objectively established. I could argue that I deserve a pie, but anyone else could immediately counter, with the very same degree of truth and validity, that, on the contrary, I don't deserve a pie. Because both sides boil down to "says me", the argument remains gridlocked until the heat death of the universe. That's a silly example, but it's exactly the same as when someone argues that, say, the poor deserve to have wealth redistributed to them. Now, I don't disagree with that, but it doesn't matter whether the poor deserve anything, because there's no way to intersubjectively establish that desert. In fact, it actively harms the cause of the poor to make a desert-based argument in their favor, because it eats up time and energy that could be spent concretely improving their lot in life. Alas, a simple morality play pulls hard.

Identity. Just as I am not a fan of identity-based politics, I am not a fan of identity-based anything. Two particularly abused concepts might shed light on this: authenticity and soul. When a speaker makes claims about the "soul" of a piece of music, say, or the "authenticity" of a dish of food, they're not really — or even primarily — uttering a statement about an external thing. The speaker's making claims about himself, and bold ones at that. The sentences "This soup tastes authentic" or "This song has no soul" are more realistically phrased as "I have the special experience and/or ability required to identify which soups possess an extremely vague but important quality, and I have identified this soup as having it" and "I have the special experience and/or ability required to identify an extremely vague but important quality in music, and I judge this song as lacking it". They pretend to be conveying one piece of information on their surfaces, but they're dishonestly meant to convey entirely different ones. The speaker refers to qualities of himself, rather than qualities of the thing in question. If he wanted to do something other than brag, he'd presumably focus on other qualities, ones that don't implicitly buff his own imagined qualifications: whether the soup, say, tastes good, or whether the song has a technically accomplished bass line. But that, unfortunately, wouldn't accomplish the goal of impressing you. Not that identity arguments pull it off either, though at least a non-identity argument probably won't make other people quite as embarrassed for you.