An Attempt to Explain the Uselessness of Political Flame Wars

I am becoming increasingly convinced that the folk saying “I’ll
believe it when I see it” is wrong, at least some of the time. More
precisely, people have a remarkably predictable ability to ignore
seemingly convincing evidence in certain situations, when that
evidence appears to contradict closely held beliefs. I’ve started
calling this the “I’ll see it when I believe it” phenomenon.

Two of the major areas where I see this pattern have also been
identified by Paul Graham as areas where people tend to fight
pointlessly [1], and I think our observations
are consistent, although I do not agree with Graham’s conclusion
on this issue.

To rephrase Graham’s claim: he believes that so-called ‘religious
wars’ are the result of conflicting identities. Because an argument
about one’s identity may potentially place that identity at risk,
people have a tendency to attack anything which has the appearance of
disagreeing with their identities. Graham claims that politics and
religion are areas which tend to be incorporated into people’s
identities, making it difficult or impossible to have an intelligent
discussion which questions anyone’s political or religious
assumptions. Furthermore, Graham theorizes that having an identity
which includes a large number of things actually decreases one’s
intelligence by reducing the number of topics around which one can
have an intelligent discussion. Therefore, in order to increase one’s
intelligence, one should strive to minimize one’s own identity.

Graham’s claim is problematic because, if true, it would essentially
make it impossible to have an intelligent discussion on this
topic. Obviously I have an identity, and since an argument which
suggests minimizing my identity is in fact a threat toward said
identity, I cannot possibly allow myself to comprehend the full
implications of Graham’s statement. Or can I? There must certainly be
plenty of counter-examples where intelligent people have held rational
discussions on topics which have been critical parts of their
identities. While this may not be the norm online, it is a
sufficiently important case that failing to explain it would be a
fatal flaw in the theory.

I suggest that rephrasing Graham’s statement in terms of ‘world views’
rather than ‘identities’ may help enlighten the debate. I also propose
that rather than asking about the case where discussions devolve into
irrational arguments, we should consider the subtle case where
rational arguments are presented on both sides, and each is internally
consistent, but they fail to be consistent with each other.

For the purposes of this discussion, I define a world view as being
the combination of all internal facts which a rational agent knows
about the world. The world view is a function of perception, as it is
derived by analyzing observations about the world, constructing
internal models to predict the world, and then comparing the results
of those models against future observations. When a world view closely
reflects one’s physical reality, the correlation between predictions
and observations is high. Otherwise, the predictions and observations
may fail to agree, sometimes miserably.

It is important to realize that the essence of what we call
‘understanding’ is in the ability to ‘predict now’. In other words, if
one’s predictions about what should be happening right now line up
with the inputs one is receiving, then one can be said to understand
the situation. Otherwise, the internal model is obviously faulty, and
one cannot be said to have an understanding of the situation. Note
that under such a definition, understanding is not literally the act
of precisely simulating the world, because the internal models
involved are usually approximations or heuristics. Humans are, after
all, not machines, and
have little or no affinity for those sorts of computations. Computer
Science has done a good job of showing exactly how hard computing
those sorts of properties can be in a formal system. Accepting that
understanding is a heuristic process based on recursively refined
predictions derived from an internal data set, we can move on to the
differences between our world view approach and Graham’s identities.
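
To make the ‘predict now’ loop concrete, here is a minimal sketch in
Python. Everything in it is an illustrative assumption on my part:
the WorldView name, the single-number belief, and the smoothing
update are stand-ins for whatever machinery actually performs this
process. Only the shape of the loop matters: predict, compare against
the actual input, and refine by the error.

    # Toy model of understanding as "predicting now": the agent predicts
    # its next input, compares the prediction to the actual observation,
    # and refines its internal model by the error. All names and the
    # smoothing update are illustrative assumptions, not a real theory.
    class WorldView:
        def __init__(self, initial_belief=0.0, learning_rate=0.2):
            self.belief = initial_belief        # the internal model, reduced to one number
            self.learning_rate = learning_rate  # how quickly observations are absorbed

        def predict(self):
            # "Predict now": what the agent expects to observe next.
            return self.belief

        def observe(self, observation):
            # Understanding, under this definition, is a small error here.
            error = observation - self.predict()
            # Heuristic refinement, not a precise simulation of the world.
            self.belief += self.learning_rate * error
            return abs(error)

    view = WorldView()
    for observation in [1.0, 1.1, 0.9, 1.0, 1.05]:
        print(f"prediction error: {view.observe(observation):.3f}")

On a stable stream of inputs the error shrinks, which this definition
would call understanding; on inputs the model cannot track, the error
stays large, and no understanding is reached.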

The critical difference between Graham’s identities and the world
views proposal is that world views are inescapable, and smaller is not
better. Indeed, having a smaller world view limits what one can
predict, and therefore understand. Furthermore, a wide world view is
essential to understanding the actions and arguments of others, while
a limited world view reduces the fraction of discussions in which one
can intelligently engage.

Let us turn for a moment to the proposed case study for evaluating
this hypothesis. It is truly remarkable when two highly intelligent
people have a rational, reasoned discussion, in which novel
information is presented by both sides, and yet neither side appears
to recognize the implications of the evidence presented. And yet, I
claim that this happens all the time, especially on the topics of
politics and religion.

Let me stress how strange this should seem. We are all confronted, on
a nearly continuous basis, with novel information. To an extent this
has been accelerated by the internet, but even before the information
age there were printed publications and, of course, face-to-face
discussions. A great deal of this information gets absorbed quickly
into our world view (at least among people who consider themselves to
be intelligent). And this allows us to execute quickly on this new
information, and make rational arguments about recent events. (Again,
let me stress that this is not an easy process. This information can
frequently be game changing, sometimes fundamentally altering our
lives. But intelligent people usually seem to have little difficulty
absorbing this sort of new information.)

There are also times when information is ignored quickly. Usually this
is the case when frivolous or nonsensical arguments are made. These
can be quickly dismissed, because they present little of interest to
our insatiably curious minds. And there is nothing surprising about
this rapid dismissal. But it should still seem odd when a rational and
internally consistent argument is discarded without any apparent
consideration.

I hypothesize that these cases are the result of a miscategorization
of information. Remember that information is filtered by the listener,
and information which seems irrelevant or uninteresting or which
simply doesn’t fit well into the listener’s world view may be
discarded without further consideration. How quickly this information
is discarded is dependent on how far outside the listener’s world view
the information lies. So information which is in fact rational and
internally consistent may be discarded before receiving full
consideration if it is unpleasant in one or more of the ways
above. This could be the case, for example, if the speaker argued a
political view with which the listener had previously had a bad
experience. Further attempts to explain that political view might be
discarded even more quickly than the first. If so, the listener could
fall into a cycle in which negative internal feedback on that
political issue causes the listener to keep rejecting arguments from
that political viewpoint with increasing confidence (in other words, a
bias). This negative feedback cycle explains how a person, having
decided that an argument is invalid at one point in time, may continue
to assume that new arguments remain invalid even if the content of
those arguments changes.
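
As a thought experiment, the cycle can be sketched in a few lines of
Python. The ‘distance’ of an argument from the listener’s world view,
the tolerance threshold, and the decay factor are all numbers I have
invented for illustration; nothing here is measured. The mechanism is
the whole point: any argument farther away than the current tolerance
is dismissed unconsidered, and each dismissal tightens the tolerance
for that topic.

    # Toy model of the negative feedback cycle. Arguments whose "distance"
    # from the listener's world view exceeds the current tolerance are
    # dismissed without consideration, and each dismissal shrinks the
    # tolerance for that topic. All quantities are invented for illustration.
    tolerance = {"politics": 0.5}  # how far outside the world view an argument may lie
    DECAY = 0.8                    # each dismissal tightens the filter by this factor

    def consider(topic, distance):
        """Return True if the argument receives full consideration."""
        if distance <= tolerance[topic]:
            return True
        tolerance[topic] *= DECAY  # rejection reinforces the bias
        return False

    # A single bad experience (distance 0.9) starts the cycle; afterwards,
    # arguments of constant, previously acceptable quality (distance 0.45)
    # are dismissed under an ever-tightening tolerance.
    for distance in [0.45, 0.9, 0.45, 0.45, 0.45]:
        verdict = "considered" if consider("politics", distance) else "dismissed"
        print(f"distance {distance:.2f}: {verdict}; tolerance now {tolerance['politics']:.3f}")

Note that the later arguments are identical to the first one, which
was accepted; only the listener’s internal state has changed.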

But how then does an individual accumulate the initial bias which
causes this feedback cycle? As I stated previously, world views are
inescapable. Each of us absorbs social knowledge as we grow
up. This knowledge comes from our parents,
our teachers, our peers, and everyone around us, and is frequently
referred to as ‘cultural knowledge’, or in post-modern terms, the
‘master narrative’. It is important to remember that this knowledge is
still individual, but for the most part, the initial knowledge sets
will agree among individuals who grew up in similar conditions. It is
this initial set of presumed facts that sets us up to reject the
ideas of others who have grown up differently. By this
mechanism, the cycle is set in motion and we are prepared to
participate in political flame wars when we come of age.

Let us finally return to the original premise. The ‘believe what you
see’ saying comes from the recognition that our internal world view,
and thus our belief system, is derived from our observations of the
world around us. However, the saying fails to explain that this is a
cyclic process, in which our world view influences how we interpret
future observations, and may lead us to ignore some observations
entirely if they don’t fit well into it. The ‘see what you
believe’ hypothesis attempts to explain this other side of the coin.

What is one to do, then, if one is interested in having discussions
about historically contentious topics? I agree with Graham
here that one may not be able to intelligently discuss these topics
with everyone one meets. However, having framed this as an
understanding problem and not an identity problem, I believe that the
critical piece is to find people with an open mind. And keeping an
open mind is fundamentally an issue of how well one can incorporate
unexpected observations into one’s world view where doing so would
normally be difficult. This sort of mental juggling is not easy, and
it can be as harmful as it is helpful, as in the case of the gullible
person who is easily manipulated. However, maintaining a balance is
essential to one’s ability to absorb new ideas. Therefore, as a
heuristic, I suggest that intelligence may be a good indicator of
open-mindedness. ‘Wait,’ you may say, ‘I know intelligent people
who have trouble absorbing new information.’ If so, I would say to you
that those people would be even more intelligent if they were better
able to absorb new information.