Friday, November 25, 2011

Recently I was attempting to estimate the probability that we are living in a simulation. Watching my brain's natural attempt to estimate the probability of this eerie abstraction, I saw that it seemed to do two things: (a) poll my mental models of people I thought of as smart who had considered the question; and (b) discount that percentage by the high percentage of smart people I know who haven't considered the question (though this second step is really about my confidence in my assignment of probability, rather than the probability itself - the hash I see my brain performing is to adjust the probability toward the status quo in response to a drop in confidence in a strange assertion).
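The two-step heuristic above can be written out as a toy calculation. Everything here - the function name, the numbers, and the linear "shrink toward the status quo" rule - is my own illustrative assumption, not a rigorous model:

```python
def peer_poll_estimate(peer_estimates, fraction_considered, prior=0.5):
    """Toy sketch of the two-step heuristic described above.

    peer_estimates: probabilities assigned by epistemic peers who have
        actually considered the question.
    fraction_considered: fraction of smart acquaintances who have
        considered the question at all (a stand-in for confidence).
    prior: the status-quo probability to fall back toward when
        confidence is low.
    """
    # step (a): poll mental models of smart people who considered it
    polled = sum(peer_estimates) / len(peer_estimates)
    # step (b): low confidence pulls the estimate back toward the status quo
    return fraction_considered * polled + (1 - fraction_considered) * prior

# e.g. three peers say 0.2, 0.3, 0.25; only 10% of the smart people
# I know have considered the question; the status quo says ~0
print(peer_poll_estimate([0.2, 0.3, 0.25], 0.1, prior=0.0))
```

With full confidence (`fraction_considered=1.0`) this just returns the peer average; with no peers having considered the question it collapses to the status-quo prior, which matches the introspective report above.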

I have no idea if my introspective experience† is in any way universal, but it does seem that polling one's epistemic peers is an excellent means of estimating probabilities. This is basically like looking things up on Wikipedia, which is also a highly rational action, but polling one's own personally vetted epistemic peers removes some of the uncertainty.

Noticing how I use my mental models of my epistemic peers in thinking about things caused me to notice how startling the effect of a single epistemic peer can be on my confidence in widely accepted beliefs. When I get to know someone and find that I have no choice but to grant that his brain works at least as well as mine, I also have no choice but to let any strange belief he holds alter my confidence in the commonly held belief.

One way that we protect important beliefs is to exclude, by definition, those holding opposing beliefs from the ranks of our epistemic peers. If we refuse to admit them into polite society, they can't harm the stable, practical ways of thinking that we have developed.

What this suggests to me is that, to get your strange belief accepted by a wider audience, it's much more effective to establish yourself as an epistemic peer of a wide, influential group than it is to develop "convincing" arguments for your strange belief.
The existence of an epistemic peer who holds a contrary belief is more devastating than any argument.

What this also suggests is that if we are really interested in the truth, we will surround ourselves with epistemic peers who hold beliefs as different from ours as possible, and try to figure out why similar brains have come to hold such different beliefs. We will counteract our social belief-protection systems.

I am anxious to test this - if anyone knows any 150+ IQ evolution deniers, please send them my way! (Basically I think this is how Chip Smith lives his life.)

† As usual, I would like to thank marijuana for its important contributions to this post.

24 comments:

I don't know any genius creationists personally, but, as an interesting historical tidbit (for people who are interested in such things), Thomas Nagel reported that Kurt Gödel stated a disbelief in evolution:
http://www.cs.sfu.ca/~anoop/weblog/archives/000235.html

I'm not sure intelligence alone has much to do with it. Intelligence can also be used to rationalize nonsensical convictions. This will be especially true if domain-specific beliefs are compartmentalized in your brain - and religious beliefs are good candidates for that.

As a moral nihilist (though in the "nice" sense of Alex Rosenberg's fun and penetrating 'Atheist's Guide to Reality'), I don't believe correct answers exist to moral issues, just a range of naturally selected, intuitive bases we draw upon in addressing them, of the sort captured in Haidt's Moral Foundations Theory, though of course there's always work to be done in identifying the false factual claims which, in getting harnessed to the products of those bases, support moral views at odds with mine. So I try to expose myself to people whom I envision would score higher than me on the "conservative" foundations (loyalty, respect, purity). Another promising source, I think, of ideas for considering how to fruitfully expose oneself to those whose beliefs are at odds with one's own is Mercier's work expanding upon his and Sperber's Argumentative Theory of Reasoning: http://tinyurl.com/6hbpn84

Constant - g definitely not sufficient for epistemic peer. And agreed that religiosity is special - religious peers never threaten my atheism as it seems too obvious how, exactly, their brains are failing to make me properly doubt my non-belief.

Rob - I had a strong "sacredness" reaction myself, way out of proportion to what I would have predicted, when I heard about how police hit Robert Hass with batons. My heart still beats fast and my breathing gets hard when I think about it and I just want to PUNCH SOMEONE IN THE FUCKING FACE. Not at all rational. A good experience to have for Haidt-ian empathy, though.

Thanks for the compliment (I absolutely consider it a compliment). Of course, I'm all talk compared to TGGP. That punk has the shit down to an art.

A few years ago, I might have recommended Vox Day as a go-to anti-evolutionist. Not so sure nowadays. Alan Dawrst is the only thinker who leaves me wondering whether my atheism is overconfident. Those Pascalian knots are a bitch.

Let me offer myself as a case study. I am already part of a cognitive minority by taking the idea of a singularity seriously. But then, I have beliefs which set me apart from the rest of that minority. If we use LessWrong as a measure of the beliefs characteristic of that minority, then I can report endless clashes there over propositions such as (1) the many worlds interpretation of quantum mechanics has deep flaws and makes much less sense than its advocates think; (2) functionalism is dualism; (3) the most plausible theory of the mind is that it's a quantum-entangled subsystem of the brain. And I can't say that having the majority of my minority against me on these issues has made any impression on me, because the arguments are very simple. The argument that functionalism is dualism, for example, can be boiled down to "colors exist, but they don't exist in physics".

So I suppose my situation is somewhat the reverse of Sister Y's. I don't feel the need to check my own views against the opinions of others, to any special extra degree. I already know I'm capable of making mistakes, I already know that many of my opinions are hypotheses and don't consider myself to be attached to them. Instead, when I think about issues like this, it is usually from the perspective of trying to figure out why wrong ideas have such a grip on people. For example, I think part of the attraction of functionalism - or the motivation for qualia denial, if you can get the discussion that far - is intellectual desperation. The only alternatives people know are "monistic materialism" and "dualistic religion". And in fact the way they usually get around this dilemma is by pretending that functionalism is not dualism; this is almost done automatically. The qualities of subjective experience are automatically identified with their supposed neurological correlates, even though they involve properties which simply don't exist in physics. You have to work to make people notice that this is a habitual association, not an identity. And it's even harder to advocate a neither-nor approach to the usual choice, physicalism or dualism. Although in principle it is a simple idea that, offered a choice between X and Y, the best choice may be Z, a possibility not yet known, it seems very hard to get people to take this seriously. I am pretty much resigned to not being understood until the problem is half solved, at which point one can describe an approximation to Z and say, "there is your alternative".

Mitchell, I understand that you stick to your positions *after* you have had discussions with your epistemic peers. Sister Y's post, as I understood it, is not about doing that, but about feeling the need for such discussion. An epistemic non-peer's contrary opinion doesn't even provoke an evaluation of arguments; an epistemic peer's does.

Actually, why should IQ be correlated with epistemic rationality, anyway? This is an interesting and entirely non-trivial question. Also, *are* the two correlated? If so, is it because greater computational power allows better consistency checks, or allows you to take into account more possibilities? Or is it just because intelligent people get more out of their education? Or do IQ tests (in part) directly measure epistemic rationality?

I don't know.

I was made aware of this issue by seeing that, interestingly, Langan apparently screws up in just the same way that countless other (probably intelligent) people have before him: by getting confused over language.

I use the concept of epistemic peer so frequently that I have it completely internalized without, I realize, being able to define it. I confirmed with my philosophy-PhD-having roommate that it is, in fact, very tricky to specify the conditions for epistemic peerhood.

Wouldn't an epistemic superior be better than an epistemic peer?

This is tripping me out. My instinct, shared by those I've discussed it with, is that you'd have a hard time differentiating an epistemic superior from anything else (such as a crazy person).

One of my big crackpot convictions right now is that individuals are not the proper cognitive units; we think better when linked up. So a lone crackpot won't do nearly as well as a linked up group of crackpots.

Aw, great... You just prevented me from going to bed by posting this article. It's too interesting not to start reading right now.

Also, I think the case of an epistemic superior being indistinguishable from a crazy person is a very special case. The superior position is not usually ineffable; in fact, epistemic superiors should be likely to express good arguments clearly. Of course, there is the issue that their arguments may be fiendishly complicated. But that's why you're not alone; others can tell you if that person's reasoning is probably accurate. And finally, I think there is a way of not understanding an argument that is actually evidence for the argument being nonsensical. This is just sort of my personal experience. I have seen positions that I've had a hard time understanding, but I worked my way through them because it was clear to me that there was something to understand. On the other hand, there have been such that I gave up on because I formed the opinion that they were confused and impossible to understand.

So it seems to me that an epistemic superior is actually reasonably well distinguishable from a crazy person.

I was led back to this post from Aretae (in turn via Ilkka), where a commenter brought up Lawrence Auster's differing view on evolution. I thought: "That's nothing, Vox Day I'd take more seriously". But I haven't actually read V.D in quite a while.

The very same Fourth Checkraise post had a link to Jason Brennan at Bleeding Heart Libertarians with a self-described "rant" titled "Dear Left: Corporatism is Your Fault". I was actually irritated as I read it even though I agree with him, because I thought "You're not a cloistered dittohead but an academic, and you know there are plenty of liberals completely aware and not automatically dismissive of public choice theory who still disagree with you!"

Very delayed response to Constant: "Colors don't exist in physics" means that, if the universe were exactly as our physical theories describe it, there would be no colors. A length is not a color, and calling electromagnetic waves with a length of 700 nanometers "red" does not in itself make the waves red. It is the same for brain states (though more complicated to discuss): if those brain states are supposed to be made out of colorless elementary particles, there will be nothing in there that is actually red, and yet we know that actual redness does exist. Therefore, we need a new ontology. To what extent this is a challenge to our current mathematical description of physics, and to our ideas of exactly how brain states can be mind states, is a long story, but I for one conclude that we must suppose the existence of functionally relevant quantum entanglement somewhere in the brain, as well as a radically new physical ontology which allows the self to be identified with a very large locus of entanglement. "Qualia" then turn out to be our one glimpse of the innate character of anything physical; in some sense, what we experience *is* the state of the conscious part of the brain. Such identification is what most neuro-materialists want to do anyway; but they generally don't grasp how enormous a change in our ideas about the innate character of physical objects this would demand. People who do grasp this, usually become panpsychists, but they still don't have any very clear ideas about how panpsychism works in detail.