The Myth of White Male Geek Rationality

People who consider themselves fully rational individuals are ignorant about basic psychology and their own minds.

It is easy for white men in science, technology, engineering, and math (STEM) fields to perceive themselves as more rational than other groups, because our society associates rationality with whites, men, and STEM professionals. When white men in STEM fields believe in this stereotype, they might assume that bias is more common in non-white people, women, and people in the arts, humanities, and social sciences. After all, these other groups seem to want to discuss bias more often, and unexamined associative “reasoning” would link bias to those who bring up the topic of bias. Under logical scrutiny, however, it does not follow that the act of thinking about bias makes one more biased.

Green Red Blue Purple Blue Purple

Blue Purple Red Green Purple Green

The Stroop effect refers to the fact that naming the colours of the first set of words is easier and quicker than naming those of the second.

A basic tenet of contemporary psychology is that mental activity can be unconscious. Unconscious simply refers to any mental activity that is “not conscious”, and it is not equivalent to the unscientific New Age concept of the Subconscious. A good example of unconscious mental activity interfering with conscious intentions is the Stroop effect (above). If you try to name the colours of the colour words aloud, the first set of colours will be easier to name than the second set, because you unconsciously read the words. This means that you do not have full control over your thoughts and behaviour, and your willpower or logical reasoning cannot overcome the unconscious cultural bias of being able to read in English. Of course, there are other unconscious cultural biases aside from English literacy bias.

If you are a white person and you take Project Implicit®’s Race IAT (Implicit Association Test), you will probably discover that you implicitly/unconsciously prefer white faces to black faces, despite your explicit/conscious belief that white and black people are equal. (The test is called Race IAT, and it is located in a random position in the list of IATs.) If you are black, your results are less predictable, but on average, as many black people have a pro-white bias as a pro-black bias.

Like all psychology experiments, the IAT expects people to make mistakes. However, if you mistakenly associate black faces with negative words more than you mistakenly associate white faces with negative words, then it means you have an anti-black bias. Similarly, if you mistakenly associate white faces with positive words more than black faces with positive words, then you have a pro-white bias. Reaction times are another indicator of bias, e.g., if you are faster associating white faces with positive words than black faces with positive words, then you have a pro-white bias.
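To make the reaction-time idea concrete, here is a minimal sketch in Python. This is not the actual IAT scoring algorithm (the real test computes a standardized D-score from many trial blocks); it only illustrates the principle that faster responses on one pairing than another suggest a stronger implicit association. All the data and names here are hypothetical.

```python
# Hypothetical sketch of a reaction-time bias measure.
# NOT the real IAT scoring algorithm (which uses a standardized
# D-score); this only illustrates that faster pairings suggest
# stronger implicit associations.

def mean(xs):
    return sum(xs) / len(xs)

def bias_score(congruent_rts, incongruent_rts):
    """Positive score = faster on the 'congruent' pairing
    (e.g. white faces + positive words) than on the
    'incongruent' one (e.g. black faces + positive words)."""
    return mean(incongruent_rts) - mean(congruent_rts)

# Made-up reaction times in milliseconds:
white_positive = [620, 650, 600, 640]   # "congruent" block
black_positive = [720, 750, 700, 730]   # "incongruent" block

# A positive result would indicate a pro-white bias in this sketch.
print(bias_score(white_positive, black_positive))
```

Error rates would feed into a fuller measure the same way: more mistakes on one pairing than the other point in the same direction as slower reaction times.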

If you take the Gender-Science IAT, you will probably discover that you cannot help but associate women with humanities and men with science, despite conscious efforts to respond in an unbiased way. (Again, the test’s position in the list of IATs is randomized.)

The scientists who created Project Implicit® did not “invent” the concept of implicit bias to advance some liberal agenda, as the explicit-versus-implicit or conscious-versus-unconscious distinction in psychology is not limited to social stereotypes. Evidence of unconscious or implicit bias abounds in psychology literature, and the social application is only a tiny slice of the studies done on unconscious/implicit bias. Similarly, the scientists are not trying to suggest that racism, sexism, and other types of discrimination are innate. The scientists are simply bringing the practical and social applications of implicit bias to the public’s attention. Project Implicit® is only the socially-relevant tip of the iceberg of implicit bias research. Thus, it is silly to accuse the scientists of conjuring up the idea of “implicit bias” for the sole purpose of proving some political point.

A social implication of the existence of implicit bias is that people’s assurances that they are “not biased” do not prove that they are unbiased. Racial discrimination, gender discrimination, and other types of discrimination can occur without people’s conscious knowledge. A white interviewer might perceive a black interviewee as unfriendly, without realizing that the negative associations come from the white interviewer’s unconscious attitudes towards blackness. A male software developer might assume that his female peer is less competent in coding and better at documentation or communication, simply because of his unconscious attitudes towards gender and skill type. A person who says, “I don’t see any discrimination at my workplace,” is not even providing a single data point towards proving lack of discrimination. If you are biased, you might not be aware of it. Your conscious attitudes towards racial and gender equality are not sufficient to show that you are unbiased.

The realization that you may have biases that you are unaware of, and that you are not as rational and objective as you assumed, can be frightening and disorienting. However, you can reduce bias by becoming aware of implicit bias within yourself and accepting that implicit bias exists in our society. This means that you should no longer maintain naïve notions that you live in a meritocracy; that you are racially “colorblind”; that racism comes from only those who self-identify as racists; that you can ignore somebody’s race and gender when you are evaluating them; or that white men in STEM fields are the last people who should worry about being biased. Ignoring bias or pretending it does not exist does not make it go away. Ignorance of bias does not indicate intellectual purity.

One can only reason meaningfully with the aid of pertinent information. However, I’d suggest that in the world of unconscious association, rationality is almost entirely conflated with knowledge.

For this reason those who know things that are commonly reasoned about, or about which reasoning is considered more valuable, believe that they are more rational.

And correspondingly those who are habituated to social scenarios in which they are the custodians of the information pertinent to the ongoing social interaction, also believe they are more rational. This occurs as much in the case of “emeritus disease” among academics as it does at the IT helpdesk.

I think the self-perceived rationality of STEM professionals is more or less the same as the belief that knowledge of “hard” science and technology is more important than knowledge of “soft” sciences or the arts.

The high value placed on STEM knowledge by STEM professionals is an important component of their sense of self-worth and leads naturally, together with their position as keepers of specialist knowledge, to the perception of greater rationality.

I like that you’re posting about unconscious bias, because I’m sure some folk aren’t aware, but I think you’re selling yourself short with the title, which at a glance seems to imply that only white geek males have such bias, even though within the main post you’ve gone into excellent detail about how bias affects people of all backgrounds!

You’d think the endless flamewars and outrage over tiny little nitpicks and dumb stuff nobody else cares about would demonstrate very obviously the myth of white geek male rationality! Like say, oh, when I run any story that mentions women on Linux Today, and the talkbacks fill up with outraged comments about how women want special treatment, and how there aren’t more women in tech because They Don’t Want To, and how wanting a perfect 50/50 gender distribution is wrong, and women are trampling men’s sacred free speech rights, etc., ad nauseam. Even on stories that aren’t about gender issues.

How much are the implicit associations really measuring unconscious associations we’re not aware of, and how much are they measuring known, conscious feelings that people think they’re supposed to lie about? I took the women-science one and the Canada-U.S. one, and scored neutral on the first and heavily biased towards Canada on the second – exactly what I’d expect if I’m being honest. But if I honestly expected the reverse, I’d probably lie …

I looked through some of the information there, but I didn’t see anything on how (or if) they control for that. Do I have terrible reading-comprehension?

I can’t speak for Brian but I just retook the gay/straight test trying to deliberately get a bias in favour of gay people, and it did work.

On the other hand, this shouldn’t necessarily be considered surprising – it’s not like this thing reads minds, it just judges reaction times, and if you know what it’s looking for you can cheat it fairly easily.

Just to add another facet to the argument, rationality itself, or the notion of “making rational decisions”, is a myth. People who sustain damage to the amygdala, which is often described as the seat of emotion, become emotionally somewhat flat; but more importantly they can completely lose the ability to make decisions. Choosing their clothes in the morning or ordering from a menu become endless tortures of finer and finer hair splitting in an attempt to rationalise a choice that they don’t have the emotional engagement to commit to. This scales up to any commitment to an opinion, intellectual as well as mundane.

Learning itself is an emotional process; successful instances of learning are reinforced with a serotonin spike, and unsuccessful ones (“mistakes”) are not. That’s how we learn to learn – our brain creates an association between being right and pleasure. These positive associations remain even when the neural pathway is well established, and it is the thing that makes it easy for us to retrieve knowledge that has been acquired. It’s a microcosm of sex – every time we’re right in the tiniest way, it’s like a minuscule orgasm.

In short, if you really were completely “rational”, in the sense of being unemotional, you’d be a dithery, unteachable wreck. Bit like the hateful Victorian stereotype of a young woman, in fact.

I think some of the more close-minded in the skeptic communities could well take note of this. Rather tired of the idea that all we have to do is become “rational” and all our biases and baseless beliefs (that those New-Age-y women and hocus-pocus-believing brown people have) will magically disappear.

Taking an AI course, the persistent idea from the students (always male, incidentally) that effective human thinking is this mechanical process of logic, and all we have to do is program the computers to model that, drove me nucking futs!

Even logic is affected by our implicit biases. Philosophy largely focuses on logical thinking, yet two people who start from the same point can end up with different conclusions based on their own ideologies. Often there’s no wholly logical answer to anything–personal values will dictate what seems most ‘logical’ to you.

OK, so I have an unconscious bias associating the concept of male (like me) with science (which I like). From observing the bits I got wrong, it seems I also don’t consider biology to be a proper science, and when I see “Astronomy” I read it as “Astrology” and dump it straight in the not-science bucket (I was also thinking of the buckets as science and not-science). So performing a test faster than I can think reveals a bias. When I speak or write stuff I consider the words I am using; I generally don’t talk faster than I can think. I would hope that any unconscious bias would be drowned out by conscious rationality.
I am not sure I understand why this interesting experiment would lead to a change in my sceptical outlook. My reading of this is that I am unconsciously insufficiently rational and I need to compensate for this by thinking about things more rationally and perhaps not responding overly quickly in situations where my unconscious bias might push me in an irrational direction.

My reading of this is that I am unconsciously insufficiently rational and I need to compensate for this by thinking about things more rationally and perhaps not responding overly quickly in situations where my unconscious bias might push me in an irrational direction.

Yes, doing that helps a lot, but does not guarantee being unbiased.

What bothers me about skeptic communities is the white colonial mentality, in which they look like they are trying to bring “Enlightenment” to the “backwards” brown people (and low-income white people) and women. There seems to be this assumption that the world’s discrimination is caused by religion, and that atheists, being non-religious, are the least racist/sexist.

Clearly a lot of religions are discriminatory, and some are fundamentally incompatible with the concept of women in positions of power, and I know large numbers of people use their religion to justify exceptionally confused viewpoints on sexuality and race, now and in the past over thousands of years. It would be wrong to assert that the world’s discrimination is caused by religion (maybe discrimination causes religion), but at least a skeptic/atheist position is compatible with equality.

Most religions are *also* compatible with equality – arguably more so than skepticism and atheism.

Religion gets cited to support discrimination but *so does science*. There are *hundreds* of examples of “rational” defences of discrimination, some seem quaint and old-fashioned and some are still alive and kicking.

Maybe I’m reading Restructure’s point wrong, but I think part of what she’s saying is that the very *notion of rationality* is often used to justify discrimination – one of the driving principles of colonialism and imperialism was that we were bringing *reason* to those poor superstitious foreigners.

I think this has drifted offtopic a bit, probably my fault. What I am seeking to understand is Kite’s point that this experiment challenges a skeptical viewpoint. My understanding from the article was that we are not as rational as we believe ourselves to be, and therefore we should try harder to actually be as rational as we aspire to be. I do think that one thing we have to do is become “rational”; I think the experiment shows we are not there yet. I don’t understand how being more rational is the wrong direction for displacing biases and baseless beliefs (which are held by all sorts of people – church congregations seem to be representative of society as a whole, and I am unaware of any significant race or gender bias to belief).

I don’t understand how being more rational is the wrong direction for displacing biases and baseless beliefs

I think it’s a question of what you mean by “rational”.

If being rational means *the same as* acting without bias then “become more rational” is not a good answer to the question “how do we overcome bias” because it means the same thing. You become more rational by overcoming bias, you overcome bias by becoming more rational.

If by “rational” you mean “reaching conclusions by the application of logic to evidence” then being rational isn’t a particularly good way to deal with bias, because bias can affect your interpretation of evidence and your application of logic. (This is extremely well studied)

A big part of the problem is that people, ironically, attach a rather superstitious importance to the notion of “rationality”. “Rational” is used, in essence, as a synonym for “right” but that isn’t what it means.

If by “rational” you mean “reaching conclusions by the application of logic to evidence” then being rational isn’t a particularly good way to deal with bias, because bias can affect your interpretation of evidence and your application of logic. (This is extremely well studied)

Proper application of logic (in the strict sense) should not be influenced by bias, but geeks (and non-geeks) in general do not use logic and often have unexamined assumptions (such as assuming that Silicon Valley is more meritocratic than any other place in the world).

I took their Straight/Gay test, and while I came out with zero-prejudice, I’m not entirely convinced by it. I know from previous experience that I’m good at this *sort* of test (I do extremely well on the Stroop effect as well).

The order in which things come up seems to have a big effect, and while I know they compensate for it, I don’t know if they can compensate adequately within the space of a single test. I suspect that I came out with a “good” score because I compensated quickly and because I’m good at focusing on obeying arbitrary rules; I’m not convinced it *actually* makes me less homophobic.

Gaming the system is always going to be a problem with this kind of test. I suspect they’re actually more useful at a population-wide level than at a single-person level.

If one person comes out with a very low or very high bias, it might just be a result of their reacting particularly well or particularly badly to the nature of the test. If a *lot* of people over a *large* population reveal a consistent bias in *one direction* that implies there’s probably a genuine bias in the population.
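That population-level reasoning can be sketched in a few lines of Python: individual scores are noisy, but if the average across a sample sits many standard errors away from zero, a genuine bias in the population is the likelier explanation. The scores below are made up purely for illustration.

```python
# Hypothetical sketch: individual IAT-style scores are noisy, but a
# consistent lean in one direction across many people suggests a
# genuine population-level bias.
import statistics

# Made-up per-person bias scores (positive = bias in one direction).
scores = [0.4, -0.1, 0.3, 0.5, 0.2, -0.2, 0.6, 0.1, 0.3, 0.4]

avg = statistics.mean(scores)
sd = statistics.stdev(scores)
# Standard error of the mean: how uncertain is the average itself?
sem = sd / len(scores) ** 0.5

# If avg is several standard errors from zero, the lean is unlikely
# to be noise from a few individuals gaming or fumbling the test.
print(f"mean={avg:.2f}, about {avg / sem:.1f} standard errors from zero")
```

Any one person in that list might have scored high or low for test-taking reasons, which is exactly the commenter’s point; the aggregate is what carries the signal.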