Friday, August 31, 2012

With the recent discussion of "atheism+" (short version: the atheist movement plus skepticism, feminism, and social justice), one point of contention was that this sounds a lot like humanism.* Now, there's nothing wrong with having two labels which mean nearly the same thing. But as a purely academic matter**, we'd all like to know if they are in fact similar and to what degree. Greta Christina has a summary of why they aren't the same thing.

*In this context "humanism" refers specifically to secular humanism and excludes religious humanism. I'm just following the way that secular humanists most often describe themselves.

**tongue in cheek

There are two common objections to identifying humanism with atheism+. First, humanists are too much on the diplomatic side when many people in atheism+ wish to be more confrontational. Second, humanists seem concerned with the project of finding secular ways to fulfill the needs usually fulfilled by religion. Personally, I think both of these objections are based on mischaracterizations of humanist values. I think people's impressions are distorted by the Harvard Humanist Chaplaincy, which gets a lot of press but isn't that big. It would be better to refer to the Council for Secular Humanism (CSH) or the American Humanist Association (AHA), who do not seem to express such values.

I'm partial to a third objection. As Greta said:

I would like to point out that humanism is hardly immune to the
problems we’ve been talking about here — the problems that Atheism Plus
is working to address.

Many humanist groups have a huge diversity problem.

Just because humanists claim to address social justice issues doesn't mean they do it well. Most anti-feminists fancy themselves to be in favor of gender equality, but at the same time fight against it. In their view, we've nearly achieved gender equality already, and feminists are just trying to tip the scales against men. I am not saying that humanists are anti-feminist, I'm saying that when a group purports to be in favor of social justice, we can't take their word for it. We have to scrutinize them. This applies to humanists, and it applies to atheism+ too!

To give away my biases, I actively dislike humanism. It's not so much that I disagree with the principles, it's that I think they are too vague and I disagree with the attitude that leads to such vague principles. From the CSH:

Secular humanists believe human values should express a commitment to
improve human welfare in this world. (Of course, human welfare is
understood in the context of our interdependence upon the environment
and other living things.) Ethical principles should be evaluated by
their consequences for people, not by how well they conform to
preconceived ideas of right and wrong.
...
Indeed, say secular humanists, the basic components of effective
morality are universally recognized. Paul Kurtz has written of the
“common moral decencies”—qualities including integrity, trustworthiness,
benevolence, and fairness.

Improving human welfare? Consequentialist metaethics? Benevolence? It's a frustrating mixture of high-minded philosophy and undefined feel-good values. Note that there's no mention of feminism, ethnic minorities, or any social justice causes at all. They're in favor of improving human welfare, but how do I know whether they think that includes feminism?

Some of these concerns are put to rest by Ron Lindsay (the head of CFI, CSH's parent organization). He explicitly mentions several specific causes fought by CFI.

CFI has long been active in supporting LGBT equality, in supporting
reproductive rights, in supporting equality for women, in opposing
suppression of women and minorities, not just in the US but in other
countries, in supporting public schools, in advocating for patient’s
rights, including the right to assistance in dying, in fighting restrictions on
the teaching of evolution, in opposing religious interference with
health care policy, in promoting the use of science in shaping public
policy, in safeguarding our rights to free speech, and in protecting the
rights of the nonreligious.

Ron Lindsay says these issues are constrained to those where religion or pseudoscience have a big impact, and constrained to what they can do with limited resources. Good for CFI. Maybe I am wrong about the humanist community.

If we want to explore common humanist attitudes towards social justice a little further, we can check out humanist publications (which can express views unconstrained by organizational resources).

I propose an experiment. I will search the CSH website and The Humanist (a magazine put out by the AHA) for the word "ethnicity" (or similar), and sample the results at random. For comparison, I'll try searching the AtheismPlus forum as well, though it may be too young to find anything. I'm looking for a particular attitude that I think separates a good social justice advocate from someone who just wants to be a social justice advocate. Namely, I am looking for colorblind ideology, the insistence that you "don't see race". Colorblindness is a way to ignore racial disparities while excusing yourself from any responsibility to fix them.

If the humanist websites mostly express colorblindness, I consider that a strike against them. If they don't express colorblindness, or explicitly reject it, I would be impressed.

Here I pause. Does my procedure sound any good? What's your prediction of the results?

Wednesday, August 29, 2012

I probably put a little too much effort into organizing all my posts into categories. I just decided to go back through the past year and put a bunch of posts into a new category, experiments.

My favorite example is One: the universe's favorite digit. There I hypothesized that the fundamental constants of the universe are distributed logarithmically, made a prediction, and went out and confirmed that prediction. It's the scientific method!

Okay, so I'm not exactly doing professional science here, nor am I even blowing stuff up like they do on Mythbusters. Most of the "experiments" are math experiments. But they're fun! I get to participate in the production of new knowledge. I get to adopt a "look and see" attitude, rather than just making arguments based on my prior beliefs.
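The logarithmic-distribution hypothesis is easy to play with in code. Here's a quick sketch (my own illustration, not the method from the original post): sample numbers whose logarithms are uniformly distributed over many decades and count leading digits. If the hypothesis holds, digit 1 should lead about log10(2) ≈ 30% of the time.

```python
import math
import random

random.seed(0)

# Sample "constants" whose base-10 logarithms are uniform over 20 decades,
# i.e. a log-uniform distribution (an assumption for illustration).
samples = [10 ** random.uniform(-10, 10) for _ in range(100_000)]

def leading_digit(x: float) -> int:
    """First significant digit of a positive number."""
    return int(f"{x:.6e}"[0])

counts = [0] * 10
for x in samples:
    counts[leading_digit(x)] += 1

freqs = [c / len(samples) for c in counts]

# Compare against the Benford frequencies log10(1 + 1/d).
for d in range(1, 10):
    print(d, round(freqs[d], 3), round(math.log10(1 + 1 / d), 3))
```

One is indeed the universe's favorite digit, at least in this toy universe.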

Monday, August 27, 2012

Skepticism is cool and all, but if you identify as a skeptic, isn't that just like saying "I'm good at critical thinking"? It's like boasting of your own intelligence--it doesn't actually demonstrate that you are intelligent, it just demonstrates that you are boastful.

Even people who are actively involved in skepticism have said that skepticism has this weakness. From Pharyngula:

Ultimately, [this talk] just reaffirmed my regret that “skepticism” has become a
label for the timid almost-skeptical, who like to reassure each other
that they’re all truly the very best critical thinkers, now let the
believers among us close their eyes and pray.

The skepticism I believed in wasn’t about some little club for people to
get together and tell each other how smart they all are for not
believing in incredibly silly things like UFOs, Bigfoot, psychics,
ghosts and the Loch Ness Monster…

Both Pharyngula and Natalie criticize a particular attitude within skepticism. Let's call this attitude skeptical boasting.

Now, you may believe that skeptical boasting is rare, or you may believe it is common. It's hard to say, since all we have are our personal impressions. Skeptics aren't exactly going to say outright, "You should believe me because I identify as a skeptic and I'm better than everyone else", so we'd have to rely on our own judgment to determine whether this is a person's underlying attitude.

But regardless of whether skeptical boasting is rare or common, it's good to talk about the alternative to skeptical boasting. Is there a way to identify as a skeptic without boasting? What is the point of the label if it's not boasting?

Skepticism as a constraint

If we have two people arguing with each other, and one of those people is a skeptic, should we be inclined to agree with them? No, we should not. All we know is that they identify as a skeptic. Whether they fulfill skeptical ideals is another matter.

Furthermore, if we were inclined to agree with the skeptic, this would create perverse incentives: identify as a skeptic and more people will automatically agree with you!

But if identifying as a skeptic doesn't win debates, what good is it? I propose that winning debates is not what we should be striving for. I want to win the debates where I'm right, and lose the debates where I'm wrong. This is the difference between critical thinking skills and debate skills--Debate skills help you win, while critical thinking skills help you win when you're right, and lose when you're wrong.

When I identify as a skeptic, it's not a winning strategy, it's a constraint. I am not asking you to respect my opinion, I am asking you to scrutinize it by skeptical standards. If I ever veered off into profoundly non-skeptical territory (for instance, by writing anything resembling a syndicated opinion column), you can call me a hypocrite, and cite skeptical arguments in support of this view. As far as I'm concerned, it's good for readers to have this power, because it means that I will lose more of the arguments I deserve to lose.

It's worth noting that I also open myself up to a certain set of misconceptions about skeptics--for example, that skeptics don't believe anything, that they're closed-minded, that they're just cynics, etc. But I still find it worthwhile despite this disadvantage.

Of course, that's just the personal significance of identifying as a skeptic, which is only half the story. The other half is the skeptical community.

A movement you can disagree with

Skepticism is often defined as a method of thought rather than a set of beliefs. It's a method of thought that mixes critical thinking, empiricism, and experimentation. But the problem with this is that critical thinking is such a common value. Nearly everyone agrees that critical thinking is good, and purports to use it. So if skepticism is simply an expression of the value of critical thinking, surely this can go without saying. Calling yourself a skeptic just seems like skeptical boasting.

But in truth, skepticism is more than just skeptical thought. This can be illustrated by the many people who value skeptical thought, but who don't identify with skepticism. Why don't they identify with skepticism?

One of people's top reasons is that they don't associate with the skeptical community. They don't read skeptical books, magazines, or blogs. They don't go to skeptical meetings or participate in skeptical organizations. They don't talk about critical thinking in any active way with other people who are enthusiastic about the subject. So they sensibly disidentify with skepticism.

There are so many reasons why a person might not associate with the skeptical community--they're uninterested in skeptical discussion, they don't have access to skeptical discussion, the skeptical community doesn't do enough to welcome their minority group, they're preoccupied with some other movement, they had drama with the local discussion group, and so on. None of these reasons impugn the person's aptitude for skeptical thought.

There are also some views held by skeptics that are definitely not held by everyone. Here they are, skeptical beliefs you can disagree with:

1. Non-skeptical and anti-scientific thought is prevalent, and leads to wrong beliefs.
2. These beliefs lead to harm.
3. It is possible to counter this problem.
4. It is worth it to me to do what I can to counter this problem, or at least associate with other people who do so.

Note that (1), (2), and (3) are claims about the external world. They could be true or false. For instance, some people have told me that if we debunk one belief, it will just get replaced with another (against (3)). Other people have said that the beliefs that skeptics focus on are mostly fringe ideas that cause little harm (against (1) and (2)). I think (1), (2), and (3) are all true, but they are broad enough claims that we might reasonably disagree on the point.

(4) is a subjective claim. If it is worth it to me, that does not necessarily mean it's worth it to you. We are different people and we can be interested in different things or have different priorities.

When skepticism is just about valuing critical thinking, it seems the only point of identifying as a skeptic is skeptical boasting. But in reality, skepticism is more than that. A skeptic associates with the skeptical community, actively discussing skeptical thinking and its applications. Furthermore, they believe that skeptical thinking is worthwhile because it solves problems. It is possible to agree with the value of skeptical thinking without associating with the skeptical community. It is possible to be skeptical without making skepticism a personal priority. This does not make a person a poorer thinker in any way.

Saturday, August 25, 2012

Two previous linkspams included themes of doubting, and made me reflect on my own attitude towards doubting. I have some very high-minded principles about doubting that dominated my coming out experience.

I can't see doubt as good or bad. Doubting is just what you do when you don't have enough evidence. Not doubting is what you do when you have enough evidence. That's the principle, the rest is details.

But doubting has acquired extra meaning, especially in the context of an identity like asexuality, which is so often disbelieved and invalidated. Disbelief and invalidation always seem to go together. Doubt thyself, because asexuality doesn't exist. Doubt thyself, because real asexuals can't have a sex drive. Doubt thyself, because you're just trying to be special.

On the flip side, we have "overcoming doubts" narratives. Many of the reasons we are given to doubt ourselves are expressions of ignorance. To overcome doubt is to climb out of the pit of ignorance and wipe the mud off your boots. It's realizing, "Wow, my friend actually had no idea what she was talking about with asexuality, and I only took it seriously because I didn't know any better!"

So in our context, doubting is what the ignorant tell you to do, and not doubting is what you do when you realize their ignorance.

But I can't see doubt as good or bad. To think of doubt as good or bad is to constrain our view of the world, not according to what is true, but according to what we want to be true. (I also suspect it will bite us in the ass when the Unsures finally rise up as an empowered sexual minority. Which I'm sure will happen any day now, right?)

And yet, I felt bad about doubting. I felt bad about feeling bad about doubting. I was scared of being wrong and missing a perfectly good opportunity to fit the normative romantic narrative. I was scared of being right, and not having any opportunity to fit the normative romantic narrative. I was scared of inadvertently proving the doubters right, even if for the wrong reasons. I was scared of the fact that I was scared of doubt, and that made me doubt more. It was kind of a mess.

I took solace in two things. First, I came to accept the benefits of an aromantic lifestyle, as well as those of a romantic lifestyle. So I would be okay no matter how it turned out, whether my doubts were right or wrong.

Second, I gradually saw that my doubts completely failed to conform to the reasons people said I should doubt. People thought the label was limiting my exploration, but during that time I did more exploration than the entire time I identified as straight. Some people thought I was really gay, some thought I was really straight, but as an informed doubter, I knew gray/demi were the possibilities that loomed largest. People thought I would try sex and like it and get over this asexual thing. In reality, I tried a relationship, had a bad experience, and concluded I was gray-A.

I no longer consider myself much of a doubter. But I don't feel I overcame doubt. I achieved better understanding through personal experiences and philosophy, and reduced doubts were an incidental side-effect. If I still found myself doubting, that would have been okay. If I start to doubt again, that would be okay.

Perhaps this is confusing correlation with causation, but I became comfortable with my doubts around the same time I became comfortable with being gray-A. I feel they are connected somehow. Being between worlds is different from being unsure about your world, but in terms of personal impact they can be quite similar.

I'm excited because, according to a linear regression of my previous rankings, this year my expected ranking is 6. I will finally complete one of my life goals: to rank under 25. But that's not as exciting as 2013, when I will rank at -9, better than the best. Yessssss

Ultimately, it just reaffirmed my
regret that “skepticism” has become a label for the timid
almost-skeptical, who like to reassure each other that they’re all truly
the very best critical thinkers, now let the believers among us close
their eyes and pray.

The problem is that my experience (anecdotal, yes, but ample and varied)
has been that there is quite a bit of un-reason within the CoR. This
takes the form of more or less widespread belief in scientific,
philosophical and political notions that don’t make much more sense than
the sort of notions we — within the community — are happy to harshly
criticize in others.

And then Natalie Reed says she is giving up on the atheist and skeptical movements (though not on atheism or skepticism).

At first how I assumed this went was people generally thinking
“secularism is one of many important issues presently going on, and one
that I happen to feel especially passionate about, so that’s where I’m
going to be put a significant chunk of my energy and attention”.
[...]
But lately it seems to me that a much more significant percentage than
I’d assumed are people thinking “atheism is the most important issue, so
that’s the one I’m going to focus on”.

We are…
Atheists plus we care about social justice,
Atheists plus we support women’s rights,
Atheists plus we protest racism,
Atheists plus we fight homophobia and transphobia,
Atheists plus we use critical thinking and skepticism.

That's cool, but I pessimistically predict that it will devolve into incoherence before long. Let's wait and see.

In future posts, I'd like to discuss a few ideas. This is an outline I may or may not follow.

1. Is claiming "skepticism" just boasting about your critical thinking prowess?
2. What does it mean to prioritize the cause of atheism or skepticism?
3. How do we deal with skeptics who are wrong about things?
4. Why doesn't humanism serve the purpose of Atheism+?
5. Where does skepticism go from here?

Sunday, August 19, 2012

Previously, I read a paper that mathematically modeled the search for a malfeasor using profiling techniques. I concluded that the model does not apply to security against airplane hijackers (though it may apply to other security situations). You know what, I could build a better model than that! I'm a physicist, dammit!

This comic seemed appropriate.

So here's my model. Suppose that there is a small minority of fliers who are "marked" as especially suspicious. (In Harris vs Schneier, the marked group are Muslims, though Schneier points out that being Muslim usually isn't a visible characteristic, and you'd have to use some proxy, such as "Arab-looking".) There is a fixed percentage of fliers that airport security can search, but if they like they can choose to search people in the marked group more often.

Security's adversaries are the terrorists. They have a certain amount of recruitment resources, which they can use to recruit hijackers in the marked group, or outside the marked group. However, if they recruit outside the marked group, it costs more resources, and thus they can recruit fewer people.

Security plays to minimize the number of successful attacks. To do this, they must minimize the average number of hijackers not searched. Terrorists play to maximize the number of successful attacks. The question: How much should security focus on searching the marked group, and how much should the terrorists focus on recruiting from the marked group?

Parameters in this model:

m = the ratio of the number of people in the marked group to the number of everyone else

λ = the percentage of fliers that security can search

c = the ratio of the cost of recruiting a hijacker from the marked group to the cost of recruiting elsewhere

Assumptions I will make:

m is very small (the marked group is a very small minority)

λ > m (security can search every single person in the marked minority if they want)

The number of hijackers the terrorists can recruit is smaller than the number of people in the minority.

Lastly, I will ignore the fact that people come in discrete quantities.

The game:

Both airport security and terrorists have a choice to make. And it's not one of those either/or choices, they have a whole sliding scale to choose from. Let y be the position of the sliding scale for airport security, and let x be the position of the sliding scale for terrorists.

(Figure legend: "Percentage of minority searched"; "Percentage of other people searched"; "Hijackers recruited from minority"; "Hijackers recruited elsewhere". n is the maximum number of hijackers that can be recruited, but this parameter does not figure into the solution.)

x and y are each numbers between 0 and 1. The greater x is, the more the terrorists focus on recruiting from the marked group. The greater y is, the more airport security focuses on searching the marked group.

The number of successful attacks is a function of x and y. I will represent this function as the height in the following graph:

Terrorists control the x coordinate, and want to maximize the height. Airport security controls the y coordinate, and wants to minimize the height. Assuming both players are rational[1], and know each other to be rational, they will choose the single Nash equilibrium. This is a saddle point, which I've marked in the above graph.

Results:

I could give you the coordinates of the solution[2], but this would be meaningless because x and y are just abstract quantities. Here are some more meaningful quantities.


Yes, that means the terrorists should be equitable in their recruitment process. Even if it is easier to recruit from the marked group, they should still make no effort to focus on the marked group. This solution is exact[3].

The results for the airport security, on the other hand, are approximations only valid for small m. Basically, security should search a percentage of marked people such that the terrorists get just as much bang for their buck regardless of where they recruit.

If I were to plug in "reasonable" numbers, I would say c = 3/4, and λ = 1/5. With these numbers, airport security should search 2/5 of people in the marked group, and 1/5 of everyone else.
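As a numerical sanity check, here's how I reconstruct the arithmetic behind those numbers. The bookkeeping is my own reading of the model, not taken from the graphs: at equilibrium, security searches enough of the marked group that the terrorists' expected yield per unit of recruitment cost is the same in both groups.

```python
c = 3 / 4    # cost of a marked-group recruit relative to anyone else
lam = 1 / 5  # fraction of fliers security can search

# For small m, almost everyone is unmarked, so unmarked fliers are
# searched at a rate of about lam. Terrorist indifference requires
# (1 - s_marked) / c = (1 - s_other) / 1.
s_other = lam
s_marked = 1 - c * (1 - s_other)
print(s_marked, s_other)  # 0.4 and 0.2: search 2/5 of marked, 1/5 of others

# The terrorists' side of the equilibrium is "equitable recruitment":
# plugging the exact x* from footnote [2] into my assumed bookkeeping
# (recruits scale as x/c from the marked group and (1-x) from everyone
# else, per unit of resources), the fraction of recruits drawn from the
# marked group equals the marked group's population share, m/(1+m).
m = 0.01
x_star = c * m / (1 + c * m)
marked_recruit_share = (x_star / c) / (x_star / c + (1 - x_star))
print(marked_recruit_share, m / (1 + m))
```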

Applicability of this analysis:

This analysis is inapplicable, because it ignores the additional risk and cost associated with determining a profiling scheme, determining the parameters, and training security personnel to implement it. There are probably other complications too. Lastly, it assumes that airport security is rational. So maybe applications are a lost cause, but at least I got to do some math.

[1]This is "rational" in the game theory sense, which is frequently irrational in the colloquial sense.

[2]The exact coordinates are ( cm/(1+cm) , 1 - c(1-λ)(1+m)/(1+cm) ).

[3]That is to say, it does not use the assumption that m is very small. However, it does use the other assumptions.

Monday, August 13, 2012

Slightlymetaphysical mentioned that he likes the way I handle trolls on this blog. Well, I don't usually give advice, because I tend to think that my advice is no better than what you can come up with on your own. But I thought I would try to describe the way I handle trolls, and commenters who disagree with me in general. I do not claim that this is the best way to handle them, it's just the habits I've formed based on years of experience in whatever corners of the internet I spend my time on.

I don't get angry. I just don't have the temper for it. I sometimes attack people, but this is actually completely different from being angry, even if the two are hard to distinguish.

I let go of (some of) my desire to be right. And I replace it with the desire to stop being wrong. I concede small points on a regular basis.

I set very modest goals of persuasion. I only expect to persuade people on very narrow points. Or if even that seems unlikely, I just present arguments to be seen by hypothetical third parties.

I think an awful lot about the pathology of disagreement. Sure, lots of disagreement is substantive, but some of it is pointless. I've blogged about things like the failure to cite opponents, the relativity of opinions, and generalizing anecdotes. We also tend to pigeonhole opponents before they explain their position. And many of us get really interested in one particular nuance, and tend to play it up at the expense of other nuances. Sometimes, I play up a different nuance than a commenter does, leading them to "disagree" with me. But it's cool, because more nuance is better.

Are they arguing or asserting? Most people don't know how to argue, and instead they simply state their position (often unclearly at that). For example, one person said, "stop bashing religions. It's just ignorant. Take the high road." If they don't present an argument, I don't need to either. Would a hypothetical third party be convinced by their non-argument?

Are they self-evidently ridiculous? If it's so self-evident, I don't need to argue against it. I can just leave it be... or quote it for my own devious purposes! This is the lesson I have learned from The Barefoot Bum's "The Stupid, it Burns" series. It's my source of peace of mind on the internet.

I know when to let go. Comments take a long time and I have better things to do. If a comment argument goes on for more than a few days, it stops being fun and I tell them I'm dropping out. I have a policy of giving other people the last word, because I'd rather control when we stop than have the last word.

Saturday, August 11, 2012

In an earlier post, I summarized a paper that used a simple mathematical model of profiling to determine the best strategy. I also argued that this paper was too idealized to apply to real-world situations, and it does not apply to the case of airplane hijackers at all.

Here I will expand on the paper just slightly. The paper still won't apply to real-world situations, and still won't apply to airplane hijackers. Basically this is just math for its own sake.

Here is a re-summary of the paper's model: We have a large number of people with one unknown malfeasor. Different people have different prior probabilities of being the malfeasor, and the government knows these probabilities. The government must search people one by one, but has no memory of who it has searched before. Therefore, the government must sample people randomly with predetermined sampling probabilities.

The paper chooses the strategy which minimizes the mean number of searches before the government catches the malfeasor. However, I think it makes more sense to maximize the probability of catching the malfeasor with N searches. This is a subtle difference, but it completely changes the math.

Math ahoy!

Let p_i be the prior probability that person i is the malfeasor. Let q_i be the probability that the government will search person i in any given search. The goal is to choose the q_i, given the p_i, so that we maximize the probability of catching the malfeasor within N searches. Let M be the total number of people. We'll assume that M and N are very large numbers.

Consider the case where person j is the malfeasor. The probability of not catching them in any given search is (1-q_j). The probability of not catching them at all in N searches is (1-q_j)^N. This is not quite our probability of failure, because we still have to average over all possible people who could be the malfeasor. And when we do the average, we have to weight the average by the probabilities p_j. Here is our probability of failure:

P(failure) = Σ_j p_j (1-q_j)^N

But it's not as easy as picking the q_j which minimize this figure. The q_j must also obey additional constraints (e.g. their sum total is 1). Fortunately, there is a technique to account for these constraints. It's called the Lagrange multiplier method, and I don't have the space here to explain it. Here's a solution:

q_j = 1 - (M-1) p_j^(-1/(N-1)) / Σ_k p_k^(-1/(N-1))

Oh no, I chased away all my readers with a scary equation! But it gets worse... this is only sometimes the solution.* In certain situations, the above result will give negative values for q_j. The physical meaning of this is that sometimes there are people who are best ignored completely, even if there is a small chance they are the malfeasor. Naturally, the larger the number of searches you can make, the fewer people should be ignored.
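For the skeptical (ha), the stationarity condition behind the Lagrange solution can be checked by brute force in a tiny case. The condition p_j (1-q_j)^(N-1) = constant is my own derivation from the Lagrange setup, so treat this as a sketch rather than the paper's own math:

```python
# Minimize p1*(1-q1)^N + p2*(1-q2)^N subject to q1 + q2 = 1,
# by scanning q1 over a fine grid (M = 2 people).
p1, p2 = 0.7, 0.3
N = 50

best_q1, best_fail = None, float("inf")
for i in range(1, 10_000):
    q1 = i / 10_000
    fail = p1 * (1 - q1) ** N + p2 * q1 ** N  # note q2 = 1 - q1
    if fail < best_fail:
        best_fail, best_q1 = fail, q1

# At the optimum, the Lagrange condition says p_j*(1-q_j)^(N-1)
# should take the same value for both people.
q1, q2 = best_q1, 1 - best_q1
lhs = p1 * (1 - q1) ** (N - 1)
rhs = p2 * (1 - q2) ** (N - 1)
print(q1, lhs / rhs)  # the ratio should be close to 1
```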

A simple case

Suppose that we have two groups of people, one which composes 99% of the population, and the other which composes 1%. And suppose that any given person in this 1% is 100 times as likely to be a malfeasor as any given person in the 99%. Finally, let's say that the number of searches is equal to the number of people (but recall that we can't prevent redundant searching). What should the sampling probabilities be?

The answer: You should check on people in the 1% more often by a factor of about 5.6.** (This is proportional to the log of 100.) Since there are far more people in the 99%, you should be performing 5.6 searches on the 1% for every 99 searches on the 99%.
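Here's where I suspect the 5.6 comes from (an assumption on my part): evaluating the large-N formula from the second footnote with K = 0.01, a/b = 100, and N = M, and dropping the small correction in the denominator.

```python
import math

K = 0.01                   # fraction of the population in the marked 1%
log_ratio = math.log(100)  # log of the relative likelihood a/b

# Leading-order search-rate ratio, ignoring the denominator correction:
simple = 1 + (1 - K) * log_ratio
# Full footnote formula with N = M:
full = (1 + (1 - K) * log_ratio) / (1 - K * log_ratio)

print(round(simple, 2), round(full, 2))  # 5.56 5.83
```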

This is only one particular limit of a very complicated solution, and you cannot draw more general conclusions from the results of this case. I tried a bunch of things, and one general trend is that the larger the number of searches you can make, the more equitable your searching should be.

Sweet FSM, this was useless, but it sure was fun. There's one more post on profiling math coming.

*Details for those who understand the math: Applying Lagrange multipliers is somewhat tricky because I don't know which constraints to apply. Some q_j will have values of 0, bumping up right against the constraint that all q_j must be positive. But for other q_j, the constraint is irrelevant. The form of the solution depends on which constraints apply, and which don't. Mathematicians might have a better way of handling this, but I can only handle it by "guessing" which constraints are important.

**The general formula, when N and M are very large, is ( N + M(1-K)Ln(a/b) )/( N - M K Ln(a/b) ), where K is the fraction of the population in group A, a is the prior probability that a person in A is a malfeasor, and b is the prior probability that a person not in group A is a malfeasor. Unless I made an error.

KEYES: I ran across a 2003 study and an article from 2008 that
suggests that the use of condoms doesn't help all that much as a primary
HIV prevention in Africa, partly because not enough people are using
them and partly because a lot of the people in Africa that are being
infected are in steady relationships already and people don't use them
as much in steady relationships. Is that true?

Mr. MWANZA: Yeah.

The pope said that condoms are not a "real or moral solution" to infection, and that a better solution is "a different way, a more human way, of living sexuality." Meanwhile, condoms could help people even in steady relationships.

Wednesday, August 8, 2012

Back in May, Sam Harris debated the efficiency of profiling Muslims at airports with security expert Bruce Schneier. I have some very snippy things to say about that, but wouldn't it be more fun if I left the snippy comments to your imagination, and I talked about math instead?

During the debate, a particular paper got some attention: "Strong profiling is not mathematically optimal for discovering rare malfeasors". This was probably not the best paper to highlight, because it's a pure math paper; it's bound to consider a situation that is too idealized. Then again, Schneier cited lots of things that went into the nitty-gritty, but I didn't read those because they didn't catch my interest. Go figure.

A summary of the paper

The authors consider a scenario where there is one malfeasor among many, and the government needs to search people one by one until it finds the malfeasor. But the malfeasor is not equally likely to be anyone; different people have different prior probabilities. The question is: in what order should the government search in order to catch the malfeasor as quickly as possible?

The solution for an "authoritarian" government is obvious. Search the most likely people, and then move on downwards until you catch them. However, the authors posit a "democratic" government with additional constraints. Basically, the government has no memory of who it has already searched. Thus each time they conduct a search, they must sample people randomly, though with weighted probabilities.

The question is, how should the sampling probabilities be weighted? The "strong profiling" strategy weights the sampling probabilities in exact proportion to the prior probabilities. But it turns out this is not the best strategy.

The math is not too difficult, and can be found in the paper.* The best strategy is to weight the sampling probabilities according to the square root of the prior probabilities. That means that if someone is four times as likely to be a malfeasor, a democratic government should be twice as likely to search them.
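To see why this beats strong profiling, here's a quick numerical sketch (my own illustration with made-up priors, not code from the paper). Under memoryless sampling with weights qj, the wait until person j is first searched is geometric with mean 1/qj, so the expected number of searches is the sum over j of pj/qj:

```python
import numpy as np

rng = np.random.default_rng(42)
p = rng.dirichlet(np.ones(100))  # hypothetical priors for 100 people (sum to 1)

def mean_searches(p, q):
    # Memoryless sampling: the wait until person j is first searched is
    # geometric with mean 1/q_j, so E[searches] = sum_j p_j / q_j.
    return float(np.sum(p / q))

q_uniform = np.full_like(p, 1.0 / p.size)   # no profiling at all
q_strong  = p.copy()                        # "strong profiling": q proportional to p
q_sqrt    = np.sqrt(p) / np.sqrt(p).sum()   # square-root weighting

print(mean_searches(p, q_uniform))  # exactly the population size (100, up to rounding)
print(mean_searches(p, q_strong))   # also exactly 100
print(mean_searches(p, q_sqrt))     # strictly less than 100
```

Note the cute fact this reproduces (it's essentially the paper's title): strong profiling does no better in expectation than not profiling at all, while square-root weighting strictly beats both whenever the priors are non-uniform.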

*If readers find the paper hard to follow, I will break down the math in the "Authoritarian vs Democratic strategies" section upon request.

That was the primary conclusion of the paper, but for completeness I should mention a few others. The authors show that the conclusion is identical in the case where there are multiple malfeasors. They also consider the case where the malfeasor has a chance to escape detection even when the correct person is searched. In that case, the sampling probabilities should also be inversely proportional to the square root of the probability of detection success for any given person.

Why it's inapplicable

While the paper was interesting, I honestly do not think it applies to the debate between Harris and Schneier. The reason "strong profiling" doesn't work in the paper is that it leads to redundant searches. This might apply to the case of a smuggler who flies a lot and has repeated opportunities to get caught. But an airplane hijacker only needs to bring equipment onto the plane once, and it is only during that one time that they can be caught.

Therefore, the proper unit is not the person, but the airplane ticket. Some airplane tickets involve malfeasance, others do not. And some are more likely to involve malfeasance than others. But there is no danger of redundant searching, because obviously each person only has to go through airport security once per flight. Therefore, the search for an airplane hijacker is more like the case of the "authoritarian" government. The best strategy is to search all the most likely airplane tickets.

I have another, finer point to make about the math. The quantity they try to optimize is the mean number of searches before a success. However, I think it makes more sense to optimize the probability of success given that you can make N searches. This may adjust the results slightly, but I'm not sure in which direction. (Hmmm... perhaps I should solve this problem and report the solution.)
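That alternative objective is at least easy to write down (again, my own sketch with made-up priors, not from the paper): under memoryless sampling, person j goes unsearched in N draws with probability (1 - qj)^N, so the success probability is the sum over j of pj(1 - (1 - qj)^N). Poking at it numerically is one way to see which direction the results shift:

```python
import numpy as np

rng = np.random.default_rng(7)
p = rng.dirichlet(np.ones(50))  # hypothetical priors for 50 people

def p_success(p, q, N):
    # Person j is searched at least once in N memoryless draws with
    # probability 1 - (1 - q_j)**N; weight by the prior that j is guilty.
    return float(np.sum(p * (1.0 - (1.0 - q) ** N)))

q_sqrt = np.sqrt(p) / np.sqrt(p).sum()      # square-root weighting
q_uniform = np.full_like(p, 1.0 / p.size)   # no profiling

# Compare the two weightings as the search budget N grows.
for N in (1, 10, 100, 1000):
    print(N, p_success(p, q_sqrt, N), p_success(p, q_uniform, N))
```

Both curves climb toward 1 as N grows, so the interesting question is which weighting wins at the budget you actually have; I won't claim an answer here.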

On the other side, the paper is also missing a lot of complications that you would obviously need to account for to make a real-world judgment. Schneier discusses many of these. For instance, there is an additional cost associated with profiling, because you have to train people. Presumably this higher cost means that you can make fewer searches overall. And then there's the fact that the terrorists are intelligent beings and can game the system. And then there will be errors in the assessment of prior probabilities of malfeasance (errors which can be exploited).

Tuesday, August 7, 2012

As I said I would, I created the second Brillouin Zone of the fcc structure using modular origami. It's a shape that's related to some crystal structures we study in condensed matter physics, though truth be told it is not a very useful shape. It's pretty though.

It took 96 squares of paper, and four different kinds of modules. It holds together, but I wouldn't want to toss it around like some of the more stable shapes.

It turns out that designing your own modular origami is nontrivial. I started out by making the shapes exactly as they were in my origami book, and then I started making small modifications. But the first few things I tried (with simpler shapes) were not very successful. It's difficult to make them sufficiently stable.

Among the things I tried was the second Brillouin Zone of the bcc structure. I have a small one pictured below, but it doesn't count because I cheated. It completely fell apart and I had to cover it in tape. I could probably just create 48 triangle modules, but I'm trying to think of a more elegant way.

Saturday, August 4, 2012

One common way atheists cope with stigmatization is to distinguish themselves from those atheists. You know, the bad ones. I call this creating a foil. I'm using "foil" in the sense of a character foil. When you create a foil, you describe a position that contrasts with your own, often to highlight what you think are your positive qualities.

One classic example is Richard Dawkins' scale of belief from 1 to 7 (from The God Delusion). A 1 means you believe there is 100% probability of God, and a 7 means a 0% probability of God. Dawkins describes himself as a 6, and notes that "category 7 is in practice rather emptier than its opposite number, category 1, which has many devoted inhabitants." Category 7 is a foil, used to explain that he is not certain that there is no god.

From another point of view, when you create a foil, you create a straw man. After all, what is strawmanning, but attacking a position that no one holds? Or perhaps there is more to it than that. I propose that there is an additional component to a straw man: it must be an explicit or implied attempt to represent a real opponent. Dawkins does not misrepresent anyone with category 7, because he's quite upfront about the fact that category 7 describes few people. Therefore, Dawkins' foil is not a straw man.

There are some things I don't like about the foil strategy, but it is undeniably useful. People have so many misconceptions about atheists: they're certain, they're dogmatic, they have faith in science, they're always getting up in your business, etc. But even though people hold these misconceptions, they often don't put them into words. So it's up to the atheist to put the misconceptions into words, and create foils out of them.

Take, for instance, the time it was reported in major newspapers that Dawkins isn't 100% certain, as if this were surprising. People are incredibly ignorant, and foils are necessary.

But while foils are useful to spread a low-level understanding of atheism, they just aren't that good beyond that.

The foil strategy could mislead people into thinking that the main difference between different atheists is the degree of certainty. In reality, most people in the movement don't care about that (and to the extent that they do care about it, I don't think they should). What people actually argue about are goals and strategies.

Foils also set up a hierarchy of atheism. Rather than thinking about our different backgrounds and motivations, the foil draws all attention towards a single dimension of atheism. To our right is our fabricated foil, the absolutely certain atheists. To our left are people less atheisty than us. And then the people to our left will use us as a foil. Their foil implicitly attempts to represent us, but they don't do it very accurately, because their purpose is to create a foil, not to actually argue with us. Yep, it's a straw man!

This is frustrating, and magnifies divisions. I don't know what we can do about it, but I hope that everyone is at least aware of what's going on.

Wednesday, August 1, 2012

Psychologist Bob Altemeyer is known for his work on right-wing authoritarianism (free online book here), and I've also cited his statistics on sexual behavior. He also wrote a book called Atheists: A Groundbreaking Study of America's Nonbelievers, coauthored with Bruce Hunsberger. I was pleased to find a copy in the library.

Who is the study about?

The book mainly focuses on a 2002 survey of 253 "active" atheists in Bay Area atheist groups. The authors recognize, and make much of, the fact that this is a very exceptional subset of atheists. They make some attempt to correct for this by adding a study of their psychology students' parents at the University of Manitoba. But while the Manitoba atheists provide a picture of "ordinary" atheists, this is confounded with a dozen other differences between the Manitoba atheists and Bay Area atheists.

But that's all fine by me. I'm interested in statistics on both "ordinary" atheists and "active" atheists, especially since I'm an active atheist myself. But there was one major obstacle to my identifying with the Bay Area atheists. Their median age was 60.

Altemeyer and Hunsberger describe it as a "first" study, to be followed by many more. But it may be one of the few studies that will ever be done on active atheists of the baby boomer generation. I have a lot of reason to believe that this older generation is demographically very different from the newer generation of active atheists that has appeared in the last decade. Older atheists are just... just... --some of them are among my readers, so I'll say they're absolutely amazing in every way and that's all there is to say on the matter (more on this later).

The results, good and bad

Part of the appeal of the book was to have some impartial and meticulous psychologists study atheists, revealing the good and the bad. I certainly can't get an impartial account from anywhere else.

I felt the most meaningful results came from the parents of Manitoba students, since the atheist parents could be compared to religious parents. In all the measures they studied, there were two kinds of trends. Some measures increased from atheists to agnostics to inactive believers, all the way to fundamentalists. For example, atheists were the least hostile to homosexuals, had the lowest right-wing authoritarian scores, scored the lowest in religious ethnocentrism, were the least likely to favor teaching their own beliefs in school, and had the least emphasis on religion while growing up.

Other measures reached their minimum for agnostics. For example, agnostics were least likely to try to persuade a questioning teen to their beliefs, most equitable when rating their attitude to different religious groups, least likely to say that nothing could change their beliefs, and least "dogmatic" (as defined by the DOG scale). But this is not to say that the atheists and religious people were symmetric. In fact, atheists tended to score lower than even the inactive religious people (i.e., those who don't attend services).

So far, that's all rather flattering. It shows that people who compare atheists to fundamentalists are making it out to be more symmetric than it really is... which is what we've been saying all along!

Keeping in mind that the Bay Area atheists are hard to compare, they scored worse than the "ordinary" atheists on many measures (but almost always still better than the fundamentalists). They were more dogmatic*, more zealous**, and had higher religious ethnocentrism***. So that's not so flattering.

*Dogmatism tells you how certain people are about their beliefs, and how unwilling they are to change their minds.

**Zealousness tells you how much they try to persuade others. It's measured by questions such as what they would say to a questioning teen, and how they would raise their own children. Though the active atheists score high on some of these measures, the authors think that they are very low on an absolute scale.

***When asked to rate their attitudes towards different religious groups from 0 to 100, atheists had greater disparities than even the fundamentalists did. Atheists rated their own group 90, and Muslim fundamentalists 0.

This was only a brief summary of the biggest results, but there was a lot of other stuff and more details. I was particularly interested in the "hidden observer" question. People are asked to imagine a hidden observer inside their head, and say whether this observer would see that they had secret doubts. In past studies of high-right-wing-authoritarian students, one third said they had secret doubts, as opposed to 4% of the active atheist sample. I guess atheist dogmatism is at least the honest sort of dogmatism.

Rationalizing away the bad

Let's be honest, the first place our mind goes is to interpreting the unflattering results in a flattering way. Did atheists score high on dogmatism because the measures were invalid? Is it because "dogmatism" as defined in the measures is actually a good thing? Is it because other atheists are dogmatic (but not me)? And if that weren't enough, I have an additional excuse: perhaps it's the older atheists who are dogmatic, but not my generation.

The last thing that occurs to us is that the measure is valid, and does in fact reflect negatively on ourselves. (I'm not saying that this is the correct interpretation, but it should at least be considered.)

The last chapter in the book had responses from atheist groups to the survey results. Not every response made excuses, but several of them did. And... their excuses weren't wrong, not all of them. There's some legitimate criticism to be made of the dogmatism measures.

And then there were some... weird responses. One person went on about his philosophy of writing his own meaning into the book called "My life", even though this had nothing to do with the survey. Another person felt the most lamentable result was that atheists did not find joy in logic and science, and concluded this was the result of internalized religious influence. One person said he couldn't respond to the survey because he took issue with defining atheism by "belief". I'm rolling my eyes at these responses and chalking them up to generational differences.

In summary, it was an interesting read, but the study was very limited, and it's too easy to reject the findings we don't like.