12/11/2008 @ 6:00AM

Your World View Doesn't Compute

Computers are, if nothing else, starkly logical. For as long as they have been around, there have been people hoping the machines might serve as an example to their human overlords, helping to make certain human affairs–politics, say–a little more logical too.

One of them is Scott Aaronson, a computer scientist at M.I.T. with an idea for a program designed to help people appreciate that the logical path they have just traveled in a political or other discussion might not have been entirely straight and narrow.

Despite being just 27 years old and in only the second year of his professorship, Aaronson is widely known in his field, quantum computing.

Quantum computers work in ways utterly different from conventional ones, and can do some tasks–breaking encryption, say–unimaginably quickly. So far, only small-scale, prototype quantum computers have been built, and it’s not yet clear whether one big enough to be useful will ever be technically possible.

Aaronson’s work involves quantum software, meaning, as members of his field like to say, that he spends his time thinking about programs for machines that might never get built.

One of his side projects, though, is a work-in-progress political program called the Worldview Manager. It has nothing to do with quantum machines or, indeed, with advanced computing of any sort. In fact, it’s so simple and straightforward an idea that you could write it with macros in Excel.

The goal of Worldview Manager, explains Aaronson, is to help people appreciate the inconsistencies and contradictions that might crop up in their social and political beliefs.

Consider the following three statements that Aaronson says could be made about climate change:

1) Reducing CO2 emissions is a fine goal, but only if China and India contribute equally with the U.S.

2) While global warming is real and caused by humans, it’s just as likely to improve life on Earth as to make it worse, as illustrated by the lushness of the Jurassic period.

3) While global warming is real and worrisome, there’s no evidence that human activities (as opposed to, say, sunspots or volcanoes) are the cause, meaning that expensive ameliorative measures would be futile.

It’s easy to imagine someone agreeing with all three, especially when the statements are mixed in with many others. But Aaronson points out–as his program eventually will–that there are some obvious contradictions and tensions among them.

The first, for example, assumes that climate change is real, dangerous and caused by humans; the second assumes it’s real but benign; the third, that it’s real but not man-made.

Worldview Manager’s goal will be to nudge users who agree with all three statements to recognize–and then eliminate–the contradictions in their assertions.

The program will work by showing users a list of statements about a topic and then asking them how strongly they agree or disagree with each. At the end, the system will present users with a list of the statements they endorsed that contradict one another. It will also suggest that users reconsider those views and the assumptions behind them.
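Aaronson hasn’t published implementation details, but the flow he describes–rate each statement, then flag the endorsed pairs that clash–could be sketched in a few lines. Everything here is illustrative: the paraphrased statements, the hand-coded contradiction pairs, and the function name are placeholders, not Worldview Manager’s actual content or design.

```python
# A minimal sketch of the quiz flow described above, using the three
# climate-change statements as an example. The contradiction pairs are
# hand-coded placeholders, not the real program's knowledge base.

STATEMENTS = {
    1: "Cutting CO2 is worthwhile only if China and India match the U.S.",
    2: "Warming is real and man-made, but as likely to help as to harm.",
    3: "Warming is real and worrisome, but not caused by human activity.",
}

# Pairs of statements in tension: agreeing with both should be flagged.
CONTRADICTIONS = [
    (1, 2),  # 1 treats warming as dangerous; 2 calls it possibly benign
    (1, 3),  # 1 assumes human causation; 3 denies it
    (2, 3),  # 2 assumes human causation; 3 denies it
]

def find_tensions(responses):
    """Given {statement_id: agreement score in [-1.0, 1.0]}, return the
    contradiction pairs where the user agreed (score > 0) with both."""
    return [(a, b) for (a, b) in CONTRADICTIONS
            if responses.get(a, 0) > 0 and responses.get(b, 0) > 0]

# A user who endorses all three statements gets all three pairs flagged;
# endorsing only one statement raises no flags.
flagged = find_tensions({1: 0.8, 2: 0.5, 3: 0.9})
```

As the source notes, the hard part is not this bookkeeping but curating the statements and deciding which pairs genuinely conflict.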

Similar teaching programs already exist for narrow fields, especially in technical areas of philosophy. Aaronson, though, is extremely ambitious for Worldview Manager and wants it to cover all the hot-button issues: gay marriage, the Middle East and more.

The program won’t take sides. In fact, two people with opposite ideas about, say, animal rights could both get the equivalent of a passing score from the program, as long as their ideas were internally consistent.

Aaronson said he got the idea for the program several months ago in a discussion about the limits of human rationality. “I started wondering what could be done, as a practical project or experiment, that might make even the smallest dent in people’s propensity to think rationally about ‘big’ questions.”

Aaronson mentioned the project on his blog, Shtetl-Optimized, and, in addition to numerous suggestions about what the program could do, found two students–Louis Wasserman, an undergraduate at the University of Chicago, and Leonid Grinberg, a student at Belmont High School in Massachusetts–who volunteered to work on it. (They will be paid a modest sum out of Aaronson’s MIT research budget.)

He says the most time-consuming part of their effort won’t be programming in the classic sense so much as figuring out which statements about the real world users should be presented with, and how those statements contradict or support one another. Aaronson hopes to finish Worldview Manager next year, at which point he will make its services available for free on the Web.

How much good can this program, or any other, do? After all, computers don’t exactly have a very encouraging recent track record in getting people to behave more rationally; consider the epidemic of febrile conspiracy theories that owe their life to the Internet. The notion that a computer program might get people to rethink some of their core beliefs seems as charming as it does naive.

It’s an observation for which Aaronson is prepared. “I’m not some sort of ‘rationalist Don Quixote,’ seeking to impose an unachievable logical consistency on a muddled and indifferent world. I just want to do what I think will be an interesting bit of research.

“The whole point of educating people in math and science is not the specifics; it’s the process of rational thought. I think the world would be better off were more people in the habit of thinking rationally about the ‘big’ questions.”