Monthly Archives: December 2008

Somewhere in the vastnesses of the Internet and the almost equally impenetrable thicket of my bookmark collection, there is a post by someone who was learning Zen meditation…

Someone who was surprised by how many of the thoughts that crossed his mind, as he tried to meditate, were old thoughts – thoughts he had thunk many times before. He was successful in banishing these old thoughts, but did he succeed in meditating? No; once the comfortable routine thoughts were banished, new and interesting and more distracting thoughts began to cross his mind instead.

I was struck, on reading this, by how much of my life I had allowed to fall into routine patterns. Once you actually see that, it takes on a nightmarish quality: You can imagine your fraction of novelty diminishing and diminishing, so slowly you never take alarm, until finally you spend until the end of time watching the same videos over and over again, and thinking the same thoughts each time.

Sometime in the next week – January 1st if you have that available, or maybe January 3rd or 4th if the weekend is more convenient – I suggest you hold a New Day, where you don't do anything old.

Don't read any book you've read before. Don't read any author you've read before. Don't visit any website you've visited before. Don't play any game you've played before. Don't listen to familiar music that you already know you'll like. If you go on a walk, walk along a new path even if you have to drive to a different part of the city for your walk. Don't go to any restaurant you've been to before; order a dish that you haven't had before. Talk to new people (even if you have to find them in an IRC channel) about something you don't spend much time discussing.

And most of all, if you become aware of yourself musing on any thought you've thunk before, then muse on something else. Rehearse no old grievances, replay no old fantasies.

If it works, you could make it a holiday tradition, and do it every New Year.

In Experiment 1, students received an illustrated booklet, PowerPoint presentation, or narrated animation that explained 6 steps in how a cold virus infects the human body. The material included 6 high-interest details mainly about the role of viruses in sex or death (high group) or 6 low-interest details consisting of facts and health tips about viruses (low group). The low group outperformed the high group across all 3 media on a subsequent test of problem-solving transfer (d = .80) but not retention (d = .05). In Experiment 2, students who studied a PowerPoint lesson explaining the steps in how digestion works performed better on a problem-solving transfer test if the lesson contained 7 low-interest details rather than 7 high-interest details (d = .86), but the groups did not differ on retention (d = .26). In both experiments, as the interestingness of details was increased, student understanding decreased (as measured by transfer). Results are consistent with a cognitive theory of multimedia learning, in which highly interesting details sap processing capacity away from deeper cognitive processing of the core material during learning.

For this reason I tend to disagree with most people about who the best speakers and writers are. Most people prefer those with lots of interesting tidbits; I prefer those who stay focused on and deliver a key interesting point.

The study of eudaimonic community sizes began with a seemingly silly method of calculation: Robin Dunbar calculated the correlation between the (logs of the) relative volume of the neocortex and observed group size in primates, then extended the graph outward to get the group size for a primate with a human-sized neocortex. You immediately ask, "How much of the variance in primate group size can you explain like that, anyway?" and the answer is 76% of the variance among 36 primate genera, which is respectable. Dunbar came up with a group size of 148. Rounded to 150, and with the confidence interval of 100 to 230 tossed out the window, this became known as "Dunbar's Number".
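Dunbar's method — fit a line to the log-log data, then extrapolate well past the observed range to the human neocortex ratio (roughly 4.1 in his paper) — can be sketched in a few lines. The data points below are made-up placeholder values for illustration, not Dunbar's actual measurements of the 36 primate genera:

```python
import math

# Hypothetical illustrative data (NOT Dunbar's actual measurements):
# neocortex ratio and mean group size for a few invented primate genera.
neocortex_ratio = [1.8, 2.2, 2.6, 3.0, 3.4]
group_size = [10, 18, 30, 50, 80]

# Ordinary least squares on the logs of both variables, as in Dunbar's method.
xs = [math.log(r) for r in neocortex_ratio]
ys = [math.log(g) for g in group_size]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# Extrapolate beyond the fitted data to the human neocortex ratio (~4.1),
# just as Dunbar extrapolated to arrive at ~148.
predicted = math.exp(intercept + slope * math.log(4.1))
print(f"predicted group size: {predicted:.0f}")
```

Note that the prediction sits outside the range of the data used for the fit — which is exactly why the wide confidence interval (100 to 230) matters, and why rounding to a single crisp "150" throws away most of what the regression actually says.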

It's probably fair to say that a literal interpretation of this number is more or less bogus.

There was a bit more to it than that, of course. Dunbar went looking for corroborative evidence from studies of corporations, hunter-gatherer tribes, and utopian communities. Hutterite farming communities, for example, had a rule that they must split at 150 – with the rationale explicitly given that it was impossible to control behavior through peer pressure beyond that point.

But 30-50 would be a typical size for a cohesive hunter-gatherer band; 150 is more the size of a cultural lineage of related bands. Life With Alacrity has an excellent series on Dunbar's Number which exhibits e.g. a histogram of Ultima Online guild sizes – with the peak at 60, not 150. LWA also cites further research by PARC's Yee and Ducheneaut showing that maximum internal cohesiveness, measured in the interconnectedness of group members, occurs at a World of Warcraft guild size of 50. (Stop laughing; you can get much more detailed data on organizational dynamics if it all happens inside a computer server.)

In practice as well as theory the Culture was beyond considerations of wealth or empire. The very concept of money – regarded by the Culture as a crude, over-complicated and inefficient form of rationing – was irrelevant within the society itself, where the capacity of its means of production ubiquitously and comprehensively exceeded every reasonable (and in some cases, perhaps, unreasonable) demand its not unimaginative citizens could make. These demands were satisfied, with one exception, from within the Culture itself. Living space was provided in abundance, chiefly on matter-cheap Orbitals; raw material existed in virtually inexhaustible quantities both between the stars and within stellar systems; and energy was, if anything, even more generally available, through fusion, annihilation, the Grid itself, or from stars (taken either indirectly, as radiation absorbed in space, or directly, tapped at the stellar core). Thus the Culture had no need to colonise, exploit, or enslave. The only desire the Culture could not satisfy from within itself was one common to both the descendants of its original human stock and the machines they had (at however great a remove) brought into being: the urge not to feel useless. The Culture's sole justification for the relatively unworried, hedonistic life its population enjoyed was its good works; the secular evangelism of the Contact Section, not simply finding, cataloguing, investigating and analysing other, less advanced civilizations but – where the circumstances appeared to Contact to justify so doing – actually interfering (overtly or covertly) in the historical processes of those other cultures.

Raise the subject of science-fictional utopias in front of any halfway sophisticated audience, and someone will mention the Culture. Which is to say: Iain Banks is the one to beat.

Our entire life stories are fixed by our genetics and our childhood environment (nature and nurture, more broadly), both of which we did not choose;

Our bodies are slowly growing more frail and debilitated until we die of something such as heart disease, cancer or stroke (or accident before then);

Even if someone develops a cure for aging, most of the experts who have studied the issue estimate about a 50/50 chance that our species will survive this century;

We live on a giant rotating planet, in an unimaginably large universe that is almost all empty space, and appears to be lifeless;

The fact that we were designed by evolution to value or desire certain things doesn’t seem to justify actually valuing or desiring them;

While most people believe in some sort of religion that provides cosmic context, the thousands of religions contradict each other, and all appear to be fictions created by men;

While most people believe in an “afterlife,” people don’t believe that parts of a crazy person’s mind go to Heaven when he loses them; by extrapolation, all of a person’s mind doesn’t go to Heaven when you lose all of it.

My point is not to push these beliefs onto anyone who resists them. I suspect, though, that most OB readers already think they are facts. And I suspect that many otherwise religious people, in their heart of hearts, already believe the above too.

My point, instead, is to make an observation about the above set of facts, which I’ll call “the human condition,” in the pessimistic sense. My observation is this: while all of the above facts can be considered an insult or injury, there is one more that goes largely unnoticed. The final insult is that we are not supposed to talk about the human condition. Indeed, we are not even supposed to acknowledge its existence. I call this last insult the “Meta-Human Condition”—the salt in the wound.

This piece by Marcia Angell in the New York Review of Books, while very good, mostly consists of stuff that would be familiar and unsurprising to OB readers. But I was somewhat surprised that she went so far as to say this:

The problems I've discussed are not limited to psychiatry, although they reach their most florid form there. Similar conflicts of interest and biases exist in virtually every field of medicine, particularly those that rely heavily on drugs or devices. It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines. I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as an editor of The New England Journal of Medicine.

That's pretty strong stuff for someone who is enough of an establishment figure to become the editor of the NEJM. It's worth pointing out, though, that most of the biases that she is talking about are the product of plain old financial corruption, not the subtle cognitive biases that we mostly worry about here (though those undoubtedly play a role in allowing physicians to delude themselves into believing that they are not being swayed by the money). So these kinds of problems could probably be mostly eliminated by a conceptually simple (though of course politically very difficult) change in the rules of the game. Getting rid of problems like physician overconfidence would be much harder.

In places like Sweden, folks are more reserved and less "friendly" than in the U.S. When reserved and friendly cultures meet, the reserved folks often say they were initially fooled into thinking others liked them in particular. It took time to realize that their acting "friendly" did not actually indicate that they were more likely to end up being friends in deeper ways. Eventually they learned to gauge how much foreigners from that friendly culture like them by comparing how those foreigners treat them, relative to how they treat others. Friendliness, as a signal of deeper interest and loyalty, is relative.

The movie quote above describes a common insight, that some people are "too easy" as friends. But salesmen, politicians, etc. seem to usually act extra friendly to everyone; do we discount them enough for their being too easily "friendly"?

Why would you want to avoid creating a sentient AI? "Several reasons," I said. "Picking the simplest to explain first – I'm not ready to be a father."

So here is the strongest reason:

You can't unbirth a child.

I asked Robin Hanson what he would do with unlimited power. "Think very very carefully about what to do next," Robin said. "Most likely the first task is who to get advice from. And then I listen to that advice."

Good advice, I suppose, if a little meta. On a similarly meta level, then, I recall two excellent pieces of advice for wielding too much power:

Do less; don't do everything that seems like a good idea, but only what you must do.

"All our ships are sentient. You could certainly try telling a ship what to do… but I don't think you'd get very far." "Your ships think they're sentient!" Hamin chuckled. "A common delusion shared by some of our human citizens." — Player of Games, Iain M. Banks

Yesterday, I suggested that, when an AI is trying to build a model of an environment that includes human beings, we want to avoid the AI constructing detailed models that are themselves people. And that, to this end, we would like to know what is or isn't a person – or at least have a predicate that returns 1 for all people and could return 0 or 1 for anything that isn't a person, so that, if the predicate returns 0, we know we have a definite nonperson on our hands.

And as long as you're going to solve that problem anyway, why not apply the same knowledge to create a Very Powerful Optimization Process which is also definitely not a person?