Eigenmorality And The Dark Enlightenment

Jon Evans is the CTO of the engineering consultancy HappyFunCorp; the award-winning author of six novels, one graphic novel, and a book of travel writing; and TechCrunch's weekend columnist since 2010.

This is a post about good vs. evil and right vs. wrong, but don’t worry, it’s highly technical.

Let’s start with Stormfront, the white-supremacist hate site that attracts circa 300,000 unique American visitors per month, and the recent analysis of its members (and follow-up) by Seth Stephens-Davidowitz in the New York Times. Some numbers you probably didn’t expect:

when you compare Stormfront users to people who go to the Yahoo News site, it turns out that the Stormfront crowd is twice as likely to visit nytimes.com … 64% identify their age as under 30 … no relationship between monthly membership registration and a state’s unemployment rate … Nearly 80 percent of Stormfront fans have more than 100 Facebook friends. Nearly 60 percent have more than 200.

In related news, I was ego-googling the other day — look, I’m not proud of this, but I do it alone at home with all the doors and windows closed, and it doesn’t harm anyone — and I came across a response to one of my TC posts that ended with the line:

Due to the heritability of poverty and low IQs, we propose eugenics as the most effective long term solution for controlling entitlement spending.

I assumed this was some kind of Swiftian satire, but no, its author announces himself as part of the “Grey Enlightenment,” a less extreme version of the Dark Enlightenment neoreactionaries that Klint Finley wrote about on TechCrunch last year.

The Baffler subsequently published an excellent primer on the neoreactionary movement, complemented well by Vocativ’s FAQ. (If you want a palate cleanser after reading those, here’s a lengthy Anti-Reactionary FAQ.) Briefly, the neoreactionary movement:

advocates an autocratic and neo-monarchical society. Its belief system is unapologetically reactionary, almost feudal … liberal progressivism is seen as a state religion … the darkly enlightened see social hierarchies as determined not by culture or opportunity but by the cold, hard destiny embedded in DNA.

Hence their focus on “human biodiversity,” a.k.a. racism, and eugenics.

I don’t want to spend too many words on Stormfront and the neoreactionaries here, as I do have a larger point to get to, but you may well be wondering: why pay any attention to these assholes at all? After all, Stormfront’s 300,000 American users per month is still less than 0.1% of the population, and it’s possible that the handful of neoreactionary bloggers are just trying to use shock value to get attention. Why feed the trolls?

But I’m afraid I don’t think it’s that simple. For one thing, as hateful and wrongheaded as they may be, some of these people are clearly highly intelligent, and pour an enormous amount of effort into their work. It seems apparent to me that they’re not just doing it for shock value; they truly believe what they’re saying. I think it’s worth investigating how those beliefs form, and how and why their adherents cluster online.

The Baffler also suggests

Neoreactionaries are explicitly courting wealthy elites in the tech sector as the most receptive and influential audience … the neoreactionaries do seem to be influencing the drift of Silicon Valley libertarianism, which is no small force today

…but while I’m far from a libertarian, I’m largely unconvinced by what reads like an attempt to lump in with the Dark Enlightenment everyone who suggests “the status quo is pretty screwed up, maybe we should experiment with other sociopolitical systems” — you know, people like me. I’m sure there are Valley figures who do sympathize; but I’m also sure they’re a distinct minority.

It does seem clear, though, that technology has enabled the entire Dark Enlightenment movement. In part because it’s such a force multiplier that some people with tech skills have begun to think of themselves as part of a global elite, and then made the fundamental attribution error and concluded that this is because they are inherently better than other human beings; in part because, as I wrote last week, the Internet allows people who would otherwise have been loner outcasts to band together.

Circa 99% of the time this is a very good thing. The other 1%, though, gives us communities like “The Red Pill,” the misogynist hate site which became infamous after Elliot Rodger’s killing spree in Santa Barbara earlier this year.

What interests me about hate sites and fascist ideologies is, again: how do people find their way to these communities, and why do they join them? How and why do they forge hateful and dehumanizing beliefs in the midst of a society which — finally, after many years — generally recognizes them as wrong?

Well. Perhaps there actually is a way to find out. In a formal, quantified, scientific way, no less. I give you MIT professor Scott Aaronson’s essay on eigenmorality.

It’s a terrific piece, but be advised: it’s written for a scientific audience, so it assumes you know, e.g., what an eigenvector is. (I confess I had to refresh my own memory at Wikipedia, as I haven’t much encountered matrix mathematics in the wild since acquiring my EE degree mumblemumble years ago.) To oversimplify very briefly, though, it suggests that we can use an algorithm similar to Google’s original PageRank to measure morality across the human population:

A moral person is someone who cooperates with other moral people, and who refuses to cooperate with immoral people … How can we possibly know which people are moral? … linear algebra can come to the rescue! … by simply finding the principal eigenvector of the “cooperation matrix,” we then have our objective measure of morality for each individual.
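To make the PageRank analogy concrete, here’s a minimal sketch of the idea in Python. The cooperation matrix below is entirely invented (four hypothetical people; three who cooperate with one another, one who cooperates with nobody), and this simplified version only rewards cooperation with the cooperative — it doesn’t penalize cooperating with the immoral, as Aaronson’s fuller treatment does. The principal eigenvector is approximated by power iteration, the same basic trick PageRank uses on the web’s link graph:

```python
import numpy as np

# Hypothetical cooperation matrix: entry [i][j] records how much person i
# has cooperated with person j. These numbers are invented for illustration.
# Persons 0, 1, 2 cooperate with each other; person 3 cooperates with no one.
C = np.array([
    [0.0, 1.0, 1.0, 0.0],
    [1.0, 0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
])

def eigenmorality_scores(coop, iterations=100):
    """Approximate the principal eigenvector of the cooperation matrix
    by power iteration -- analogous to how PageRank scores web pages."""
    scores = np.ones(coop.shape[0])
    for _ in range(iterations):
        scores = coop @ scores
        scores /= np.linalg.norm(scores)  # renormalize to prevent overflow
    return scores

print(eigenmorality_scores(C))
# The mutually cooperative trio end up with equal high scores;
# the non-cooperator's score converges to zero.
```

The circularity in the definition (“a moral person cooperates with moral people”) is exactly what the eigenvector resolves: each person’s score is defined in terms of everyone else’s, and the principal eigenvector is the self-consistent fixed point.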

(He goes on to propose a second, largely separate notion, eigendemocracy, which is even more interesting but probably outside the scope of this post. But you should go read about it.)

We don’t have a planetary-scale graph of who cooperates with one another… but we do have a reasonable approximation of who associates with one another. It’s called Facebook. Meaning, it seems to me, that we — by which I mean Facebook itself — may actually have the power to measure and quantify individual (relative) morality. (Which is presumably a vector in some kind of N-dimensional space rather than a single rating from 1 to 10.) What’s more, Facebook could actually observe how that value changes over time.

So, in theory at least, it may already be within our power to identify some of the factors that cause otherwise intelligent people to adopt and propagate essentially fascist and misogynist ideas like those of Stormfront, the Dark Enlightenment, and the Red Pill. That, I think — unlike certain other examples I could name — would be an experiment worth trying.