Why Can't Steven Pinker and AI Safety Altruists Get Along?

There are few books that have influenced my thinking more than Steven Pinker's The Better Angels of Our Nature. The book makes a powerful case for effective altruism by showing that much of what effective altruists try to spread—reason and empathy, chiefly—has led to a sweeping decline in virtually every form of human violence over the course of human history. At the same time, I think that Pinker's thesis and evidence in that book are compatible with the view that tail risks to human civilization, such as catastrophic risks, may have increased, and that animal suffering has clearly increased in recent history. (Humans' moral views on the latter do clearly seem to be improving, though.)

I've found it puzzling, then, that to coincide with the publication of his book Enlightenment Now, Pinker has published multiple articles criticizing altruists who focus on addressing long-term risks, primarily from artificial general intelligence. Pinker disagrees with the view, central to work on the AI alignment problem, that there is a significant risk that artificial intelligence will produce catastrophic harm. I've found the criticisms surprising in part because I don't see how a small number of people focusing on the alignment problem can be a serious problem. It's been dispiriting, in turn, to see AI safety advocates turning against Pinker's work, which I think has threads that support any effective altruist's efforts.

In an op-ed a week ago, Pinker laid out his case for why focusing on the problem–or "moaning about doom," in his literary flourish–is harmful, and I think an examination suggests his views and those of AI safety researchers should not be so far apart.

First, Pinker warns, "But apocalyptic thinking has serious downsides. One is that false alarms to catastrophic risks can themselves be catastrophic." He cites the Cold War nuclear arms race, the Iraq war, and the maintenance of nuclear weapons as a deterrent to biological weapons and cyberattacks. If we talk too much about catastrophe, we risk creating it.

The first two examples here are ill-fitting because Pinker himself thinks that nuclear weapons are a significant problem, and presumably he would not discourage people from talking about nuclear weapons simply because doing so risks adverse consequences. Clearly, talk about nuclear weapons can be conducted in a way that reduces rather than exacerbates risks: fear-mongering about a specific nuclear actor may lead to an arms race; talk about incremental disarmament should not. Why can't there be a similar rhetorical distinction with AI?

The second caution about doom-mongering is that "humanity has a finite budget of resources, brainpower and anxiety... Cognitive psychologists have shown that people are poor at assessing probabilities, especially small ones, and instead play out scenarios in their mind's eye. If two scenarios are equally imaginable, they may be considered equally probable, and people will worry about the genuine hazard no more than about the science-fiction plot line."

Here the worry is about the conjunction fallacy, wherein people believe that a highly specific and therefore unlikely scenario is more likely to happen than it actually is. Catastrophic risk scholars are keenly aware of this fallacy and make systematic efforts to avoid it and other cognitive biases.
From what I have seen, the AI safety community is investing serious effort in following the science of prediction, including Philip Tetlock's Good Judgment Project and Robin Hanson's prediction markets. That's not to say anyone is immune from cognitive fallacies, but there is more work to be done here to argue against AI safety advocacy. Most importantly, there is a careful and tempered case for AI safety that does not, in my view, rely on cognitive biases (see the Open Philanthropy Project's write-up, for instance).

I agree with the worry about resources, but it ultimately begs the question. Of course, we should spend resources on AI if and only if it is a serious risk. The mere fact that there have been many mistaken predictions of the future in the past can't lead us to write off all such worries, and there is a case for worrying about AI that at the same time recognizes AI's potential for humanity and the risk's small probability. That case is strong enough that a reasonable person acquainted with it would probably want at least some amount of resources, even if modest, going to the problem.

Pinker's third argument involves the "cumulative psychological effects of the drumbeat of doom," which will lead people to conclude that we should, "Eat, drink and be merry, for tomorrow we die!" Humanity will neglect near-term problems while obsessing over risks so small it is impossible to know how large they are.

I share Pinker's worry here to some degree, having seen some in the effective altruist community neglect near-term goods, such as not harming animals or common manners, purportedly in order to maximize long-term productivity. For the most part that behavior is rare or not as motivated by long-term worries as people say, and I would avoid tarring too many with the same brush. Still, I do think AI safety advocates could be a bit more conscious of this risk.

Ultimately, I think Pinker misses that most AI safety researchers–aside from Elon Musk–increasingly avoid hyperbole ("moaning"). A few years ago, the common argument for AI safety used the massive negative consequences misaligned AI could have to justify acting on a vanishingly small risk, an argument sometimes dismissed as an instance of what philosopher Nick Bostrom has termed "Pascal's mugging." Now, though, that sort of argument is much rarer. Instead, books like Superintelligence argue not only that the consequences would be large, but also that the chance of an AI disaster is not that small. Advocates emphasize that AI will likely be a very good thing for humanity (see 80,000 Hours's profile, for example), but that we need to make sure it is that and not a bad thing.

These sorts of attitudes, I think, are less likely to lead to most of the bad consequences Pinker worries about. (Though I do think AI researchers and advocates could do a better job making that clear—see my note above about respecting near-term norms.) Tellingly for me, when I suggested this past summer that AI safety researchers should spread more awareness of the risk, I received significant blowback.
The AI safety community was clear that hyperbole on AI could be very, very bad and that doom-mongering was the last thing they wanted.

There is an argument to be had about the magnitude of the AI risk, but AI safety researchers and advocates are not, in my view, "moaning about doom." Their worldview is instead largely compatible with Pinker's: humanity has made tremendous progress and likely will continue to, thanks in part to AI, so let's minimize the—small—chance that we screw up.

You focused on his arguments from the article in 'The Globe and Mail', but reading Pinker's op-ed in Pop Sci, his arguments for why not to expect advanced AI to be catastrophic aren't as rudimentary as the objections to AI safety concerns from other public intellectuals. It would be interesting, then, to see Pinker's perspective reconciled with that of AI safety advocates, because I think we could learn a lot about how the AI safety field communicates and develops its ideas.

I'm surprised you've seen AI safety advocates turning on Pinker's work. Is this just his recent op-eds and 'Enlightenment Now', or are they criticizing Pinker's work more generally? I ask because it's my impression members of the EA and rationality communities are typically big fans of Pinker's evidence-based, humanistic approach to reflecting on society.

It's mainly his recent stuff, but I've seen it extended to criticisms of his work more generally. It's mostly in Facebook statuses and the like, so it's hard to compile. I would say that there are rationalists who are less optimistic than Pinker, and I think some AI safety advocates hold to a less optimistic view and think such a view is more fitting for someone concerned with AI safety.




I am a PhD student in economics at Stanford University. I am also an advocate and a follower of the effective altruism movement (www.effective-altruism.com). I was previously a Senior Research Analyst at the Global Poverty Research Lab at Northwestern University's Buffett Institute, where I studied the implementation of evidence-based policies in education and criminal justice. I am also the chair of the Animal Advocacy Research Fund Oversight Committee, which distributes roughly $300,000 annually to fund research on effective advocacy for animals.
Follow me on Twitter: https://twitter.com/zdgroff.