The Intelligence Explosion started 10,000 years ago (+/- 2,000)

Note: this post is from 2013. I think I write better blog posts now :-) But I'm thinking of writing a book with a similar title.

I just watched Daniel Dewey's TEDx Vienna talk, The Long-Term Future of AI, and it drove me crazy. On the one hand, I agree with his description of the phenomenon that he and others call The Intelligence Explosion. But on the other, it was constantly grating to hear him describe it as something coming in the future. The Intelligence Explosion is a perfect description of human cumulative culture, of how it differs from the cultures of other species, and of why it is dangerous. And there is no question that human culture is both wonderful and dangerous – we kill millions of people, and extinguish other species and our own languages, with our weapons, our pollution, and our sheer competence at expanding winning strategies.

You could attribute this all to AI if you like. I've realised lately that the essential problem I have with arguing with people about AI ethics is that they confound intelligence with sentience. Of course this is a matter of semantics, but I much prefer to think of intelligence as any form of plastic, adaptive capacity to change behaviour in response to perceived changes in the environment. This means even plants are intelligent. AI already exists, even though it just plans, sorts, or searches without motivation other than that provided by its programmer. And if you are willing to accept that, then you might accept that the first AI, which triggered the Intelligence Explosion, was writing. Writing provided out-of-mind memory, allowing humans to safely become more innovative because their old ideas wouldn't be lost forever if their present ideas got corrupted.
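
To see how low a bar this definition sets, here's a minimal sketch (a toy with invented names, not anyone's real system) of something that counts as intelligent in this sense:

```python
# A toy sketch of "intelligence" under this broad definition: any
# plastic, adaptive capacity to change behaviour in response to
# perceived changes in the environment. The names are invented for
# illustration; no sentience is involved anywhere.

class AdaptiveAgent:
    """Adjusts its behaviour toward whatever the environment rewards."""

    def __init__(self, behaviour: float = 0.0, learning_rate: float = 0.1):
        self.behaviour = behaviour
        self.learning_rate = learning_rate

    def perceive_and_adapt(self, feedback: float) -> None:
        # Positive feedback reinforces the current behaviour;
        # negative feedback pushes it the other way.
        self.behaviour += self.learning_rate * feedback


agent = AdaptiveAgent()
for feedback in [1.0, 1.0, -0.5, 2.0]:  # perceived environmental signals
    agent.perceive_and_adapt(feedback)
print(agent.behaviour)  # behaviour has drifted in response to the environment
```

A plant following the light, a thermostat, and this twenty-line loop all clear the bar; none of them is sentient.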

Why do I want to adopt such a weird definition of intelligence? Partly to avoid redundancy – "sentience" already means sentience, so why should "intelligence" mean it too? But more importantly, to communicate why AI isn't going to take over the world by itself. If anyone takes over the world with AI, it will be people. People are the only moral / responsible agents in our culture; they are the ones whose behaviour we should be working to control. Waiting around to declare some machine sentient and only then worrying is a bad plan.

And partly to point out that the threats of AI are not in the future. They are in the present. The world has changed since we lost privacy by anonymity. We need privacy by legislation, or we will lose our democracy and our civil society. The instability of the financial system and our capacity to build nuclear weapons likewise come as part of our exploitation of computer-based intelligence.

But this is not to say we need to start panicking. Another thing Dewey misses (and I do like him – sorry, Dan :-) is that there's been some very good work on how to handle errors in AI. Originally, in the 1950s, AI researchers thought that machines, being unemotional, could make perfect plans and do everything optimally, so bugs could be banished. But by the 1980s we understood that some problems are just too hard to solve (we computer scientists call this "computationally intractable"), and that there are good reasons evolved, natural intelligence takes all the shortcuts that it does, including emotions. One of the things the brain does is recognise and attend to errors after they are produced. In his brilliant PhD dissertation, Erann Gat (now Ron Garret) showed how cognizant failures could be used in reactive (dynamic, new, cognitive – pick your adjective) AI for autonomous systems.
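
Here's a toy sketch of the idea (the behaviour names, retry logic, and failure rate are invented assumptions for illustration, not Gat's actual architecture): a reactive loop that notices its own failures and escalates, rather than silently producing garbage.

```python
# A toy sketch of "cognizant failure" in a reactive control loop.
# The behaviour names, retry counts, and failure rate are invented
# assumptions – this conveys the flavour of the idea, not Gat's
# actual architecture.
import random

def primitive_behaviour() -> bool:
    """A cheap reactive action that sometimes fails."""
    return random.random() > 0.3  # succeeds roughly 70% of the time

def fallback_behaviour() -> bool:
    """A costlier recovery strategy, invoked only after noticed failures."""
    return True

def run_task(max_retries: int = 3) -> str:
    for attempt in range(1, max_retries + 1):
        if primitive_behaviour():
            return f"succeeded on attempt {attempt}"
        # Cognizant failure: the system notices the error instead of
        # ploughing on blindly, and can retry or escalate.
        print(f"attempt {attempt} failed; noted and retrying")
    # All cheap attempts failed, so escalate to the recovery layer.
    if fallback_behaviour():
        return "recovered via fallback behaviour"
    return "failure reported to operator"  # failing loudly is still cognizant

print(run_task())
```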

I still think cognizant failure is a critical component of any truly autonomous system, whether it's a robot or our society. The question I'm personally most agitated over is why it is so hard for society to become cognizant of its dangerous failures and to unify sufficient support behind correcting them – like, for example, our current problem with privacy. But we have done a pretty good job in the past, for example in damping the threat of nuclear and chemical weapons, so hopefully we'll continue to get on top of this. Still, AI as some weird autonomous, sentient thing to worry about in our future isn't really, in my opinion, the most helpful set of concepts to promote.

This post got turned into a talk: Containing the intelligence explosion: the role of transparency. Slides & video are both available there.
