The A.I Cargo Cult | Kevin Kelly

The Myth of a Superhuman A.I

By Kevin Kelly

I’ve heard that in the future computerized AIs will become so much smarter than us that they will take all our jobs and resources, and humans will go extinct. Is this true?

That’s the most common question I get whenever I give a talk about AI. The questioners are earnest; their worry stems in part from some experts who are asking themselves the same thing. These folks are some of the smartest people alive today, such as Stephen Hawking, Elon Musk, Max Tegmark, Sam Harris, and Bill Gates, and they believe this scenario very likely could be true. Recently at a conference convened to discuss these AI issues, a panel of nine of the most informed gurus on AI all agreed this superhuman intelligence was inevitable and not far away.

Yet buried in this scenario of a takeover of superhuman artificial intelligence are five assumptions which, when examined closely, are not based on any evidence. These claims might be true in the future, but there is no evidence to date to support them…

Kevin Kelly is the founding executive editor of Wired magazine, and a former editor/publisher of the Whole Earth Review. His newest book, The Inevitable, reached the New York Times bestseller list in 2016, and will be released in paperback in June 2017. The book is about the deep trends that will shape our lives over the next 20 years. Kelly suggests we embrace these changes, including ubiquitous tracking, accessible artificial intelligence, constant sharing, getting paid to watch ads, VR in the home, etc.

5 responses to “The A.I Cargo Cult | Kevin Kelly”

“when Chris Anderson wrote in 2007 that Big Data and supercomputing (and machine learning, i.e., induction) meant the “End of Theory,” he echoed the popular Silicon Valley worldview that machines are evolving a human — and eventually a superhuman — intelligence, and he simultaneously imperiled scientific discovery. Why? Because (a) machines aren’t gaining abductive inference powers, and so aren’t getting smart in the relevant manner to underwrite “end of theory” arguments, and (b) ignoring the necessity of scientists to use their minds to understand and explain “data” is essentially gutting the central driving force of scientific change.”

I do tend to agree with your position, and the arguments are pretty good, though I’m not sure I agree with all of them; but I like them.

And I have long said, ever since my anthropology class in evolutionary theory, that it is indeed a theory of evolution, based on a kind of assumption.

But nevertheless I have difficulty reducing everything to a present moment of contingent types of consciousness that behave in certain manners, which we can also describe as different coordinations or configurations of various aspects such as intelligence, cognition, intuition, and those kinds of things, similar to what you list.

I have my own arguments against this philosophical, quasi-religious presentism that reduces everything to a flat state of ever-presence. It may well be that there is no ladder, but there are some pretty good arguments, difficult to set aside, about evolutionary process, the natural selection of acquired traits, and competition over ecological niches, that are hard to get around without reducing everything to a flat presence. And similar to your labeling, I tend to say this flat kind of presence is religious.

For example, and I don’t remember all the particular names, but in one of those charts of Homo and Australopithecines there were sites of competition at which human beings ended up winning, in a manner of speaking. And that seems to make a certain kind of sense that we really can’t dismiss; if I get in the ring in a boxing match, one of us is going to kick the other guy’s ass. But there is a certain type of scientific argument that would say that we are both just kind of dancing around, each unfolding in the way we do in a particular scene that we tend to interpret as a progressive game with a winner at the end, but which, actually speaking, we can deconstruct into impersonal dances of subjective agency.

What I think these AI fear-mongers are saying is that intelligence is not limited to the human understanding of intelligence; that there is a sort of flat presence involved with the complexity of deep time, which can argue that carbon life forms, and the intelligence that arrives at human beings, can be superseded by some other element and some other manner of intelligence.

I have not read all the arguments, which you probably have, but it seems to me that what they are really saying is that at some point there will be an intelligence, which we call AI, that will supersede and go beyond our understanding of what intelligence is. In the same way that we used to not consider trees intelligent, we can now consider trees within a certain framework of intelligence. The same goes for AI: the frame of intelligence that we have come to understand through a kind of relativity could in fact allow for an intelligence that arises outside our framework of understanding, and therefore outside our ability to act and produce.