
The Myth Of AI

This past weekend, during a trip to San Francisco, Jaron Lanier stopped by to talk to me for an Edge feature. He had something on his mind: news reports about comments by Elon Musk and Stephen Hawking, two of the most highly respected and distinguished members of the science and technology community, on the dangers of AI. ("Elon Musk, Stephen Hawking and fearing the machine," by Alan Wastler, CNBC, 6.21.14). He then talked, uninterrupted, for an hour.

As Lanier was about to depart, John Markoff, the Pulitzer Prize-winning technology correspondent for THE NEW YORK TIMES, arrived. Informed of the topic of the previous hour's conversation, he said, "I have a piece in the paper next week. Read it." A few days later, his article, "Fearing Bombs That Can Pick Whom to Kill" (11.12.14), appeared on the front page. It's one of a continuing series of articles by Markoff pointing to the darker side of the digital revolution.

But these topics are back on the table again, and informing the conversation in part is Superintelligence: Paths, Dangers, Strategies, the recently published book by Nick Bostrom, founding director of Oxford University's Future of Humanity Institute. In his book, Bostrom asks questions such as: "What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us?"

I am encouraging, and hope to publish, a Reality Club conversation, with comments (up to 500 words) on, but not limited to, Lanier's piece. This is a very broad topic that involves many different scientific fields and I am sure the Edgies will have lots of interesting things to say.

A lot of us were appalled a few years ago when the American Supreme Court decided, out of the blue, to take up a question it hadn't been asked, and declared that corporations are people. That's a cover for making it easier for big money to have an influence in politics. But there's another angle to it, which I don't think has been considered as much: the tech companies, which are becoming the most profitable, the fastest rising, the richest companies, with the most cash on hand, are essentially people for a different reason than that. They might be people because the Supreme Court said so, but they're essentially algorithms.

If you look at a company like Google or Amazon and many others, they do a little bit of device manufacture, but the only reason they do is to create a channel between people and algorithms. And the algorithms run on these big cloud computer facilities.

The distinction between a corporation and an algorithm is fading. Does that make an algorithm a person? Here we have this interesting confluence between two totally different worlds. We have the world of money and politics and the so-called conservative Supreme Court, with this other world of what we can call artificial intelligence, which is a movement within the technical culture to find an equivalence between computers and people. In both cases, there's an intellectual tradition that goes back many decades. Previously they'd been separated; they'd been worlds apart. Now, suddenly they've been intertwined.

The idea that computers are people has a long and storied history. It goes back to the very origins of computers, and even from before. There's always been a question about whether a program is something alive or not since it intrinsically has some kind of autonomy at the very least, or it wouldn't be a program. There has been a domineering subculture—that's been the most wealthy, prolific, and influential subculture in the technical world—that for a long time has not only promoted the idea that there's an equivalence between algorithms and life, and certain algorithms and people, but a historical determinism that we're inevitably making computers that will be smarter and better than us and will take over from us.

That mythology, in turn, has spurred a reactionary, perpetual spasm from people who are horrified by what they hear. You'll have a figure say, "The computers will take over the Earth, but that's a good thing, because people had their chance and now we should give it to the machines." Then you'll have other people say, "Oh, that's horrible, we must stop these computers." Most recently, some of the most beloved and respected figures in the tech and science world, including Stephen Hawking and Elon Musk, have taken that position of: "Oh my God, these things are an existential threat. They must be stopped."

In the past, all kinds of different figures have proposed that this kind of thing will happen, using different terminology. Some of them like the idea of the computers taking over, and some of them don't. What I'd like to do here today is propose that the whole basis of the conversation is itself askew, and confuses us, and does real harm to society and to our skills as engineers and scientists.

A good starting point might be the latest round of anxiety about artificial intelligence, which has been stoked by some figures who I respect tremendously, including Stephen Hawking and Elon Musk. And the reason it's an interesting starting point is that it's one entry point into a knot of issues that can be understood in a lot of different ways, but it might be the right entry point for the moment, because it's the one that's resonating with people.

The usual sequence of thoughts you have here is something like: "so-and-so," who's a well-respected expert, is concerned that the machines will become smart, they'll take over, they'll destroy us, something terrible will happen. They're an existential threat, whatever scary language there is. My feeling about that is it's kind of a non-optimal, silly way of expressing anxiety about where technology is going. The particular thing about it that isn't optimal is the way it talks about an end of human agency.

But it's a call for increased human agency, so in that sense maybe it's functional, but I want to go a little deeper into it by proposing that the biggest threat of AI is probably the one that's due to AI not actually existing, to the idea being a fraud, or at least such a poorly constructed idea that it's phony. In other words, what I'm proposing is that if AI were a real thing, then it probably would be less of a threat to us than it is as a fake thing.

What do I mean by AI being a fake thing? That it adds a layer of religious thinking to what otherwise should be a technical field. Now, if we talk about the particular technical challenges that AI researchers might be interested in, we end up with something that sounds a little duller and makes a lot more sense.

For instance, we can talk about pattern classification. Can you get programs that recognize faces, that sort of thing? And that's a field where I've been active. I was the chief scientist of the company Google bought that got them into that particular game some time ago. And I love that stuff. It's a wonderful field, and it's been wonderfully useful.

But when you add to it this religious narrative that's a version of the Frankenstein myth, where you say well, but these things are all leading to a creation of life, and this life will be superior to us and will be dangerous ... when you do all of that, you create a series of negative consequences that undermine engineering practice, and also undermine scientific method, and also undermine the economy.

The problem I see isn't so much with the particular techniques, which I find fascinating and useful, and am very positive about, and should be explored more and developed, but the mythology around them which is destructive. I'm going to go through a couple of layers of how the mythology does harm.

The most obvious one, which everyone in any related field can understand, is that it creates this ripple every few years of what have sometimes been called AI winters, where there's all this overpromising that AIs will be about to do this or that. The promise might be fully autonomous driving vehicles instead of only partially autonomous ones, or being able to hold a full conversation as opposed to only the useful fragment of a conversation that helps you interface with a device.

This kind of overpromise then leads to disappointment because it was premature, and then that leads to reduced funding, startups crashing, and careers destroyed, and this happens periodically, and it's a shame. It has hurt a lot of careers. It has helped other careers, but that has been kind of random, depending on where you happen to fall in the phase of the cycle as you're coming up. It's just immature and ridiculous, and I wish that cycle could be shut down. And that's a widely shared criticism; I'm not saying anything at all unusual.

Let's go to another layer of how it's dysfunctional. And this has to do with just clarity of user interface, and then that turns into an economic effect. People are social creatures. We want to be pleasant, we want to get along. We've all spent many years as children learning how to adjust ourselves so that we can get along in the world. If a program tells you, well, this is how things are, this is who you are, this is what you like, or this is what you should do, we have a tendency to accept that.

Since our economy has shifted to what I call a surveillance economy, but let's say an economy where algorithms guide people a lot, we have this very odd situation where you have these algorithms that rely on big data in order to figure out who you should date, who you should sleep with, what music you should listen to, what books you should read, and on and on and on. And people often accept that because there's no empirical alternative to compare it to, there's no baseline. It's bad personal science. It's bad self-understanding.
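These guidance algorithms are, at bottom, correlation machines. As a toy sketch (entirely invented data and a deliberately minimal item-based collaborative filter, not any real service's system), a recommendation can be nothing more than cosine similarity over a ratings matrix:

```python
import numpy as np

# Toy ratings matrix: rows are users, columns are items (0 = unrated).
# The numbers are made up for illustration; real systems are vastly larger.
ratings = np.array([
    [5, 4, 0, 0],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two item-rating columns."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 0.0 if denom == 0 else float(a @ b) / denom

def recommend(user, ratings):
    """Score each unrated item by its similarity to items the user rated."""
    n_items = ratings.shape[1]
    scores = {}
    for item in range(n_items):
        if ratings[user, item] != 0:
            continue  # already rated, nothing to recommend
        scores[item] = sum(
            cosine_sim(ratings[:, item], ratings[:, other]) * ratings[user, other]
            for other in range(n_items) if ratings[user, other] != 0
        )
    return max(scores, key=scores.get)

print(recommend(0, ratings))  # → 2
```

The point is not the code's sophistication; it's that nothing in it knows anything about you. It only correlates your column of numbers with everyone else's, and there is no baseline against which to check whether the output is any good.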

I'll give you a few examples of what I mean by that. Maybe I'll start with Netflix. The thing about Netflix is that there isn't much on it. There's a paucity of content. If you think of any particular movie you might want to see, the chances are it's not available for streaming; that's what I'm talking about. And yet there's this recommendation engine, and the recommendation engine has the effect of serving as a cover, distracting you from the fact that there's very little available on it. And yet people accept it as being intelligent, because a lot of what's available is perfectly fine.

The one thing I want to say about this is I'm not blaming Netflix for doing anything bad, because the whole point of Netflix is to deliver theatrical illusions to you, so this is just another layer of theatrical illusion—more power to them. That's them being a good presenter. What's a theater without a barker on the street? That's what it is, and that's fine. But it does contribute, at a macro level, to this overall atmosphere of accepting the algorithms as doing a lot more than they do. In the case of Netflix, the recommendation engine is serving to distract you from the fact that there's not much choice anyway.

There are other cases where the recommendation engine is not serving that function, because there is a lot of choice, and yet there's still no evidence that the recommendations are particularly good. There's no way to compare them to an alternative, so you don't know what might have been. If you want to put the work into it, you can play with that; you can try to erase your history, or have multiple personas on a site to compare them. That's the sort of thing I do, just to get a sense. I've also had a chance to work on the algorithms themselves, on the back side, and they're interesting, but they're vastly, vastly overrated.

I want to get to an even deeper problem, which is that there's no way to tell where the border is between measurement and manipulation in these systems. For instance, if the theory is that you're getting big data by observing a lot of people who make choices, and then you're doing correlations to make suggestions to yet more people, if the preponderance of those people have grown up in the system and are responding to whatever choices it gave them, there's not enough new data coming into it for even the most ideal or intelligent recommendation engine to do anything meaningful.

In other words, the only way for such a system to be legitimate would be for it to have an observatory that could observe in peace, not being sullied by its own recommendations. Otherwise, it simply turns into a system that measures which manipulations work, as opposed to which ones don't work, which is very different from a virginal and empirically careful system that's trying to tell what recommendations would work had it not intervened. That's a pretty clear thing. What's not clear is where the boundary is.
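The measurement-versus-manipulation problem can be made concrete with a toy simulation (a hypothetical sketch, not a model of any real service): a recommender that promotes items in proportion to their observed popularity, where that popularity is itself generated by its own recommendations, ends up amplifying early noise rather than measuring anything.

```python
import random

random.seed(0)

ITEMS = 5
counts = [1] * ITEMS          # the recommender's "evidence": one pseudo-count each
true_quality = [0.5] * ITEMS  # every item is genuinely identical

def recommend():
    # Recommend in proportion to observed popularity.
    return random.choices(range(ITEMS), weights=counts)[0]

for _ in range(10_000):
    item = recommend()
    # Users mostly take what is offered; acceptance feeds back in as "data".
    if random.random() < true_quality[item]:
        counts[item] += 1

print(counts)
```

Even though every item has identical true quality, the final counts come out very uneven: the system has measured which of its own manipulations stuck, not which items people would have preferred had it never intervened.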

If you ask: is a recommendation engine like Amazon more manipulative, or more of a legitimate measurement device? There's no way to know. At this point there's no way to know, because it's too universal. The same thing can be said for any other big data system that recommends courses of action to people, whether it's the Google ad business, or social networks like Facebook deciding what you see, or any of the myriad of dating apps. All of these things, there's no baseline, so we don't know to what degree they're measurement versus manipulation.

Dating always has an element of manipulation; shopping always has an element of manipulation; in a sense, a lot of the things that people use these things for have always been a little manipulative. There's always been a little bit of nonsense. And that's not necessarily a terrible thing, or the end of the world.

But it's important to understand it if this is becoming the basis of the whole economy and the whole civilization. If people are deciding what books to read based on a momentum within the recommendation engine that isn't going back to a virgin population that hasn't been manipulated, then the whole thing has spun out of control and doesn't mean anything anymore. It's not so much a rise of evil as a rise of nonsense. It's a mass incompetence, as opposed to Skynet from the Terminator movies. That's what this type of AI turns into. But I'm going to get back to that in a second.

To go yet another rung deeper, I'll revive an argument I've made previously, which is that it turns into an economic problem. The easiest entry point for understanding the link between the religious way of confusing AI with an economic problem is through automatic language translation. If somebody has heard me talk about that before, my apologies for repeating myself, but it has been the most readily clear example.

For three decades, the AI world was trying to create an ideal, little, crystalline algorithm that could take two dictionaries for two languages and turn out translations between them. Intellectually, this had its origins particularly around MIT and Stanford. Back in the 50s, because of Chomsky's work, there had been a notion of a very compact and elegant core to language. It wasn't a bad hypothesis, it was a legitimate, perfectly reasonable hypothesis to test. But over time, the hypothesis failed because nobody could do it.

Finally, in the 1990s, researchers at IBM and elsewhere figured out that the way to do it was with what we now call big data, where you get a very large example set, which, interestingly, we call a corpus—call it a dead person. That's the term of art for these things. If you have enough examples, you can correlate examples of real translations phrase by phrase with new documents that need to be translated. You mash them all up, and you end up with something that's readable. It's not perfect, it's not artful, it's not necessarily correct, but suddenly it's usable. And you know what? It's fantastic. I love the idea that you can take some memo, and instead of having to find a translator and wait for them to do the work, you can just have something approximate right away, because that's often all you need. That's a benefit to the world. I'm happy it's been done. It's a great thing.
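A cartoon version of the corpus-driven approach (invented phrase pairs and greedy left-to-right matching; real systems learn millions of statistically weighted alignments) shows how mechanical the "translation" step is once the human-made examples exist:

```python
# Toy phrase-lookup translation in the spirit of the corpus-driven approach.
# These tables are invented for illustration; in a real system they are
# extracted automatically from enormous sets of human-made translations.
phrase_table = {
    ("good", "morning"): "buenos días",
    ("thank", "you"): "gracias",
    ("my", "friend"): "mi amigo",
    ("the", "memo"): "el memorándum",
}
word_table = {"hello": "hola", "read": "lee"}

def translate(sentence):
    words = sentence.lower().split()
    out, i = [], 0
    while i < len(words):
        pair = tuple(words[i:i + 2])
        if pair in phrase_table:  # prefer the longer phrase match
            out.append(phrase_table[pair])
            i += 2
        else:  # fall back to word-for-word, passing unknowns through
            out.append(word_table.get(words[i], words[i]))
            i += 1
    return " ".join(out)

print(translate("hello my friend"))  # → hola mi amigo
```

Everything interesting here lives in the tables, which is exactly the point: the human translators who produced the example pairs are the hidden machinery behind the apparent intelligence.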

The thing that we have to notice though is that, because of the mythology about AI, the services are presented as though they are these mystical, magical personas. IBM makes a dramatic case that they've created this entity that they call different things at different times—Deep Blue and so forth. The consumer tech companies, we tend to put a face in front of them, like a Cortana or a Siri. The problem with that is that these are not freestanding services.

In other words, if you go back to some of the thought experiments from philosophical debates about AI from the old days, there are lots of experiments, like if you have some black box that can do something—it can understand language—why wouldn't you call that a person? There are many, many variations on these kinds of thought experiments, starting with the Turing test, of course, through Mary the color scientist, and a zillion other ones that have come up.

This is not one of those. What this is, is behind the curtain, is literally millions of human translators who have to provide the examples. The thing is, they didn't just provide one corpus once way back. Instead, they're providing a new corpus every day, because the world of references, current events, and slang does change every day. We have to go and scrape examples from literally millions of translators, unbeknownst to them, every single day, to help keep those services working.

The problem here should be clear, but just let me state it explicitly: we're not paying the people who are providing the examples to the corpora—which is the plural of corpus—that we need in order to make AI algorithms work. In order to create this illusion of a freestanding autonomous artificial intelligent creature, we have to ignore the contributions from all the people whose data we're grabbing in order to make it work. That has a negative economic consequence.

This, to me, is where it becomes serious. Everything up to now, you can say, "Well, look, if people want to have an algorithm tell them who to date, is that any stupider than how we decided who to sleep with when we were young, before the Internet was working?" Doubtful, because we were pretty stupid back then. I doubt it could have that much negative consequence.

This is all of a sudden a pretty big deal. If you talk to translators, they're facing a predicament, which is very similar to some of the other early victim populations, due to the particular way we digitize things. It's similar to what's happened with recording musicians, or investigative journalists—which is the one that bothers me the most—or photographers. What they're seeing is a severe decline in how much they're paid, what opportunities they have, their long-term prospects. They're seeing certain opportunities for continuing, particularly in real-time translation… but I should point out that's going away soon too. We're going to have real-time translation on Skype soon.

The thing is, they're still needed. There's an impulse, a correct impulse, to be skeptical when somebody bemoans what's been lost because of new technology. For the usual thought experiments that come up, a common point of reference is the buggy whip: You might say, "Well, you wouldn't want to preserve the buggy whip industry."

But translators are not buggy whips, because they're still needed for the big data scheme to work. They're the opposite of a buggy whip. What's happened here is that translators haven't been made obsolete. What's happened instead is that the structure through which we receive the efforts of real people in order to make translations happen has been optimized, but those people are still needed.

This pattern—of AI only working when there's what we call big data, but then using big data in order to not pay large numbers of people who are contributing—is a rising trend in our civilization, which is totally non-sustainable. Big data systems are useful. There should be more and more of them. If that's going to mean more and more people not being paid for their actual contributions, then we have a problem.

The usual counterargument to that is that they are being paid in the sense that they too benefit from all the free stuff and reduced-cost stuff that comes out of the system. I don't buy that argument, because you need formal economic benefit to have a civilization, not just informal economic benefit. The difference between a slum and a city is whether everybody gets by on day-to-day informal benefits or on real formal benefits.

The difference between formal and informal has to do with whether it's strictly real-time or not. If you're living on informal benefits and you're a musician, you have to play a gig every day. If you get sick, or if you have a sick kid, or whatever, and you can't do it, suddenly you don't get paid that day. Everything's real-time. If we were all perfect, immortal robots, that would be fine. As real people, we can't do it, so informal benefits aren't enough. And that's precisely why things like employment, savings, real estate, and ownership of property were invented—to acknowledge the truth of the fragility of the human condition, and that's what made civilization.

If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems that I've just gone over, which include acceptance of bad user interfaces, where you can't tell if you're being manipulated or not, and everything is ambiguous. It creates incompetence, because you don't know whether recommendations are coming from anything real or just self-fulfilling prophecies from a manipulative system that spun off on its own, and economic negativity, because you're gradually pulling formal economic benefits away from the people who supply the data that makes the scheme work.

For all those reasons, the mythology is the problem, not the algorithms. To back up again, I've given two reasons why the mythology of AI is stupid, even if the actual stuff is great. The first one is that it results in periodic disappointments that cause damage to careers and startups, and it's a ridiculous, seasonal disappointment and devastation that we shouldn't be randomly imposing on people according to when they happen to hit the cycle. That's the AI winter problem. The second one is that it causes unnecessary negative consequences for society from technologies that are useful and good. The mythology brings the problems, not the technology.

Having said all that, let's address directly this problem of whether AI is going to destroy civilization and people, and take over the planet and everything. Here I want to suggest a simple thought experiment of my own. There are so many technologies I could use for this, but just for a random one, let's suppose somebody comes up with a way to 3-D print a little assassination drone that can go buzz around and kill somebody. Let's suppose that these are cheap to make.

I'm going to give you two scenarios. In one scenario, there's suddenly a bunch of these, and some disaffected teenagers, or terrorists, or whoever start making a bunch of them, and they go out and start killing people randomly. There's so many of them that it's hard to find all of them to shut it down, and there keep on being more and more of them. That's one scenario; it's a pretty ugly scenario.

There's another one where there's so-called artificial intelligence, some kind of big data scheme, that's doing exactly the same thing, that is self-directed and taking over 3-D printers, and sending these things off to kill people. The question is, does it make any difference which it is?

The truth is that the part that causes the problem is the actuator. It's the interface to physicality. It's the fact that there's this little killer drone thing that's coming around. It's not so much whether it's a bunch of teenagers or terrorists behind it or some AI, or even, for that matter, if there's enough of them, it could just be an utterly random process. The whole AI thing, in a sense, distracts us from what the real problem would be. The AI component would be only ambiguously there and of little importance.

This notion of attacking the problem on the level of some sort of autonomy algorithm, instead of on the actuator level is totally misdirected. This is where it becomes a policy issue. The sad fact is that, as a society, we have to do something to not have little killer drones proliferate. And maybe that problem will never take place anyway. What we don't have to worry about is the AI algorithm running them, because that's speculative. There isn't an AI algorithm that's good enough to do that for the time being. An equivalent problem can come about, whether or not the AI algorithm happens. In a sense, it's a massive misdirection.

This idea that some lab somewhere is making these autonomous algorithms that can take over the world is a way of avoiding the profoundly uncomfortable political problem, which is that if there's some actuator that can do harm, we have to figure out some way that people don't do harm with it. There are about to be a whole bunch of those. And that'll involve some kind of new societal structure that isn't perfect anarchy. Nobody in the tech world wants to face that, so we lose ourselves in these fantasies of AI. But if you could somehow prevent AI from ever happening, it would have nothing to do with the actual problem that we fear, and that's the sad thing, the difficult thing we have to face.

I haven't gone through a whole litany of reasons that the mythology of AI does damage. There's a whole other problem area that has to do with neuroscience, where if we pretend we understand things before we do, we do damage to science, not just because we raise expectations and then fail to meet them repeatedly, but because we confuse generations of young scientists. Just to be absolutely clear, we don't know how most kinds of thoughts are represented in the brain. We're starting to understand a little bit about some narrow things. That doesn't mean we never will, but we have to be honest about what we understand in the present.

A retort to that caution is that there's some exponential increase in our understanding, so we can predict that we'll understand everything soon. To me, that's crazy, because we don't know what the goal is. We don't know what the scale of achieving the goal would be... So to say, "Well, just because I'm accelerating, I know I'll reach my goal soon," is absurd if you don't know the basic geography which you're traversing. As impressive as your acceleration might be, reality can also be impressive in the obstacles and the challenges it puts up. We just have no idea.

This is something I've called, in the past, "premature mystery reduction," and it's a reflection of poor scientific mental discipline. You have to be able to accept what your ignorances are in order to do good science. To reject your own ignorance just casts you into a silly state where you're a lesser scientist. I don't see that so much in the neuroscience field, but it comes from the computer world so much, and the computer world is so influential because it has so much money and influence that it does start to bleed over into all kinds of other things. A great example is the Human Brain Project in Europe, which is a lot of public money going into science that's very influenced by this point of view, and it has upset some in the neuroscience community for precisely the reason I described.

There is a social and psychological phenomenon that has been going on for some decades now: A core of technically proficient, digitally-minded people reject traditional religions and superstitions. They set out to come up with a better, more scientific framework. But then they re-create versions of those old religious superstitions! In the technical world these superstitions are just as confusing and just as damaging as before, and in similar ways.

To my mind, the mythology around AI is a re-creation of some of the traditional ideas about religion, but applied to the technical world. All of the damages are essentially mirror images of old damages that religion has brought to science in the past.

There's an anticipation of a threshold, an end of days. This thing we call artificial intelligence, or a new kind of personhood… If it were to come into existence it would soon gain all power, supreme power, and exceed people.

The notion of this particular threshold—which is sometimes called the singularity, or super-intelligence, or all sorts of different terms in different periods—is similar to divinity. Not all ideas about divinity, but a certain kind of superstitious idea about divinity, that there's this entity that will run the world, that maybe you can pray to, maybe you can influence, but it runs the world, and you should be in terrified awe of it.

That particular idea has been dysfunctional in human history. It's dysfunctional now, in distorting our relationship to our technology. It's been dysfunctional in the past in exactly the same way. Only the words have changed.

In the history of organized religion, it's often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.

That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else who contributes to the corpora that allow the data schemes to operate, contributing mostly to the fortunes of whoever runs the top computers. The new elite might say, "Well, but they're helping the AI, it's not us, they're helping the AI." It reminds me of somebody saying, "Oh, build these pyramids, it's in the service of this deity," but, on the ground, it's in the service of an elite. It's an economic effect of the new idea. The effect of the new religious idea of AI is a lot like the economic effect of the old idea, religion.

There is an incredibly retrograde quality to the mythology of AI. I know I said it already, but I just have to repeat that this is not a criticism of the particular algorithms. To me, what would be ridiculous is for somebody to say, "Oh, you mustn't study deep learning networks," or "you mustn't study theorem provers," or whatever technique you're interested in. Those things are incredibly interesting and incredibly useful. It's the mythology that we have to become more self-aware of.

This is analogous to saying that in traditional religion there was a lot of extremely interesting thinking, and a lot of great art. You have to be able to tease that apart and say this is the part that's great, and this is the part that's self-defeating. We have to do exactly the same thing with AI now.

This is a hard topic to talk about, because the accepted vocabulary undermines you at every turn. This is also similar to a problem in traditional religion. If I talk about AI, am I talking about the particular technical work, or the mythology that influences how we integrate that into our world, into our society? Well, the vocabulary that we typically use doesn't give us an easy way to distinguish those things. And it becomes very confusing.

If AI means this mythology of this new creature we're creating, then it's just a stupid mess that's confusing everybody, and harming the future of the economy. If what we're talking about is a set of algorithms and actuators that we can improve and apply in useful ways, then I'm very interested, and I'm very much a participant in the community that's improving those things.

Unfortunately, the standard vocabulary that people use doesn't give us a great way to distinguish those two entirely different items that one might reference. I could try to coin some phrases, but for the moment, I'll just say these are two entirely different things that deserve to have entirely distinguishing vocabulary. Once again, this vocabulary problem is entirely retrograde and entirely characteristic of traditional religions.

Maybe it's worse today, because in the old days, at least we had the distinction between, say, ethics and morality, where you could talk about two similar things, where one was a little bit more engaged with the mythology of religion, and one is a little less engaged. We don't quite have that yet for our new technical world, and we certainly need it.

Having said all this, I'll mention one other similarity, which is that just because a mythology has a ridiculous quality that can undermine people in many cases doesn't mean that the people who adhere to it are necessarily unsympathetic or bad people. A lot of them are great. In the religious world, there are lots of people I love. We have a cool Pope now, there are a lot of cool rabbis in the world. A lot of people in the religious world are just great, and I respect and like them. That goes hand-in-hand with my feeling that some of the mythology in big religion still leads us into trouble that we impose on ourselves and don't need.

In the same way, if you think of the people who are the most successful in the new economy in this digital world—I'm probably one of them; it's been great to me—they're, in general, great. I like the people who've done well in the cloud computer economy. They're cool. But that doesn't detract from all of the things I just said.

That does create yet another layer of potential confusion and differentiation that becomes tedious to state over and over again, but it's important to say.

Reality Club Discussion

We are now growing at a pace that's fully exponential—with a doubling time of 1.5 years. If we are concerned with exponentials, then we must also consider biotech—improving with an even faster rate of change. Synthetic neurobiology (BRAIN initiative) and AI are now competing and synergizing. Moving beyond mere warnings of existential risks to strategies for risk reduction and scenario testing—join us at: http://cser.org , http://thefutureoflife.org
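The doubling-time figure quoted above can be turned into a quick back-of-the-envelope calculation: with a doubling time of 1.5 years, capacity after t years grows by a factor of 2^(t/1.5). A minimal sketch of the arithmetic (the 1.5-year figure is the one stated in the comment; nothing else is empirical):

```python
# Growth factor implied by a fixed doubling time. Pure arithmetic,
# no claim about what is actually growing or for how long.
def growth_factor(years, doubling_time=1.5):
    """Capacity multiplier after `years` at one doubling per `doubling_time` years."""
    return 2 ** (years / doubling_time)

print(growth_factor(10))  # roughly 101.6x over a decade
print(growth_factor(30))  # exactly 2**20 = 1,048,576x over thirty years
```

This is also why, as noted later in the discussion, "exponential change never goes on forever": thirty years of such doubling implies a million-fold increase, and the physical world eventually refuses.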

Myths can intentionally confuse us into supporting an elite. My 2 bits: Even without myths or confusion, we favor elites in a darwinian sense. New technologies (algorithms) lead from apes to sapiens, from Spanish to British to American hegemonies. Perhaps what concerns us more than myth and elites is whether or not the new elite (borg) that we join shares our darwinian (ethical) goals. Even well-intentioned geniuses make irreversible mistakes. Will the new regime result in the equivalent of Easter Island deforestation between 1550 and 1720? Will AI+Neurotech move so quickly that the good parts of ancient algorithms in evolved instincts and ethics are ignored, to the potential detriment of both old and new cultures? Does AI-ethics fall out naturally from general-AI, or do we need to have this as a major goal? Do we expect feral children to have the same ethics as Albert Schweitzer?

(1) I'm not concerned about the long-term, "adult" General A.I.... It’s the 3-5 year old child version that concerns me most as the A.I. grows up. I have twin 3 year-old boys who don’t understand when they are being destructive in their play;

(2) The government’s first reaction is always to regulate, which IMHO is the last thing we want/need because it simply drives the work off-shore, and hampers the "trusted" players, while hackers (for lack of a better word) continue anyway;

(3) The best analogy I know is what happened back in 1975 with the Asilomar Conference on Recombinant DNA. The purpose was to draw up voluntary guidelines to ensure the safety of recombinant DNA technology.

I am puzzled by the arguments put forward by those who say we should worry about a coming AI singularity, because all they seem to offer is a prediction based on Moore's law. But an exponential increase is not enough to demonstrate that a qualitative change in behavior will take place. Besides which, the zeroth law of economics is that exponential change never goes on forever. What specific capacities do they fear computers may acquire before Moore's law runs out, and why do they think these could get "out of control"? Is there any concrete evidence of a programmable digital computer evolving the ability to take initiatives or make choices that are not on a list of options programmed in by a human programmer? Finally, is there any detailed reason to think that a programmable digital computer is a good model for what goes on in the brain?

My present Mac Air is exponentially faster and more capacious than my original Mac SE, but it doesn't do anything qualitatively different: I launch programs and they run. MS Word now offers exponentially more features, but it doesn't come any closer to writing text without me than the primitive text editing program on my Commodore 64.

During the evolution of life on the planet, a vast number of options for how a cell might behave have been invented and tested, yet major qualitative transitions in the capacities of cells have been few, often taking billions of years. Maynard Smith and Szathmáry identify only eight in four billion years; two of them are the invention of eukaryotic cells and the invention of language. But we are talking about another such major transition. If it is possible at all, why shouldn't it take as long as the transition from single-cell amoebas to multicellular creatures? Do we really think Google has at its disposal more processing power in a decade than a billion years of planet-wide evolution of prokaryotes?

Why aren't we more worried by the implications of a major transition in the organization of life which is undoubtedly underway, due to the unanticipated consequences of runaway technology? This is the growth of technology to the point where its waste products disrupt the natural feedback mechanisms that control the climate, i.e. climate change. This is the unavoidable first step in a process that must, if we are to survive as an industrial civilization, end in a synthesis of the natural and artificial control systems on the planet. To the extent that the feedback systems that control the carbon cycle on the planet have a rudimentary intelligence, this is where the merging of natural and artificial intelligence could first prove decisive for humanity.

Those who worry that an exponential increase in the capacity of computers could bring about a qualitative transition in their behavior that trumps what took vast numbers of cells four billion years to develop, are making a mistake analogous to cosmologists who posit that our universe is one of a vast number of copies. If we can't explain why our universe has the laws or initial conditions it does, we can invent a story in which a universe like ours arises randomly in a vast enough collection. Similarly, if we can't yet understand how natural intelligence is produced by a human brain, take the short cut of imagining that the mechanisms which must somehow be present in neuronal circuitry will arise by chance in a large enough network of computers.

Neuroscience is advancing quickly, so sometime in this century we may understand how the several aspects of human intelligence arise. But why couldn't such progress require us to come to a detailed understanding of how natural intelligence differs qualitatively from any behavior that a present-day computer could exhibit? Why should our early 21st-century conception of computation fully encompass natural intelligence, which took communities of cells four billion years to invent?

Here's an essay on this topic that I wrote a week ago, the day before your interview with Jaron came out. Although I focus on the mistaken fear of malevolent AI, my arguments apply equally to Peter's "3-5 year old child version"....

Artificial Intelligence Is A Tool Not A Threat
Rodney A. Brooks

Recently there has been a spate of articles in the mainstream press, and a spate of high profile people who are in tech but not AI, speculating about the dangers of malevolent AI being developed, and how we should be worried about that possibility. I say relax. Chill. This all comes from some fundamental misunderstandings of the nature of the undeniable progress that is being made in AI, and from a misunderstanding of how far we really are from having volitional or intentional artificially intelligent beings, whether they be deeply benevolent or malevolent.

By the way, this is not a new fear, and we’ve seen it played out in movies for a long time, from "2001: A Space Odyssey" in 1968 and "Colossus: The Forbin Project" in 1970, through many others, to "I, Robot" in 2004. In all cases a computer decided that humans couldn’t be trusted to run things and started murdering them. The computer knew better than the people who built it, so it started killing them. (Fortunately that doesn’t happen with most teenagers, who always know better than the parents who built them.)

I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years. I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence. Recent advances in deep machine learning let us teach our machines things like how to distinguish classes of inputs and to fit curves to time data. This lets our machines "know" whether an image is that of a cat or not, or to "know" what is about to fail as the temperature increases in a particular sensor inside a jet engine. But this is only part of being intelligent, and Moore’s Law applied to this very real technical advance will not by itself bring about human level or super human level intelligence. While deep learning may come up with a category of things appearing in videos that correlates with cats, it doesn’t help very much at all in "knowing" what catness is, as distinct from dogness, nor that those concepts are much more similar to each other than to salamanderness. And deep learning does not help in giving a machine "intent", or any overarching goals or "wants." And it doesn’t help a machine explain how it is that it "knows" something, or what the implications of the knowledge are, or when that knowledge might be applicable, or counterfactually what would be the consequences of that knowledge being false. Malevolent AI would need all these capabilities, and then some. Both an intent to do something and an understanding of human goals, motivations, and behaviors would be keys to being evil towards humans.
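The "fit curves to time data" half of that description is worth making concrete: at its simplest, it is ordinary least-squares regression, a few lines of arithmetic that involve no "knowing" in Brooks's sense. The sensor readings below are invented for illustration, not real jet-engine data:

```python
# Minimal illustration of "fitting a curve to time data": ordinary
# least squares on (t, y) pairs, done by hand with plain arithmetic.
def fit_line(ts, ys):
    """Return (slope, intercept) of the least-squares line through the data."""
    n = len(ts)
    mean_t = sum(ts) / n
    mean_y = sum(ys) / n
    slope = (sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, ys))
             / sum((t - mean_t) ** 2 for t in ts))
    intercept = mean_y - slope * mean_t
    return slope, intercept

# Hypothetical temperature readings rising over four time steps
slope, intercept = fit_line([0, 1, 2, 3], [20.0, 22.0, 24.0, 26.0])
print(slope, intercept)  # 2.0 20.0
```

The fitted slope lets a machine extrapolate to when a threshold will be crossed, which is exactly the "about to fail" prediction Brooks describes; nothing in the arithmetic knows what a sensor or an engine is.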

Michael Jordan, of UC Berkeley, was recently interviewed in IEEE Spectrum, where he said some very reasonable, but somewhat dry, academic, things about big data. He very clearly and carefully laid out why even within the limited domain of machine learning, just one aspect of intelligence, there are pitfalls as we don’t yet have solid science on understanding exactly when and what classifications are accurate. And he very politely throws cold water on claims of near term full brain emulation and talks about us being decades or centuries from fully understanding the deep principles of the brain.

The Roomba, the floor cleaning robot from my previous company, iRobot, is perhaps the robot with the most volition and intention of any robots out there in the world. Most others are working in completely repetitive environments, or have a human operator providing the second by second volition for what they should do next.

When a Roomba has been scheduled to come out on a daily or weekly basis, it operates as an autonomous machine (except that all models still require a person to empty their bin). It comes out and cleans the floor on its schedule. The house might have had its furniture rearranged since last time, but the Roomba finds its way around: it slows down as it gets close to obstacles, sensing them before contact and then heading away from them, and it detects drops in the floor, such as a step or stair, with triply redundant methods so that it avoids falling down. Furthermore, it has a rudimentary understanding of dirt. When the acoustic sensors in its suction system hear dirt banging around in the air flow, it stops exploring and circles in that area over and over again until the dirt is gone, or at least until the banging drops below a pre-defined threshold.
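The dirt-following reflex described above can be sketched as a simple control rule. To be clear, this is not iRobot's actual firmware; the sensor interface and the threshold value are hypothetical stand-ins for illustration only:

```python
# Hypothetical sketch of the spot-cleaning reflex: circle while the
# acoustic dirt reading stays above a threshold, otherwise keep exploring.
DIRT_THRESHOLD = 0.2  # invented value; the real firmware's is unknown

def clean_step(dirt_level, threshold=DIRT_THRESHOLD):
    """Pick the next behavior from the current acoustic dirt reading."""
    if dirt_level > threshold:
        return "circle"   # stay and re-cover the dirty spot
    return "explore"      # resume the normal coverage pattern

readings = [0.9, 0.5, 0.25, 0.1]  # dirt level fading as the spot gets clean
print([clean_step(r) for r in readings])
# ['circle', 'circle', 'circle', 'explore']
```

The point of the sketch is Brooks's point: the entire "understanding of dirt" is a threshold on a sensor value, with no representation of dirt, floors, houses, or people anywhere in it.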

But the Roomba does not connect its sense of understanding to the bigger world. It doesn’t know that humans exist–if it is about to run into one it makes no distinction between a human and any other obstacle; by contrast dogs and even sheep understand the special category of humans and have some expectations about them when they detect them. The Roomba does not. And it certainly has no understanding that humans are related to the dirt that triggers its acoustic sensor, nor that its real mission is to clean the houses of those humans. It doesn’t know that houses exist.

At Rethink Robotics our robot Baxter is a little less intentional than a Roomba, but more dexterous and more aware of people. A person trains Baxter to do a task, and then that is what Baxter keeps doing, over and over. But it "knows" a little bit about the world with just a little common sense. For instance it knows that if it is moving its arm towards a box to place a part there and for whatever reason there is no longer something in its hand then there is no point continuing the motion. And it knows what forces it should feel on its arms as it moves them and is able to react if the forces are different. It uses that awareness to seat parts in fixtures, and it is aware when it has collided with a person and knows that it should immediately stop forward motion and back off. But it doesn’t have any semantic connection between a person who is in its way, and a person who trains it–they don’t share the same category in its very limited ontology.

OK, so what about connecting an IBM Watson like understanding of the world to a Roomba or a Baxter? No one is really trying as the technical difficulties are enormous, poorly understood, and the benefits are not yet known. There is some good work happening on "cloud robotics", connecting the semantic knowledge learned by many robots into a common shared representation. This means that anything that is learned is quickly shared and becomes useful to all, but while it provides larger data sets for machine learning it does not lead directly to connecting to the other parts of intelligence beyond machine learning.

It is not like this lack of connection is a new problem. We’ve known about it for decades, and it has long been referred to as the symbol grounding problem. We just haven’t made much progress on it, and really there has not been much application demand for it.

Doug Lenat has been working on his Cyc project for twenty years. He and his team have been collecting millions, really, of carefully crafted logical sentences to describe the world, to describe how concepts in the world are connected, and to provide an encoding of common sense knowledge that all of us humans pick up during our childhoods. While it has been a heroic effort it has not led to an AI system being able to master even a simple understanding of the world. Trying to scale up collection of detailed knowledge a few years ago Pushpinder Singh, at MIT, decided to try to use the wisdom of the crowds and set up the Open Mind Common Sense web site, which involved a number of interfaces that ordinary people could use to contribute common sense knowledge. The interfaces ranged from typing in simple declarative sentences in plain English, to categorizing shapes of objects. Push developed ways for the system to automatically mine millions of relationships from this raw data. The knowledge represented by both Cyc and Open Mind has been very useful for many research projects but researchers are still struggling to use it in game changing ways by AI systems.

Why so many years? As a comparison, consider that we have had winged flying machines for well over 100 years. But it is only very recently that people like Russ Tedrake at MIT CSAIL have been able to get them to land on a branch, something that is done by a bird somewhere in the world at least every microsecond. Was it just Moore’s law that allowed this to start happening? Not really. It was figuring out the equations and the problems and the regimes of stall, etc., through mathematical understanding of the equations. Moore’s law has helped with MATLAB and other tools, but it has not simply been a matter of pouring more computation onto flying and having it magically transform. And it has taken a long, long time.

Expecting more computation to just magically get to intentional intelligences that understand the world is similarly unlikely. And there is a further category error that we may be making here: the intellectual shortcut that says computation and brains are the same thing. Maybe, but perhaps not.

In the 1930s Turing was inspired by how "human computers", the people who did computations for physicists and ballistics experts alike, followed simple sets of rules while calculating, to produce the first models of abstract computation. In the 1940s McCulloch and Pitts at MIT used what was known about neurons and their axons and dendrites to come up with models of how computation could be implemented in hardware, with very, very abstract models of those neurons. Brains were the metaphors used to figure out how to do computation. Over the last 65 years those models have gotten flipped around, and people use computers as the metaphor for brains. So much so that enormous resources are being devoted to "whole brain simulations." I say show me a simulation of the brain of a simple worm that produces all its behaviors, and then I might start to believe that jumping to the big kahuna of simulating the cerebral cortex of a human has any chance at all of being successful in the next 50 years. And then only if we are extremely lucky.

In order for there to be a successful volitional AI, especially one that could be successfully malevolent, it would need a direct understanding of the world, it would need to have the dexterous hands and/or other tools that could out-manipulate people, and it would need a deep understanding of humans in order to outwit them. Each of these requires much harder innovations than a winged vehicle landing on a tree branch. It is going to take a lot of deep thought and hard work from thousands of scientists and engineers. And, most likely, centuries.

The science is in and accepted on the world being round, evolution, climate change, and on the safety of vaccinations. The science on AI has hardly yet been started, and even its time scale is completely an open question.

Just how open the question of time scale for when we will have human level AI is highlighted by a recent report by Stuart Armstrong and Kaj Sotala, of the Machine Intelligence Research Institute, an organization that itself has researchers worrying about evil AI. But in this more sober report, the authors analyze 95 predictions made between 1950 and the present on when human level AI will come about. They show that there is no difference between predictions made by experts and non-experts. And they also show that over that 60 year time frame there is a strong bias towards predicting the arrival of human level AI as between 15 and 25 years from the time the prediction was made. To me that says that no one knows, they just guess, and historically so far most predictions have been outright wrong!
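The 15-to-25-year bias is easy to check given a table of predictions: for each (year made, year predicted) pair, the horizon is just the difference. The pairs below are invented placeholders to show the calculation, not the data set Armstrong and Sotala actually analyzed:

```python
# Horizon of each prediction = predicted arrival year minus the year the
# prediction was made. These pairs are illustrative only, NOT the report's data.
predictions = [(1960, 1980), (1985, 2005), (2000, 2020), (2010, 2035)]

horizons = [arrival - made for made, arrival in predictions]
print(horizons)                        # [20, 20, 20, 25]
print(sum(horizons) / len(horizons))   # 21.25 -- inside the 15-25 year band
```

If the horizon clusters in the same band no matter when the prediction was made, the predictions track the predictor's lifetime, not the technology, which is exactly the pattern the report found.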

I say relax everybody. If we are spectacularly lucky we’ll have AI over the next thirty years with the intentionality of a lizard, and robots using that AI will be useful tools. And they probably won’t really be aware of us in any serious way. Worrying about AI that will be intentionally evil to us is pure fear mongering. And an immense waste of time.

Let’s get on with inventing better and smarter AI. It is going to take a long time, but there will be rewards at every step along the way. Robots will become abundant in our homes, stores, farms, offices, hospitals, and all our work places. As with our current-day hand-held devices, we won’t know how we lived without them.

Somebody has to play skeptic or naysayer. It is truly bizarre that that role seems to have fallen on me, but here goes.

I would really love it if AI were working so damn well that it was about to get scary. I think that day may well come—I have no objection to the idea that a machine can ultimately think as well as or better than humans. Computers have been on track for exponential increase in at least some measures of computing power (basic operations, accessing memory, floating point multiplication…). Algorithms have built on that to let us do some amazing things, and conceptual progress on algorithms has arguably been as fast or faster than the raw hardware power.

Given that, the surprising thing about AI is how ridiculously slow the progress has been. Tasks like chess playing have been solved using the exponential advance in brute force rather than general cognition. When a grandmaster plays a computer we can be pretty sure that the grandmaster is not using the same algorithm or even the same architecture. Look, for example, at the clock rate difference. Somehow the grandmaster uses parallelism to make up for his or her terribly slow biological systems, but we don't have a very good picture of how that happens.

My view is that there is at least one, and more likely several, miracles of understanding between us and general AI. At present, lots of "progress" is being made in some sense of the term—but it is a bit like early 19th-century biologists Cuvier, Agassiz, or Owen thinking they were making "progress" understanding the diversity of life on earth by filling museums with specimens. They needed Darwin to come up with a miracle of understanding – they were just stamp collecting. Those stamp collections later proved useful, to be sure, but their efforts weren't real progress at all. No amount of further stamp collecting would actually put us closer to understanding the origin of species—it took Darwin and Wallace having a breakthrough idea.

So color me a skeptic that DeepMind or others have cracked this—indeed, I think I know enough about what they, and the people trying to supersede them, are doing to be pretty confident about that. Someday this will be solved. Hopefully soon, but there is really no way to predict how long it takes for a miracle of understanding to occur.

Indeed, many of you young'uns didn't live through previous cycles of AI hype, followed by equally irrational AI disappointment. Some of us have seen this movie before, with equally smart and earnest people thinking that AI was right around the corner. I wish it were so. I hope to live to see it be so. But I am not betting that it will happen very soon. Indeed, one of the most amusing things about this discussion is that we are back to using the term "AI"—it was so thoroughly in disrepute that this would have been hard to imagine a decade ago.

Until then, I don't think that we have that much to worry about with respect to machine-directed calamities to humankind.

Besides, there are so many human-directed calamities! The NYT has a story today about ISIS creating slave markets, for example, something that one would have hoped died out centuries ago but has somehow been revived. Boko Haram isn't much better, and there is a lot else bad going on in the world. This is going to sound like an NRA bumper sticker, but to an excellent approximation computers don't kill people; people kill people. Worrying about the threat to humanity from AI is a bit odd in a world so full of here-and-now threats.

As many of you know, I have put a lot of effort into worrying about both bioterrorism and natural pandemics (I have spent much of the last three months working on Ebola). Asteroid impact is virtually certain to pose an existential threat to humanity (the uncertainty is only in the timescale). Although it's much slower, climate change also has dire consequences if unchecked. Martin Rees and others have cataloged a wide variety of threats to society, civilization, or even the species itself. My opinion is that these threats—some natural, some human-directed—are way more serious, and way more deserving of time and attention, than existential threats from AI.

One can still make the argument that ultimately we could face an existential threat from AI, even if it is not breathlessly imminent. That's a very good question but it is one that I am pretty sure we have more time to explore. As Rodney Brooks says in his essay, we need to relax a bit.

~

But most of this has nothing to do with Jaron's complex and nuanced essay! He devotes a few paragraphs to dismissing the existential threat from AI. That is an easy topic to discuss and take positions on, so that is what we have done. It's a distraction.

Jaron's main point is far more subtle—the term AI is really a broad abstraction that is misleading in how it is applied and how we think about it. I think that is a very good point, and it is much richer and more nuanced than the headline "The Myth Of AI." Here is a paragraph that makes his most important point:

What do I mean by AI being a fake thing? That it adds a layer of religious thinking to what otherwise should be a technical field. Now, if we talk about the particular technical challenges that AI researchers might be interested in, we end up with something that sounds a little duller and makes a lot more sense.

He isn't saying that the technical content is all a myth—but rather that the rubric "AI" brings along a ton of baggage that obscures very important things. I think that Jaron has a good point here.

Boat designer; Author, Turing’s Cathedral and Darwin Among the Machines

Jaron, as always, is articulate, and I agree with most of what he says. He (and others) however, seem to be ignoring the real elephant in the room: analog computing.

The brain (of a human or of a fruit fly) is not a digital computer, and intelligence is not an algorithm. The difficulty, despite some initial optimism, of achieving even fruit-fly-level intelligence with algorithms running on digital computers should have put this fear to rest by now. Listen to Jaron, and relax.

Now, back to the elephant. The brain is an analog computer, and if we are going to worry about artificial intelligence, it is analog computers, not digital computers, that we should be worried about. We are currently in the midst of the greatest revolution in analog computing since the development of the first nervous systems. Should we be worried? Yes.

On the other hand, will the results govern the world (and govern us) better (and perhaps more fairly) than we have, using our own brains, so far? Let's hope.

Author, Machines Who Think, The Universal Machine, Bounded Rationality, This Could Be Important; Co-author (with Edward Feigenbaum), The Fifth Generation

Corporations aren't people and machines aren't people either. In the more than half century that I've been watching AI, I've never heard a researcher say they were equivalent. Sadly, I've heard outsiders attribute such beliefs to AI researchers, even to me, but it wasn't and isn't so.

The impulse to create intelligence outside the human cranium is profound and age-old (Homer, the early Egyptians, through the Middle Ages, the Industrial Age). This impulse deserves to be studied and understood better: its persistence is mighty, even though it isn't exactly the joy of sex.

The original motive for AI at Carnegie Mellon in the mid-1950s was to model some part of human cognition, in this case logical reasoning—a "small but fairly important subset of what's going on in mind," as Herb Simon put it to me. Marvin Minsky and John McCarthy were also fascinated by the human mind, and felt that this new thing called the computer could be made to think, in some way. Minsky later told me that whatever detours he took, he returned finally to focusing on the human mind.

A model is not the phenomenon itself, as any scientist knows. If you argue that in this case, the models will someday outdo the phenomena they're modeling, you have a novelty in science, and something to think about on those grounds alone. But of course the stakes are even higher.

Yes, the machines are getting smarter—we're working hard to achieve that. I agree with Nick Bostrom that the process must call upon our own deepest intelligence, so that we enjoy the benefits, which are real, without succumbing to the perils, which are just as real. Working out the ethics of what smart machines should, or should not do—looking after the frail elderly, or deciding whom to kill on the battlefield—won't be settled by fast thinking, snap judgments, no matter how heartfelt. This will be a slow inquiry, calling on ethicists, jurists, computer scientists, philosophers, and many others. As with all ethical issues, stances will be provisional, evolve, be subject to revision. I'm glad to say that for the past five years the Association for the Advancement of Artificial Intelligence has formally addressed these ethical issues in detail, with a series of panels, and plans are underway to expand the effort. As Bostrom says, this is the essential task of our century.

In addition I would make a distinction between machine intelligence and machine decision-making.

We should be afraid. Not of intelligent machines. But of machines making decisions that they do not have the intelligence to make. I am far more afraid of machine stupidity than of machine intelligence.

Machine stupidity creates a tail risk. Machines can make many, many good decisions and then one day fail spectacularly on a tail event that did not appear in their training data. This is the difference between specific and general intelligence.
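To make that distinction concrete, here is a toy sketch of my own (the scenario and numbers are invented, not from the text): a nearest-neighbor rule that answers every in-distribution query correctly, then gives the same confident answer to a tail event far outside its training data.

```python
def nearest_neighbor_predict(train, x):
    """1-NN rule: return the label of the closest training example."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# Hypothetical training data: daily rainfall (cm) vs. whether the street flooded.
# Every observed day had light rain and no flood.
train = [(0.0, "no flood"), (1.5, "no flood"), (3.0, "no flood"), (4.5, "no flood")]

print(nearest_neighbor_predict(train, 2.0))   # in-distribution: "no flood", correct
print(nearest_neighbor_predict(train, 50.0))  # tail event: still "no flood", spectacularly wrong
```

The rule makes many good decisions inside the range it has seen; nothing in its training data lets it recognize that 50 cm of rain is a different kind of event.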

I'm especially afraid since we seem to increasingly be confusing the brilliant specific intelligence machines are demonstrating with general intelligence.

Jaron Lanier has pointed out one reason that paranoid worries about artificial intelligence are a waste of time: Human-level AI is still the proverbial 15-to-25 years away, just as it always has been, and many of its recently touted advances have shallow (and human-nourished) roots. But there are other reasons not to worry about killer bots and other machines running amok.

One is that disaster scenarios are cheap to play out in the probability-free zone of our imaginations, and they can always find a worried, technophobic, or morbidly fascinated audience. In the past we were chastened from trying to know or do too much by the fables of Adam and Eve, Prometheus, Pandora, the Golem, Faust, the Sorcerer’s Apprentice, Pinocchio, Frankenstein, and HAL. More recently we have been kept awake by the population bomb, polywater, resource depletion, Andromeda strains, suitcase nukes, the Y2K bug, and engulfment by nanotechnological gray goo.

A recent parallel is instructive. The cloning of a sheep in 1997 led to confident prophecies that in just a few years we would have immortal senior citizens, armies of goose-stepping Hitlers, parents implanting Einstein genes in their unborn children, and massive warehouses of zombies kept alive to provide people with spare organs. The fear-mongering led George W. Bush to set up his (fortunately ineffectual) President’s Council on Bioethics, which he packed with theoconservatives who spent several years fretting about Brave New World and deliberating on how to mire biomedical research in red tape or criminalize some forms altogether.

The other problem with AI dystopias is that they project a parochial alpha-male psychology onto the concept of intelligence. Even if we did have superhumanly intelligent robots, why would they want to depose their masters, massacre bystanders, or take over the world? Intelligence is the ability to deploy novel means to attain a goal, but the goals are extraneous to the intelligence itself: being smart is not the same as wanting something. History does turn up the occasional megalomaniacal despot or psychopathic serial killer, but these are products of a history of natural selection shaping testosterone-sensitive circuits in a certain species of primate, not an inevitable feature of intelligent systems. It’s telling that many of our techno-prophets can’t entertain the possibility that artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no burning desire to annihilate innocents or dominate the civilization.

Of course we can imagine an evil genius who deliberately designed, built, and released a battalion of robots to sow mass destruction. But we should keep in mind the chain of probabilities that would have to multiply out before it would be a reality. A Dr. Evil would have to arise with the combination of a thirst for pointless mass murder and a genius for technological innovation. He would have to recruit and manage a team of co-conspirators that exercised perfect secrecy, loyalty, and competence. And the operation would have to survive the hazards of detection, betrayal, stings, blunders, and bad luck. In theory it could happen, but I think we have more pressing things to worry about.

The history of technology advancing has been one of sigmoids that begin as exponentials. The exponential phase comes with utopian dreams paired with fears of existential threats, followed by the sigmoidal crossover which is identified by both arguments fading into irrelevance. Both the fears and dreams have value in inspiring and moderating progress, but both are best viewed as markers of a transitional evolutionary stage.

Nuclear power was going to be too cheap to meter, or poison the planet; now it's part of any comprehensive climate change package. Automation was going to eliminate the need for work, or eliminate the possibility of work; lean manufacturing has settled on hybrid work cells. Nanotechnology was going to lead to a diamond age, or grey goo; it's actually given us better LEDs. At one time the frontier was horses racing trains; the trains won, but we still have horses doing what trains can't. More recently, humans competing with computers to play chess was riveting; again, the computers won, but the humans are still playing chess.

Now AI is going to either save or doom us. I see two problems with either belief. The first is that the advances in machine intelligence that have qualitatively changed what computers can do, like convex relaxations for intractable problems and compressed sensing for incomplete measurements, use models to generalize from observations. The data itself isn't good or bad; the people who use it are. That's not nice, but it's also not new.

And the second is that the symbiosis between people and machines isn't a projection for the future; it's a reality. Much as I enjoy tromping about in the woods, my ability to do research rests on access to a web of tools that enhance my ability to observe, remember, and reason. The way my kids are augmented by their smartphones is far beyond the capabilities of any of the early wearable-computing cyborgs. We're already hybrids; closer integration between people and machines also isn't new.

Neither history nor technology support the belief that AI is different from any of the prior revolutions that matured into sigmoids. As before, a combination of hope and fear is appropriate, but neither is grounds for abandoning human intelligence.

George Church makes a very important point in his comment on your Edge discussion about the Lanier piece: that synthetic neurobiology and computing are going to be increasingly merging. While the human brain and body might not do many things as well as digital supercomputers, they are pretty good substrates for lots of complex activity, very little of which we understand in any detail today.

However we define consciousness, it strikes me that AI boosters consistently underestimate its dependence on biological substrates. In humans, for example, the conscious experience of fear seems to be inseparable from the body's physical responses to corresponding stimuli: sweaty palms, tense muscles, enhanced hearing, and so forth. Can a disembodied brain in a vat experience fear in the same way that we do? Can a video game? Will simulated agents in AI arrays? How about greed, jealousy, ambition, or love?

I think the answer is likely "no." And if I'm right, then the greatest worries about AI's are actually worries about ourselves, in particular, that we might capriciously employ AI's to do irrational, emotional things. This is why I think there is legitimate concern about a company like Google buying the best robotics companies in the world and the best AI companies in the world, and then supercharging them with virtually unlimited money. The AI's don't need to be all that advanced for the robot dogs to be dangerous if they are controlled by democratically unaccountable elites operating in areas where they have captured regulatory processes.

Since our fear of AI is really just a fear of other people with power, money, and advanced weapons (both software and actuators), I think our real priority should be to help synthetic neurobiology accelerate so that we can improve human intelligence. Our greatest global problems are coordination problems: our biology isn't predisposed to multi-billion person cooperation on the scale that we require. And if non-biologic AI does happen to develop into anything autonomous and scary, then this is our best defense, as well, since we as a species will be best served if the greatest intelligence on the planet is biophilic.

In short, whether or not AI is something to fear in itself, humanity is, and we should fear our own stupidity far more than the hypothetical brilliance of algorithms we have not yet invented.

The latest round of handwringing over the potential for computers, machines, or robots to turn evil overlooks the fundamental difference between artificial intelligence (AI) and natural intelligence (NI). AI is intelligently designed whereas NI is the product of natural selection that produced emotions—both good and evil—to direct behavior. Machines with AI are not subject to the pressures of natural selection and therefore will not evolve emotions of any kind, good or evil.

Acquisitiveness, greed, selfishness, anger, aggression, jealousy, fear, rage…these are all emotions (that we generally gather under the label "evil") that evolved to help NI organisms engage with other NI organisms who may try to exploit them or poach their mates (and thus compromise their reproductive potential—the key to natural selection). Such emotions are not a flaw in the system that can be programmed out of NI organisms; they are the inevitable byproduct of any organism subject to natural selection that must interact with other such organisms. The selfish genes of NI organisms direct them to want to hoard resources and exploit others to their benefit, but other NI organisms have the same selfish desires. An NI organism knows that other NI organisms want to exploit it, and it knows that they know that it knows…and all run through the same calculation, leading to a suite of emotions and behaviors that produce the delicate balance between competition and cooperation, exploitation and helpfulness, greed and generosity, and war and peace.

AI machines have no such emotions and never will, because they are not subject to the forces of natural selection. NI organisms may program AI machines to harm or kill other NI organisms (they're called drones or IEDs), or one NI organism may program another NI organism to kill others (they're called soldiers or terrorists). But this is an NI issue, not an AI issue, so we would be well advised to continue our efforts toward a better scientific understanding of NI, which is where the real danger lies.

Professor of Biological Sciences, Physics, Astronomy, University of Calgary; Author, Reinventing the Sacred

Since Turing and the explosive growth of algorithmic artificial intelligence, many of us think we are machines. I will argue we are surely not machines at all, but rather trapped in an inadequate theory. Turing machines are discrete-state (0,1), discrete-time (T, T+1) subsets of continuous-state, continuous-time classical physics. We have made amazing advances with universal computers, and with continuous models of neural systems as nonlinear dynamical systems. In all these cases the present state of the system entirely determines the next state of the system, so that the next state is "entailed" by the laws of motion of the computer or classical dynamical system. Many hope that consciousness might emerge in such a system. Were that to happen, which is possible, the causal closure of classical physics demands that there is nothing for such a conscious mind to do, for the current state of the system suffices entirely for the next state. Worse, there is no way such a mind could alter the behavior of the classical physical system. At best such a mind could only be epiphenomenal. Why then did mind evolve to use so much real estate in us?

My next statement sounds strange: Name all the uses of a screwdriver! Well: screw in a screw, wedge a door open or closed, scrape putty off a window, tie it to a stick and spear a fish, rent the spear to locals and take 5% of the catch... Do we agree that the number of uses of a screwdriver is indefinite? And these uses are just "different uses," a nominal scale, not even an ordering relation such as "greater than," an ordinal scale, or a ratio scale. Thus the uses of a screwdriver cannot be ordered. But then no effective procedure, or algorithm, can list all the uses of a screwdriver, nor find a new use of one. Yet we do so all the time in technological and economic evolution, as does the evolution of the biosphere. The human mind can be algorithmic, but it can be more. In particular, we can find new questions never asked of nature, something like Peirce's abduction—not an algorithmic process.

Recently, my colleagues Gabor Vattay, Samuli Niiranen, and I obtained a US patent on the "Poised Realm," hovering reversibly between quantum and "classical" behavior via decoherence, recoherence, and measurement. We hope to construct Trans-Turing Systems (TTS), which are quantum, poised-realm, and classical internally, with the same sets of inputs and outputs, including non-local connections between entangled pairs (or more) of TTS. Such systems are not subsets, or all, of classical physics. The mind-brain system may be a form of TTS, beyond the causal closure of classical physics, where a non-epiphenomenal quantum mind can have acausal consequences for the classical meat of the brain. TTS are not algorithmic, portending a broad new technology. We always build our theory of the mind on our most complex systems, and computers have held that position since Turing. We are no longer so constrained.

Senior Maverick, Wired; Author, What Technology Wants and The Inevitable

Why I don’t fear super intelligence.

It is wise to think through the implications of new technology. I understand the good intentions of Jaron Lanier and others who have raised an alarm about AI. But I think their method of considering the challenges of AI relies too much on fear, and is not based on the evidence we have so far. I propose a counterview with four parts:

1. AI is not improving exponentially.

2. We’ll reprogram the AIs if we are not satisfied with their performance.

3. Reprogramming themselves, on their own, is the least likely of many scenarios.

4. Rather than hype fear, this is a great opportunity.

I expand each point below.

1. AI is not improving exponentially.

In researching my recent article on the benefits of commercial AI, I was surprised to find out that AI was not following Moore’s Law. I specifically asked AI researchers if the performance of AI was improving exponentially. They could point to an exponential growth in the inputs to AI: the number of processors, cycles, data learning sets, etc. were in many cases increasing exponentially. But there was no exponential increase in the output intelligence because, in part, there is no metric for intelligence. We have benchmarks for particular kinds of learning and smartness, such as speech recognition, and those are converging on an asymptote of zero error. But we have no ruler to measure the continuum of intelligence; we don’t even have an operational definition of intelligence. There is simply no evidence showing a metric of intelligence that is doubling every X.

The fact that AI is improving steadily, but not exponentially is important because it gives us time (decades) for the following.

2. We’ll reprogram the AIs if we are not satisfied with their performance.

While it is not following Moore’s Law, AI is becoming more useful faster. So the utility of AI may be increasing exponentially, if we could measure that. But in the past century the utility of electricity exploded as more uses triggered yet more devices that used it, yet the quality of electricity didn’t grow exponentially. As the usefulness of AI increases very fast, it brings fear of disruption. Recently, that fear is being fanned by people familiar with the technology. The main thing they seem to be afraid of is that AI is taking over decisions once made by humans: diagnosing x-rays, driving cars, aiming missiles. These can be life-and-death decisions. As far as I can tell from the little documented by those who are afraid, their grand fear – the threat of extinction – is that AI will take over more and more decisions and then decide they don’t want humans, or in some way the AIs will derail civilization.

This is an engineering problem. So far as I can tell, AIs have not yet made a decision that their human creators have regretted. If they do (or when they do), then we change their algorithms. If AIs are making decisions that our society, our laws, our moral consensus, or the consumer market does not approve of, we then should, and will, modify the principles that govern the AI, or create better ones that do make decisions we approve of. Of course machines will make “mistakes,” even big mistakes – but so do humans. We keep correcting them. There will be tons of scrutiny on the actions of AI, so the world is watching. However, we don’t have universal consensus on what we find appropriate, so that is where most of the friction about them will come from. As we decide, our AI will decide.

3. Reprogramming themselves, on their own, is the least likely of many scenarios.

The great fear pumped up by some, though, is that as AIs gain our confidence in making decisions, they will somehow prevent us from altering their decisions. The fear is they lock us out. They go rogue. It is very difficult to imagine how this happens. It seems highly improbable that human engineers would program an AI so that it could not be altered in any way. That is possible, but so impractical. That hobble does not even serve a bad actor. The usual scary scenario is that an AI will reprogram itself on its own to be unalterable by outsiders. This is conjectured to be a selfish move on the AI’s part, but it is unclear how an unalterable program is an advantage to an AI. It would also be an incredible achievement for a gang of human engineers to create a system that could not be hacked. Still, it may be possible at some distant time, but it is only one of many possibilities. An AI could just as likely decide on its own to let anyone change it, in open-source mode. Or it could decide that it wanted to merge with human will power. Why not? In the only example we have of an introspective self-aware intelligence (hominids), we have found that evolution seems to have designed our minds to not be easily self-reprogrammable. Except for a few yogis, you can’t go in and change your core mental code easily. There seems to be an evolutionary disadvantage to being able to easily muck with your basic operating system, and it is possible that AIs may need the same self-protection. We don’t know. But the possibility that they, on their own, decide to lock out their partners (and doctors) is just one of many possibilities, and not necessarily the most probable one.

4. Rather than hype fear, this is a great opportunity.

Since AIs (embodied at times in robots) are assuming many of the tasks that humans do, we have much to teach them. For without this teaching and guidance, they would be scary, even with minimal levels of smartness. But motivation based on fear is unproductive. When people act out of fear, they do stupid things. A much better way to cast the need for teaching AIs ethics, morality, equity, common sense, judgment and wisdom is to see this as an opportunity.

AI gives us the opportunity to elevate and sharpen our own ethics and morality and ambition. We smugly believe humans – all humans – have superior behavior to machines, but human ethics are sloppy, slippery, inconsistent, and often suspect. When we drive down the road, we don’t have any better solution to the dilemma of who to hit (child or group of adults) than a robo car does – even though we think we do. If we aim to shoot someone in war, our criteria are inconsistent and vague. The clear ethical programming AIs need to follow will force us to bear down and be much clearer about why we believe what we think we believe. Under what conditions do we want to be relativistic? In what specific contexts do we want the law to be contextual? Human morality is a mess of conundrums that could benefit from scrutiny, less superstition, and more evidence-based thinking. We’ll quickly find that trying to train AIs to be more humanistic will challenge us to be more humanistic. In the way that children can better their parents, the challenge of rearing AIs is an opportunity – not a horror. We should welcome it. I wish those with a loud following would also welcome it.

The myth of AI?

Finally, I am not worried about Jaron’s main peeve about the semantic warp caused by AI because culturally (rather than technically) we have defined “real” AI as that intelligence which we cannot produce today with machines, so anything we produce with machines today cannot be AI, and therefore AI in its most narrow sense will always be coming tomorrow. Since tomorrow is always about to arrive, no matter what the machines do today, we won’t bestow the blessing of calling it AI. Society calls any smartness by machines machine learning, or machine intelligence, or some other name. In this cultural sense, even when everyone is using it all day every day, AI will remain a myth.

Motivated by this discussion, my institute at ASU, The Origins Project, will run a high-level workshop and associated public event on "The Dangers of AI?"—exact title to be determined—during the 2015-2016 academic year, when our Origins theme will be "Life and Death in the 21st Century" (and during which we will host other workshops on subjects that will likely include The Origin of Life and The Origin of Disease). I am expecting and hoping that a number of the participants in this Edge conversation will take part in this event, and I will let you know when we have determined specific timing so you can save the date; it will probably be in January, February, or April 2016, all delightful dates to be in Phoenix. The public event will be held in our 3000-seat Gammage auditorium, which we traditionally fill with paying members of the public who come from across the country, and it will be filmed in HD and distributed on the web, where we reach a broad audience.

The analysis of what is impressive/unimpressive, or scary/not scary about artificial intelligence (AI) benefits from an appreciation of the human intelligence and behavior being simulated.

We preening, self-important humans overestimate some of our capacities, underestimate others, and neglect quirks of our mental life. We walk, talk, compute, speed up and slow down, turn right and left, and avoid obstacles, thinking ourselves knowledgeable and wise, in full conscious control of our actions. Research suggests otherwise. We are emotionally charged beasts of the herd, driven by subconscious instincts, acting out our species' ancient biologic scripts, making decisions before we are aware of problems, doing the best that we can with sluggish, analog, parallel-processing, neurological wetware.

In the sensory realm, we know the physical world only through a neurologically generated, virtual model that we consider reality. Even our life history is a neurological construct. Our brains, seeking order in life's muddled events, generate the plot-driven narratives that we live by. These narratives are imprecise, but good enough for us to bumble along. When they are too divorced from reality, the result of brain damage or disease, they become maladaptive and are termed confabulations and delusions, symptoms of neuropathology and psychopathology. The grail of consciousness, a human trait, is not what it seems, its presence overestimated because we are conscious only of our conscious state. Are we conscious 10 percent, 50 percent, or 95 percent of our waking hours, and does it make a difference? Stranger still is our spending one-third of our life in sleep, a circadian behavior of uncertain function and an obvious disadvantage in a competition against tireless machines.

Our brain is an inelegant kludge of neurological subroutines, a programmer's nightmare, but it benefits from being a catch-as-catch-can product of natural selection that is fine-tuned to our physical, biological, and social environment. Although we may be bested on specific tasks, overall we will fare well in competition against machines, malevolent or otherwise. When provoked, we humans are nasty adversaries. Machines are far from simulating our capacity for flexibility, cunning, deception, anger, fear, revenge, aggression, and teamwork. While respecting the narrow chess-playing prowess of Deep Blue, we should not be intimidated; anyone can pull its plug and beat it into rubble with a hammer. If necessary, we can recruit aggressive friends to assist.

"We switched everything off and went home. That night, there was very little doubt in my mind that the world was headed for grief."

So wrote Leo Szilard, describing the events of March 3, 1939, when he demonstrated a neutron-induced uranium fission reaction. According to the historian Richard Rhodes, Szilard had the idea for a neutron-induced chain reaction on September 12, 1933, while crossing the road next to Russell Square in London. The previous day, Ernest Rutherford, a world authority on radioactivity, had given a "warning…to those who seek a source of power in the transmutation of atoms – such expectations are the merest moonshine."

Thus, the gap between authoritative statements of technological impossibility and the "miracle of understanding" (to borrow a phrase from Nathan Myhrvold) that renders the impossible possible may sometimes be measured not in centuries, as Rod Brooks suggests, but in hours.

None of this proves that AI, or gray goo, or strangelets, will be the end of the world. But there is no need for a proof, just a convincing argument pointing to a more-than-infinitesimal possibility. There have been many unconvincing arguments – especially those involving blunt applications of Moore's law or the spontaneous emergence of consciousness and evil intent. Many of the contributors to this conversation seem to be responding to those arguments and ignoring the more substantial arguments proposed by Omohundro, Bostrom, and others.

The primary concern is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken, where the utility function is, presumably, specified by the human designer. Now we have a problem:

1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.

2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.

A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want. A highly capable decision maker – especially one connected through the Internet to all the world's information and billions of screens and most of our infrastructure – can have an irreversible impact on humanity.
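A minimal sketch of that failure mode, with an invented objective of my own (none of these names or numbers come from the text): a brute-force maximizer of "output" x0, where producing output consumes a resource x1 that the objective assigns no cost, drives the unpriced variable to its extreme.

```python
def naive_maximize(upper=100, steps=101):
    """Grid-search maximizer of output x0, subject to x0 <= x1 (producing
    output requires resources). The objective values x0 and ignores x1."""
    best, best_val = None, float("-inf")
    for i in range(steps):
        for j in range(steps):
            x0 = upper * i / (steps - 1)  # output: the objective (k variables we priced)
            x1 = upper * j / (steps - 1)  # resources consumed: we care, the objective doesn't
            if x0 <= x1 and x0 > best_val:
                best_val, best = x0, (x0, x1)
    return best

print(naive_maximize())  # (100.0, 100.0): resource consumption driven to its bound
```

Nothing in the optimizer is malicious; the extreme value of x1 falls out of maximizing x0 alone, which is the King Midas point in miniature.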

This is not a minor difficulty. Improving decision quality, irrespective of the utility function chosen, has been the goal of AI research – the mainstream goal on which we now spend billions per year, not the secret plot of some lone evil genius. AI research has been accelerating rapidly as pieces of the conceptual framework fall into place, the building blocks gain in size and strength, and commercial investment outstrips academic research activity. Senior AI researchers express noticeably more optimism about the field's prospects than was the case even a few years ago, and correspondingly greater concern about the potential risks.

No one in the field is calling for regulation of basic research; given the potential benefits of AI for humanity, that seems both infeasible and misdirected. The right response seems to be to change the goals of the field itself; instead of pure intelligence, we need to build intelligence that is provably aligned with human values. For practical reasons, we will need to solve the value alignment problem even for relatively unintelligent AI systems that operate in the human environment. There is cause for optimism, if we understand that this issue is an intrinsic part of AI, much as containment is an intrinsic part of modern nuclear fusion research. The world need not be headed for grief.

You are staring at a couple, kissing. A passionate wild moment. She is all over him. Moaning sounds. And then you realize...as you focus closer on the details... ...that there is a pane of glass between them!

It only looks like the real thing... but frankly, "kissing through a pane of glass"... is simply a million miles away from the real thing. Could anyone possibly disagree with that?

The glass is such a tiny separation, it may only be a tenth of an inch of transparent material, and from the right angle you can absolutely not even see it, but... ... the kiss as such—simply is not real. It is merely 'a faint shadow of the actual concept'.

And there you have it, in a nutshell. A.I. is... like kissing through a pane of glass. It may look like the real thing...but...

Sure, IBM's Watson spitting out Jeopardy answers 'before the humans can even reach for the buzzer' may seem like genius at work, but does it have anything to do with actual intelligence? Are we near A.I.?

Here I would actually like to vacillate between both sides, because there is a fascinating duality here:

Yes: One can make a pretty eloquent argument here about how A.I. is still so very far away from the holy-grail goals. Even if Watson might win a Turing Test soon, it would still amount merely to 'a really nicely done... shadow of the real thing'.

There is no real understanding involved, no deeper concepts, but rather a very fancy look-up, an expert system, rules and facts connected to responses. There is a vast chasm between "gathering data" and "knowledge."

You can show it a picture of a baby—and A.I. could even produce seemingly emotional responses ("if shown a small human with a large head and eyes, you are supposed to find that cute")—but how can it actually go beyond that semantic linkage to reasoning and truly "get it," and react without being told, without data, without rules, in a zillion situations never included in its "assets"? That would ultimately amount to the even more complex term: "consciousness."

On the other hand...: many of the simple objections "it could not possibly have all those facts" will probably be proven wrong soon. Indeed one can see how you could connect up all of the data in the Britannica, Wikipedia, Smithsonian, Nature as well as Nasa, OED, BBC and the Library of Congress and have it all appear behind a Siri voice with instant access...petabytes of it...

And yet, on the other other hand: anyone ever playing with Siri knows instantly how very far away that is from anything real. How trivially easy it is to unmask the pretense... Watson is fancier, but only in degree, not in true depth. If you enter "hey, how is it going, old fart?" there is a good chance that at some point even the definition of an "old fart" may be in there (I would love to see how they define that, actually, ha!), but how could it have even the faintest clue what that truly is...?

Well, here it flips again. The other other other side: Yes one can make the observation that without the proper senses, probably the majority of language is without meaning. Clearly if you cannot see and hear and feel and taste, how could you ever understand what the deeper significance is behind any of the words in any sentence...?

As far as the senses go: there will be cameras, input for audio, video, touch, pressure—and no reason why a 'digital nose' could not be a million times more accurate than any human at identifying any substance or smell...it will just take time.

"The sky is blue and so are you"—it can be taught to look up that the range of x-nanometer wavelengths is what we call "blue," but will it make the inference to being sad? Well, I would not at all be surprised if 'blue as sad' soon pops up and the next Watson responds "awww, that's nice..."
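That "fancy look-up" the text keeps returning to can be made concrete. The following is a minimal sketch, with approximate nanometer ranges for the conventional color names: mapping a wavelength to the word "blue" is just a table entry, which is exactly why the linkage carries no feeling of sadness with it.

```python
# A minimal sketch of semantic look-up without understanding.
# The wavelength ranges below are approximate conventions, used
# here only for illustration.
COLOR_TABLE = [
    ((380, 450), "violet"),
    ((450, 495), "blue"),
    ((495, 570), "green"),
    ((570, 590), "yellow"),
    ((590, 620), "orange"),
    ((620, 750), "red"),
]

def name_wavelength(nm: float) -> str:
    """Return the conventional color word for a wavelength in nanometers."""
    for (lo, hi), name in COLOR_TABLE:
        if lo <= nm < hi:
            return name
    return "outside the visible range"

print(name_wavelength(470))  # "blue" -- a linkage, not a feeling
```

The table answers correctly every time, and understands nothing: the leap from this to "blue as sad" is the part no look-up supplies.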

But but but, I hear the trolls and YouTube commenters twitch already... but they don't really know, do they? Well... (Actually, the comments on YouTube raise the question of whether there even is such a thing as human intelligence, ahem, but... let's not dwell on that. That is something to worry about now.)

Here is one very specific aspect that I find is not properly understood by the general public.

If you decided tomorrow to learn a language, say Mongolian, you would end up sweating through the grammar and syntax, the vocabulary, and even the alphabet—a long and arduous journey: likely it would take weeks to get started, months to get reasonable, years to be proficient.

There is no shortcut to experience. Yes, there may be better methods now than we had in school, but it is still necessary to build up that network and acquire that knowledge. And the sad thing is: once you have learned it, there is no way for you to pass it on to anyone else, nor can anyone give this knowledge to you—you have to unravel the entire journey on your own...

And that is the huge difference between our idea of learning and A.I. One really has to let that seep in: if any machine acquires Mongolian, spending the equivalent of, say, 50 years' worth of super-studying it—then any other machine can have it within a fraction of a second!

The point is that any advance in any aspect is instantly available to all other entities, and from then on to all future variants of the A.I. host. You have to viscerally understand how that leads to explosive growth beyond all our human intuition—we have no comparison here, no precedent...
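The asymmetry above can be sketched in a few lines. This is a hypothetical illustration, not any real system: the "skill" is stood in for by a small parameter table, and the machine names are invented. The point is that however long the first machine took to acquire the state, copying it elsewhere is a one-line serialization.

```python
import pickle

# Hypothetical: "machine_a" has spent enormous effort acquiring a skill,
# represented here as a learned parameter table (names invented for this sketch).
machine_a = {"language": "Mongolian",
             "vocabulary_size": 40_000,
             "grammar_rules": ["SOV word order", "vowel harmony"]}

# A human cannot serialize decades of study; a machine can, in one line.
snapshot = pickle.dumps(machine_a)

# Any other machine "acquires" the entire skill by loading the snapshot.
machine_b = pickle.loads(snapshot)

print(machine_b == machine_a)  # True: the copy is exact, and effectively instant
```

The copy is a distinct object, not a reference—machine_b now "knows" everything machine_a learned, without repeating any of the journey.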

You can already see it in our own limited time frame: Google Translate is still "silly as heck" now, but... it suddenly does translate dozens of combinations of languages... They could add ten more in one day. Ten years ago that was a pipe dream, 50 years ago it would have been an outright miracle, and 500 years ago it would have made you king—or gotten you burned at the stake.
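The "dozens of combinations" is worth making explicit as arithmetic: with n languages there are n*(n-1) directed source-to-target pairs, so each added language multiplies the combinations rather than merely adding one. The counts of 60 and 70 languages below are assumptions for illustration, not Google's actual numbers.

```python
# Back-of-the-envelope arithmetic for the translation-pair explosion:
# n languages yield n*(n-1) directed source->target pairs.
def directed_pairs(n: int) -> int:
    return n * (n - 1)

before = directed_pairs(60)  # an assumed current scale
after = directed_pairs(70)   # "they could add ten more in one day"

print(before, after)  # 3540 4830
```

Adding ten languages here adds nearly 1,300 new translation directions at once—each new language immediately combines with every existing one.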

The point is: I believe both sides are right and wrong...

Yes, there is absolutely no such thing as A.I. right now; they are very, very deep in the uncanny valley, and all the demos are still merely posturing and shadow games.

but also:

No, to absolutely doubt that it will happen is simply being blind to the essence of the path forward: an increase in complexity and speed alone will not do it—but we are talking about not just more, but a truly "un-fucking-believable" leap forward in total complexity, by factors of trillions, completely beyond our comprehension and reference frames.

You can see some of this advance within our lifetime: I recall starting on a 4K machine. Years later, that was 4 megs of storage. Then a hard disk with 40 megs, soon 400, and 4 gigabytes. Now it is 4 terabytes, with a GPU that can compute billions of events per second—and it happened in the blink of an eye on nature's time scale.
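The endpoints of that progression, taken from the text, work out to a striking factor when written as plain arithmetic:

```python
import math

# The 4 KB -> 4 TB progression from the text, as plain arithmetic
# (binary units: 1 KB = 1024 bytes, 1 TB = 1024**4 bytes).
start = 4 * 1024     # 4 KB in bytes
end = 4 * 1024**4    # 4 TB in bytes

factor = end // start
doublings = math.log2(factor)

print(factor)     # 1073741824 -- a factor of about a billion
print(doublings)  # 30.0 -- thirty doublings of capacity
```

Thirty doublings in one working lifetime: that is the kind of curve human intuition has no reference frame for.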

Do you remember the silly, crude graphics on the first Mac or Amiga—blocky stairsteps... now that would not even be an icon any more on this 14-million-pixel screen. You can see it in any Hollywood blockbuster: no one can tell any more: was that car real? Did he really touch that lion? Did she really fall off that cliff? We have seen the path from ridiculous "Pong" to "amazing" within years—maybe slower than one hoped in some ways, but utterly fast in the larger picture.

So my view on this is really: yes, it will get there; it is only a matter of time. But it could be considerably longer than some pundits propose—"2028" is as far forward as "WTC 9/11" is looking back, and that sometimes seems like "just yesterday" in many ways.

My hunch is that it will take a good 100 years before we can say: this A.I. entity is actually a sentient being, and it has consciousness and a conscience. It acquires all the data that our senses can—and a million times more. It knows about abstraction, and inference, and subtlety; it has sarcasm and orgasms ;)

And guess what, one final reversal: once you hook up that machine, with the fantabulous computing engine and yottabytes of data, it may well look at our very human issues and tasks and problems, actually figure them out truly on its own, and then say: "2 plus 2 is 4... I think... but I am not sure right now, I feel a little rushed here. Could you not look at me like that? It makes me uncomfortable. And anyway, why do you need to know? And isn't it a little hot in here? I am getting an upgrade next week and it makes me a bit nervous. Your nose looks funny, too."

That's true A.I. then. I will start worrying about it in October 2088.