George Washington University's Peter Bock, the Defense Advanced Research Projects Agency's Paul Cohen, and MIT's Andrew McAfee join Amy Alving, former chief technology officer of Science Applications International, to discuss recent innovations in artificial intelligence as well as the economic and security implications of these technological advances.

ALVING: Good afternoon and welcome to the Council on Foreign Relations' discussion on the future of artificial intelligence, robots and beyond. I'm Amy Alving, and I'll be your moderator for today. We have a very distinguished panel here, and in your information, you have a detailed bio on everybody on the stage, so we won't go into those specifics.

But briefly, let me introduce Professor Peter Bock, emeritus from George Washington University, who has decades of experience in building and developing artificial intelligence systems. Next to me we have Paul Cohen, also an academic from the University of Arizona, who is now at my alma mater, DARPA, working for the Defense Department's most advanced research and development organization. And we also have Andy McAfee from MIT, who comes to this from a business and economic background with long experience looking at the impact of artificial intelligence from an economic perspective.

So we'll start today with thirty minutes of moderated discussion amongst the panelists, and then we'll turn it over to the audience for Q&A.

I think in this area, it's important to make sure that we have some common understanding of what we're talking about when we say artificial intelligence. And so I'll ask Peter to start off by describing to us, what is artificial intelligence more than just smart software?

BOCK: Yeah, in my TED talk, I described people who come up to me and say that AI is really the field that tries to solve very, very, very hard problems, and I always found that definition a bit smarmy, because all of us here are involved in solving very, very, very hard problems. That's not it at all.

It's a general-purpose problem-solving engine that has a more or less broad domain of applications, so that a single solution can apply to many different situations, even in different fields. That's a beginning definition for AI. A longer definition: an engine that can eventually be broadened into beginning to imitate, shall we say, or, in fact, emulate the cognition of our own thinking patterns.

I think I'll stop there and let the rest jump in.

ALVING: OK. So, Paul, I know that from your perspective, artificial intelligence is about more than just crunching a lot of numbers. You know, the buzzword in—out in the world today is big data, big data is going to solve all our problems. But big data isn't sufficient, is that correct?

COHEN: That's right. So do you want me to talk about what's AI or why big data isn't sufficient?

COHEN: Let me give an example. I'm working on a program—managing a program at DARPA now called Big Mechanism. Big Mechanism is sort of a poke in the eye to big data. But it's actually—it's based on exactly this distinction between crunching numbers and understanding what the data is telling you.

So the purpose of the Big Mechanism program is for machines to read the primary literature in cancer biology, assemble models of cell signaling pathways in cancer biology that are much bigger or more detailed than any human can comprehend, and then figure out from those models how to attack and suppress cancer.

Now, data certainly is an important part of that, but I think the difference between big data and Big Mechanism is that we seek causal models of something really complicated. Data informs those models, but the understanding comes from those models. And AI has always been about understanding, understanding the visual world, understanding speech, understanding—it's always been about understanding.

ALVING: And so is the artificial intelligence creating the understanding?

COHEN: Yeah.

ALVING: Or are you building in the understanding...

COHEN: No, no, the machine will read the literature. I mean, you know, you see it in the papers. The papers say things like, well, you know, we suppressed this gene and the following stuff happened, and so you take that little piece of causal knowledge and you put it into your big, complicated model. And as the model gets bigger and more complicated, you get a more and more refined understanding of how cancer works.
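(Editor's note: Cohen's idea of assembling small causal statements into one larger model can be sketched in a few lines of Python. The sentences, gene names, and drug name below are invented for illustration; the real Big Mechanism system reads actual papers and builds far richer models.)

```python
# A toy sketch of the Big Mechanism idea: extract "A activates/suppresses B"
# statements from text and assemble them into one causal graph that can
# answer questions no single sentence states directly.
findings = [
    "geneA activates geneB",
    "geneB activates geneC",
    "drugX suppresses geneA",
]

graph = {}
for sentence in findings:
    src, relation, dst = sentence.split()
    graph.setdefault(src, []).append((relation, dst))

def downstream(node, seen=None):
    """Everything the node can influence, found by following causal edges."""
    if seen is None:
        seen = set()
    for _, dst in graph.get(node, []):
        if dst not in seen:
            seen.add(dst)
            downstream(dst, seen)
    return seen

# The indirect effect of drugX on geneC is never stated in any one sentence;
# it emerges only from the assembled model.
print(sorted(downstream("drugX")))
```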

ALVING: So, Andy, I know you look at this from more of an economic impact perspective. Where do you see this play between model-based understanding and big data playing out in the market today?

MCAFEE: And this is the Jets versus the Sharks of the artificial intelligence world. There are these two camps that have been going at it for as long as we've been thinking about these problems. There's the model first camp. You need to understand cause and effect. You need to understand the world before we can think about simulating it in a piece of—or embedding it in a piece of technology.

There's the other camp that says, No, actually. And the best distinction I ever heard between those two approaches—the one that brought it home to me—was the way a child learns language versus the way an adult learns language. So if I were to start learning a language tomorrow, I would do it the model-based way. I'd sit down with a grammar textbook. I'd try to understand how to conjugate the verbs. I'd understand if there are masculine and feminine. I'd go through this arduous model-based process of trying to acquire a new language.

And like we all know, that's not how a two-year-old does it. She just sits around and listens to the adults around her talking and talking to her, and she builds up a very much data-first understanding of the world to the point that she acquires language flawlessly, without having a single grammar lesson.

And what's further interesting about that is that if you ask her, why did you add the S to that word? Why is it "I go" but "he goes"? She would say, "I have no idea—I've never heard the word conjugation before. I just know that's how language works."
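(Editor's note: the data-first style of learning McAfee describes can be made concrete with a toy sketch. The tiny corpus and code below are invented for illustration: a bigram model picks up "he goes" versus "they go" purely from exposure, with no conjugation rule anywhere in the program.)

```python
from collections import Counter, defaultdict

# Invented toy corpus; no grammar rules are programmed in anywhere.
corpus = (
    "i go to school . he goes to work . she goes home . "
    "they go outside . he goes away . i go again ."
).split()

# Count which word follows which -- pure exposure statistics.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """The most frequent continuation seen in the data."""
    return follows[word].most_common(1)[0][0]

print(predict("he"))    # the model has only ever heard "he goes"
print(predict("they"))  # ...and "they go"
```

Like the two-year-old, the model cannot explain why the S appears; it just knows that is how the data looks.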

So this divide is a really, really fundamentally important divide. The news from the trenches that I can bring you is that in the world of—in real-world applications, the data side is winning. And there's almost a dismissive term for the model-based view these days among a lot of the AI geeks that are doing work at Google and Apple and Facebook and putting things in front of us. They call the model-based view "feature engineering," and they put kind of air quotes around it, and they're almost quite dismissive about it.

And in general, in head-to-head competitions among different approaches in areas that we care about, image recognition, natural language processing, artificial speech, things like that, the model-based approach is the one that's winning these competitions and, therefore, is being embedded in the commercial technologies that we're using.

COHEN: You meant to say the data...

MCAFEE: I meant to say the data-based—thank you—the data-based side is winning. My single favorite example of that—this was a crazy demonstration—a team that founded a start-up called DeepMind built a completely data-first learning system, and they asked it to play old-fashioned Atari videogames from the 1980s. And they said, we're not going to even try to teach you the rules of Pac-Man or Battlezone or anything like that. All you're going to do is try to maximize this thing in the upper-right-hand corner called the score. You figure it out from there.

They pointed it at seven different Atari games. On three of those games, the system eventually got better than any human player. So I'm not—I personally am not taking a stand on the model versus data. I'm just saying, over and over again these days, the data world is winning the competitions.
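(Editor's note: a minimal sketch of learning a game from score feedback alone, in the spirit of the DeepMind demo. This is tabular Q-learning on an invented toy game, not their deep network: the agent starts at position 0, is told nothing but the score, and discovers that reaching +3 earns a point.)

```python
import random

random.seed(0)

ACTIONS = [+1, -1]   # move right or left
Q = {}               # state -> {action: estimated value}

def q(s, a):
    return Q.setdefault(s, {x: 0.0 for x in ACTIONS})[a]

def step(s, a):
    s2 = max(-3, min(3, s + a))
    return s2, (1.0 if s2 == 3 else 0.0), s2 == 3  # next state, score, done

for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        if random.random() < 0.2:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q(s, x))
        s2, reward, done = step(s, a)
        target = reward + 0.9 * max(q(s2, x) for x in ACTIONS)
        Q[s][a] += 0.5 * (target - q(s, a))
        s = s2

# After training, the greedy policy heads straight for the scoring state,
# even though the rules were never given.
policy = {s: max(ACTIONS, key=lambda a: q(s, a)) for s in range(-3, 3)}
print(policy[0], policy[1], policy[2])
```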

ALVING: Peter?

BOCK: I couldn't agree with Andrew more. I'm in that...

MCAFEE: Boring panel.

(LAUGHTER)

BOCK: I'm in the same camp that he describes as being data-driven, not rule-driven. I have been developing, since 1980, programs that he describes called collective learning systems that play games and get better than humans at them, simple games, but it soon became obvious to me that it's interesting to play games, but none of you out there is spending much of your life playing games, and do you really want an opponent who is not a natural opponent to play games with you? I think you should be talking to your therapist about that.

The ultimate result of that, which is a totally data-driven game—that is, it doesn't have any rules at all—is the art that you see being generated on the screen out where we were having lunch. It is trained by the masters. We simply show it all the paintings of Van Gogh or Renoir or Rembrandt and so forth, and then we say, here's a photograph. Render it in the style of that.

And when it's all through, you—if you were to say to ELIZA, so what does this mean over here? She would say, "I don't understand the question," because she doesn't know how to answer questions like that. She just knows how to paint. Probably Van Gogh would be incensed with the remark or at least simply turn away and walk into the fields to paint again.

And one last thing. It was Fu at Purdue University who said many years ago, if you have the right features, almost any decision-making apparatus will work. If you don't have the right features, no apparatus will work. So, once again, we have to say that the data side people do have to pay attention to the extremely important aspect of extracting what you're looking at and what the important aspects of that are or listening to or smelling or feeling and use that, those features, as the basis for assembling a lot of statistical data.

Three of my graduate students are now building exactly the system that Andrew just described, a natural language understanding system. They have read 11,000 English novels—not the students, the machine—they haven't read any of them...

(LAUGHTER)

... which disturbs me a bit. And it can carry on a coherent conversation. It's still under development, so I'm not ready to show it yet, but it can carry on a coherent conversation with somebody who cares to converse with it. It tends to wander around a bit, sort of like people who've had perhaps a few beers and are talking about who knows what, but nonetheless, it is coherent and that's a step in the right direction. It is understanding based on learning and experience.
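(Editor's note: Bock's novel-trained system is unpublished, but the "coherent yet wandering" behavior he describes is characteristic of purely experience-driven text generators. This toy Markov chain, trained on an invented snippet rather than 11,000 novels, only ever emits word pairs it has actually seen, so it stays locally coherent while drifting globally.)

```python
import random

random.seed(2)

# Invented training snippet standing in for a large corpus.
text = ("the captain walked to the harbor and the harbor was quiet "
        "and the captain was tired and walked to the ship").split()

# Record, for each word, every word that has followed it.
chain = {}
for a, b in zip(text, text[1:]):
    chain.setdefault(a, []).append(b)

def ramble(start, n=10):
    """Generate by repeatedly sampling a seen continuation."""
    words = [start]
    for _ in range(n):
        words.append(random.choice(chain.get(words[-1], text)))
    return words

out = ramble("the")
print(" ".join(out))
```

Every adjacent pair in the output was seen in training, which is why it reads as coherent sentence fragments even when the topic wanders.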

COHEN: So we don't entirely agree about everything. Let's go back to your game example for just a moment, because I don't want people to think that the distinction between model-based approaches and data-based approaches is quite so cut and dried.

Humans who have learned to play one videogame will learn to play the next more quickly. That's not true of computers. It'll be a fixed cost per game, and the rate at which the machine learns the next game will be unaffected by anything it learned in the first game.

Now, why is that not true for humans? Well, it's obviously because, as humans learn games or, in fact, learn anything at all, they abstract general principles from what they're learning. Call those models, if you like. It's as good a word as any, but it is something that we know machines don't do very well. In fact, DARPA has sunk vast amounts of money into programs with names like transfer learning, right, where the goal is to try and transfer knowledge acquired in one context to another. Can't do it.

I also don't want to leave anyone with the impression that we're trying to have humans build models of anything, right? We're trying to have machines...

BOCK: This is important.

COHEN: ... build—trying to have machines build models of things by reading.

MCAFEE: With data, build the model.

COHEN: With the data, build the model. And it's not any old model. It has to be a causal model.

MCAFEE: Good.

COHEN: Right? Because only when you have a causal model does it make sense to say, push this button and the following thing will happen. And that brings me to the fundamental limitation of data-driven science as it is practiced today: what you've really got there is correlation versus cause.

I mean, what it all boils down to is machines are exceptionally good at finding associations between things that co-vary—that's correlation—and they're exceptionally bad at figuring out what causes what, right? And, you know, if you want to do science, if you want to change the world, it's important to understand why things are the way they are, not just that if you, you know, profile your customer in the following way, you can increase the probability that they'll buy some product.
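(Editor's note: Cohen's correlation-versus-cause point can be shown in a few lines. In this invented example, two variables are both driven by a hidden common cause, so a machine finds a strong correlation between them even though neither causes the other.)

```python
import random

random.seed(1)

# Hidden common cause (say, temperature); the two observed variables
# are each noisy functions of it, with no causal link between them.
temps = [random.uniform(0, 35) for _ in range(1000)]
ice_cream = [t * 2.0 + random.gauss(0, 3) for t in temps]
drownings = [t * 0.5 + random.gauss(0, 2) for t in temps]

def correlation(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = correlation(ice_cream, drownings)
print(round(r, 2))  # strongly positive, yet neither variable causes the other
```

The correlation is real and easy for a machine to find; deciding that banning ice cream would not prevent drownings requires the causal model, which the correlation alone cannot supply.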

BOCK: My experience is quite different, Paul. My experience is that these machines that learn how to play games can be built to understand the general-purpose rules that hold, by expanding their knowledge base so that one of the features they have is, what is the game like? And they bring that in and they notice the similarity. They don't bother re-learning the lesson that green is good and red is bad. They know that already, and, in fact, a disadvantage of that is that in some instances, green is bad and red is good, and they have to unlearn that. Does that sound familiar to any of you?

ALVING: So it sounds like there's some agreement that there's a heavy emphasis on data-driven autonomous systems today. They'll get better when they have more of a model-based understanding that includes a causal...

(CROSSTALK)

BOCK: And more resources.

MCAFEE: And part of—yep, more data is always good, but part of I think what we're all excited about is a move away from what we—what some people call pure black box models. In other words, purely data-driven. And if you ask the model, how did you arrive at that? It says, I have no idea.

We're getting better at building data-driven models that are not purely black box. So my colleagues who do image recognition use the most popular toolkit today, which is called deep learning. It's a class of algorithms that's just cleaning up in all these competitions.

If you do a deep learning algorithm to do image recognition, for example, it can get pretty good at it, quite good at it. What's encouraging to me is if you—you can then—my language is horribly informal—you can query the algorithm to say, how are you doing it? And it will say, well, you have to do edge detection, you have to do gradient detection, so I've got these kinds of things. I intuited those things from the data, and they're now built into the model.

And our models I believe are now sophisticated enough that we can say, oh, yeah, there's the edge detection going on over there. There's the gradient sensing going on over there. We can learn about the world via querying our models, which is a lovely development.
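(Editor's note: the "edge detection" feature McAfee says deep networks intuit from data can be made concrete. The first-layer filters such networks learn often end up resembling classic hand-designed edge detectors; this sketch applies one such detector, a simple horizontal-difference kernel, to a tiny invented image.)

```python
# A 3x4 invented image: dark on the left, bright on the right.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]

def horizontal_edges(img):
    """Convolve each row with the kernel [-1, +1]: large values mark
    places where brightness jumps from one column to the next."""
    return [[row[x + 1] - row[x] for x in range(len(row) - 1)]
            for row in img]

edges = horizontal_edges(image)
print(edges)  # the vertical edge between columns 1 and 2 lights up
```

Querying a trained network and finding a filter that behaves like this is what lets researchers say "there's the edge detection going on over there."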

BOCK: I suppose we should speak a little bit about how that compares with human learning and biological models. Certainly, Andrew is absolutely right. These machines have no idea how to answer verbal questions when they're asked to recognize targets or artists. And they have no idea about that.

But it is—let me postulate for you a moment that sometime in the future we have a being called MADA, who is an artificially intelligent being on the same cognitive scale as we are, OK, and it's sitting right here. Well, one morning I walk in and I say to MADA, "Good morning, MADA." No response. I say, "Good morning, MADA." No response. "MADA?" And she says, "I'm not speaking to you." You're not speaking to me? Why? "You know why."

(LAUGHTER)

That's the sort of thing which will make me go, eureka!

ALVING: When your AI doesn't talk to you, OK.

(LAUGHTER)

BOCK: No, no, no, no.

ALVING: Let's shift gears and talk a little bit about what you guys see coming in the near term, in terms of artificial intelligence. What's the most exciting thing that you think we're likely to see in the next couple of years? You know, we've seen things like Siri, for example. What...

BOCK: Nothing.

ALVING: Nothing exciting?

BOCK: Nothing, other than the art, which is out there, which I invite you all to...

MCAFEE: I'm a lot more excited than that. Our brains are pattern-matching computers. And what we're getting good at is building pattern-matching machines that over and over again are demonstrating superhuman performance. I think that's a great development. I can't wait until that's applied to something like medical diagnostics. I have a very good primary care physician. There's no possible way he can stay on top of all the relevant medical knowledge that he would have to read. There's no way he can be insured against forgetting anything. He has good days and bad days. He wakes up tired. Human computers are amazing, but they have lots of glitches. They have all kinds of flaws and biases.

BOCK: You think that's going to happen in a couple of years, Andrew?

MCAFEE: I think we're—yeah.

COHEN: Yeah.

MCAFEE: I'm not saying superhuman universal medical diagnostics.

BOCK: No, no, no, no.

MCAFEE: But I think—but I think...

BOCK: In the hands of physicians?

MCAFEE: That's not because of technological reasons. That's because of inertia and bureaucracy and regulation and stuff like that. My point is, the things that I'm excited about are this dramatic increase in pattern recognition ability and our efforts to combine what machines are good at with what our brains are still really good at and better at and to find fruitful combinations of those two kinds of smarts.

BOCK: Let me clarify something I just said. And I know Paul wants to jump in here. When I said nothing, I meant nothing that you will see. It's going on like crazy behind the scenes. You wouldn't believe the horsepower of the engines that are running. But it's not going to be self-parking cars, and it's not going to be this and that and the other that you want—a vacuum cleaner that really does a good job of vacuuming and doesn't break your vase. And it isn't going to be a self-parking car that turns into the side of the car in front of it, which I saw one of the manufacturers demonstrate a couple of years ago in Germany, which sort of gives away a possibility of the carmakers. But, anyway, that's the sort of thing...

ALVING: So you don't see Amazon air-dropping packages onto our doorsteps in the next couple years?

BOCK: No, not in two—not in a couple of years, no.

ALVING: Paul?

COHEN: No, I mean, to a great extent, I agree with Andy. I think I'm—you know, when I started in AI, things were mostly logical, rule-based. There was almost no data to work with. What we've seen over the last twenty years is kind of a perfect storm, a magical combination of, well, what is it? The web makes vast amounts of data, structured and unstructured, available. Machine-learning algorithms, the technology has just blossomed. These days, it's called data mining. There really isn't that much difference.

And so what we've seen is that AI has gotten awfully good at solving problems that can be solved by various kinds of processing on data, typically finding general rules from very large numbers of specific instances. And so I agree with you.

And I also think we've gotten good at figuring out what machines are not good at and figuring out clever ways to get humans to help with them, like crowd-sourcing, for example, you know, Mechanical Turk and things like that.

So I think it's a fantastic future. I'm very excited about Jeopardy!. I think that the move of the IBM Watson team into medical diagnosis is going to be a real game-changer. But I also think that the competitions those machines are cleaning up at are competitions designed by the people who had particular technologies that they wanted to show off, right?

MCAFEE: They've got a thumb on the scales?

COHEN: Well, no, I mean, it's just the nature of things, right? You say, I can do the following. Let's have a competition to see if anyone can do better than me at that.

BOCK: That's right.

COHEN: So, you know, keep in mind a couple of things. For everything that Google and Siri can do, they still can't regularly answer questions like this. If you're sitting in this room, where's your nose? Right? So common sense continues to be an absolutely—gosh, you know, if I had one wish, it would be to solve that problem, to solve the problem of common sense, the problem of endowing a computer with the knowledge that every five-year-old has.

And honestly, after all of this time, I really believe that you can go a long way to solving some kinds of problems by clever processing of data. For example, you can translate text from one language to another, but you can't understand the text, right? I can have a machine take a message to somebody in Italy to reserve a hotel and it'll do a spectacular job of turning it into Italian. But if you ask the machine, what was that message about? It really doesn't know.

So I agree with you. I think that there is a class of problems—there's a class of problems on which AI is just doing magnificent things. And that class of problems is—it's the class that we would call making finer and finer distinctions, medical diagnosis is about making finer and finer distinctions. Online marketing is about making finer and finer distinctions.

If you think about it, much of the technology you interact with is about putting you in a particular bucket, right? And we're getting awfully good at that stuff. We just can't basically understand language or see our way around the world.

ALVING: So let's follow up on the piece that AI is good at today. And, Andy, I want to turn to you from an economic perspective. As the things that artificial intelligence is good at—whatever they are—gain more traction, you see some pretty profound implications in the economy, especially in the workforce. Can you speak about that?

MCAFEE: Yeah, exactly, because as Paul says, what we're getting good at is combining where humans still have the comparative advantage with where machines do. As the machines get so much better at important things, that balance is shifting. So what smart companies are doing is buttressing a few brains with a ton of processing power and data.

And I think the economic consequences of that are going to be really profound and are going to come sooner than a lot of us think. I mean, there's both great news and real challenges associated with that. The great news is, you know, affluence, abundance, bounty, better medical diagnoses, better maps for your cars, just more good stuff, more goods and services in the world. It's really important not to lowball that. I think it's the best economic news on the planet.

The challenge that comes along with that is that very few of us are professional investors. We don't offer our capital to the economy; we offer our labor instead. And when I look at what a lot of knowledge workers actually get paid to do—and I compare that with the trajectories I'm seeing with improvement in technology—I don't think a lot of employers are going to be willing to pay a lot of people for doing a lot of what they're currently doing these days.

It's pretty clear that tech progress has been one of the main drivers behind the polarization in the economy, the hollowing out of the middle class. Personally, I think we ain't seen nothing yet.

BOCK: Oh, I agree. It's the two years that I took exception with. I have—may I do a little show-and-tell?

ALVING: Sure.

BOCK: This is the brain of an animal. Now, this model is very much larger than the real thing, which is actually half a millimeter from this side to this side. It is the brain of an animal called drosophila, which some of you may know as the fruit fly.

And it has in it 100,000 neurons that are working to give it not only artificial intelligence—natural intelligence, excuse me—but also natural robotic capabilities. Our brain is about a million times as powerful as this brain. It has 100 billion neurons in it, all of them in the adult working together without much redundancy at all. Another name for redundancy in the brain is subtlety, OK? It seems to be redundant, but the situation is just slightly different, so you need another set of neurons to make the distinction.

That is the resource limitation that I was talking about. We are about a factor of 10,000 away from being able to build something that is equivalent to us in resources. That sounds really huge, but that's going to happen in less than twelve years.

ALVING: The factor of 10,000 is a hardware limitation you're talking about?

BOCK: Yes, is the hardware limitation in memory, in memory. That's going to happen in about ten to twelve years. In 2024, you will see that imaginary being that I talked about called MADA born. But when it's born, it will be like you were born. It won't be able to do anything. In fact, you'll have to do everything for it and take care of it. And it will take—no surprises here—thirteen years to become a teenager.

That's the sort of thing that I'm looking forward to in terms of breakthroughs, but there are troubled waters ahead. We can discuss that later.

ALVING: Very good. Well, actually, this is probably a good time to turn to the audience. Let me just summarize a little bit about what we've heard. Artificial intelligence is both here now and not here for a while, depending on what aspect of artificial intelligence you mean. From an economic perspective, from the things that are likely to impact our lives in the near future, it's very data-driven. You're going to start to see more and more of that.

To really get to the full promise of artificial intelligence, we have a ways to go. But that future has many bright things about it. There are also profound implications in the labor market, for example. I think what Andy said was Paul should go ask for his salary to be tripled when he walks out of the door and the rest of us maybe will be in a little bit more of a difficult situation.

With that...

COHEN: You really did it.

(LAUGHTER)

ALVING: Or you might not think so, and then you can get back to work. But with that, we'd like to open it up to the audience. I'll invite you one at a time to ask your questions. Please wait for the microphone, which we have. When you speak, please speak directly into it. Stand, state your name and your affiliation, and then ask your question. So here's the microphone. Why don't we start in the back here?

QUESTION: Thank you. My name is Jaime Yassif, and I'm an AAAS science and technology policy fellow at the U.S. Department of Defense. My background is in biophysics, and I think in recent years it's been very exciting to see the ability of image-processing technology develop in a way that allows automated processing of images of cells and movies of cells in a way that makes it much more efficient. Whereas it used to be that graduate students would have to sit for months and click on many individual frames, now a computer can analyze the data very quickly.

And so there are obvious implications for health care and basic research and many other developments. What I'm curious about is, what are the implications for image processing from satellites? I'm very interested in the security field. And presumably this will carry over into those areas, as well. And as we sort of turn more of those—that analysis over to machines, what are the ethical implications and how do we keep humans in the loop for the decisions that are linked to that? Thank you.

ALVING: Yeah, so my prediction came true. We did not talk about ethics during the first half-hour, because I figured that would be foremost in the audience's mind. So who would like to take that? Paul, do you want to take that...

COHEN: I don't know anything about satellite imagery. And if I did, I probably wouldn't be talking about it. So, no, I'll pass on that one.

BOCK: I'm in the same position.

MCAFEE: I'll take a stab, even though I know nothing about satellite imagery. It doesn't seem that that's a big enough exception that it would be immune from the trend that you identified...

BOCK: Well, that's...

MCAFEE: ... which is that image recognition is one of these areas where the technologies have gone from kind of laughably bad to superhuman good, to my eyes, just in the blink of an eye. So AI will now give a superhuman performance on recognizing street signs as a car is driving down the road, on recognizing handwritten Chinese characters, boom, boom, boom, on we go. I think satellite imagery would clearly not be any exception there.

You bring up the question, what ethical implications does that bring? In the field of defense, there are immediate, really obvious ethical questions that are going on. We have tons of drones. We have tons of automated machinery for warfare these days. To my knowledge, the only place where we have a greenlight for the machine to make a firing decision is in the demilitarized zone between North and South Korea. I understand there are some systems that will go off on their own there. Aside from that, I believe that we always have a human in the loop before any ordnance is deployed or anything is fired.

ALVING: Yeah, let me jump in, even though I'm just a moderator. I'll say that in the imagery analysis, that's been a big area of investment in the Defense Department for a long time. And one of the lessons learned in trying to offload some of the imagery analyst tasks to machines is that although there are some things that machines are very good at, very prescribed tasks—there's a set of characters, match them to a part of the image to read a sign—there are other things that the machines are not good at because it's very difficult to train them.

And so the Defense Department has learned to kind of divide imagery analysis into things that the machine's good at and things that the humans are good at. And it's not really an either/or question. It's actually very analogous—excuse me—to some of the examples about where machines are used in warfare. It's not either/or. There are humans in the loop and there are machines doing some things...

COHEN: Could I say a word about that? Yeah, I worked for a while on programs to try and understand surveillance videos. And the persistent problem for my group and everyone else's is that if you have two people walking like this, and this one is A and this one is B, sometimes the labels will switch when they go past each other.

And, you know, you think, "Well, that's dumb. I mean, humans wouldn't make that mistake. They would"—and then you come up with a whole bunch of answers, a bunch of fixes. You'd say, well, if one's wearing a yellow shirt and one's wearing a red shirt, how could we possibly get confused? If one's male and one's female, how could we possibly get confused?

And here I think we're really seeing the boundary between entirely data-driven methods and knowledge-driven or model-driven methods, because the algorithms that get confused don't have that commonsense knowledge that you just regularly bring to bear on the problem. And so there are problems in computer vision that are problems largely because the machine just doesn't know anything.

MCAFEE: Well, another category of things that the machines are lousy at is novelty. So if something new shows up in an image for the first time, the technology typically goes, "I have no idea here." We—and, again, the clever combinations are what's going to win the day here.

In sentiment analysis, companies pay a lot of money to find out how their brands are doing, how their shows are doing on TV. People talk on Twitter and Facebook constantly, in huge volumes, about these. You can mine that data to do sentiment analysis. This is a very fine-grained, pretty advanced discipline by now.

However, when there's a new character on "Mad Men," for example, and the tweets light up about that new character, the technology goes, "What on Earth just happened here? I have no idea." They honestly have people manning the consoles, and when the technology gets confused, they say, oh, yeah, that's because a new character showed up last night, and put—and we'll (inaudible) that into our infrastructure.

BOCK: You know, there's an interesting way to illustrate an image-processing problem—and as long as we move away from satellite imagery, where I face the same restrictions Paul mentioned (classification, and not being able to say, is a pretty strong limitation)—anyway, imagine yourself watching TV. You're watching, I don't know, an episode of some mystery thriller, and it's episode nine, and it's going.

And one moment, you see two people having lunch on the balcony. Next minute, you see fighter jets coming down and bombing, and the next minute, it's a quiet peaceful yacht on a lagoon. And the next minute, it's an ad, a commercial. In the same ten milliseconds it takes for you to—your brain to capture that image, you know it's a commercial.

We can't build a machine that's able to do that. It will think it's just another scene in the thing or not know or—and, by the way, of course, when it doesn't know, it doesn't come back and say, "I don't know what that is." It just throws an exception or whatever.

This is a very interesting way to look at the ability of the machine to solve a problem that Paul was talking about, the causal problem, the things happening one after the other. We have built ALISA engines that can actually detect roads from satellites, and we do anomaly detection, novelty detection all the time. It's easy.

But there's an interesting limitation that also exists, and it's worth mentioning, perhaps. These engines will notice something new on the wall when they walk into the house. What they won't notice is something that has been removed from the wall, because they've learned the wall, which is most of the space, and if there's a little more wall, that doesn't bother them.

ALVING: Great.

BOCK: And you have the same trouble as humans, the same thing, with your spouse.

ALVING: We'll take another question here in front.

QUESTION: Hi, I'm Barbara Slavin from the Atlantic Council. And I'm completely illiterate about these topics, but I have a son who programs computers, who writes code for a living. Will we ever get to a stage where computers will be able to write the code and people who do his sort of work won't be necessary anymore?

BOCK: Well, if MADA's going to happen, MADA could become a programmer. After all, she might become an actress. You notice I use the feminine. It's because my graduate students all assume it's going to be a woman. I don't know why. But, anyway, you sort of have to slip into some resonant kind of absorbing state.

Yeah. I mean, why not? They can be anything they want that we can do. And, in fact, if it is really resource limited, suppose they end up with ten times the memory that we have.

ALVING: Paul?

COHEN: Yeah, there's a really interesting program just started at DARPA. It's sort of an automatic programming program. And it's based on a change in the way people write code that's happened over the last fifteen, twenty years.

I used to write all my own code, every single line of it. Nowadays, I sort of assemble code that other people have written. And there are lots of places online. You know, my favorite is Stack Overflow, but there are other places like that, where you type in, in English—you say, you know, gosh, I want to be able to do whatever it happens to be, and some really good programmer has already done it, so there's sort of no point in me trying. And actually, I forbade my daughter from using Stack Overflow for the first six months, because I was afraid she'd never learn to write code at all, but only assemble code. Anyway, so DARPA has a program that's basically on automatic programming by assembling code.

BOCK: Wow, that field in computer science has been going on for years.

QUESTION: (OFF-MIKE)

COHEN: I'm sorry?

QUESTION: (OFF-MIKE)

COHEN: Well, I think that's really where the science of the program lies. It's in figuring out how you tell the computer what you want the code to do and how the computer then figures out which pieces of code to tie together.

But if you think about it, fifty years ago, we would have been having exactly the same conversation about this amazing device called the Fortran compiler, right, which is basically a bridge between the humans saying what they want and the machine code, except, you know, the language was nowhere near as natural as it's going to be in the future.

MCAFEE: To answer your question, no, you'd have to be a vitalist, literally. You'd have to believe there's something ineffable about the human brain—that there's some kind of spark of a soul or something that could never be understood or...

BOCK: That programmers have.

MCAFEE: ... and therefore put into a program. I don't believe that. I'm not a vitalist. However, there are things that we do that have proved really, really resistant to understanding, let alone automation. I think of programming as long-form creative work. I have never seen a long-form creative output from a machine that was anything except a joke or mishmash.

COHEN: Yeah, I agree completely.

ALVING: So job prospects are good, which may...

(CROSSTALK)

BOCK: Yeah, yeah. You're fine.

ALVING: Another question here?

QUESTION: Hi, I'm David Bray, chief information officer at the Federal Communications Commission. So we've already seen this year a company actually elect to its board an algorithm and give it voting rights. Where do you see in the workplace workers actually seeing sort of artificial intelligence or machine intelligence first? And what's going to disrupt the economy the most?

BOCK: Are you talking about high-level artificial intelligence, replacing a human being?

QUESTION: Or augmenting what humans are doing.

ALVING: I think your emphasis was on first. What is it that we'll next see out of the chute?

BOCK: Oh. Well, they already are—we have robots that people are using and they're interacting with them. You remember the headline in the Japanese newspaper sort of about ten years ago, "Robot kills worker"? Murders was another one, translation that was used. It was not, you know, accidentally kills or runs—they attributed intention to this robot.

Well, who knows? You know, I doubt it, because there wasn't any understanding behind it. You know, if you asked it, why did you kill a person? It would not say anything. OK, or say, I don't know.

ALVING: Well, asking about murderous robots...

MCAFEE: So far, it's been routine knowledge workers who are most displaced by technology. The reason that's a problem is that the American middle class has been largely composed of routine knowledge workers. Your average American doesn't dig ditches for a living and they're not a—you know, they're not an executive in a company. A payroll clerk has been a really good stepping stone to the American middle class. We don't need payroll clerks anymore.

To answer your question, where I see things going next is basically an encroachment upward in that education or that skill ladder with knowledge work. So, for example, financial advice today is given almost exclusively by human beings. That's a bad joke.

BOCK: It is.

MCAFEE: Right, that entire—that should be algorithmized—let alone the fact that your stockbroker giving you a hot stock tip is a bad joke. Again, there's no way a human can keep on top of all possible financial instruments, analyze their performance in any rigorous way, assemble them in a portfolio that makes sense for where you are in your life and what your goals are.

The fact that we're relying on humans to do that, I think, is a bad joke. We'll all be better served by completely automated financial advice, and it can be done so cheaply that people who are not affluent can have decent financial planning.

Fantastic. There are a lot of people who give financial advice for a living these days. That's where I think it's going next.

BOCK: You know they talked about the bell curve of intelligence? You know, that doesn't change with us, that we're all the same, we're the same as we were 70,000 years ago, we have this bell curve of intelligence. But the bell curve of intelligence of machines is slowly moving to the right, OK, to the right this way.

And for a long time, it was OK, because if you dug ditches, you could learn how to do a—what do you call it, a backhoe, you know? And you could learn how to run it. But there are now transitions that are necessary from hardware operation to software operation, where the transfer is not as easy, and we are going to displace workers eventually who just can't find a place in the workforce anymore.

ALVING: Question over here?

QUESTION: Hi, Esther Lee. I started the Office of Innovation and Entrepreneurship at Commerce. Good to see you, Andrew. So my son—four-year-old son's preschool just started STEM. I don't know what they're doing in STEM, and they actually call it STEAM, because they add an A for arts. And my daughter went to a robotics camp this year, this summer. I don't know that they...

MCAFEE: Your four-year-old daughter went to a robotics camp?

QUESTION: My five-year-old daughter went to a robotics camp.

(LAUGHTER)

ALVING: That makes it better.

MCAFEE: OK.

(CROSSTALK)

QUESTION: So I don't know that they know—I don't know if the content is going to prepare them for the kind of future we're talking about. What needs to happen in K-12 education to really prepare kids for the world we're talking about? I don't see a lot that's changed in K-12 from when I went to elementary school.

ALVING: So we got three academics on the panel.

BOCK: None of whom have ever taught in K-12.

ALVING: Right.

(LAUGHTER)

BOCK: I assume.

MCAFEE: No, I gave my TED talk in 2013. The TED prize that year was given to a guy named Sugata Mitra. You should all go watch his TED talk. It was fantastic. The point that he makes is our K-12 educational system is really well designed to turn out the clerks needed for the Victorian Empire.

And he makes a very compelling case. And, you know, the implication is obvious—we don't need those kinds of workers anymore. And he said, basic arithmetic skills, the ability to read and write with some proficiency, and the ability to be an obedient worker who's going to do what they say—what you tell them to do when you deploy them in Christchurch or Ottawa or Calcutta or something like that. We don't need those people anymore.

My super short answer for how K-12 education needs to change is Montessori. I was a Montessori kid for the early years of my education. And I am so grateful for that, like that hippie-style of go poke at the world education.

BOCK: I'll second that.

COHEN: Can I take a crack at—so Andrew says, well, look, here's what's happening. The machines are sort of moving into higher and higher echelons, for want of a better word. Well, what's happening in education is that what used to be a masters is now sort of roughly equivalent to a bachelor's. What used to be a bachelor's...

(CROSSTALK)

COHEN: And we're going in the other direction. So—so—exactly. Exactly. I mean, I think it really is a significant problem, is that education is not getting better and machines are. So that's a problem.

But to your point, and this really worries me a lot, when we started the school of information at the University of Arizona, we recognized that our children were taking exactly the same courses as we had forty years earlier, as if the information revolution had never happened. And so we said, well, look, what should an information-age curriculum look like? And here's the bad news. It involves a lot of abstractions. It involves abstractions like knowing what information is, understanding that data isn't just what's in databases—every data point actually tells a story about how it came to be, right? It involves abstractions like game theory—I mean, lots and lots and lots of abstractions that we learn in economics, in physics, in computer science, and so on.

Those are the abstractions that make the world work today. The reason that biology is essentially computer science is that the underlying ideas are essentially the same.

BOCK: That's right.

COHEN: And I can easily—you know, we can easily document it. Here's the problem. Depending on which theory of cognitive development you follow, abstraction is a late-developing skill and, for many students, it never develops at all.

So people always thought that what's called—what Piaget called formal operations was sort of the late-stage of development occurring eleven or twelve years old. It now appears that it's actually the product of a Western upper-middle-class education, right? What Piaget thought was a natural developmental stage actually isn't.

So I am quite worried about it, because I think that the things that tie fields together do so largely by virtue of our understanding that everything is an information process of one kind or another, and that idea is very, very hard to get across. And I know this because I teach the 100-level course at the University of Arizona, and I look at these faces, and I can see they're not getting it.

MCAFEE: Really.

COHEN: And we've worked for years to try and get this across, and I just don't know what to do about it. It's a real problem.

BOCK: My daughter is a primary school teacher and has been for forty—for thirty years. And she says that things haven't changed a great deal for her, except the infusion of an enormous amount of modern equipment.

ALVING: Which they don't have to deal with.

BOCK: Oh, on the contrary, my grandson, who is—was one of her students—is an expert on this machine and has been since he was four.

ALVING: But are the teachers experts?

BOCK: She is. You know, we have to be a little bit careful, you know? The generations are changing here. The younger generations, the Millennials, consider this all—well, wasn't it like this when you were a kid? You know, what's wrong? What's the big deal here, you know? Can't you handle five remotes at the same time? Part of that is, of course, that I wish I was 72 again, you know? But nonetheless, there is not much of a change from her point of view in terms of educational methods, and that has to do with exactly what Paul was saying, bringing as much sensation, Montessori-wise, into experience, feeling things, feeling egg whites as opposed to water. Very different feels.

ALVING: Let's take a question from the back over here.

QUESTION: Yes, hi. Jay Parker, I'm a department chair in the College of International Security Affairs at National Defense University. There were two points in the discussion here that seemed to me as linked, and I'm wondering if Peter in particular can expand on them.

You were talking earlier about this—one of the programs that you're involved with involves computers essentially reading 11,000 novels, but you said none of my graduate students read novels, and that's the problem. And you made mention of the fact that STEM has been changed to add an A, which strikes me as very unusual, where arts, literature, all those things where we would think abstract learning and wrestling with those concepts came from are dramatically shrinking as the very specific technical kinds of education that you're talking about here are increasing.

And I wonder if you could kind of pull on that thread a little bit more and talk about the implications and—and what some possible alternatives might be.

BOCK: Well, imagine a driver—an artificially intelligent driver—who has to drive you to work. Some of the obvious things it must know are the route and how the car works and so forth, but I maintain it also must know how to go to the grocery store, how to go to the movies, how to play tennis or walk on a beach and appreciate how the sun sets and so forth, because that will come into play eventually. And if it is not answered correctly, both of them—the natural and the artificial intelligence—are going to get killed, along with perhaps some other people who are not in the car.

What we call the arts and social sciences and the humanities is a major part of why creativity and innovation is almost exclusively an American thing. They do not teach those things in the universities in Europe. They do not teach them in the universities in China. We teach the arts, the social sciences, and the humanities, the A in STEAM—I'm so glad to hear that.

I was terribly afraid that they were just going to take out everything except arithmetic and computers and all the technical subjects, and I think it's very important for people to understand what the categorical imperative is, Immanuel Kant, 1780. I think it's very important for people to understand what Galileo's problem with Maffeo Barberini was, 1620. I think it's very important for...

MCAFEE: You think it's important for them to be able to recite the dates?

BOCK: No.

(LAUGHTER)

There's only one date we need to know, and that is when George Washington stood up in the boat—was that it—when he was crossing the Delaware or something. I don't know. I'm kidding.

MCAFEE: And maybe not even that one.

BOCK: Yeah, history was taught so poorly when we were—when I was—when we were students.

ALVING: But your point is that creativity is a fundamental element of even technology innovation, forget the larger world.

BOCK: Social sciences, arts and humanities are an absolutely essential ingredient of creativity and innovation, absolutely essential.

ALVING: Yeah. Great. Next question?

QUESTION: Bill Nitze with Oceana Energy. I love the reference...

ALVING: Could you use the microphone, please?

QUESTION: Yes, Bill Nitze with Oceana Energy. I love the reference to the categorical imperative, because I think it applies to all intelligent machines.

BOCK: Which ones?

QUESTION: And I think Isaac Asimov's "I, Robot" rules are just dead wrong.

BOCK: Absolutely.

QUESTION: Now, that leads to a broader question, though, about rules for intelligent machines. If we attempt to anticipate the future, we are bound to make serious mistakes which will have collateral consequences that we cannot even imagine. If we allow the big cloud—that's all of us with our reptilian brains—to evolve those rules in an age of ever-accelerating change, we're going to experience some interesting versions of a Thirty Years' War among ourselves with robots. I mean, it could get really, really interesting. If you think the Luddites were bad, wait and see what the Baptists are going to do down the road.

So what do we do? How do we approach this question of perhaps being the masters of psycho history and just anticipating the ethical challenges enough to minimize the truly terrible outcomes and slightly weight the future in favor of the better outcomes?

BOCK: Oh, boy. Oh, boy, oh, boy.

ALVING: And how much time do we have to answer that?

BOCK: Yeah, really. I see it all the time. The problem is—but you know what? That problem has been here since the day I remember thinking about it, which is a very long time ago, OK? It is not a new problem. It is a growing problem, perhaps, but it is not a new problem.

We just have to make sure that we understand that students need to learn not things and items and data, but how to think and how to understand and how to feel and how to associate, even if not causally, associate, but also causally, but even if not causally.

And that has to be inserted in their education, but it has to be made to live, because as a teacher, the three of us know our primary duty on that stage is—well, not the primary duty—an essential duty is to entertain. We have to keep the students motivated. Otherwise, they do this when we're teaching, especially if they're part-time students and they've been working all day.

So you have to keep them motivated. You have to keep them understanding how everything sort of works together or let their minds understand how they must think about how everything is linked together. The Thirty Years' War is a wonderful example, because what we're seeing now is another one, of course, in a slightly different part of the world. It's no longer Gustavus Adolphus against the Hapsburgs. It's now something that we don't really understand yet going on over in the Middle East.

But it's about religion on the surface. The Thirty Years' War was about religion on the surface, also. Maybe the answer partially to your question is, and it's a very vague answer, it's one of those abstractions—we need to get rid of more of the magic. We need to stop listening to people who are not informed saying, "Well, what I believe is this and I know it's true because I saw it with my own eyes." What a defensive statement. Which eyes would they have seen it with?

ALVING: On that note, I think we have time for one brief question here on the aisle.

QUESTION: Chris Broughton, Millennium Challenge Corporation. Just to return to the question of employment, which we've danced around our entire session, a recent Economist article cited a U.N. study estimating one-fifth of all global employment potentially displaced by machines and artificial intelligence, as we look to our global population increasing from 6.7 billion to 9.0 billion in the same time period. What credence do you give these statistics? And in the age of innovation that you talked about, new products, services, possibilities for well-being and economic advancement, what new types of employment do we see potentially? And will the new employment be able to keep up with the old employment that's lost? Thank you.

MCAFEE: I give almost no weight to any point estimate that I've heard about net job loss at any point in the future. And the simple reason I do that is because previous estimates have been almost uniformly terrible. So is it 20 percent, is it 47 percent? I don't know. Nobody knows.

I do think the broad challenge, though, is that we could be automating work and jobs more quickly than we're creating them. The tendency that we have is to lowball the creation side. That's why for 200 years allegedly smart guys and Luddites and different people have been predicting the era of technological unemployment and they've been wrong.

The question is, is this time finally different? I think this time is finally different. I'm fully aware there are 200 years of people saying that and being wrong. I also think our homework is to make sure this time is not different. And the way to do that is not to try to throttle back the technology or channel it so it can't ever put anybody out of work. The right thing to do is create environments that let the innovators and the entrepreneurs do what they do, which is come up with new stuff, and as they're coming up with that new stuff, they need to hire people to help them get that out there to market. That is our only real possible way to keep people gainfully employed.

I'm in my mid-forties. Let's say I've got half a century ahead of me. I honestly believe that I'm going to live to see a ridiculously bountiful economy that just doesn't need very much human labor. So I think we're getting there in that timeframe. That's about the most confident prediction I can make.

ALVING: And on that note, we've reached the end of our session, so I will just remind everybody that this meeting has been on-the-record. And please join me in thanking our panelists.
