Artificial intelligence has entered mainstream consciousness surrounded by marketing hype, jargon, inflated expectations, and fear. Given the importance of AI, we have started a new series of CXOTALK videos, speaking with experts in areas such as technology, data science, ethics, and public policy.

This series kicks off with episode number 203 of CXOTALK and a conversation between one of the top legal experts in the world on AI ethics and a respected expert on public policy.

Kay Firth-Butterfield is an attorney, author, judge, and public speaker on topics related to AI and ethics. Kay’s experience in this field is quite amazing as you can see on her LinkedIn page. David Bray is a frequent guest on CXOTALK. He is an Eisenhower Fellow, Visiting Executive In-Residence at Harvard, and Chief Information Officer at the Federal Communications Commission.

The conversation offers a fascinating look at the implications of AI for society. It explores issues such as the speed of change due to advances in computing technology; loss of control and privacy; job destruction due to automation; and advice on law and public policy related to technology and AI.

The video embedded below is a summary of the entire 45-minute conversation. You can watch the entire video and read a complete transcript at the CXOTALK site.

Here is an edited version of the transcript taken from this summary video:

Why should we care about the legal, policy, and ethical issues of AI?

Kay Firth-Butterfield: One of the things that sticks out in my mind is some research that McKinsey did recently, where they describe AI as a contributing factor to the transformation of society. I just want to quote what they're saying about that transformation: that it's happening ten times faster, and at three hundred times the scale, or roughly three thousand times the impact, of the industrial revolution. A lot of people compare this revolution to the industrial revolution. But I think it's the speed, and the fact that AI is a core underpinning of that transformation, that makes these discussions so important.

David Bray: It’s not just about handing over judgment and decisions to a machine that a human would otherwise make. It is about the loss of a locus of control for the individual. When you’re in an autonomous car, you are not driving; the car is driving, unless you have the ability to stop it within milliseconds, which might not be possible. It’s really about whether we are handing over control to an entity that we are willing to trust to be as fair, if not fairer, than a human. And that’s where it gets to what Kay said about Europe.

So, I think it’s the scale at which it may be used, and the scale and impact of the decisions. There has always been the ability to tailor your experience, even before the Internet, regarding what services were provided to you. People were working out by hand what things you should receive in the mail regarding ads, in what was called “automated data processing” in the 1970s.

Privacy laws first emerged in the 1970s, when automated data processing began. These machines were nowhere near as fast as what we have today, but there could still be a correlation: “This person lives at this address; they’re getting this type of heart medication; they are also on this type of insurance.” At what point do you need to say, “Well, those are correlations you shouldn’t draw unless that person has given consent”? So I think with artificial intelligence, much like the things that came before, it’s just the scale and impact of the decisions the machine might make that affect your life. So you’re right, it’s the same trend. But I think it’s the sheer scope and impact that we need to take into consideration.

Are scale and pervasiveness the driving forces?

Kay Firth-Butterfield: Obviously, you know the seminal quote from Stephen Hawking on the first of May, 2014, when he said that this could be the best thing that we’ve ever done or our last. And I think that captured the attention of the media. And where there were lots of us thinking about these things before, it’s become so much part of a more public conversation now.

That’s a really important thing. One of the questions we have been talking about is taking some control for ourselves as individuals. Unless we empower people to do that through education, they are not going to be able to take back that power. I also think there’s an issue with what we’re seeing on social media at the moment. I have seen a lot on Twitter in the last two days of people saying, “Oh well, we have to defend our privacy.” And there’s a lot of fear of surveillance ─ switching to Tor and more secure forms of email and things like that. That is not a positive sign for the way some people in our society are thinking about artificial intelligence.

What about robots, jobs, and the impact on people?

Kay Firth-Butterfield: AI, in my view, is a technology that will benefit humankind enormously. There are some great challenges that we have as humans, and for our planet, that we really can’t solve without AI. So we certainly don’t want to see a groundswell of opinion against AI from people who are losing their jobs to it. We’ve all read the Oxford Martin study, and the Bank of America [Merrill Lynch] study, which say that 47% and, I think, 52% of jobs currently done in America will go to automation in the next 15 or 20 years. But we have to think about the complexity of job loss, because we don’t know what the future jobs are going to be. What we do know is that as people lose their jobs (and some think that hasn’t been handled well in the past), we need to, and can, use AI to retool and re-skill that workforce to create the jobs of the future.

David Bray: As jobs are lost because they can be automated, what do we as a society owe those people whose jobs have been displaced, to help them retool and retrain as best as possible for something else? The jury is out as to whether more jobs will be created than destroyed as a result of artificial intelligence, so we need to monitor the numbers and be aware of them. We must also be aware of what’s called the “unemployment effect” on people’s health: we humans need to have a purpose. So a future in which we don’t need to work, because artificial intelligence is doing everything, may not be the nirvana it sounds like, because we won’t find purpose. Or we may find purpose in avocations as opposed to vocations. But that’s a collective conversation we need to have: “Where are we going together as a society? How can we make sure we bring as many people along?” And, as Kay said, ideally make it so they’re not as fearful of artificial intelligence.

Kay Firth-Butterfield: As a historian by background, I worry about the analogies with the industrial revolution, because the industrial revolution hurt a lot of people over a long period. Yes, we came through it and developed something better. But it looks as if this revolution will be much faster, and we need to prepare so that we don’t hurt as many people, as quickly.

David Bray: So Kay’s right. It’s going to happen in a much shorter period, and it may be as big a change, if not bigger. So having that conversation now about what we, as a society, owe each other is key, because we don’t know! None of us knows whether the job we’re doing today will, in two or three years, be done better by machines.

What advice do you have for people writing laws?

Kay Firth-Butterfield: Well, I think the advice to lawyers is that very soon you will see these cases coming across your desk, and you need to get up to speed on artificial intelligence and what’s going on in it now. Going back to job creation, there are going to be a lot of jobs around, so we’re not going to kill all the lawyers by automating them just yet, because we are going to need experts in court. For example, instead of cross-examining a driver, we might have to cross-examine an algorithm; that is, cross-examine an expert on the system. And if you are in any business, you need to be looking at what AI can do for you, and what the impact of AI will be on your business. There are two pieces to that, because I genuinely believe that AI will change everything. If you don’t start looking now, you will be too far behind.

How about advice for policymakers?

David Bray: Cloud computing, in some respects, is the appetizer; artificial intelligence and the Internet of Everything will be the main course we consume over the next five years. I don’t know if I can necessarily give advice to policymakers, but I’ll echo what Kay said. Any organization and any entity should recognize that this will disrupt how you operate, and it’s a question of whether you are intentional about it or someone else does it to you. So, start on that journey now. Start having conversations.

There’s one thing I want to call out, looking at the OpenAI effort and other efforts that are trying to make this technology open and available to people. Try to begin experimenting, or if you don’t have the time, have some of your employees begin to experiment with what’s possible. We’re only going to build the expertise we need in this era through the experiments we do with artificial intelligence.

Please see the list of upcoming CXOTALK episodes. Thank you to my colleague, Lisbeth Shaw, for assistance with this post.


Well-known expert on why IT projects fail, CEO of Asuret, a Brookline, MA consultancy that uses specialized tools to measure and detect potential vulnerabilities in projects, programs, and initiatives. Also a popular and prolific blogger, writing the IT Project Failures blog for ZDNet. Frequently quoted by the press on topics related to IT management.