Voices in AI – Episode 40: A Conversation with Dennis Laudick

Yeah. I think it’s definitely something to be taken seriously. It’s something that I know we are certainly very active around, and we have been for quite some time. What I am pleased about is the fact that it’s become very topical. So, to go back to your question, it is a concern. It is a genuine concern. We are attaching more and more devices to the internet. We’ve seen early examples where someone was able to gain control of a camera in a casino, and people were able to launch denial-of-service attacks because they’d taken over a class of devices in the home. So I think the examples are there to move beyond the question of whether or not this is something we will need to be concerned about. And we are connecting more and more devices at a huge rate, and these devices are more and more intelligent, and it just follows from that that we really do need to take security quite seriously.

I think there’s been a range of events that have happened, particularly over the course of the last couple of years, where retail credit card machines have been compromised, and, as I said, cameras and so forth, and that has really woken everyone up. And it’s kind of interesting: within the sort of fundamental technologies that we work on, it’s something that we’ve been taking very, very seriously for a long time, but it wasn’t really something that everybody took seriously, and to some extent, we felt like we were banging a drum when no one was marching to it. But events have driven it to the forefront of people’s thinking a lot more lately, and that’s been a positive thing to see.


One of the things that we see from, again, a platform technology perspective is that you really can’t think of security as an afterthought. Many years ago, there were a lot of people who would build devices the way they had before, and then somebody would say, “Oh, hold on, I’m told I need to have some security, why don’t we bolt some on at the end?” It doesn’t really work that way. You can achieve a certain level, but it’s easy to circumvent in many ways. So you really need to think about it at the very fundamentals. It needs to be something that’s as integral to the design as the 1s and 0s that you start with to build it up. If you do that, then that’s the right approach. Will we ever get to perfection? Probably not, but certainly it needs to be taken with the level of sincerity and gravitas that it deserves, and I think people are starting to do that. We’ve seen people start to look at the security aspects of a device from the very beginning, at its very inception, and carry that through to the end, thinking about things like, you know, the fact that we need to be able to manage and update these devices and so forth.

So yes, I think it’s a genuine concern. It’s something we do need to take very seriously, and like I said, the examples are already there to show it. The positive note is that, judging from the trends we are seeing from the low levels of design on up, people are actually taking it very seriously, and that’s globally. Up until last summer, I lived in China for five years, and over that time, I saw it become much more serious over there. So yeah, I think we are headed in the right direction. We’ve still got further to go, and, again, I think there’s lots of room for innovation in terms of what people can do around security. Will we ever get out of the cat-and-mouse chase? I am not sure we ever will, but it’s incumbent on the people with the white hats to do the best they can from the very beginning, and that seems to be the direction people are heading.

My final question is, when the media gets hold of these topics about artificial intelligence and machine learning, automation, the effect on jobs, security, privacy, all of them, there is often kind of a dystopian narrative that is put forward. So I just want to ask you flat out: are you optimistic about the future, especially with regard to these technologies? Do you think they are the empowering, wealth-generating, information-freeing, cognitive-skill-enhancing technologies that are going to transform the world into a better place? Or is the jury still out and we don’t know? Or is there always going to be kind of this dystopian narrative breathing over our shoulder?

Yeah. So I think it’s a little bit of both, to be honest with you. At the moment, there is certainly a huge amount of dialogue around dystopian views of the future. To some degree, you kind of see this whenever there’s an element of the unknown. It’s very easy to paint the worst, and in some ways, that’s probably healthy, because it means that you try to build the new world in such a way that it’s safe and it keeps you out of those kinds of situations. So, I am not saying it’s a bad thing, but I do think it’s fueled a lot by the unknown. We’ve had a significant jump forward in terms of what the technology can do; where it’s going to lead us, it’s almost impossible to say. I think those that are in the technology space have a sense of the limits of where it can take us, and it’s far from those dystopian domains or even the AGI-type domains, but no one can see the end, so we can’t deterministically say where the limits of the capabilities are. That leaves the world outside of the technology sphere with a huge amount of uncertainty and fear, and I think that’s what’s generating a lot of the dialogue in the market, and that dialogue is healthy. Thinking about what we find acceptable and unacceptable in the future is a perfectly sensible discussion to be having.

So is it actually going to produce a dystopian world? I am an optimist. I see machine learning, what it can do, the positives it can bring; just what it’s doing in the medical space alone is incredible in terms of improving human health and giving us medical benefits. What it can do in automotive and so forth is, again, quite incredible. Projecting into the future, there are a lot of questions around what’s going to happen with jobs and the ethics and so forth. I am not going to sit here and say, “I have a crystal ball that makes it clear,” any more than anyone else does, but I have an inherent belief in human society, and I do think that we are going to have some disruption while we reconstruct our social norms around what’s allowed and what’s not allowed. But, as I said earlier, the technology is inert, so it really comes down to how we decide as human beings to manifest the technology’s capabilities. Although there may be individuals who have a particularly dark, nefarious side to them, history would suggest that as a collective and as a whole, we tend to build our social norms of what can and can’t be done in a positive direction.

So I have a tremendous amount of faith that this is all ultimately happening within the apparatus of human society, and that we will drive the capabilities, and what actually gets achieved, in a largely positive direction. And so, from my standpoint, I think there is a massive amount of benefit to be had around machine learning, and I am personally very excited about what it might be able to produce, even if it is as simple as not having to worry about losing the remote for my TV. Sure, there is potential to abuse and misuse it and bring about negative consequences, as there is with any new technology, and to some degree almost any classical technology, but I do have great faith in society’s guidance and where that’s actually going to end up being manifested.

Well, that’s a wonderful place to leave it. I want to thank you for an exciting hour of challenging and interesting thoughts.

Likewise, it’s been very interesting.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.