Lawrence Lessig Is Fired Up About Campaign Corruption, Dangers of AI

My guest for this episode of Fast Forward is Lawrence Lessig, a professor of Law and Leadership at Harvard University, where we recorded the show.

In 1999, he wrote Code and Other Laws of Cyberspace; he has since updated that book and written many others. He's also the co-founder of Creative Commons and made a run for president in 2016, which he sadly did not win.

He's also the creator of the Lessig Method of Presentation. I encourage you to Google it and look at the YouTube videos; it will change the way you give presentations. I know it changed the way I give mine.

Dan Costa: We're going to talk about the law, we're going to talk about the internet, we're going to talk about that messy place where the two collide. I want to start way back in 1999, when you wrote this book, and one of the things that you wrote in it is that "code is law." And you suggested that the internet would evolve into a more regulable form. That was 18 years ago. Have we been moving in the right direction or the wrong direction?

Lawrence Lessig: Well, I think unfortunately, I was right. I remember when I wrote the book there was deep skepticism because the argument of the book was the internet gives us lots of freedom right now, but that's only because of its architecture. And there will be lots of incentive both in business and in government, and business working with government, to change it into a much more regulable space. And that's in fact what's happened.

Whether it's just business, trying to make it easier to track and supervise what people do on the web so it's easier to sell them things, or business and government, where the government has worked with business to build back doors into software, or to make it so that you can report more easily what's been happening, I think we've seen a pretty strong trend in the direction of regulability.

And then there have been some very interesting points of resistance. When Apple said they were going to encrypt by default, that was a really important step in the other direction. But strikingly, it was really the first move by a major company to resist the increasing regulability of the net, and I think the fact that it's the first is what really concerns me.

You also pointed out that there was going to be this trend toward consolidation. That there were a lot of forces that were going to drive this. At that point, it was wide open media, nobody even really thought of it as a business platform, they thought of it as an open communications platform. But we have seen incredible consolidation in this space.

Yes, and it was actually something I wasn't thinking about at all, but it was so obvious in retrospect. It's just the fact that data is infinitely more valuable than even ownership of the platform and certainly ownership of the infrastructure. Because what data does is leverage all sorts of capability and potential for control and commerce. And the consolidation we've seen is that data induces a kind of winner-take-all dynamic for these platforms and you can't really afford to be number two, but if you are number one you are really, really incredibly successful. And I think nobody has a clear sense of how traditional principles of antitrust should apply in the context of this type of really dominant position that these companies play. And it's certainly too hard for me. I'm glad I'm far from this debate now.

So, you ran for president. You went through all the processes and checked all the boxes. And you ran, in part, to fix democracy, but also to get the money out of politics. While this was happening, we now know that all this money was being funneled into Facebook, funneled into targeting advertisements. Did you have any sense that this was going on while you were operating your campaign?

The real objective of running was to try to get a chance to be in the debates and to frame the debates in a way that made it impossible for the Democratic Party, at least, to ignore the deeply corrupting influence of money inside of our politics. And not just money, but also the way gerrymandering renders us unequal, the way the suppression of votes renders us unequal. It wasn't a focus then but it's a focus now. The way the Electoral College renders us unequal. All the ways in which the simplest commitment of a democracy that everybody has equal political liberty, is denied in our democracy and most grotesquely by the way money infects the system.

To make my campaign credible we said, "Okay, we'll raise a million dollars in a month." And we raised it in less than a month. And at that point, I had raised more than half the field of Democrats and Republicans together. It's a significant amount of money. But the rules for the debate turned out to be more malleable than I had expected them to be because just when we qualified to get into the debate, the Democratic Party evolved the rules to exclude me from the debate. So I never got a chance to be in to make this case. And that's obviously why I had to step aside.

The Facebook dynamic is really troubling, but it's different from the thing that really concerns me. I was not so concerned - I'm more concerned now. But I was not so concerned about the way speech affects the public. What concerns me about the money inside the system is the way it gets raised and the corrupting influence it has when it's raised.

If you think about a member of Congress, the average member spends anywhere from 30 to 70 percent of their time raising money to get back into Congress, or to get their party back into power. And they raise that money not from the average American. They raise that money from the tiniest slice of the 1 percent. Probably no more than 100,000 Americans qualify for the honor of being likely to be called by a member of Congress asking them to give money to a campaign. And when you think about that life, you have to think about it in a very human way. If you spend all your time sucking up to money, it's really hard to see how you can be a leader after you've done that for half your time as a Congressperson.

Until we change the way we fund campaigns, we're never going to get a government that can represent us. They'll be representing the funders of their campaigns. And that's not really about what the money is being spent on, it's really just about the corrupting dynamic of having to raise that money. And we still have made no progress in the public's awareness about both the nature of that problem and also how it can be solved.

The Obama campaigns raised a lot of money online. They leaned on smaller donors. Bernie Sanders did very well with smaller donors. But in the context of all of the money that's going into these campaigns, those small donations just don't compensate for the fact that corporations are dumping millions of dollars in.

Well, that's true. And what's so frustrating to me, is the missed opportunity of both the Obama administration and frankly, the Bernie Sanders campaign because you know, Bernie did a great job in rallying people to the corrupting influence of money and how terrible it was. But he used that message for the purpose of beating up Hillary Clinton. Whenever he was asked about changing the way we fund congressional campaigns, he would say, "Oh yeah, that's something for the long run."

And it's like, what do you mean, the long run? What are you going to get done in the short run while Congress is bending over backward to suck up to big money to get the funds they need to run their campaigns? And so what frustrated me was that rather than using this opportunity he had on this enormous public stage to explain to the American people that this could actually change the way our government works, he used the issue to convince everybody that Hillary Clinton was, quote, "Crooked Hillary." Which, of course, Donald Trump was very happy to inherit once he became her opponent.

Do you think there's anything unique about Facebook advertising and the micro targeting of individual consumers and playing on their fears, playing on their weaknesses, that platforms like Facebook and even Google and Twitter, enable what just hasn't been there in the past?

Absolutely. I think the AI component to these machines is terrifying. You know, there's this great movie from 1970 called Colossus: The Forbin Project. Do you know this movie?

In The Forbin Project, Dr. Forbin has invented a computer that's smart enough to run the defense of the United States. And so the president turns over to this computer control of the decision whether to launch nuclear weapons. Like, what could possibly go wrong?

At this point, it seems like that could be an improvement.

It could be. I'd take Dr. Forbin's computer over what we've got.

The computer, when they turn it on, immediately says, "There's a Russian computer."

And they say, "What do you mean?"

"The Soviets have a computer."

So it turns out the Soviets do have a computer, and then it says, "I want to talk to the Soviet computer." And so they connect the two. At first they're speaking in English and Russian, and then eventually they evolve a language nobody can understand. And after a couple of days, the computers make some demands. The demands are basically that the computers are taking over the world; the humans resist, and the computers say, "No, you work for us." And then the last scenes of the movie are nuclear bombs being dropped in various parts of the world because the computers have asserted their control, and they have control.

That dynamic is people losing control of their technology. They don't understand where it's going to go. And there are really catastrophic consequences that can flow from that, and that's what I fear is happening in many different contexts here. When it turned out Facebook was selling ads against the category "Jew Haters," you know, there was no Facebook employee who created the category "Jew Hater." It was generated by recognizing what people on Facebook wanted in order to sell ads. And so it's a perfectly efficient technology for giving people what they want, but the fear is that that dynamic can produce all sorts of ugliness that we don't actually want.

The data came out. I was on a panel with the CTO of Cambridge Analytica, who had been the CTO of the RNC, and he said, "Yeah, during the Trump campaign, we never let these things run on their own because we were afraid of where they would go." And you begin to get a sense of just how dangerous it is if even the Trump campaign was afraid of where it might go. And that's what keeps me up when I think about this dynamic. Because you're not going to turn off the incentive to become as efficient as you possibly can in delivering product to people on the commercial side. The danger is that that same dynamic when shifted to the democracy side, can produce all sorts of ugliness that I don't think we have a good way to think about solving.

I don't know that there's anything in opposition to that. Facebook is how the majority of Americans get their news these days and it's written by Artificial Intelligence engines that are selecting what people want to read automatically. It's a commercial enterprise. Yet it is informing their voting patterns. It's informing how they think about the body politic at large. And so there doesn't seem to be an alternative that's not driven by commercial forces.

That's right. I've been in a couple of campaigns where we've seen people struggle to figure out how you speak through Facebook. And when you realize there's this editor in the middle - not a person, but an AI editor - that you have to figure out how to get through in order to communicate, you realize we have turned over a huge, important part of our discourse to a machine we don't really understand.

So in other bad news, Equifax has released the private information or leaked the private information of 143 million people. These are the Equifax profiles that have everything from our Social Security numbers, our employment history, our history of all the places we've ever lived, and that was Equifax's business, to collect this information, to sell it to businesses. People didn't have a choice to opt in. So we don't even know what the consequences of that will be to individual consumers. But what should be the consequence for Equifax?

Well, talk about a totally failed and fumbled crisis response. Because what's clear, at least so far, is that once they discovered the problem, they first went out and hired a company that would help them profit from solving it. They then announced a solution that would essentially opt the affected people into an automatic protection system after a year. And so, very quickly, people were like, "Wait a minute, you're taking this absolutely clumsy flaw of your company and turning it into a money-making opportunity. That just should not be allowed."

The worst part of that original deal was that in order to sign up to get your protection, you had to waive your right to complain, to sue Equifax. And I cannot believe there were any sentient beings in the middle of that decision. It's like maybe AI is running Equifax as well as Facebook's ads.

I think that what's got to happen, and maybe this will trigger it, is that we need to recognize how important the law is in disciplining these powerful companies. I mean, we've seen a trend, pushed by both parties but mainly the Republican Party, toward protecting companies from the law: allowing them to automatically sign people up for what's called arbitration, which is essentially a complete corruption of the idea of being able to recover money for damages that have been done. It basically removes these companies from the public litigation and justice system and allows them to move into a completely private litigation and justice system where, surprise, surprise, they benefit over the consumer.

And if we don't have the ability to go in and hold their feet to the fire when they do something as dramatically horrible as this, then we have no chance of standing up to these companies in the future, because there will be fewer and fewer of them, they'll have more and more power, and we'll have less and less opportunity to speak back. And especially in a context where politics is so deeply corrupted, I can tell you the campaign contributions coming from Equifax in this next cycle are going to be orders of magnitude higher than anything before, which means very quickly the pushback from the representative bodies will begin to die down.

So I hope we take this as a lesson, that we have law for a reason. And you know lawyers, I make lawyers for a living, so I might be a little biased about this but the idea of the law is to force people who do harm to pay the cost of their harm. And I would love to see Equifax pay the cost of the harm they've done here.

It seems like there are two parts to it. A lot of people look to proactively legislate and control how companies can use personal data. That can be very complicated and very hard to do given how fast the technology's changing. But then, when they do make mistakes, holding them accountable seems like something we should all agree on. Like, maybe you had a great business, maybe Equifax was a great business for as long as it lasted, but this type of malfeasance needs to have some kind of consequence.

Yes, and it doesn't stop. I, like many others I'm sure, went to check whether I was affected. And I was. It then said, "Come back in three days and you can sign up for this service." So I came back in three days and signed up for the service, and they said, "We will send you, eventually, the link to complete your sign-up."

It's been a week and I haven't heard anything from them. So it's not just the negligence ex ante, it's also the negligence ex post. And you're like, when do they ever come around to being the kind of company that ought to be holding this amount of personal data? The other thing this should force us to recognize is the danger in encouraging this kind of company to exist in this form at all. Pulling this amount of data together in one place, as a single target, is really a silly way to architect security. And I think there are lots of companies experimenting with ways to achieve the same kind of confidence and intelligence you need to do commerce in the business world without building this kind of database, which is a constant vulnerability to exactly the kind of attacks that black-hat crackers out there are eager to carry out.

Is there anything about the data itself that individuals and consumers should feel they own, that they should have ownership of? Because, I mean, Equifax built these profiles into your credit history. They sell that to third parties. You can go in and ask them to correct errors. But there's nothing you can do to keep them from compiling that file on you. And it seems to me in a lot of ways analogous to what all tech companies do. Facebook has all of your preferences, all of your click history, all of your likes and not likes. Google has a similar profile on you and they use this to sell advertising, of course, but they can really use it for any number of different things. Is there any legal right that consumers have to just that data? Their own personal data?

Right now, the only rights they have are given to them by statutes or by the agreements they accept as they sign up for a site. You know, in Code I said, "We ought to think about privacy as a kind of property right. I have a right to my data, and if you want to take it, then you need to get permission. Just like copyright holders say: if you want to take a copy of my copyrighted material, you need to get my permission." And if you shifted the burden to the other side and basically said, "It's presumptively mine. Negotiate with me to get it from me," then you would obviously increase the protection to the individual, because it would increase the obligation on the side acquiring this data.

So, it would be interesting to see how that develops. I think what we need to get to though is a better understanding of how to be protecting this data. I think the focus has got to be on the uses made of data. And not the actual data itself. Because the real problem is whether the data is being used in a way that is threatening me or harming me against my will. You know, I don't actually have any problem in Google watching me drive and determining that this road is a better road to be taking at 5 o'clock on Friday afternoons because it turns out there's less traffic there. That's great.

What I have a problem with is Google, and they don't do this but I'm saying if they did, Google taking data about my driving and calling up the local police and saying, "You know, this guy speeds every time he goes around this corner. Why don't you hang out and watch him." Or a company taking information about my DNA and then reporting it to an insurance company so the insurance company says, "We're not going to give you life insurance because you have blah, blah, blah."

There are ways in which we understand that data should not be used without our permission. And ways in which it doesn't really matter how it's being used because it's not affecting us in a personal way. And I think if we shifted the analysis of privacy more to the use and how we regulate use, then we could get a better infrastructure of protection for the things that really matter to us.
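The use-based framing Lessig sketches can be made concrete in code. The sketch below is purely illustrative: the names (`Purpose`, `DataUsePolicy`) and the purpose categories are invented for this example and are not taken from any real privacy framework. The point is only that access is gated on a declared use of the data, not on possession of the data itself.

```python
from enum import Enum, auto

class Purpose(Enum):
    """Declared uses of personal driving data (illustrative categories)."""
    TRAFFIC_ROUTING = auto()         # aggregate: "which road is faster at 5 p.m."
    TARGETED_POLICING = auto()       # adversarial: reporting a driver to police
    INSURANCE_UNDERWRITING = auto()  # adversarial: denying coverage from the data

class DataUsePolicy:
    """Allow or deny access based on the declared purpose, not the data itself."""

    def __init__(self, allowed_purposes):
        self.allowed = set(allowed_purposes)

    def check(self, purpose: Purpose) -> bool:
        # Deny by default: only explicitly whitelisted purposes pass.
        return purpose in self.allowed

# Uses that don't turn the data against the individual are whitelisted;
# everything else is denied by default.
policy = DataUsePolicy({Purpose.TRAFFIC_ROUTING})

print(policy.check(Purpose.TRAFFIC_ROUTING))    # aggregate use: allowed
print(policy.check(Purpose.TARGETED_POLICING))  # adversarial use: denied
```

The design choice mirrors the argument above: instead of restricting who may hold the data, every use must carry a declared purpose, and the default answer for an undeclared or non-whitelisted purpose is "no."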

I think that most consumers don't even really appreciate this whole data economy and how it operates. And there are inherent asymmetries between the corporations and the government, which can take advantage of this data, analyze it, and make decisions on it, and individuals, who only understand that they looked at a pair of shoes on Amazon and now that pair of shoes is following them around the web. Which is creepy, but by now everybody's used to it.

I don't think people, consumers, have thought enough about that next generation of applications. You know, Allstate's selling these OBD data ports that lower your insurance rates if you're a good driver and jack up your rates if you're a bad driver. Most people don't realize that's what's happening when they plug it into their car.

And again, I think there are certain contexts in which it's really great that the data more efficiently identifies what I want. Because, you know, I don't want them to advertise sneakers to me. I know what sneakers I want. I'm glad to see the sneakers I chose when I was 18 are now coming back as the cool sneakers for people to be wearing.

Are these Converse All Stars?

Actually, Stan Smith.

Okay, just a guess.

But the point is that not every context is unambiguously good like that. And I think we need to flesh out how the data is being used and what the ethics, or the values, behind it are. And we need to be able to be confident about the flow of this data and the flow of this influence. It's one of the reasons that whenever I need to do a review or find out what I should buy, I Google PCMag's editorial reviews: because I have confidence in the way the judgments are being made and, more importantly, confidence in how they're not being made. You're not being paid an extra $1,000 by Intuit to say something nice about the latest version of QuickBooks. And understanding that economy is really important to developing the kind of trust you need in institutions and things on the web. We have such an imperfect understanding of how these flows are happening right now that it creates a general insecurity about what's happening.

At which point, people revert to fake news and they don't believe anything they read online.

Exactly.

And that doesn't help anybody. I want to talk to you a little bit about national security, Edward Snowden. You're one of the few people that have actually talked to Ed Snowden and I just want to say, how is he doing? What is he doing at this point?

I met him around last Christmas in Moscow. There's a film that I was part of called Meeting Snowden. And the title of the film is really appropriate, because the Edward Snowden who comes out in that film is completely different from the Edward Snowden you get to know when you watch Citizenfour or any of the other movies that have been made about him.

Because what you see is that he is a deeply reflective, serious, brilliant, really brilliant kid. And he self-consciously put himself at risk to do what he believed was in the interest of the country. He thought he'd be in jail for his whole life. Escaping to Russia was not anywhere in the plan. He did what he did expecting he would spend the rest of his life in jail—conceding that some people even called for him to be executed—but at a minimum spend his life in jail. But he resolved that it was worth it to ring the alarm bell. And the alarm bells he rang were certainly ones we needed to hear.

And what's more important, I think, in understanding him is the precise way he did it, right? Unlike Julian Assange, [Snowden] didn't just take this information and dump it. He was very careful to make sure that the information he was releasing would not put anybody at risk and did not reveal more than was necessary to achieve his objective, which was to convince the American people they were being lied to by their own government about what was happening with cyber infrastructure and security. And so that was an enormously important thing for us to learn, and it's had a significant effect in all sorts of ways.

But the consequence is that he is stranded in Russia. And so I saw him just at the moment when Donald Trump was being confirmed as our president. And there was real anxiety because everybody could see that Donald Trump and Putin were much closer than Obama and Putin, or certainly Hillary and Putin. So you know, it was pretty clear that if Donald Trump called Putin and said, "We want Snowden," Snowden would be back in America. Or Putin would say, "Look, you don't want to go through the hassle of that, why don't you let me take care of Snowden." And then Snowden would be nowhere.

So I've got to say, there are very few people who I've met who've moved me as much as Snowden did. And you know, I was the only candidate, I think, in that last election who called for his pardon. And I would go beyond pardon, I would say we need to start talking about Snowden as a hero in America. Not because necessarily you agree with his politics, but the idea that somebody would sacrifice himself so much for what he perceived to be clearly in the public good is something we need to encourage more of, not less.

I was going to ask you about the comparison with WikiLeaks, and the way the information was distributed was very, very different. And the consequences were very different. And it's something I think people don't always appreciate. They kind of lump the two of them together and they're very different approaches to transparency.

I think Julian Assange continues to try to put himself in the middle of these issues, and what he did in the election of Donald Trump is really inexcusable. But for Julian Assange, one could rightfully say, maybe there wouldn't be a Donald Trump sitting in the White House right now, and that's an incredible thing to have to accept responsibility for.

And probably not something that he intended. Probably not the way he wants to be remembered.

You know, it's interesting. Sometimes I think that's got to be true. But sometimes I listen to him and I think this guy just wants to blow it up. And you know, I get the kind of romantic 15-year-old instinct—"Yeah, let's just blow it up." But there is real consequence to having Donald Trump as our president. And I'm not somebody that likes to focus on this. I think we've got really important problems to deal with in the political corruption of the system, some of which Donald Trump is talking about and I kind of think he's a distraction. As long as he doesn't drop a bomb on North Korea, we'll get through it.

But it's really extraordinarily damaging to the potential of the United States. And if he does any of these crazy things, it could instantly change the character of who we are in the world. You know, you drop a bomb on Korea, the exchange probably kills 10 million people. We become the Nazi Germany of the 21st century, not to mention the 10 million people. And so you just recognize how much has been placed at risk by the behavior that led to this man being president, and I think everybody in that chain needs to accept responsibility.

A quick, lighter turn before we get to closing questions. Talk to me a little about your involvement in Seed. It sounds like you're building a government for a virtual world.

Seed is this really, truly extraordinary game that's being developed by a company called Klang Games. I met the founder of Klang Games at a party and he described it to me. What they're building is a game where it would be the largest, persistent virtual AI game out there. And so, you as a player, will have any number of characters who are AIs and you will land in a pod in a certain place and you begin to build a community in that place. And what's significant is that when you log off, your AIs continue to live and do their work, and you have to define their life. So you say, "You're going to work 10 hours a day." And if you work them too hard, they could get depressed and take up drinking and then cause damage. So it's like this whole world you're generating, but in ways that are completely impossible to predict because it's about their interaction among these AIs and other AIs that you're going to be thinking about.

So when he described this to me, I was like, "So how are you going to deal with governance?" And, of course, they hadn't thought about governance at all. But we recognized quickly that if you could actually have structures and governance, you could build communities that were bigger than World of Warcraft clans. You could build communities that had thousands of people because you could have a structure for actually enabling them.

So what we decided to do was to mock up four kinds of governance that you'll be able to select in the initial version of this. One is basically a monarchy, where there's one person that's in charge. One is basically a direct democracy where every decision has to be made by everybody voting. And then there are two kinds of representative democracy. One is like we have in the real world where representatives are elected. And the other is a kind of hack of the real world, where representatives are randomly selected. You're called to jury duty, you're called to representative duty, you've got a period of time where you've got to make decisions for the colony.

And my real interest is, if we enable these different structures, and there are a million communities out there and they select these structures; and we can watch which communities flourish and which ones don't, what's the relationship between the communities that flourish and the forms of governance that they select? We in the real world can begin to learn something about how these communities work best, under what form of governance.
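The experiment Lessig describes, assigning governance forms to many communities and watching which ones flourish, can be sketched as a toy simulation. Everything below is invented for illustration: the four form names echo the interview, but the quality and cost numbers are arbitrary assumptions, not Klang's actual design. The point is only the shape such a comparison could take.

```python
import random

# Toy model: many communities, each running one governance form; compare
# an invented "flourishing" score across forms afterwards.
GOVERNANCE_FORMS = ["monarchy", "direct_democracy",
                    "elected_reps", "random_reps"]

def simulate_community(form, rounds=50, seed=None):
    """Return an average flourishing score for one community.

    Hypothetical dynamics: each round the community makes a decision;
    decision quality and participation cost vary by governance form.
    """
    rng = random.Random(seed)
    score = 0.0
    for _ in range(rounds):
        if form == "monarchy":
            quality = rng.gauss(0.5, 0.4)   # one ruler: fast but high variance
            cost = 0.0
        elif form == "direct_democracy":
            quality = rng.gauss(0.6, 0.1)   # everyone votes: stable outcomes
            cost = 0.3                      # but participation is expensive
        elif form == "elected_reps":
            quality = rng.gauss(0.55, 0.2)  # elections add some noise
            cost = 0.05
        else:  # "random_reps": sortition, like being called to jury duty
            quality = rng.gauss(0.55, 0.15)
            cost = 0.05
        score += quality - cost
    return score / rounds

def run_experiment(communities_per_form=200):
    """Average flourishing per governance form across many communities."""
    results = {}
    for form in GOVERNANCE_FORMS:
        scores = [simulate_community(form, seed=i)
                  for i in range(communities_per_form)]
        results[form] = sum(scores) / len(scores)
    return results

if __name__ == "__main__":
    for form, avg in sorted(run_experiment().items(), key=lambda kv: -kv[1]):
        print(f"{form:18s} {avg:+.3f}")
```

With enough communities, the noise averages out and the invented trade-offs (stability versus participation cost, speed versus variance) become visible in the rankings, which is exactly the kind of observational question the real game would let researchers ask.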

I spent the month of July in Berlin basically doing my work in their office, but then brainstorming with them about an hour or two hours every day and it was the coolest month I've ever had, not just because it was Berlin. When this game comes out it's going to be, I think, one of the most exciting gaming products that there is.

When is it slotted for release?

Probably late in 2018, maybe early 2019. They're still at the very beginning of it. But they've locked in the funding. They're building on top of Improbable, that incredible virtual-world engine that has a huge amount of backing from SoftBank. So I'm really excited. It's going to be something to watch.

Pretty cool. Well, when it comes out, PCMag will definitely want to get in on the beta round.

I'm sure we can make that work.

Let me ask you the questions I ask everybody that comes on the show. What technological trend concerns you the most? Is there anything that's keeping you up at night?

We kind of answered this, right? So, it's AI that concerns me the most. And not because I'm against AI, in the sense that I want to get rid of it. In fact, I really want it to be part of our world because I'd love it to liberate all sorts of people from work. But I don't think we're smart enough yet to understand how to live with it and make sure it doesn't take us over. So I'm very much on the Elon Musk end of that continuum of anxiety.

Is there a particular technology that you use everyday that inspires wonder in you?

Reddit. I mean, you grow up in the 20th century and you have this view of the world where the people at the top are great and everybody all the way down are minions. Then you live on Reddit, and you just see the extraordinary talent and creativity everywhere. You go into these different subreddits and you think about the talent and the creativity it takes to produce these studies and these different ways of pulling things together, or the history subreddits, or the philosophy subreddits. The law ones aren't so interesting, but the point is that this demonstrates the rich diversity of the world and makes it visible in a way that no technology had really done well before. So yeah, this is the thing I feed on.

There are a lot of similarities between the 1999 internet and Reddit, in the sense that it's a little hard to follow and very text-based, but the engagement is there and the passion is there. It's surprising in a way that you wouldn't find on a mainstream website.

You would never find it. And, of course, they're always way ahead. But a little bit different from the '99 web in that it's a much more efficient way for aggregating this extraordinary beauty because the diversity and depth is just impossible to see anywhere else.

If people want to follow what you're doing—maybe make contributions to your next presidential campaign—how should they follow you online?

Easiest on Twitter is just @lessig. And the project I'm working on right now is that we've launched a campaign to try to challenge the Electoral College and the way the Electoral College works. And we just this week found out that David Boies, who was the big attorney in the Microsoft case, and the attorney that defended Al Gore in the Supreme Court, is going to be the lawyer in our case. So we're trying to raise the funds to make this case possible, and you can go to equalvotes.us and see what we're doing, and I hope follow us.

This is a really good final point. In that last election, unlike any other election, it's the first time I heard people literally say their vote doesn't matter. Because they live in New York and their vote doesn't matter in New York. It's going to go for Hillary and therefore, they're not even going to bother voting. And I heard that more this election than I ever had before, and of course, the outcome was a bit of a surprise to a lot of people.

Yeah, and 52 million people cast a ballot that did not matter because they lived in states that could not affect the results. Ninety-nine percent of campaign spending happened in just 14 states. And those 14 states are older, and whiter, and are focused on 19th century industry. So, I think they ought to be represented just like everybody else. But just as much as everybody else. We ought to have a presidential election system where the president cares about California and Texas and New York as much as he or she cares about Pennsylvania or Florida.

About the Author

Dan Costa is the Editor-in-Chief of PCMag.com and the Senior Vice President of Content for Ziff-Davis. He oversees the editorial operations for PCMag.com, Geek.com, ExtremeTech.com as well as PCMag's network of blogs, including AppScout and SecurityWatch. Dan makes frequent appearances on local, national, and international news programs, including CNN, MSNBC, FOX, ABC, and NBC where he shares his perspective on a variety of technology trends.

Dan began working at PC Magazine in 2005 as a senior editor, covering consumer electronics, blogging on Gearlog.com, and serving as the host of the weekly Gearlog Radio podcast. Prior to arriving at PCMag, Dan was Editor of the CNET Fortune ...