This is the second session of the Hacked Elections, Online Influence Operations, and the Threat to Democracy symposium.

The panelists will explore how the United States and the tech community can respond to foreign actors' use of online platforms to propagate disinformation and amplify specific viewpoints.

This symposium will convene policymakers, business executives, and other opinion leaders for a candid analysis of the cybersecurity threat to democracies, particularly to the election systems themselves and the subsequent attempts to shape the public debate through disinformation and online commentary.

KORNBLUH: All right. Can we have everyone sit down, please? Can we have everyone sit down? Thanks. Thank you.

Hi. Welcome to the second session of today’s symposium. This is titled “Combating Online Information Operations.” I’m Karen Kornbluh, senior fellow for digital policy at the Council.

And we’re very lucky to have these experts with us to discuss this issue. We have Renee DiResta, Thomas Rid, and Clint Watts. I anticipate a very lively and fascinating conversation.

I wanted to start with Thomas. When the Supreme Court in the U.S. decided Citizens United back in 2010, it predicated the whole idea that corporations should be able to spend money in elections on the internet, which was then seen as this great engine of transparency and democracy; the Arab Spring was going on, and the internet was going to bring full transparency to American elections, so that was going to be the magic bullet. And, you know, at the time I was in the Obama administration. We were really taken with the idea of internet freedom. But it seems that since then the openness of the internet, which we had hoped would solve a lot of political problems and undermine authoritarian governments, is almost being used by some authoritarian governments to undermine democracy. And I wonder if you could give us a little bit of history about information operations, what it is that we’ve been missing, and what we need to be paying more attention to.

RID: Yeah. So thank you. I’m happy to try to provide some history. I’m writing a book on the history of disinformation right now, so please stop me if I start to skip into too much detail there. (Laughter.)

But disinformation—or active measures, to use the old Soviet term of art which emerged in the early ’60s—is, of course, a very old phenomenon. And if we go look at the Cold War, we literally have hundreds, more likely thousands, of examples of small individual active measures and disinformation operations.

I interviewed a few people who actually worked in active measures for their entire careers. As you may be able to hear in my funny accent, I’m German. So I recently interviewed some former Stasi disinformation operators, which was an extraordinary experience. And from one of them I got this great line: they thought the best mix of fact and forgery, of truth and lie, is 80/20, meaning 80 percent true and 20 percent false, because that makes it really hard for journalists, or for experts like us, to tell what’s actually factual and what’s not.

So let me give an example of a particularly vicious operation from the year 1960 that was revealed in a congressional hearing in the mid-1980s. The context here is decolonization: many African countries, newly independent, wondering whether they should join the West or the Soviet bloc. And in that context, a 16-page pamphlet suddenly appeared in, I believe, 15 different African countries, in French as well as in English. The pamphlet contained pictures and text, and it was titled “To Our Dear Friends.” On the face of it, it was written by an African-American organization in the United States to Africans in Africa, explaining to them the true ugly face of American culture at home. And it was full of racial discrimination, you know, lynchings in the South, police violence against African-Americans. Now, I checked and went through the press reports at the time, and almost every single detail in those 16 pages is completely accurate, down to very gruesome details that I’m not going to repeat here. But this is an example of an active measure that was a real headache for the State Department, very difficult to counter because it was based in true facts, but at the same time under the false cover of a nonexistent organization. So here’s just one of literally hundreds of examples that I think highlights some methods of operation that we still see today.

KORNBLUH: So, Clint, you’ve been talking about how the U.S. has to respond for quite some time, and we see that in some other Western democracies there is a response to some of the information operations. Can you talk a little bit about what seems to be working, what might be some interesting models?

WATTS: I don’t know that anything’s working yet. I mean, there is some—there’s the defense and then there’s the sort of countering portion.

And so the Europeans get it, because they’ve been in this game much longer than we have in the United States. There was a two-part failure in the United States with Russia’s meddling. One, we didn’t understand that hacks were being used for influence; we were looking at them as investigations. And the second part was that this was already going on in Eastern Europe, Ukraine, places like that, Brexit. When we saw it there, we didn’t think it would happen in the United States. We were arrogant about this, thinking it would never come to our shores. But they are more in the trenches on this. They’ve been dealing with it for a long time.

And so the number-one thing that they’ve done, over probably a 50-year period, is education. We don’t invest in it in the same way here, but they very much put forward what their stance is on information, how to deal with it, what they believe.

And the other thing they’ve done is they’ve started to go ahead and acknowledge when these untruths are being leveraged towards them. And in certain places—Czech Republic’s got some of this together; Latvia is one that’s gone way out front. I was at the launch in Helsinki of the Hybrid Centre that they put together. They are organizing.

Now, for them it’s a challenge because they have different audiences. If you want to understand Russian active measures, it’s about language, not necessarily about culture, because that’s how you communicate in social media. So, if you want to track an influence campaign, you just need to look at the languages they’re using and the way they’re narrating on that.

But what’s interesting with all of those countries, as opposed to our own, is, you know, the basic rule you’re taught in boxing: you don’t punch back until your feet are on the ground. They understand what they want in their country, what they’re defending, what their policies are, and then they can move to counter the influence narrative. We have failed at this for a decade now, whether it’s been terrorists or Russian disinformation, because we don’t really know what we believe in and we don’t know what we stand for. You cannot counter back, whether it’s online or on the ground, with a counter-influence campaign unless you know what your nation’s policies are, what your belief systems are, and what you’re going to push back with.

If you look at the Cold War, whether it’s a European country or here at home, we were pro-democracy, we had nationalism, we had things that we were trying to advance around the world. Right now I am not sure that the Russian message is different from our own here at home. And so you can’t do counter-influence or counter active measures until there’s some consensus at home about what we believe in here, what we’ll defend, and what we’ll promote overseas. You know, the narrative that we saw rise around the election was anti-EU, anti-NATO, let’s work together to kill ISIS, be a nationalist, not a globalist; you first, the world second. How do we counter that? It sounds pretty familiar, based on what I see.

KORNBLUH: Just to follow up—

WATTS: So I’m just saying, we cannot move forward that way. The way the Europeans are moving forward is, even in their own countries, they have a baseline from which they are standing in their counter-influence campaigns, and they have some consensus around it. They know who’s in charge. I don’t think we have that here. We got rid of the U.S. Information Agency. So, both structurally and in terms of message, they’re just much more grounded. They can punch back.

KORNBLUH: So just to pick up on that, I mean, one thing I’ve heard people talk about is that it’s just much easier to be negative.

WATTS: Right.

KORNBLUH: You know, if you’re—if you’re about tearing down, if you’re a nihilist, it’s much easier to get your message out than if you’re in favor of something.

WATTS: That’s right.

KORNBLUH: But what you’re saying is that there’s been some success with people who at least have a better ability to articulate what democracy’s about.

WATTS: Right. It’s not just about democracy. It might be using the nationalist message in certain European countries, saying it’s about us first and not, you know, our adversary. But they have a clear way of communicating to their publics, both from a leadership perspective and through their media and public affairs, where they communicate out. I think Finland, Sweden, the Scandinavian countries are great examples. They communicate out to their public very clearly: this is what we stand for and this is what we believe in.

KORNBLUH: And so it’s a positive message. It’s not just an anti-Russian, for example.

WATTS: That’s right. And it may be nationalist, but it’s also about what their values are. And our biggest challenge right now: I’m coming up on the four-year mark since the first time I talked about this to government audiences; it was late spring/summer of 2014. The last government group that I talked to was three or four months ago, and I get the same deer-in-the-headlights look whenever I talk about this stuff. Not because they’re doing anything wrong. There are agencies in the U.S. government that want to do things. But the way our system works is that policy sets requirements, and requirements set funding. This is how we move our organizations. And I’m not sure anyone knows what their role is in countering influence online, or who would have the ball.

I made specific recommendations. They’re pretty easy, actually. You know, the FBI should look at investigations of hacks now for how they might be used for influence later; that’s an inoculation strategy. DHS and the State Department (DHS at home, State abroad) should refute falsehoods almost immediately. We did this in Iraq, actually, against terrorists, and we’re pretty good at it. And in the intel community we have to decide what our strategy is around information and influence. But no one really knows, or at least I don’t know, who’s in charge. And it’s been a year since this happened now, and I haven’t seen a lot of gears moving in any direction at this point.

KORNBLUH: Well, let’s come back to that.

But, Renee, I want you to take us to the private sector and talk about the platforms. They’ve been doing a lot. Some of them have been doing more than others, I think, but putting in more people to review accounts, to review posts, to take away the monetary incentive for fake news. Talk to us a little bit about where the incentives of some of these platforms are, and the extent to which they have the incentive to clean it up, versus the tension between their economic model and cleaning up disinformation.

DIRESTA: Sure. So I want to first piggyback on this idea that no one’s in charge, because that’s the problem in the private sector, too. These platforms are competitive with each other, and they all monetize attention: their business models are based on attention. They’re selling ads. They want to keep you on their platform. Each one wants you on its own platform because it wants to be the one to serve you the ads, because that’s how it earns revenue.

So there’s a fundamental business case underlying why these types of things happen. One of the fundamental challenges here is that doing things to make you happy on the platform is such a core part of the business, and that’s why it’s so personalized. You see the things that are likely to make you happy, that are likely to keep you on the platform. And so when that intersects with an influence operation, it’s very carefully tailored. Influence operations have been around for decades, as my co-panelists have said, but the vectors of dissemination have changed. The ability to personalize that content has changed. The ability to target individuals with exactly what is going to work for them has changed, based on a corpus of data that the platforms have accrued about each one of us over years and years of use and feedback loops: what did you click on? That tells me something about you. If it doesn’t tell me something about you directly, I have a correlation to someone who is like you, so I can target you through what is known as a lookalike audience or a custom audience. Anybody running an ad or growing an audience on a platform like Facebook is reaching people who are predisposed to be interested in the content. That’s why it’s such an effective means of delivery. So that’s the base framework.

So here’s the problem. Ten years ago now, the concept of the filter bubble became popular: the idea that the platforms were showing people what they wanted to see, and that was creating these information silos. When you look at what has to be done to break people out of that, or to say these people are more likely to be predisposed to disinformation content, the platforms are not coming back and telling people who viewed this content that they were targeted. So right now a lot of the conversations we’ve been having are: What are the responsibilities of the platforms? Can we ask them to act against their own economic interests in the interest of society? And that was a theme underlying the hearings.

The way an information operation is conducted on social networks, though, is that it’s not unique to one network. So if you wanted to seed a story, you might start by writing an article, creating what’s known as a content farm or a blog. You know, anyone can write anything on the internet. This was supposed to be a great advantage, because we all have the opportunity to make our voices heard and to get information out there. But I can write something on my blog and post it to Reddit, and if I post it to a subreddit of interested people, maybe they upvote it. And I can do this with tons and tons of content, and I can see what gets lift. I can see what resonated with the audience I’m trying to reach, because I can see the ranking of what’s moving up the page. It’s being voted on by the readers; they’re endorsing it. So then I can take the content that plays really well and move it over to Facebook, and on Facebook I can use an ad campaign to grow an audience. But once I have some audience, at that point I achieve what’s called organic lift. And that’s the idea that, rather than having to pay to serve content to somebody each time, my hundreds of thousands of people who have begun to follow my page or who have joined my group are going to push that content out for me.

So Facebook has a much larger audience than Reddit. So what I’ve just done is I’ve tested the content on Reddit. Perhaps I’ve tested it on 4chan. Perhaps I’ve tested it on Imgur. There’s a number of these kind of platforms where I can see the reaction of the community I want to go for. Then I can move it to Facebook, where I can have people begin to do the sharing work for me, which actually brings down the cost to run one of these campaigns because at this point I have hundreds of thousands of people disseminating my propaganda for free.

Also, I can take it to Twitter. And what I’m going to use Twitter for (because Twitter also has a much smaller audience than Facebook) is its high concentration of media users. There’s a ton of journalists on Twitter. There’s a ton of influencers on Twitter, with millions and millions of followers. Donald Trump is an excellent example: 45 million, I think. At that point, I can kind of cross the Rubicon. If I can make something trend on Twitter, or if I can get a high-value influencer to retweet my content and my article, I can at that point pretty much guarantee that there will be some media coverage of it.

And the media coverage might debunk it, but it doesn’t matter, because even the act of debunking it keeps it in the public consciousness. The media can cover it uncritically, which we’ve seen happen. We call them hoaxes, but that’s a very quaint term; really, we should be using the term disinformation campaigns. Or, if the media doesn’t cover it, I can start a conspiracy theory about why the media didn’t cover that trending topic. So I’m going to win either way if I can get a sufficient amount of attention on Twitter.

And so this is the way that somebody interested in conducting a campaign will do it, in a cross-platform strategy. And there is no one really responsible for shutting it down, because the platforms, I am told, have some kind of backchannel information sharing, but we didn’t see anything remarkably effective in 2016. And we have continued to see some interesting hoaxes take place, you know, with regard to the Alabama election right now, ongoing.

KORNBLUH: So, Thomas, talk to us about this concept of organic and how bots play in. You know, what’s the role of the bots in what Renee was describing? And what’s the—what’s the nature of the problem?

RID: Bots are certainly an important problem. But before we talk about some of the more technical aspects of amplification operations on social media, I think we should take a small step back and speak about the role of the press and the role of journalists for a moment.

Because, again, historically, there’s this great line from Rolf Wagenbreth, the head of Stasi disinformation for more than 30 years. The Stasi was brilliant at this, better than the KGB, because its main target was West Germany, so they spoke the language. They were close to their targets; they literally could sometimes listen to them. You know, they could make German jokes and West Germans would laugh at them, as much as Germans joke. (Laughter.) And Wagenbreth had this line: “What would the active measures operator be without the journalist?” So the journalist is an integral part of disinformation.

And we saw that at play in the 2016 U.S. election interference in a new way. Let’s just tease out how it was new. Active measures back in the day, like the particularly bad one from 1960 I mentioned, were artisanal. You needed to know what you were doing; they required craftsmanship from intelligence operators. Today, or rather in 2016, the active measure was very much industrial scale. They hacked a lot of data and put it into the public domain through WikiLeaks and other fronts. And then it was the journalists of the victim society, of the victim country, in this case the United States, who actually created the value in terms of the damage done, because they went in, looked for the gems and the nuggets, reported them out, and ignored the source.

Now, every journalist, or everybody really, who thinks, well, now we certainly understand the risks, we wouldn’t make the same mistake again: I think we all have to think again. Two weeks ago a remarkable little thing happened in Germany. Der Spiegel ran a story about Germany’s U.N. ambassador, the former national security advisor, Christoph Heusgen. And Der Spiegel reported that Heusgen had sent an email to the U.N. secretary-general asking, in a somewhat improper way, to create a job for his wife. OK, he probably shouldn’t have done that. But Der Spiegel quotes from that email, which Heusgen sent to the U.N. secretary-general’s chief of staff. And Der Spiegel doesn’t say where they got the email from.

Now, the next day, anonymous German sources tell another German newspaper: whoa, wait a minute, we know that APT28 (and they explicitly identify that as Russian military intelligence) has hacked U.N. systems. They found the email and gave it to a Spiegel journalist. And he ran the story, for the second time; he had already done that a couple of months prior, knowing that he was probably advancing the interests of a Russian intelligence agency. And I think we underestimate the rough, competitive nature of journalism in a crisis that is actually created by these social media companies. So we have the perfect storm for active measures.

KORNBLUH: Yeah, Clint, would you pick up on that? And, you know, sometimes it’s the competitive forces. Sometimes it’s ignorance, right? And sometimes they feel they have no choice. Some things become, you know, trending; the bots are pushing it; the president has talked about it. What can be done? And if there’s a limit to what our government can do, civil society in other countries is taking measures to push back, isn’t it?

WATTS: Right. So, I mean, he’s exactly right. Competition is one of the motives that makes it super easy to get active measures to work. The other one is fear. If you can scare a population, which the Russians, and the Soviets before them, were very smart about doing with calamitous messages; you hit them with fear, then you load up a political message right behind it, and they’re more likely to fall for it as well. And you see that with the Benghazi conspiracies that would be pushed around, some of the things that we observed in the social media space. And people would grab them. Very few, oftentimes, but it only takes a couple. And those with more followers, those that are the key mavens in their social media networks, can spread it much more quickly.

I think there are a few things that we need to think about. The internet and anonymity: everyone comes to the internet or social media with the best of intentions, and those with the most resources, time, and worst intentions ultimately take control of it. I mean, you can look at criminals and hackers. What happened to Anonymous, by the way, and LulzSec? Weren’t they going around the world making us all transparent and free? Anybody wonder what happened to those guys? You know, the big and the powerful ultimately come to learn how these things work. And if you aren’t under the rule of law, if you don’t have to worry about civil liberties, if you don’t have to worry about a free press checking you, you’re going to use this system. And it’s happening around the world today. I think Myanmar is a great case study of how this has been duplicated within a year. All political parties will do this over the next two to three years if they don’t feel constrained. And I think we’re seeing this play out in elections even today.

So, things we can do. One is authenticity of authorship: is it a real person behind a social media account? There are ways we can protect identity, there are ways we can protect anonymity, but there are public safety factors. We always say the First Amendment doesn’t protect your right to yell “fire” in a movie theater, right? We saw disinformation networks pumping conspiracies around: hey, JFK has been evacuated; maybe it’s a terrorist attack; maybe someone was shot; maybe it was this. Where’s the truth in that? It never comes back. People believe the first thing they read, and it’s very hard to refute those things. So there is a public safety component to this that goes well beyond just the political component.

The other thing is, how do you deal with the news issue? The social media companies initially jumped out to try to do factchecking. That was always going to be a giant waste of time: I can make fake news way faster than you can check it. If you want to stop an artillery barrage, you silence the guns. And to do that, you have to go after the outlets that are mostly producing this sort of information. So we had talked about a rating system, essentially nutrition labels for information, which would be kind of like sweeps for television. And it seems like Google, and maybe Facebook, with some of the media companies, are now on board. They’re at least trying to come up with a system to figure out who’s doing 80/20. Maybe if you’re a mainstream outlet and you’re doing 80/20, the rating will hurt you, and that would be OK too.

The idea is to improve everyone’s journalism and reward those that are doing good journalism over time. And this will prevent those fake news outlets that we talk about, which kept popping up, from popping up so quickly and gaining so much traction. We were tracking, into ’14, ’15, and ’16, the growth of outlets that would suddenly pop up in Eastern Europe and then want to talk about how the Federal Reserve was terrible and should be destroyed and gotten rid of, all day long, or in the middle of the night, wherever they were writing from. So how do you stop that? You’ve got to put some sort of metric or challenge on it. But ultimately it comes down to public education around understanding information sources, and we’ve got to put it back on the consumer.

That’s why I like the nutrition labels idea. Make the consumer decide. Don’t block the content from them. Don’t squash the outlet. If they want to write garbage and someone wants to read garbage 90 percent of the time, then fine. It’s like your crazy uncle who sends you the weird emails all the time, and you go: uncle, go to this factcheck, this is a false story. So let’s push it back to them, and let’s empower them. We have had a public come into social media that was never reading newspapers. Do you understand how this happens? People have jumped over, and they’ve gone from assessing news from their friends to assessing a thousand inputs a day from social media.

This is a huge mental leap, and we are going to fail. Everybody falls for fake news once in a while. The more real the medium, the more you will fall for it. So we’re seeing now fake audio and fake video coming out, which will make this even more dynamic. So we’ve got to inform our public and help them make better decisions on their own and empower themselves, so they’re not pointing at social media companies, they’re not pointing at politicians, they’re not pointing at journalists. They’ve got to be responsible for their own information consumption. And that’s really what the Europeans have done over the last 50 years. They’ve been much better about educating their public on it.

And we’re seeing a major shift, even when you look at France and Germany. Part of the reason, among lots of structural reasons, is that they consume far less news from social media than from traditional news sources, and even from friends and family, if you look at the actual numbers. But that will change over the next 10 to 20 years. I mean, you’re seeing the younger generation moving to this. So I think it’s super important that we work on the public taking responsibility for themselves, but also help them understand the dangers.

You know, we had this with Consumer Reports and bad products in the ’70s and ’80s. If you buy the Chinese import that is 75 percent cheaper than the good it’s competing against, it might burn your house down. That could happen. But that’s on you; that was your choice, to purchase that. So informing the public and helping the public make better decisions is, I think, something that’s good all around for a country.

KORNBLUH: So on the nutrition labels or the factchecking: those are some of the ways in which the platforms have asked journalism or, you know, others on the outside to find their problems and help them correct them. To some extent, I keep thinking of your expression, artisanal. That feels sort of artisanal, whereas the bad stuff is coming at a much faster, more industrial rate. And, Renee, I just wonder what ways the algorithms can be used to fight back. People keep talking about this. How can the algorithms be used (not to substitute for public education; obviously we need to do that) to bat back some of the more dangerous things, given the First Amendment protections?

DIRESTA: You know, it’s challenging, because algorithms are written by people, and so there are biases inherent in the algorithms. One thing that comes to mind when you ask the question is Facebook’s recommendation engine. The recommendation engine is, as I said a little earlier, designed to serve you things that you want to see, so that you stay on Facebook, so that it can continue to drive engagement. Here’s a very specific example. If you are prone to conspiracy thinking, actually, the greatest predictor of belief in a conspiracy is belief in a different conspiracy. It’s well documented in the psychological literature.

So if you like a page on chemtrails, or you like an anti-vaccine page, Facebook’s recommendation engine actually takes that as an input and begins to serve you content related to other conspiracies. And one of the things that we saw in late 2015, early 2016, was Facebook’s recommendation engine recommending Pizzagate (the conspiracy that Hillary Clinton ran a vast underground sex ring out of a D.C. pizza place) to anti-vaxxers and chemtrail believers and, you know, these sorts of things. So it’s taking people who have a belief in pseudoscience and health-related conspiracies and then pushing them down that rabbit hole into antigovernment conspiracies, or other bizarre beliefs: the moon landing was fake, 9/11 was a hoax, the kind of truther community. So there’s this weird intersection. And it’s actually because the recommendation engine is serving that content to people.

So this is an interesting problem, because from a Facebook business standpoint it’s giving people what they want to see. But this is where we ask the question, and one of the conversations happening a lot in the Valley right now is: what’s the ethical design there? There’s a concept called choice architecture. If you show hungry people the doughnuts first versus the salad, they’re going to eat the doughnuts. If you put the salad out there first, they’re more likely to make the choice that’s potentially better for them from a health standpoint. So we think about: what are the unintended consequences of the algorithms? How are we thinking about what we’ve created, and might we make more ethical decisions that don’t necessarily negatively impact profit, but do things that are better for people? So this is a kind of undercurrent in the Valley right now.

I think, you know, it’s not censorship to not suggest some of this content. If someone wants to go to Facebook and type in Pizzagate and join Pizzagate groups, it is Facebook’s decision what remains on its platform, under First Amendment protections or, you know, information sharing, information availability. But when you make the decision to serve something up, that’s a proactive action by a platform. And this is where we could potentially see the platforms make some design decisions that could have quite a powerful impact.

RID: Can I tack on a comment there?

KORNBLUH: Yeah, sure.

RID: So it’s a comment on Twitter. And some people who follow the abuse of Twitter by bots have wondered about a design decision—why has Twitter not done this? Just an example, to make it very concrete. Some of you here in the room may remember when Twitter had egg profile pictures by default. You know, the eggs, there was this joke about eggs, and usually eggs didn’t provide interesting content. So you could opt out of eggs for a while. When you signed up for a new Twitter account, you could tick that box saying: I don’t want people in my feed that still have an egg picture as their profile picture. That was possible for a while.

Now, why is it not possible to opt out of bots? It’s possible to opt out of eggs. Why is it not possible to opt out of bot traffic? Because Twitter claimed in these hearings several times that they have sophisticated machine-learning mechanisms in place that can automatically recognize bots. So why don’t they give you this—the opportunity to click—you know, to tick a box and have no more bot traffic? I’d say it’s probably because they would then, you know, cut down their entire active user base by doing that, by a significant order of magnitude.

DIRESTA: The notion of opt in versus opt out is quite profound. We’ve seen it outside of the digital world in organ donation, right? Do you voluntarily opt people in and make them decide not to participate, versus making them check the box? So this is an interesting thing with bots. And I will say, with Twitter the blue checkmark accounts—when I got my blue checkmark, which is a verification marker, I was scrolling through the new settings and it had something that said, turn off low-quality accounts. And I thought, oh my goodness, they’ve had—this has been available the entire time. (Laughs.) So they have a sense of what is a low-quality account, and they’ve given—you know, blue checkmarks used to be only for famous people. And they’ve given celebrities and famous people the opportunity to not see them—(laughs)—for years. And that’s a decision that rather than creating this pleasant experience for everyone, that’s something that really took years to get to, the idea that maybe people would want to opt out of bot content, so.

KORNBLUH: Well, let’s open it up to members for questions. I want to remind everyone that this is on the record, and ask you to wait for the microphone, speak directly into it, stand, state your name and affiliation.

And I think we have a question right here.

Q: Thank you very much. Jill Dougherty from the Wilson Center.

I wanted to ask—and I don’t really care who answers it—but this controversy of having RT and Sputnik register as foreign agents. You know, the rationale behind that, obviously, is a law that was passed in 1938 to protect Americans from propaganda by the Nazis. And I’m just wondering whether that type of law really has any relevance today? Because how can you protect people against something that every minute something is coming into their box from one account or another? Is that law obsolete? And what’s your opinion on forcing RT and Sputnik to register as foreign agents? Thank you.

WATTS: Do you want to go first?

RID: Go for it, yeah.

WATTS: I mean, it’s great that they did that. It won’t affect anything, you know, that I see on social media. Most people that have sent me RT, and this has happened quite a bit, this is—I mean, in 2015 we were—I was receiving Russian propaganda from friends who were then arguing with me that I didn’t know what I was talking about. So I was like, OK, I’m glad in Missouri you don’t know what RT is. Do you know what RT is? Yeah, it’s RT. Well, like, OK. (Laughter.) I mean, people don’t assess their sources now, right, because you trust your friends and family who send you things more than you trust someone else.

So part of RT’s methodology, which was very brilliant, was, hey, we can’t beam in satellite television into every home. But we can put stuff on YouTube. And then we can have our producers and reporters share it with likeminded people. So by the time it moves along, you know, you don’t know where it came from. And this is part of the problem, regardless of RT or Sputnik News. They will just say, oh, it’s all propaganda. It’s your propaganda. NBC, CNN, Fox, it doesn’t—oh, that’s propaganda. It’s all propaganda. Which is, oh by the way, very much the Russian world of information in Russia. It’s your PR, their PR.

So we’ve lost that sort of bearing about reporting versus opinion and fact versus fiction. That has sort of gone sideways. And I don’t think declaring a source now—I think it’s way too late—as propaganda really helps the public. I don’t think they’ll know even—they’ll have read no story that said that RT or Sputnik News had to register. And even when they receive it, as long as it appeals to their preferences, they’re going to consume it. And so it’s good that we do that just so that there’s awareness around, OK, this is a state-sponsored news outlet. And there are many other state-sponsored news outlets from around the world. You know, we see this with all authoritarian regimes.

But how the public consumes, as long as it makes them happy they’re going to keep filling their belly and their noggins with whatever you keep feeding them on social media. And so any outlet, whether it’s U.S. or overseas, knows that’s the formula really for their content dissemination.

KORNBLUH: I think we have one back there.

Q: Hi. I’m Craig Charney of Charney Research.

Unlike most of the people here, who I think come from the foreign policy or tech communities, we do survey research for campaigns and marketing, as well as foreign policy issues. We worked in public diplomacy a decade ago. Now we’re working on these issues.

Two questions: One just came out of Clint’s comments. You know, the reason why RT looks so good, and so professional, and persuasive, is because it’s not designed in Russia. Their content is designed by Ketchum in New York, one of our best PR agencies. So one question is—

KORNBLUH: I’m sorry, I’m going to have to limit you to one question.

Q: OK. Well, I’ll stick with one then, since I started it, would it make sense to oblige American companies and organizations who are professionally assisting foreign influence operations to declare themselves foreign agents?

WATTS: Yes. I mean, that’s a simple answer for me. We haven’t put boundaries around it. The reason Russian active measures worked and Soviets’ didn’t is three parts. One is analogue versus digital. You can just do it a lot faster in the digital space. And I shouldn’t say it didn’t work. In the analogue space, they had great successes too, but it took much longer. The other part is what the Russians have figured out for Americans is that too much information is worse than no information. So they’ve taken the envelope and they’ve sort of opened it up, and then they’ve saturated. They’ve gone from we’ll try to control all information to I’ll bomb you with so much information you don’t know what’s true or false, which is very brilliant.

You know, the other part of why it works is because their economic—there’s enough economic openness that you can actually run a ground lever along with the virtual. So this is what Americans completely miss in all of our—we love our social media, so we keep talking about social media. The reason it has worked is because they take physical things, real-world things, facts, and then they use that and either manipulate the truths or other falsehoods to push the conspiracy. There are physical actors. Just like you mentioned, they have physical partners that are also helping them.

And if we’re going to be upset about this sort of influence, then we’ll have to look at, how do we characterize agencies like that if it is starting to break up our democracy? That’s really what is starting to happen now. We’re seeing divisions at such a level that I think we’re much closer to real breaks in the United States than people really understand at this point.

But if you have someone doing that kind of stuff, then the question will be, what if U.S. companies are doing it on behalf of the United States overseas or the U.S. is enlisting it? So it’s a two-way street. So as a policy question, it’s going to get super, super complicated, I think.

KORNBLUH: Thomas, did you want to add to that?

RID: I would just add a cautionary note. One of the things that makes this country so great and sort of still extremely attractive for the rest of the world, let’s just spell this out for a moment, is the First Amendment and the strength of the First Amendment.

So as soon as we start messing with the notion that we can declare certain forms of speech because they come from foreigners in a way that could be hostile, that are not OK anymore, you’re sort of crossing a line somewhere. I just would like to sort of, you know, call attention to that.

WATTS: Can I add to this?

KORNBLUH: Yeah. And I think of the—one of the lines that has been drawn, though, is on foreign interference in elections.

RID: Right.

KORNBLUH: You know, because that’s different than foreign speech.

RID: Fair enough, yeah.

KORNBLUH: But I do—but you’re right to draw that.

WATTS: That’s exactly what I wanted to zero in on, is we’re talking about an attack on the United States, it was an information attack, and so in that context then you have to look at repercussions that are about that attack.

What ultimately will come out is this counterinfluence thing, is the U.S. isn’t going to be able to do much of anything to counterinfluence. And so you’re going to have to pull a different strategic lever against an adversary. The U.S. should never repeat what was done to it to another country. I would be very upset if we hacked into thousands of people’s emails and dumped their personal information, of any country, out on the internet. I don’t want to see false journalist stories. I’ve seen some of that nonsense talked about the past week in the news, planting of news stories, you know, discrediting outlets. I’ll be very upset in our country if we do it.

There are some simple things we could do against Russia if we wanted to go in a counterattack, but it’s not to do their playbook back against them. It undermines our values, it hurts us as a country, it violates free speech.

And so with that, I think my answer of yes was we just suffered a major information attack that affected our elections, and we have got people in the United States that don’t believe their vote counted still. We just heard that in the previous panel. So we’ve got to come up with some sort of response.

KORNBLUH: I’m sorry, we’re going to have to move on.

We have a question right here, the lady in blue.

Q: Lilia Ramirez with Smiths Group.

It appears to me that we’re going to have to be more deliberate in our education system so that the youth, they’re more critical thinkers. So what would you recommend we do to improve the public school system, because I don’t believe that it’s going to be helping the situation?

WATTS: Well, I can speak to this a little bit. I mean, are we going to do public education anymore? I’m not really sure in this country, right? We’re going in some weird directions on public education. I went to a public school both, you know, in high school and for college, I went to the Military Academy. But, you know, one of the classes that we teach in the intelligence community ironically is called evaluating information sources. It’s a set curriculum, and it’s really good. And it was super helpful for me, you know, when I had it.

There are ways you can actually water that down and boil that down, you know, for a high school curriculum that I think would be super valuable. And the European countries have done this in a lot of ways. I believe it’s Sweden that has done this sort of thing, which is helping their people understand or think about ways to evaluate information sources without going into political biases and getting crazy with it. It would be hard to implement in the United States because of our state, you know, delivery of education services.

KORNBLUH: Renee, I think Italy just did it. Are the platforms doing—

WATTS: Who did?

DIRESTA: Italy. Italy just—

WATTS: Italy, yeah.

DIRESTA: —put out a curriculum specifically for this. I don’t know the specifics, but it was announced a couple of weeks ago.

KORNBLUH: I don’t know if you all remember. I remember the whole public education around subliminal advertising, which turned out studies later said was not really such a big threat, but we all were educated about it and scared by advertising for a long time.

Yeah.

Q: Alan Raul, Sidley Austin.

The discussion of education and Mr. Rid’s comment on what makes America so attractive to the rest of the world, First Amendment, free speech, really raises the issue. Maybe instead of focusing on educating the public about evaluating information, we should reemphasize teaching the values that make America great and are fundamental principles. We’ve been distracted by conspiracy theories, exaggerated news stories and so on, but what we don’t hear is promotion in the U.S. of First Amendment and due process and, you know, first principles, Constitution. We’ve moved away from that and teaching civic education. Maybe that’s what we need to reemphasize in order to bring the country back to, you know, kind of a reasonable appreciation of information.

WATTS: I think Europeans are, they are doing it, the European countries are. I mean, I just think it’s absolutely unreal that in 1980—you know, I watched the Olympics, you know, against the Soviet Union, and then in Charlottesville they’re chanting Russia is our friend. And, you know, it’s just the weirdest, like, 30 years’ transition. (Laughter.)

RID: One of the most fundamental—and this is almost a political, philosophical discussion to be had is about deletion. Twitter epitomizes this problem. We all have the same, probably the same intuition that Russian bots as well as presidents should not be able to delete tweets because it’s on the public record. We also have the same intuition that 16-year-olds who tweet something stupid should be able to delete tweets. How do you reconcile the two? That’s a fascinating question and I think we should pay more attention to it.

DIRESTA: Well, I can say that, as a researcher who looks at Twitter data, one of the things we were arguing about is the terms of service of the Twitter API, which say that if a tweet is deleted you’re supposed to no longer use it—which is why, ahead of the hearings, Twitter, you know, compiled and then released its list of accounts. But at the time it was made available to the public, and I believe to the Senate perhaps, they had already deleted all of the content.

So one of the things that we face as researchers is the platforms, they have a vested interest in not sharing that information. And so we’re trying to sort out things like this. You know, is there, you know—I’m saying, why do Russian bots have privacy rights? Because the justification for a right to be forgotten and the 16-year-old being able to delete her tweet is sort of a personal privacy thing. I’m saying these are fake accounts, these are manipulated accounts. It’s ludicrous to think that we are giving privacy, you know, privacy considerations to fake people. But that is the state of the conversation as it stands today.

KORNBLUH: Yeah, why don’t we go here.

Q: David Ensor of the G.W. Project for Media and National Security.

Panelists, Clint in particular, but all panelists, I guess my question right now on the topic of this panel is, can we trust Facebook and Google and others to get the problem that clearly emerged in the last election under some kind of control? Or, I mean, at what stage does there need to be regulation of our social media companies in order to prevent their platforms from being used to change the results of elections?

KORNBLUH: And I guess I want to add something that I feel like we haven’t talked about. We don’t want to be battling the last war. So in terms of both where the threat is coming from, it may not just be—may not just be Russia and, secondly, the different kinds of tools that’ll be used, what can—Renee, do you want to start? Like, what can social media platforms do and what should the government be doing?

DIRESTA: Sure. So I think with regulation you have a couple of different avenues. You have market-promoted regulation, which is where users get very, very angry and inspire the companies to change their behaviors to keep users happy. And that’s something that the media often helps push. Or you have self-regulation where the companies decide kind of as a consortium, as an industry, that this is something that’s worth their time. And then the third is government, which takes much longer. And I don’t think we’re going to see that happen by 2018, which is, you know, of course, a source of major concern for people who pay attention to this problem.

I think that, you know, we saw with ISIS a few years back—so this is not—Russia is not the first time that the tech platforms have had a disinformation and propaganda problem. It took several years to get the tech companies to kind of come together on the idea of creating this global internet forum to counter terrorism. I imagine you know a little bit more about it than I do and perhaps can speak to it.

But I think that was about three years from the identification of the problem and the request that something be done to this organization being stood up to do something. So in many ways, I think we’re going to be dependent on media or researchers and people putting out, you know, much like what we’re seeing with some of the disinformation around the Roy Moore campaign in Alabama, hey, you need to look at this, we need to get the story out there, we need to have Twitter responding to researchers rather than attempting to diminish and discredit the work that independents are doing right now.

Maybe you want to add to that.

RID: So bots and abuse are a threat to Facebook’s business model because Facebook is ultimately about authentic, human accounts. And as a result, Facebook is trying to tackle the problem, and they’re throwing money and people and resources at the problem. And I think they’ve made some right moves. They’re getting a lot of bad press for it, but they’ve made the right moves.

Twitter, the opposite applies to Twitter. For Twitter, bots are not a threat. They’re actually helping Twitter’s business model because they make it appear larger. So from Twitter, we can expect the opposite. In fact, I wouldn’t be surprised if some Twitter engineers have literally left Twitter and moved to Facebook to fix the problem at another company. So I think Twitter deserves right now a lot more attention and a lot more criticism than it is getting.

I will just highlight a thing that is technical, but I’ll put it in plain English, and I’ll use an analogy. Imagine the Economist or whatever, The New York Times decides, well, we should give our readers the ability to unpublish letters to the editor from our website. They could do that, right? Fair enough. That’s what Twitter is doing.

But Twitter is doing something else. Twitter is also saying we should give our readers the ability to unpublish letters to the editor, not just from the website of The New York Times, but also from the Library of Congress. And that is just not OK. If we have something on the public record from people who have chosen to put something on the public record for an effect, not necessarily the 16-year-olds, then they shouldn’t be able to remove the record from a nonpublic, sometimes a nonpublic, repository because the effect is that they make history and in fact the news editable as a result.

KORNBLUH: And you’re seeing that from foreign actors.

RID: How many—I mean, let’s just be—make this a little edgier. How many retweets that the @RealDonaldTrump account receives when he tweets about Russia, how many of the retweets are actually bots versus human beings, the retweets or likes? Answer, and it’s really an uncomfortable answer, answer is we don’t know and maybe Twitter doesn’t even know and couldn’t even find out as a result of these policies.

KORNBLUH: Because of the deletions.

WATTS: I’m not—

KORNBLUH: So what are—so that’s a policy idea. What other policy ideas?

You’ve talked about public education, nutrition labels.

WATTS: Yeah, I mean, I don’t really see—in terms of regulation for policy, let’s focus just on elections and politics, whatever the standard is for advertising on any other medium should be the same in social media. I don’t know why we treat it differently. That will help at least inform the public so they can make better choices, again, about what they’re consuming, they know what an ad is and where it’s coming from or how it’s being repurposed.

The other part is, we’ve seen political groups repeatedly use Russian disinformation in social media over and over again. They know that we’re emotional, we want to win, so, you know, that’s a big part of it.

I am not going to hold my breath for the social media companies to figure it out. And I’m not going to beat them up, either. They’re there to provide a service. And they’re a business. And when bad things start to happen, I expect them to move forward and try and make corrections. They’ve all been slow.

I’m, you know, overwhelmed by how terrorist videos are being taken down. This was an issue that emerged in 2005 and it only took us 12 short years to get really on top of this. So I have a very low confidence that the social media companies will save us.

With that, I would say both Google and Facebook have moved deliberately over the years to improve threat detection along with technical detection. So, I mean, there have been times where I’ve gone to social media companies and said, hey, here are thousands of disinformation accounts, and they go, yeah, we don’t care, AI, machine learning, we’ll sort it all out, they’ll just figure it out. We’ve got a big machine that’s so great, I’m much smarter than you. You know, look at me on my skateboard in the office or whatever. (Laughter.)

And so that sort of arrogance has gone away over the last 10 years or so and it has become, OK, I need to understand these threats, like terrorism or disinformation or whatever it might be, and they’ve got to pair that with the technologists. And I know that’s happened at Google and Facebook and they continue to expand that out.

At the same point, they can’t cover every issue in the world. So, like, who’s the person covering Myanmar at Facebook right now? I mean, this just emerged. So they’ve got to come up with a system where they can go out to people that study these issues and understand the problems and quickly put machine learning, AI and the technologists along with it.

And that used to be what my job was at the Combating Terrorism Center 10 years ago. You know, for terrorism we were pairing industry and research and academics with the government to come up with solutions. So they’ve got to do that a little bit better.

But ultimately, this problem comes down to leadership. So, you know, our country has to decide, and it doesn’t have to be elected leaders. It can be civil society. You know, what do we want? What do we want our world to be like? Just imagine this in 2020. Everyone adopts, every political campaign adopts the Russian playbook and uses it on social media. I’m not talking about a foreign influence operation. I’m talking about every country in the world saying, you know what I want to happen in America? The following: boom, bots, ads, doing it on scale. Now add domestic political parties onto it, every candidate running one of those. Guess who’s going to lose?

If you think you’re going to be able to run for an election as a person who’s, like, a schoolteacher or you’ve got a $25,000-a-year job and you’re going to run for elected office in the United States against a bot machine and political campaigns and, you know, parties, political parties? You’re insane. This will quickly become just those that have the resources, those that have the time to manipulate and shift the information the way they want.

And I think that’s what I’m—I know we talk about Russia a lot and I talk about it a lot. But I’m more worried, is this the world you want to live in where it’s just a cacophony of noise coming from social media? Because I think a lot of Americans will just walk away, they will—they’ll be apathetic and just say I don’t even want to participate in this.

KORNBLUH: There’s a woman in the back.

Q: Thank you. Alina Polyakova, Brookings Institution.

Clint, your idea on labeling, so this has been discussed a lot in all these various working groups. But my question to you is, this is kind of the Big Mac theory, right? That if you tell somebody something is bad for them, it’s actually going to change their behavior. There’s no evidence for this that’s compelling when it comes to actual nutritional labeling. So people are not eating less Big Macs basically because they know there’s 2,000 calories in them.

WATTS: Actually, they are eating less Big Macs.

Q: No, well—

WATTS: I mean, people are making—the decision is on the consumer. That’s what I want.

Q: But this is what I want to—

WATTS: I don’t care what they eat.

Q: Can I challenge you on this? Because it doesn’t seem the consumer responds to labeling is the point I’m making. And second of all, in the nutritional world, there are federal agencies that regulate not just nutritional content, but also products that can have some hazard to human life. So wouldn’t we need a similar agency to actually implement and punish—

WATTS: No government agency should do it.

Q: OK, so there’s no government agency, then why would labeling actually work when consumers don’t respond to labeling when it comes to other products?

WATTS: But they do. Let’s go to Amazon. You’re mixing a lot of different, you know, analogies together. Let’s go to Amazon in terms of rating systems, right? Does anyone buy the one-star-rated product with two reviews? Generally, no, right? So it’s a system that comes up over time. Will someone buy the product with one star and two reviews? Absolutely because it’s $5, right? So someone is going to buy it.

I’m not trying to win over the whole world, but I want people to have responsibility for the decisions they make about the information that they consume. And so we have told you, this outlet puts out 70 percent false information, 20 percent manipulated truth, 10 percent truth. This is where that outlet is based at. Did you know that this is a state-sponsored outlet or it’s an outlet that’s based in Bulgaria that suddenly popped up six months ago? Are you aware of that? That’s up to you if you still want to read it because you think they’re informed. Give them that information, it will chip away.

I have no doubt about it. If you rate that—look at Rotten Tomatoes. Rotten Tomatoes is another rating site where this has happened. Now actors are complaining that if they get bad Rotten Tomatoes reviews before anything comes out, you know they’re not getting a chance. It’s reversed almost on itself.

KORNBLUH: So I want to unpack what you’re saying, though, a little bit because you’re—

WATTS: It’s not just about putting a nutrition label on it. It’s about telling people here’s what the source has been putting out, reporting versus opinion, fact versus falsehoods, and this is a little bit about that outlet, now you make your own decision. That’s what I want.

KORNBLUH: So you’re talking about a couple of different things. One is more transparency—

WATTS: Yeah.

KORNBLUH: —so you actually know some context about the source. And obviously, that’s important because, otherwise, people wouldn’t be trying to be somebody they’re not. So that’s part of it.

WATTS: Well, let’s go back to this. So when you buy a newspaper, OK, in the analogue days, do you know something about the newspaper when you pick it up? Yes, you have it physically in your hand. The reason you get duped by news that’s shared with you is because, who does it come from? Your family and your friends, people you trust. So you take the trust of your family and your friends on your social media feed over the actual outlet that’s out there. You’re not really assessing the outlet, you’re assessing the story.

I want them to assess the outlet, not just what they’re getting from their family and friends. I want them to go, OK, I know my family and friends have strong opinions and they look at good news sources, but I’m also going to assess what the information source is that it’s coming from. That’s what I’m seeking.

KORNBLUH: And then the other piece, the nutrition labels, I mean, it seems like there is some efforts by the platforms to work with these fact checkers, like Snopes and others, and, thereby, get some assessment about whether or not an outlet tends to produce disinformation or fake news, and then that can be fed back in. Slow process relying on outsiders, but still—

DIRESTA: Yeah, so—

WATTS: Can I ask that to Renee?

This is the new effort, right, that they’re trying to do?

DIRESTA: But that’s actually an effort that seems to have been largely unsuccessful. There were some recent stories about it. So there’s a couple of components.

So one thing is the platforms like to keep costs down, and their business is to automate things. Any time you have a human component involved in a system—which in Facebook’s case of flagging fake news was relying in part on people reporting things—what you really get is these brigading and mass-reporting wars. So you have people deciding, I don’t like this thing; you have actually pages, you know, calling other members to action, saying you’ve got to go report this story, it says something unfavorable about us. So it turns into this mass nightmare of, you know, one army of opinions versus another army of opinions.

And I want to say, on Amazon, this is not really—hasn’t been written about quite as widely, but Amazon, the battle for reviews is actually kind of the new SEO because Amazon shapes consumption. Amazon’s search bar—

KORNBLUH: Why don’t you spell out what SEO is.

DIRESTA: Oh, search engine optimization. Sorry. It’s the way—it’s little tricks that you could do to make your website rank on the first page of Google’s search results. Now we see it all over Amazon, because if I go type in blender, I want—you know, I’m going to likely pick something from the first page of Amazon’s search results. And so doing everything I can to get my blender, you know, to that first page of results is potentially millions of dollars of revenue or not.

So Amazon has a very serious review manipulation problem that they are somewhat aware of, but also relying on things like algorithms to try to figure—to identify instances of brigading where people will say, OK, you know, I’m going to send an email out to my mailing list asking everyone to go leave a five-star review on my blender. This is a big problem.

So this is where we get to the—if there is a crowdsourced element of it, it is being manipulated. And this is—it’s very interesting because we all thought that crowdsourcing was going to be this magical way where people would participate and we would—we would take the wisdom of the crowd and we would turn that into really surfacing the best content, the best products, the things that you really needed to see. And it’s—this is the problem with algorithmic manipulation, is it’s manufacturing consensus, it’s gathering critical masses of people together in a manipulative way that fundamentally shapes, creates a false notion of how popular something is, whether that’s a product or a story or a person.

A lot of these bot accounts, these fake accounts, have hundreds of thousands of followers. They look very legitimate, and so people don’t dig in. So I think—

KORNBLUH: I’m going to have to—

DIRESTA: I know, sorry. So what I was going to say was the platforms really do have to, at this point, take it on themselves to say we are going to have an opinion, we are going to hire internal people, and we cannot relegate this task to crowdsourcing and the assumption that—

WATTS: By the way, I—

DIRESTA: —people are going to—

WATTS: —don’t want it to be crowdsourced, just to be clear.

DIRESTA: Yeah.

WATTS: Information, consumer driven.

KORNBLUH: I’m being told that one of the Council’s sacrosanct rules is that we have to end on time.

DIRESTA: End on time.

KORNBLUH: So I want to make sure that we honor that. But, you know, I think we’ve fleshed out a lot of the challenges here, and hopefully we’ll continue the conversation about how to move forward because we obviously have to come up with some solutions.