from the challenges-of-our-time dept

It should be no surprise that I'm an unabashed supporter of free speech. Usually essays that start that way are followed by a "but..." -- and that "but..." undermines everything in the opening sentence. This is not such an essay. However, I am going to talk about some interesting challenges that have confronted our concepts of free speech over the past few years -- often with regard to how free speech and the internet interact. Back in 2015, at our Copia Summit, we had a panel that tried to lay out some of these challenges, acknowledging that our traditional concepts of free speech don't fully work in the internet age.

There are those who argue that internet platforms should never do any moderation at all, and should just let all content flow. And while that may be compelling at first pass, thinking it through shows it's unworkable for a very basic reason: spam. Almost everyone (outside of spammers, I guess) would agree that it makes sense to filter out/moderate/delete spam. It serves no useful purpose. It clutters inboxes/comments/forums with off-topic and annoying messages. So, as Dave Willner mentioned in that talk back in 2015, once you've admitted that spam can be filtered, you've admitted that some moderation is necessary for any functioning forum to exist. Then you get to the actual challenges of when and how that moderation should occur. And that's where things get really tricky. Because I think we all agree that when platforms do try to moderate speech... they tend to be really bad at it. And that leads to all sorts of stories that we like to cover of social media companies banning people for dumb reasons. But sometimes it crosses over into the absurd or dangerous -- like YouTube deleting channels that were documenting war crimes, because it's difficult to distinguish documentation of war crimes from terrorist propaganda (and, sometimes, they can be one and the same).

An even worse situation, obviously, is when governments take it upon themselves to mandate moderation. Such regimes are almost exclusively used to censor speech that should be protected -- as Germany is now learning with its terrible and ridiculous new social media censorship law.

But it's not that difficult to understand why people have been increasingly clamoring for these kinds of solutions -- either having platforms moderate more aggressively or demanding regulations that require them to do so. And it's because there are a ton of really, really crappy things happening on these platforms. And, as you know, there's always the xkcd free speech point: the concept of free speech is about protecting people from government action, not requiring everyone to suffer through whatever nonsense someone wants to scream.

But, it is becoming clear that we need to think carefully about how we truly encourage free speech. Beyond the spam point above, another argument that has resonated with me over the years is that some platforms have enabled such levels of trolling (or, perhaps to be kinder, "vehement arguing") that they actually lead to less free speech, in that they scare off or silence those who also have valuable contributions to add to various discussions. And that, in turn, raises at least some questions about the "marketplace of ideas" model of understanding free speech. I've long been a supporter of this viewpoint -- that the best way to combat so-called "bad speech" is with "more speech." The belief is that the best/smartest/most important ideas rise to the top and stomp out the bad ideas. But what if the good ideas don't even have a chance? What if they're silenced before they are even spoken, by the way these things are set up? That, too, would be an unfortunate result for free speech and the "marketplace of ideas."

In the past couple of months, two very interesting pieces have been written on this that are pushing my thinking much further as well. The first is a Yale Law Journal piece by Nabiha Syed entitled Real Talk About Fake News: Towards a Better Theory for Platform Governance. Next week, we'll have Syed on our podcast to talk about this paper, but in it she points out that there are limitations and problems with the idea of the "marketplace of ideas" working the way many of us have assumed it should work. She also notes that other frameworks for thinking about free speech appear to have similar deficiencies when we are in an online world. In particular, the nature of the internet -- where the scale, speed, and ability to amplify a message are so incredibly different from basically any other time in history -- enables a sort of "weaponizing" of these concepts.

That is, those who wish to abuse the marketplace of ideas by aggressively pushing misleading or deliberately misguided concepts are able to do so in a manner that short-circuits the marketplace itself -- all while claiming to support it.

The second piece, which is absolutely worth reading and thinking about carefully, is Zeynep Tufekci's Wired piece entitled It's the (Democracy-Poisoning) Golden Age of Free Speech. I was worried -- from the title -- that this might be the standard rant about free speech somehow being "dangerous" that has become tragically popular over the past few years. But (and this is not surprising, given Tufekci's careful consideration of these issues for years) it's a truly thought-provoking piece, in some ways building upon the framework that Syed laid out, noting how some factions are, in effect, weaponizing the very concept of the "marketplace of ideas" -- insisting they support it, while undermining the very premise behind it (that "good" speech outweighs the bad).

In particular, she notes that while the previous scarcity was the ability to amplify speech, the current scarcity is attention -- and thus, the ability to flood the zone with bad/wrong/dangerous speech can literally act as a denial of service on the supposedly corrective "good speech." She notes that censorship used to work by stifling the message: traditional censorship blocks the ability to get the message out. But modern censorship actually leverages the platforms of free speech to drown out other messages.

The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself. As a result, they don’t look much like the old forms of censorship at all. They look like viral or coordinated harassment campaigns, which harness the dynamics of viral outrage to impose an unbearable and disproportionate cost on the act of speaking out. They look like epidemics of disinformation, meant to undercut the credibility of valid information sources. They look like bot-fueled campaigns of trolling and distraction, or piecemeal leaks of hacked materials, meant to swamp the attention of traditional media.

These tactics usually don’t break any laws or set off any First Amendment alarm bells. But they all serve the same purpose that the old forms of censorship did: They are the best available tools to stop ideas from spreading and gaining purchase. They can also make the big platforms a terrible place to interact with other people.

There's a truth to that which needs to be reckoned with. As someone who has regularly talked about the marketplace of ideas and how "more speech" is the best way to respond to "bad speech," Tufekci highlights where those concepts break down:

Many more of the most noble old ideas about free speech simply don’t compute in the age of social media. John Stuart Mill’s notion that a “marketplace of ideas” will elevate the truth is flatly belied by the virality of fake news. And the famous American saying that “the best cure for bad speech is more speech”—a paraphrase of Supreme Court justice Louis Brandeis—loses all its meaning when speech is at once mass but also nonpublic. How do you respond to what you cannot see? How can you cure the effects of “bad” speech with more speech when you have no means to target the same audience that received the original message?

As she notes, this is "not a call for nostalgia." It is quite clear that these platforms also have tremendous and incredibly important benefits. They have given voice to the formerly voiceless. There are, certainly, areas where the marketplace of ideas functions, and the ability to debate and have discourse actually does work. Indeed, I'd argue that it probably happens much more often than people realize. But it's difficult to deny that some have weaponized these concepts in a manner designed to flood the marketplace of ideas and drown out the good ideas, or to strategically use the "more speech" response to actually amplify and reinforce the "bad speech" rather than correct it.

And that's something we need to reckon with.

It's also an area where I don't think there are necessarily easy solutions -- but having this discussion is important. I still think that companies will be bad at moderation. And I still think government mandates will make the problems significantly worse, not better. And I very much worry that solutions may actually do more harm than good in some cases -- especially in dragging down or silencing important, but marginalized, voices. I also think it's dangerous that many people immediately jump to the platforms as the obvious place to put all responsibility here. There needs to be responsibility on the part of end users as well -- to be more critical, to have more media literacy.

And, of course, I think that there is a space for technology to potentially help solve some of these issues as well. As I've discussed in the past, greater transparency can help, as would putting more control into the hands of end users, rather than relying on the platforms to make these decisions.

But it is an area that raises some very real -- and very different -- challenges, especially for those of us who find free speech and free expression to be an essential and core value. What do we do when that free speech is being weaponized against free speech itself? How do you respond? Do you need to weaponize in response and flood back the "bad speech" or does that just create an arms race? What other ways are there to deal with this?

This is a discussion that was started a while back, but is increasingly important -- and I expect that we'll be writing a lot more about it in the near future.

from the whoo-boy dept

Important Update: Michael Best has now come out and said that it was actually he who uploaded the files in question, which he got from the somewhat infamous hacker Phineas Fisher (who, among other things, hacked the Hacking Team). Through a somewhat convoluted set of circumstances, the files appeared to be associated with the Wikileaks leak when they were not -- and then basically everyone just started calling each other names:

The files were obtained by Phineas Fisher, who was the source. As far as I can tell, Fisher did not intend to dump all of the files publicly, and Fisher has not indicated that he meant to give any of the files to WikiLeaks to publish. However, they received a partial set of the documents and decided to publish them.

Following the WikiLeaks release of the partial set, Fisher decided to release his set. Since the files came from a known source (Fisher has been responsible for many high profile hacks, including the hack on the Hacking Team), I used the torrent file that the files were released through to create a bittorrent instance on the Internet Archive’s server. The server proceeded to download the torrent and create the item that was linked to by WikiLeaks.

After the personal information was discovered, the AKP files were removed from the Internet Archive’s server.

Although I wasn’t aware that it was included in the release at the time, I accept my responsibility in distributing the personal information. The explanation as to how it happened is not an excuse for the fact that it did happen.

Of course, in the meantime, there's been a lot of nastiness, with Wikileaks and its supporters unfairly claiming that Zeynep Tufekci was an agent for the Erdogan government -- which is insane if you know her at all. As Best notes in his piece, it's entirely reasonable that Tufekci assumed Wikileaks was responsible for the files (she only accused them, accurately, of promoting the files, not uploading or hosting them -- and they did, in fact, tweet a link to the files as well as post it to Facebook). And while Wikileaks may be on the defensive about other claims about its leaks, it didn't need to attack her credibility in the process.

Update 2: In response to our update, Zeynep Tufekci has sent over the following quote, noting that she still has concerns about how Wikileaks handled this:

"Wikileaks has never clarified that the emails it hosts are almost entirely mundane emails of ordinary citizens and revealed nothing of public interest after days of intense combing (though there were privacy violations there as well), and it has never apologized for the fact that the databases that it repeatedly, and via multiple channels, pointed to its millions of followers as full data of "our AKP emails" (they weren't) and "more" actually contained private and sensitive information of tens of millions of people in Turkey, including more than 20 million women. I never claimed that they hosted; I was agnostic on that point so none of the substantive discussions revolves around who hosted them. However, I'm glad the person who uploaded them has come forward to apologize, and learn from this. I hope the broader hacker community also reflects on this, and realizes that rushing, jumping on news cycles, dumping data indiscriminately, uploading stuff you do not know, working in a language you do not understand with no local contacts, and then accusing your critics of being government shills without the slightest attempt at research is not okay."

And... original article below.

Last week, we (like many others) reported on the news that Turkey was blocking access to Wikileaks, after the site released approximately 300,000 emails, supposedly from the Turkish government. We've long been defenders of Wikileaks as a media organization, and of its right to publish various leaks that it gets. However, Zeynep Tufekci, who has long been a vocal critic of the Turkish government (and deeply engaged in issues involving the internet as a platform for speech), is noting that the leak wasn't quite what Wikileaks claimed it was -- and, in fact, appears to have revealed a ton of private info on Turkish citizens.

Yes -- this "leak" actually contains spreadsheets of private, sensitive information of what appears to be every female voter in 79 out of 81 provinces in Turkey, including their home addresses and other private information, sometimes including their cellphone numbers. If these women are members of Erdogan's ruling Justice and Development Party (known as the AKP), the dumped files also contain their Turkish citizenship ID, which increases the risk to them as the ID is used in practicing a range of basic rights and accessing services. I've gone through the files myself. The Istanbul file alone contains more than a million women's private information, and there are 79 files, with most including information of many hundreds of thousands of women.

What's not in the leak, apparently, is anything really about Erdogan's government:

According to the collective searching capacity of long-term activists and journalists in Turkey, none of the "Erdogan emails" appear to be emails actually from Erdogan or his inner circle. Nobody seems to be able to find a smoking gun exposing people in positions of power and responsibility. This doesn't rule out something eventually emerging, but there have been several days of extensive searching.

At the very least, this does raise some ethical questions. In the past, Wikileaks has (contrary to what some believe!) actually been pretty good about redacting and hiding truly sensitive information that isn't particularly newsworthy. It's possible that this is just a slip-up. Or it's possible that Wikileaks got lazy. Or it's possible that the organization doesn't care enough to go through what it gets in some cases. [Update: Or, see the update above, where we discover it was a third party that uploaded this data, which then became associated with Wikileaks after Wikileaks tweeted a link to it.]

I still think that the organization has every right to release what it gets, but it should also be open to criticism and to people raising ethics questions about what it has chosen to release. The fact that it appears to have failed to consider some of those questions in this case, and then possibly overplayed the story of what was in this release, is certainly concerning, and harms Wikileaks' credibility. [Update: So, this was a mistake, though it's unfortunate that Wikileaks then lashed out at Tufekci and others, making additional baseless claims. Yes, it was wrongly accused, but that's no reason to wrongly accuse others as well.]