from the this-is-ok-too dept

With the event at Santa Clara earlier this month, and the companion essays published here, we've been talking a lot lately about how platforms moderate content. It can be challenging for a platform to balance the sometimes troubling content it finds itself intermediating on the one hand against free speech concerns on the other. But at least, thanks to Section 230, platforms have been free to do their best to manage these competing interests. However you may feel about the decisions they make now, those decisions would not come out any better without the statutory protection insulating platforms from legal consequence when they don't opt to remove absolutely everything that could invite trouble. If platforms had to contend with the specter of liability in making these decisions, it would inevitably push them into a much more censorious role, at the expense of legitimate user speech.

Fearing such a result is why the Copia Institute filed an amicus brief at the Ninth Circuit last year in Fields v. Twitter, one of the many "how dare you let terrorists use the Internet" cases that keep getting filed against Internet platforms. While it's problematic that they keep getting filed, they have fortunately not tended to get very far. I say "fortunately" because, although what has happened to the victims of these attacks is terrible, if platforms could be liable for what terrorists do, it would end up chilling platforms' ability to intermediate any non-terrorist speech. Thus we, along with the EFF and the Internet Association (representing many of the bigger Internet platforms), all filed briefs urging the Ninth Circuit to find, as the lower courts have tended to, that Section 230 insulates platforms from these types of lawsuits.

A few weeks ago the Ninth Circuit issued its decision. The good news is that the decision affirms the dismissal, ending this particular case and hopefully deterring future ones. However, the court did not base its reasoning on Section 230. That was somewhat disappointing, because we saw this case as an important opportunity to buttress Section 230's critical statutory protection. But by not speaking to Section 230 at all, the court also didn't undermine it, and the way it did rule isn't actually bad. By focusing instead on the language of the Anti-Terrorism Act itself (the statute barring material support of terrorists), the court was still able to lessen the specter of legal liability that would otherwise chill platforms and force them to censor more speech.

In fact, it may even be better that the court ruled this way. The result is not fundamentally different from what a decision based on Section 230 would have produced: just as the court found the ATA requires some direct furtherance of the terrorist act by the platform, Section 230 would have required the platform's direct involvement in the creation of the user content furthering the act before the platform could be liable for its consequences. But the more work Section 230 does to protect platforms legally, the more annoyed people seem to get at it politically. So by not being relevant to the adjudication of these sorts of tragic cases, it won't throw more fuel on the political fire seeking to undermine the important speech-protective work Section 230 does, and it hopefully will remain safely on the books for the next time we need it.

[Side note: the Ninth Circuit originally issued the decision on January 31, but then on February 2 released an updated version correcting a minor typographical error. The version linked here is the latest and greatest.]

from the when-Plan-B-is-to-make-even-worse-arguments dept

Back in May of last year, a New York federal court tossed two lawsuits from plaintiffs attempting to hold social media companies responsible for terrorist attacks. Cohen v. Facebook and Force v. Facebook were both booted for failing to state a claim, with the court pointing out the obvious: the fact that terrorists use social media to recruit and communicate does not somehow turn social media platforms into material support for terrorism.

Both lawsuits applied novel legal theories to internet communications in hopes of dodging the obvious problems posed by Section 230 immunity. None of those were entertained by the New York court, resulting in dismissals without prejudice for both cases.

Rather than kick their case up the ladder to the appeals court, the Force plaintiffs tried to get a second swing in for free. The plaintiffs filed two motions -- one asking the court to reconsider its dismissal ruling and the other for permission to file a second amended complaint.

As Eric Goldman points out on his blog, the judge's decision to address both of these filings at once makes for difficult reading. The end result is a denial of both motions, but the trip there is bumpy and somewhat incoherent.

Once the court moves past the plaintiffs' attempt to skirt Section 230 by re-imagining the lawsuit as an extraterritorial claim, it gets directly to the matter at hand: the application of Section 230 immunity to the lawsuit's claims. The plaintiffs hastily re-imagined their arguments in hopes of dodging the inevitable immunity defense, but the judge has no time for bogus arguments raised at the last minute in the face of dismissal.

As noted in the court's original decision, the protection afforded by Section 230 applies only to claims "based on information provided by [an] information content provider" other than the defendant. (May 18 M&O at 18-19 (quoting FTC v. LeadClick Media, LLC, 838 F.3d 158, 173 (2d Cir. 2016)).) Plaintiffs now maintain that their claims have, in fact, always sought to hold Facebook liable for its own content, and not that generated by another "information content provider," i.e., Hamas and related entities, based on Facebook's alleged role in "networking" and "brokering" links among terrorists. (Recons. Mot. at 12.)

Plaintiffs' contention is completely disingenuous. In the current motion, Plaintiffs acknowledge in a footnote that "perhaps plaintiffs could have made their reliance on Facebook's productive conduct clearer in their briefing" but attribute this oversight to Facebook's supposed failure to argue that it was not a content provider. (Recons. Mot. at 12 & n.9.) Plaintiffs' contention is flatly refuted by Facebook's briefing on the original motion to dismiss, which clearly argued that all of the offending content cited in Plaintiffs' complaint was "provided by another information content provider, not by Facebook itself." (Def. Mem. in Supp. of MTD (Dkt. 35) at 17-18.) Plaintiffs did not respond to this argument at any point, and in fact began their opposition memorandum by stating that "[t]hese cases do not concern speech or content." For Plaintiffs to now turn around and argue that its allegations are largely about content that Facebook itself created borders on mendacious.

Having expended some strong language on the plaintiffs' disingenuous arguments, the court wraps up the case with… well, it's difficult to say where this order leaves the Force v. Facebook lawsuit. Here's Eric Goldman's attempt to summarize the ruling:

I’m a little confused about where this ruling leaves the case. The court dismissed the first amended complaint without prejudice, but denied the plaintiffs the right to file a second amended complaint–and did so with prejudice. It seems like this should mean the case is over in the district court, and the plaintiffs can turn to the appeals court if they choose to do so (which they most likely will do).

About all that can definitively be said is that the order can be appealed. Confusingly, there have been two dismissals at this level: one without prejudice and one with. So there may technically be an opening to refile in New York, but chances are the Force case will meet its final demise at the appellate level.

There have been several lawsuits filed seeking to hold social media companies directly responsible for the actions of terrorists -- and another one of them was rejected on appeal this week (we'll have more on that case soon). In many cases, these are brought by the families of victims. There's no denying the underlying tragedies motivating these legal cases, but the targets are not the wrongdoers. Nor are they even enablers. They're platforms for communication. And communicating is something everyone does, even terrorists. People can rightfully argue platforms' attempts to control terrorist communications have been mostly unsuccessful. But they can't honestly argue platforms are directly responsible for violence committed by terrorists.

Police had arrested Walker when he arrived at the airport. They later searched his apartment, turning up a copy of the infamous “Anarchist Cookbook,” which contains bomb-making instructions along with information about how to eavesdrop on phone calls and commit credit card fraud. Walker was accused of violating the Terrorism Act because he possessed information “likely to be useful to a person committing or preparing an act of terrorism.” He faced the possibility of a 10-year jail sentence.

Walker didn't even buy a copy of the book. He did what any number of people could have done: downloaded a freely-available PDF and printed it out. Walker downloaded his copy from a local library for use with a role-playing "crisis game" group. He apparently used it to create terrorism scenarios for the group to work through. This was corroborated by statements from other members of the group.

Not wishing to alarm outsiders, the group routinely destroyed its notes and other documents post-game. This was the direct result of being previously reported to the police by a janitor who came across notes the group left behind after role-playing a terrorist attack. Apparently, Walker forgot to toss his printed Anarchist Cookbook PDF into the fire with the rest of the prep materials.

The prosecution claimed Walker retained his copy of the book -- again, a book anyone can download from the local library -- because he was "curious" about the contents. More ridiculously, the prosecution suggested the printed PDF Walker had in his bedroom "endangered public safety."

The government apparently wanted to take down someone who had actually fought terrorists for obtaining a copy of a book that's not actually illegal to possess in the UK. But even the government's expert witnesses seemed to feel it was unlikely the book posed any sort of threat.

Walker’s case seemed to strengthen on Wednesday, when Sharon Marie Broome, an explosives expert with the British Ministry of Defence, told the court that while the makeshift explosive instructions in the “Anarchist Cookbook” were “credible,” much of the same information could be obtained from freely available books and academic literature.

Broome said that she had worked for 25 years assessing explosives, sometimes forensically analyzing devices used in real terrorist attacks perpetrated in the U.K. and overseas. Bennathan, Walker’s lawyer, pressed her on whether she had ever encountered a terrorist case that involved the use of the “Anarchist Cookbook.” She could not provide any examples.

Fortunately, there's a happy ending to this story. Walker was found not guilty by the jury. But that this happened at all should be of concern to anyone who thinks the best way to fight terrorism is by expanding the reach and power of the government. Simply possessing something the government finds objectionable is apparently a criminal act in and of itself, even without any evidence suggesting the contents of the book were going to be used nefariously. Walker won't be the last person prosecuted for reading "dangerous" things or thinking "dangerous" thoughts. And it's giving terrorists exactly what they want: a steady pruning of citizens' rights and protections by fear-fueled legislators.

from the from-an-alternate-reality-where-Section-230-doesn't-exist dept

Yet another lawsuit has been filed against social media companies hoping to hold them responsible for terrorist acts. The family of an American victim of a terrorist attack in Europe is suing Twitter, Facebook, and Google for providing material support to terrorists. [h/t Eric Goldman]

The lawsuit [PDF] is long and detailed, describing the rise of ISIS and the terrorist group's use of social media. It may be an interesting history lesson, but it's all meant to steer judges towards finding violations of anti-terrorism laws rather than recognizing the obvious immunity given to third-party platforms by Section 230.

When it does finally get around to discussing the issue, the complaint from 1-800-LAW-FIRM (not its first Twitter terrorism rodeo…) attacks immunity from an unsurprising angle. The suit attempts to portray the placement of ads on alleged terrorist content as somehow being equivalent to Google, Twitter, et al creating the terrorist content themselves.

When individuals look at a page on one of Defendants’ sites that contains postings and advertisements, that configuration has been created by Defendants. In other words, a viewer does not simply see a posting; nor does the viewer see just an advertisement. Defendants create a composite page of content from multiple sources.

Defendants create this page by selecting which advertisement to match with the content on the page. This selection is done by Defendants’ proprietary algorithms that select the advertisement based on information about the viewer and the content being viewed. Thus there is a content triangle matching the postings, advertisements, and viewers.

Although Defendants have not created the posting, nor have they created the advertisement, Defendants have created new unique content by choosing which advertisement to combine with the posting with knowledge about the viewer.

Thus, Defendants’ active involvement in combining certain advertisements with certain postings for specific viewers means that Defendants are not simply passing along content created by third parties; rather, Defendants have incorporated ISIS postings along with advertisements matched to the viewer to create new content for which Defendants earn revenue, and thus providing material support to ISIS.

This argument isn't going to be enough to bypass Section 230 immunity. According to the law, the only thing social media companies are responsible for is the content of the ads they place. That they're placed next to alleged terrorist content may be unseemly, but it's not enough to hurdle Section 230 protections. Whatever moderation these companies engage in does not undercut these protections, even when their moderation efforts fail to weed out all terrorist content.

The lawsuit then moves on to making conclusory statements about these companies' efforts to moderate content, starting with an assertion not backed by the text of the filing.

Most technology experts agree that Defendants could and should be doing more to stop ISIS from using its social network.

Following this sweeping assertion, two (2) tech experts are cited, both of whom appear to be only speaking for themselves. More assertions follow, with 1-800-LAW-FIRM drawing its own conclusions about how "easy" it would be for social media companies with millions of users to block the creation of terrorism-linked accounts [but how, if nothing is known of the content of posts until after the account is created?] and to eliminate terrorist content as soon as it goes live.

The complaint then provides an apparently infallible plan for preventing the creation of "terrorist" accounts. Noting the incremental numbering used by accounts repeatedly banned/deleted by Twitter, the complaint offers this "solution."

What the above example clearly demonstrates is that there is a pattern that is easily detectable without reference to the content. As such, a content-neutral algorithm could be easily developed that would prohibit the above behavior. First, there is a text prefix to the username that contains a numerical suffix. When an account is taken down by a Defendant, assuredly all such names are tracked by Defendants. It would be trivial to detect names that appear to have the same name root with a numerical suffix which is incremented. By limiting the ability to simply create a new account by incrementing a numerical suffix to one which has been deleted, this will disrupt the ability of individuals and organizations from using Defendants networks as an instrument for conducting terrorist operations.

Prohibiting this conduct would be simple for Defendants to implement and not impinge upon the utility of Defendants sites. There is no legitimate purpose for allowing the use of fixed prefix/incremental numerical suffix name.

Take a long, hard look at that last sentence. This is the sort of assertion someone makes when they clearly don't understand the subject matter. There are plenty of "legitimate purposes" for appending incremental numerical suffixes to social media handles. By doing this, multiple users can have the same preferred handle while allowing the system (and the users' friends/followers) to differentiate between similarly-named accounts. Everyone who isn't the first person to claim a certain handle knows the pain of being second... third… one-thousand-three-hundred-sixty-seventh in line. While this nomenclature process may allow terrorists to easily reclaim followers after account deletion, there are plenty of non-ominous reasons for allowing incremental suffixes.
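To make the problem concrete, here's a minimal sketch, in Python with hypothetical handles, of the kind of "content-neutral" root-plus-incremented-suffix detector the complaint insists would be trivial to build -- along with the false positive such a filter guarantees:

```python
import re
from collections import defaultdict

# Hypothetical sketch of the complaint's proposed "content-neutral" filter:
# flag a new handle whose root matches a banned handle and whose numerical
# suffix increments one that was already banned.
SUFFIX_RE = re.compile(r"^(?P<root>.*?[^\d])(?P<num>\d+)$")

banned = defaultdict(set)  # root -> numeric suffixes of banned handles

def record_ban(handle: str) -> None:
    m = SUFFIX_RE.match(handle)
    if m:
        banned[m.group("root")].add(int(m.group("num")))

def blocked(handle: str) -> bool:
    m = SUFFIX_RE.match(handle)
    if not m:
        return False
    root, num = m.group("root"), int(m.group("num"))
    # Blocked if this suffix is an increment of a banned one.
    return any(num == n + 1 for n in banned[root])

record_ban("DawlaMedia_41")        # hypothetical banned propaganda account
print(blocked("DawlaMedia_42"))    # True: catches the obvious reincarnation

record_ban("john_smith1")          # unrelated account banned for, say, spam
print(blocked("john_smith2"))      # True: an innocent new user is blocked too
```

The first case is the complaint's showcase; the second is the false positive the filter guarantees, because the naming pattern carries no information about who is registering or why.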

That's indicative of the lawsuit's mindset: terrorist attacks are the fault of social media platforms because they've "allowed" terrorists to communicate. But that's completely the wrong party to hold responsible. Terrorist attacks are performed by terrorists, not social media companies, no matter how many ads have been placed around content litigants view as promoting terrorism.

Finally, the lawsuit sums it all up thusly: Monitoring content is easy -- therefore, any perceived lack of moderation is tantamount to direct support of terrorist activity.

Because the suspicious activity used by ISIS and other nefarious organizations engaged in illegal activities is easily detectable and preventable and that Defendants are fully aware that these organizations are using their networks to engage in illegal activity demonstrates that Defendants are acting knowingly and recklessly allowing such illegal conduct.

The conduct of each Defendant was a direct, foreseeable and proximate cause of the wrongful deaths of Plaintiffs’ Decedent and therefore the Defendants’ are liable to Plaintiffs for their wrongful deaths.

This is probably the worst "Twitter terrorism" lawsuit filed yet, but quite possibly exactly what you would expect from a law firm with a history of stupid social media lawsuits and a phone number for a name.

from the privacy:-the-new-terrorism dept

A director of a Muslim advocacy group has been convicted of failing to hand over passwords for an iPhone and a laptop, which he said contained sensitive information from a torture victim.

Muhammad Rabbani, 36, from London, was found guilty but walked free after being handed a 12-month conditional discharge at Westminster magistrates' court on Monday. He was ordered to pay £600 in costs.

The police may have failed to sweat passwords out of Rabbani during last November's three-hour detention, but they were instrumental in getting him charged under the UK's terrorism laws. Rabbani will be serving the UK equivalent of a suspended sentence. No jail unless "further violations" occur. This means all police have to do is stop him somewhere else and demand his passwords. Any refusal to do so will be a violation of his conditional discharge.

Unlike in the US, there's no question of potential rights violations to be resolved. The UK's anti-terror laws enable this sort of law enforcement behavior. Rabbani said he had sensitive information on his devices he didn't feel comfortable sharing with police, especially when they had little reason to suspect him of being up to anything terroristic.

Rabbani is apparently investigating a torture case linked to the US, involving a citizen of one of the Brown Countries (a.k.a. a Gulf state). His trips back and forth have been greeted with much consternation and demands for device passwords. But it wasn't until last November that UK law enforcement finally decided to move ahead with charges.

The court handing down the sentence was almost apologetic.

In sentencing, senior district judge Emma Arbuthnot said she believed Rabbani was protecting sensitive information but was bound by the law to find him guilty.

This is why bad bills should never be made law. They force people -- like judges -- to sentence someone for the crime of being uncooperative. Testimony during the case didn't clear anything up. The officer who performed the attempted search and actual arrest wouldn't say whether he was acting on specific information about Rabbani, or simply hassling someone UK police had hassled several times before without feeling the need to turn it into a terrorism case.

Passwords/pins are a foregone conclusion in the UK if the court can be convinced law enforcement demands were somehow related to national security. That's how the 2000 terrorism law was designed. And with Rabbani, we're being shown how it works.

from the wobbles-so-much-you-can't-even-call-it-'spin' dept

Turkish president Recep Erdogan is at it again. Not content to merely be viewed as a megalomaniacal, ring-coveting authoritarian, Erdogan is using his time in mixed company to assure the world he's angling for the title of "tyrant."

Erdogan's long history of abusing laws to shut critics up has been covered extensively here. He's gone from a comical but dangerous politician to the leading abuser of his own constituents in record time. When not attempting to push foreign countries to play by his censorship rules, Erdogan is locking up dissidents and journalists at an alarming rate.

"You have been misled," Erdogan told Bloomberg News editor-in-chief John Micklethwait, who interviewed him on stage. "The ones who have been sentenced, who have been imprisoned, are not journalists. Most of them are terrorists."

Define "terrorist." In the wrong hands/minds, the word "terrorist" could be used to describe anyone threatening to the party in power, even if nothing more dangerous than words or thoughts have been deployed. I have no doubt Erdogan believes journalists are terrorists, even if they've never done anything more than criticize him and his policies.

But Erdogan at least went into a little more detail about this claim. He explained all the terrorism engaged in by the "terrornalists" he's tossed in his jails.

"Many have been involved in burglaries and some have been caught red handed as they were trying to empty ATM machines."

Odd. That sounds more like normal criminal activity than terrorism. I realize terrorists need to fund their activities, but these don't sound like terrorist acts. This sounds like bog-standard theft.

So, these journalists Erdogan calls "terrorists" (because of their alleged burglaries) remain in jail. There are at least 150 still imprisoned, according to the Quartz article. But that's not the limit of Erdogan's abuse of the press. Erdogan shut down hundreds of media outlets in an initial assault on the press, following it up with mass arrests. A few thousand journalists were swept up in Erdogan's post-coup-attempt purge, most of whom ended up with no place to work and no press credentials to use.

After this, Erdogan went on to make many more counterfactual statements, including claiming he doesn't take proactive steps to stifle criticism and spinning the beatings handed out by his bodyguards during his visit to the White House as an all-out assault by crazed anti-Turkey Americans while US law enforcement officers stood idly by.

These are all hallmarks of a sociopathic authoritarian -- the type of person who always believes they're right even when the rest of the world agrees they're wrong. His sweeping away of facts with provably untrue statements shows he really doesn't care if anyone believes him, but he will still do everything in his power to make sure those who don't believe can't be heard.

from the outbreak-of-sanity dept

It's become a depressingly predictable spectacle over the years, as politicians, law enforcement officials and spy chiefs take turns warning about the threat of "going dark" and calling for yet more tough new laws, regardless of the fact that they won't help. So it comes as something of a shock to read that the UK government's own adviser on terrorism laws has just said the following in an interview:

The Government should consider abolishing all anti-terror laws as they are "unnecessary" in the fight against extremists, the barrister tasked with reviewing Britain’s terrorism legislation has said.

…

the Independent Reviewer of Terrorism Legislation, argued potential jihadis can be stopped with existing "general" laws that are not always being used effectively to take threats off the streets.

"We should not legislate in haste, we should not use the mantra of 'something has to be done' as an excuse for creating new laws," he added. “We should make use of what we have."

Aside from the astonishingly sensible nature of Hill's comments, the interview is also worth reading for the insight it provides into the changing nature of terrorism, at least in Europe:

Mr Hill noted that some of the perpetrators of the four recent terror attacks to hit the UK were previously "operating at a low level of criminality", adding: "I think that people like that should be stopped wherever possible, indicted using whatever legislation, and brought to court."

This emerging "crime-terror nexus" is one reason why anti-terrorism laws are unnecessary. Instead, non-terrorism legislation could be used to tackle what Hill termed "precursor criminality" -- general criminal activity committed by individuals who could be stopped and prosecuted before they move into terrorism. Similarly, it would be possible to use laws against murder and making explosive devices to hand down sentences for terrorists, made harsher to reflect the seriousness of the crimes.

Even though Hill himself doubts that the UK's terrorism laws will be repealed any time soon, his views are still important. Taken in conjunction with the former head of GCHQ saying recently that end-to-end encryption shouldn't be weakened, they form a more rational counterpoint to the ill-informed calls for more laws and less crypto.

The FBI has arrested an Oklahoma man on charges that he tried to detonate what he thought was a 1,000-pound bomb, acting out of a hatred for the U.S. government and an admiration for Oklahoma City bomber Timothy McVeigh, according to court papers.

Jerry Drake Varnell was arrested shortly after an attempt early Saturday morning to detonate a fake bomb packed into what he believed was a stolen cargo van outside a bank in Oklahoma City, according to a criminal complaint filed in federal court. He was charged with attempted destruction of a building by means of an explosive.

We as a family are extremely distraught about this situation with our son Jerry Drake Varnell, but what the public must understand is that he is a paranoid schizophrenic and is extremely susceptible to different types of ideology that normal people would deem immoral. Underneath his condition, he is a sweet-hearted person and we are extremely shocked that this event has happened. However, what truly has us flabbergasted is the fact that the FBI knew he was schizophrenic. The State of Oklahoma found him mentally incompetent and we, his parents have legal guardianship over him by the Court. These documents are sealed from the public, which is why no news media outlet has been able to obtain them. The FBI clearly knew that he was schizophrenic because they have gathered every ounce of information on him.

If true, this prosecution will make the FBI's counterterrorist operations look even worse. This isn't the first time the FBI has exploited the weakest of humans to rack up terrorist busts. This includes the prosecution of a man agents referred to as a "retarded fool" and the dumping of an 18-year-old with a 51 IQ into the lap of local prosecutors. Now we have the FBI steering a paranoid schizophrenic into a self-destructive path, utilizing a confidential informant who apparently made several misrepresentations during his work with the FBI.

The CI claimed to have seen a "bunker" at Varnell's home (where he lived with his parents because he is mentally unable to live on his own). The Varnells claim the "bunker" is nothing more than a partially-buried storage container, meant to be used as a storm shelter. Adding to its un-bunkerlike aspects is the fact that it locks from the outside and contains no food, water, or source of electricity.

From the criminal complaint, other facts emerge. Varnell lived with his parents and only had access to the full residence occasionally. Varnell talked about bombing US government buildings but was unable to secure a vehicle to house the explosives. (He told the undercover agent he might be able to "borrow" a vehicle from some relatives.) The affidavit says the undercover agent supplied everything needed to build the explosive device -- not a single element came from the alleged terrorist. The undercover agent also supplied the vehicle.

So, in the end, the FBI got its man: Jerry Drake Varnell, a lifelong schizophrenic unable even to obtain a vehicle, much less build his own bomb. Varnell talked a lot about sending a message to the government using violent means, but it's unclear how much he actually would have done had he not been nudged towards self-destruction by an overly-helpful CI and an FBI undercover agent.

According to the Varnells, this might never have gone this far if the FBI's informant had simply done what he'd been told. The Varnells claim Jerry Varnell's father kicked the CI off the property and told him he'd be arrested for trespassing if he came back, due to the CI's apparent drug abuse. And yet the CI returned, presumably at the behest of the FBI, which was willing to overlook the drug use if it could keep its terrorism sting on track.

Varnell's lawyer has asked for a hearing [PDF] to determine whether Varnell is competent to stand trial. Based on Varnell's long history of mental illness, it would seem apparent the man can't be expected to stand trial, much less carry out an attack on a federal building… at least not without a lot of outside help from the feds themselves.

Nearly every plot we uncover has a digital element to it. Go online and you will find your own “do-it-yourself” jihad at the click of a mouse. The tentacles of Daesh (Isil) recruiters in Syria reach back to the laptops in the bedrooms of boys – and increasingly girls – in our towns and cities up and down the country. The purveyors of far-Right extremism pump out their brand of hate across the globe, without ever leaving home.

The scale of what is happening cannot be downplayed. Before he mowed down the innocents on Westminster Bridge and stabbed Pc Keith Palmer, Khalid Masood is thought to have watched extremist videos. Daesh claim to have created 11,000 new social media accounts in May alone. Our analysis shows that three-quarters of Daesh propaganda stories are shared within the first three hours of release – an hour quicker than a year ago.

An hour quicker! In internet time, that's practically a millennium. It's tough to tell what Rudd's attempting to make of this technobabble. Is she suggesting future Masoods will act quicker because they'll be able to complete their viewing of extremist videos faster? If that's the case, maybe regulators need to step in and throttle broadband connections. The more the video buffers, the less likely it is someone will watch it… and the less likely it is someone will carry out an attack. The math(s) work out.

Unfortunately, this is not where the op-ed is heading. Sadly, Rudd is here to take a swing at encryption. But she takes a swing at it in prime passive-aggressive, Ike Turner style, saying she loves it even as the blows rain down.

Encryption plays a fundamental role in protecting us all online. It is key to growing the digital economy, and delivering public services online.

I ain't mad at ya.

But, like many powerful technologies, encrypted services are used and abused by a small minority of people. The particular challenge is around so called “end-to-end” encryption, where even the service provider cannot see the content of a communication.

But you mess me up so much inside.

To be very clear – Government supports strong encryption and has no intention of banning end-to-end encryption. But the inability to gain access to encrypted data in specific and targeted instances – even with a warrant signed by a Secretary of State and a senior judge – is right now severely limiting our agencies’ ability to stop terrorist attacks and bring criminals to justice.

In a fun twist, Rudd doesn't call for harder nerding. (Note: Rudd is visiting Silicon Valley to meet with tech leaders, so it's safe to assume requests for harder nerding will be made, even if not directly in this op-ed.)

No, Rudd doesn't want the impossible: secure, backdoored encryption. Instead, she wants to know if tech companies will just take the encryption off one end of the end-to-end. Her bolstering argument? The public doesn't give a shit about encryption. It just wants easy-to-use communication tools.

Real people often prefer ease of use and a multitude of features to perfect, unbreakable security. So this is not about asking the companies to break encryption or create so called “back doors”. Who uses WhatsApp because it is end-to-end encrypted, rather than because it is an incredibly user-friendly and cheap way of staying in touch with friends and family? Companies are constantly making trade-offs between security and “usability”, and it is here where our experts believe opportunities may lie.

Having set up her straw app user, Rudd moves towards her conclusion… which is severely lacking in anything cohesive or coherent. The "opportunities" lie in persuading tech companies to provide users with less secure communications platforms. Should be an easy sale, especially if the average user doesn't care about security. But maybe the company does and doesn't want to give bad people an easy way to access the communications of others. Hence encryption. Hence end-to-end, so even if the provider is breached, there's still nothing to access.
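That last point is easy to make concrete. Below is a minimal sketch of the end-to-end model using the PyNaCl library (the parties and message are hypothetical): private keys are generated and kept on the users' devices, so the provider relays only ciphertext it cannot read.

```python
# Minimal end-to-end sketch using PyNaCl (pip install pynacl).
# The parties and message here are hypothetical.
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device; private keys
# never leave the device, so the provider never holds them.
alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
sending_box = Box(alice_sk, bob_sk.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# The provider relays (and may store) only this ciphertext. Without a
# private key it cannot decrypt, so a breach or a warrant yields noise.

# Bob decrypts on his own device with his private key.
receiving_box = Box(bob_sk, alice_sk.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```

Taking the encryption "off one end," as Rudd suggests, puts keys or plaintext back in the provider's hands -- exactly the breach-and-warrant exposure this design exists to eliminate.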

What Rudd is looking for can't be called a trade-off. The government has nothing tech companies want. All they can offer is platitudes about fighting crime and national security. The government, meanwhile, wants tech companies to write software the way the government wants it, rather than how the company or its users want it. That's not a trade-off. That's a one-way street where every internet communication platform becomes a proxy government agency.

Rudd's idea is bad and she should feel bad. But I get the feeling that no matter how many tech experts she talks to, she's still going to believe her way is the right and best way.

from the these-are-bad-ideas,-marc dept

For some observers, struggling UK Prime Minister Theresa May and triumphant French President Emmanuel Macron may seem to sit at somewhat opposite ends of the current political climate. But... apparently they agree on one really, really bad idea: that it's time to massively censor the internet and to blame tech companies if they don't censor enough. We've been explaining for many years why this is a bad idea, but apparently we need to do so again. First, the plan:

The prime minister and Emmanuel Macron will launch a joint campaign on Tuesday to tackle online radicalisation, a personal priority of the prime minister from her time as home secretary and a comfortable agenda for the pair to agree upon before Brexit negotiations begin next week.

In particular, the two say they intend to create a new legal liability for tech companies if they fail to remove inflammatory content, which could include penalties such as fines.

It's no surprise that May is pushing for this. She's been pushing to regulate the internet for quite some time, and it's a core part of her platform (which is a bit "weak and wobbly" as they say these days). But, Macron... well, he's been held up repeatedly as a "friend" to the tech industry, so this has to be seen as a bit of a surprise in the internet world. Of course, there were hints that he might not really be all that well versed in the way technology works when he appeared to support backdoors to encryption. This latest move just confirms an unfortunate ignorance about the technology/internet landscape.

Creating a new legal liability for companies that fail to remove inflammatory content is going to be a massive disaster in many, many ways. It will damage the internet economy in Europe. It will create massive harms to free speech. And, it won't do what they seem to think it will do: it won't stop terrorists from posting propaganda online.

First, a regime that fines companies for failing to remove "inflammatory content" will lead companies to censor broadly, out of fear that any borderline content they leave up may open them up to massive liability. This is exactly how the Great Firewall of China works. The Chinese government doesn't just say "censor bad stuff"; it tells ISPs that they'll get fined if they let bad stuff through. And so the ISPs over-censor, to avoid leaving up anything that might put them at risk. And, when it comes to free speech, doing something "the way the Chinese do things" tends not to be the best idea.

Second, related to that, once they open up this can of worms, they may not be happy with how it turns out. It's great to say that you don't think "inflammatory content" should be allowed online, but who gets to define "inflammatory" makes a pretty big difference. As we've noted, you always want to design regulations as if the people you trust the least are in power. This is not to say that May or Macron themselves would do this, but would you put it past some politicians in power to argue that online content from political opponents is too "inflammatory" and thus must be removed? What about if the press reveals corruption? That could be considered "inflammatory" as well.

Third, one person's "inflammatory content" is another's "useful evidence." We see this all the time in other censorship cases. I've written before about how YouTube was pressured to take down inflammatory "terrorist videos" in the past, and ended up taking down the account of a human rights group documenting atrocities in Syria. It's easy to say "take down terrorist content!" but it's not always easy to recognize what's terrorist propaganda versus what's people documenting the horrors that the terrorists are committing.

Fourth, time and time again, we've seen the intelligence community come out and argue against this kind of censorship, noting that terrorists posting inflammatory content online is a really useful way to figure out what they're up to. Demanding that platforms take down these useful sources of open source intelligence will actually harm the intelligence community's ability to monitor and stop plans of attack.

Fifth, this move will almost certainly be used by autocratic and dictatorial regimes to justify their own widespread crackdowns on free speech. And, sure, they might do that already, but giving up the moral high ground can be deeply problematic in diplomatic situations. How can UK or French diplomats push for more freedom of expression in, say, China or Iran, if they're actively putting this in place back home? Sure, you can say the situations are different, but officials from those countries will argue it's the exact same thing: you're censoring the internet to "protect" people from "dangerous content." Well, they'll argue, that's the same thing we do -- we just have different threats to protect against.

Sixth, this will inevitably be bad for innovation and the economy in both countries. Time and time again, we've seen that leaving internet platforms free from liability for the actions of their users is what has helped those companies develop, provide useful services, employ lots of people and generally help create new economic opportunities. With this plan, sure, Google and Facebook can likely figure out some way to censor some content -- and can probably stand the risk of some liability. But pretty much every other smaller platform? Good luck. If I were running a platform company in either country, I'd be looking to move elsewhere, because the cost of complying and the risk of failing to take down content would simply be too much.

Seventh, and finally, it won't work. The "problem" is not that this content exists. The problem is that lots of people out there are susceptible to such content and are interested and/or swayed by it. That's a much more fundamental problem, and censoring such content doesn't do much good. Instead, it tends to only rally up those who were already susceptible to it. They see that the powers-that-be -- who they already don't trust -- find this content "too dangerous" and that draws them in even closer to it. And of course that content will find many other places to live online.

Censoring "bad" content always seems like an easy solution if you haven't actually thought through the issues. It's not a surprise that May hasn't -- but we had hopes that perhaps Macron wouldn't be swayed by the same weak arguments.