from the government-control-c dept

As you will hopefully recall, there is an ongoing case between Twitter and the government over exactly how specific the social media service can be about the number of government surveillance requests it receives. Most of the other big internet companies reached a settlement with the DOJ, including rules governing how specific companies could be (not very) in revealing such requests. Those rules were basically an attempt by the government to get tech companies to play hide-the-ball on transparency: the more specific a service tried to be about how many individuals were impacted by government orders, the more additional orders had to be lumped into those figures, rendering the information useless.

Twitter, to its credit, was alone in saying that the proposed settlement wasn't good enough, and continued its fight with the DOJ. Essentially, the fight is over whether Twitter can be specific when it discloses how many orders it has received, or whether it must only disclose "bands" or ranges of orders. Recent arguments made by both sides do a nice job of highlighting the absurdity of the government's argument.

Twitter has argued that just as it has been precise in other areas of its transparency report, so too should it be allowed to say how many national security orders it has received from American authorities.

"Even under the most generous First Amendment standard, there is nothing in there that it is a national security harm to say that we received 44 as opposed to 0 to 499," Lee Rubin, a lawyer representing Twitter, said during the Tuesday hearing.
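To make the information-loss point concrete, here's a toy sketch (mine, not anything from the filings) of what banded reporting does: an exact count collapses into a range, so 44 requests and 499 requests read identically to the public. The 500-wide band is borrowed from the "0 to 499" range in the quote above.

```python
def to_band(count, width=500):
    """Map an exact order count to the reporting band that contains it."""
    low = (count // width) * width
    return f"{low}-{low + width - 1}"

# An exact figure like 44 becomes indistinguishable from any other
# count in the same band:
print(to_band(44))   # 0-499
print(to_band(499))  # 0-499
print(to_band(500))  # 500-999
```

Whatever a company's true number, the only publishable fact is the band, which is precisely the uselessness Twitter is objecting to.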

In court filings, DOJ lawyers have said that allowing Twitter to provide this level of specificity would be detrimental to national security. That assertion rests on a declaration filed by Michael Steinbach, the executive assistant director of the FBI's national security branch, who argued that "the disclosure of the information at issue would provide our adversaries a clear picture of the Government’s surveillance activities pertaining to national security investigations."

Far too many courts, along with the court of public opinion, have conditioned the DOJ to simply shout "national security" at any challenge to disclosing anything at all. And that's what it's doing here, as well. So much so, in fact, that the judge presiding over the case pointed out that the DOJ's legal filings didn't address any of the specifics Twitter was arguing.

During the Tuesday hearing, US District Judge Yvonne Gonzalez Rogers, an Obama-appointed judge, seemingly rebuked the government at one point and noted that its legal responses did not directly address Twitter’s arguments. "The analysis that has been provided to the court is generic to any company," she said, explaining that there was "nothing in here that is specific to Twitter."

"If I had five different cases, one by Twitter, one by Microsoft, one by Facebook and all the other groups that do this social media stuff that none of us judges do, [Steinbach] could have taken this exact same declaration and cut and paste the declaration, switched out the names of the company and I would have the same generic explanation for why it is that the government wants to do what it wants to do."

There's no ruling yet, although one is expected in the coming months. Still, it doesn't look good to have the judge dinging the DOJ for failing to address Twitter's argument that there is no national security risk in disclosing the specific number of NSLs it receives, as opposed to a range. Perhaps more importantly, it's refreshing to see a challenge to the government's longstanding ability to make any attempt at transparency go away simply by shouting "national security!"

from the we-need-to-stand-up dept

I've been quite clear about how I feel regarding Donald Trump's awful executive order, which places a blanket ban on people from 7 countries entering the US (even those holding valid visas), including a permanent block on Syrian refugees. Tons of people have been protesting this decision, and multiple courts have ruled against it. There has been some discussion over whether or not the tech industry was really going to stand up against this move, and some of the early statements about the executive order were a bit weak. However, late Sunday night, basically the entire technology industry (plus some companies from other industries as well) signed onto an amicus brief calling the order illegal and unconstitutional (technically, it's a motion asking for permission to file the amicus brief, with that brief attached).

The brief was filed in the Ninth Circuit appeals court, which is one of the first appeals courts considering the executive order, after a federal judge in Seattle issued a nationwide temporary restraining order on enforcing the exec order. On Sunday, the appeals court refused to reverse the lower court, keeping the TRO in place. However, it also gave both parties (the lawsuit itself was filed by the state of Washington) a very quick turnaround time to file written arguments to be considered.

Given that incredibly short time frame, the fact that 97 companies -- including some of the world's largest, but also some tiny ones, like the Copia Institute (the think tank arm of Techdirt) -- were able to come together and not only put a detailed amicus brief together, but also get sign-on from all of those companies (on Super Bowl Sunday, no less), is impressive. Having been through the process of assembling multi-signer amicus briefs before, I can say there's normally lots of hemming and hawing from different companies and nitpicking over particular word choices. It takes a lot of effort. Update: Another 30 companies have signed on as well.

But this issue was so important, so core and fundamental to our basic values, that basically the entire industry came together and signed onto this. You name the company, and it's probably signed on. There are the big guys: Google, Facebook, Microsoft and Apple (despite a false Washington Post article that claimed none of them had signed on). There are lots of other huge names as well, including Twitter, Snap, Uber, Airbnb, Lyft, Dropbox, Cloudflare, Box, eBay, GitHub, Kickstarter, Indiegogo, Medium, Mozilla, Patreon, Paypal, Pinterest, Reddit, Salesforce, Spotify, Stripe, Wikimedia, Yelp, Y Combinator and many, many more. Update: Among the notable companies in the "late" sign-on were SpaceX, Tesla, Slack, Pandora, Adobe, HP, Evernote, Udacity and more...

I highly recommend reading the full amicus brief -- which makes an economic argument, a moral argument and a legal argument all wrapped up in one.

Immigrants make many of the Nation’s greatest discoveries, and create some of the country’s most innovative and iconic companies. Immigrants are among our leading entrepreneurs, politicians, artists, and philanthropists. The experience and energy of people who come to our country to seek a better life for themselves and their children—to pursue the “American Dream”—are woven throughout the social, political, and economic fabric of the Nation.

For decades, stable U.S. immigration policy has embodied the principles that we are a people descended from immigrants, that we welcome new immigrants, and that we provide a home for refugees seeking protection. At the same time, America has long recognized the importance of protecting ourselves against those who would do us harm. But it has done so while maintaining our fundamental commitment to welcoming immigrants—through increased background checks and other controls on people seeking to enter our country.

[....]

The Order effects a sudden shift in the rules governing entry into the United States, and is inflicting substantial harm on U.S. companies. It hinders the ability of American companies to attract great talent; increases costs imposed on business; makes it more difficult for American firms to compete in the international marketplace; and gives global enterprises a new, significant incentive to build operations—and hire new employees—outside the United States.

The Order violates the immigration laws and the Constitution. In 1965, Congress prohibited discrimination on the basis of national origin precisely so that the Nation could not shut its doors to immigrants based on where they come from. Moreover, any discretion under the immigration laws must be exercised reasonably, and subject to meaningful constraints.

There's much more in the full brief, and hopefully the court allows it and recognizes how momentous this is. I've never seen anything that so many tech companies have gotten behind (not even things like the SOPA fight), and it came together so fast that it's genuinely unprecedented. A whole bunch of people put in a tremendous effort to actually get this done (including more than a few who had to miss the Super Bowl in the process...). Andy Pincus from Mayer Brown deserves a specific shoutout for being the main lawyer putting the brief together.

We shall see what happens from here, but having basically the entire tech industry rise up in a single voice to say that this order is not right is nice to see. In this day and age, it's easy not to speak out and to just sit on the sidelines. But this is important, and when it mattered all of these companies spoke out.

from the it's-a-feelony! dept

Another day, another wacky legal complaint. This one, first spotted by Eric Goldman, was filed by a recent law school grad, Tiffany Dehen. She's fairly upset that someone set up a parody Twitter account pretending to be her that portrayed her in an unflattering light. So she has sued. For $100 million. And she's not just suing the "John Doe" behind the account... but also Twitter. Oh, and also the University of San Diego, because she's pretty sure that someone there is responsible for this account (she just graduated from USD's law school). Oh, and according to the exhibits that Dehen put in her own lawsuit, the account is labeled as a parody account.

The lawsuit... well... it doesn't reflect well on the University of San Diego law school and its ability to prepare lawyers. I don't know if the law school didn't teach Ms. Dehen about California's anti-SLAPP law, but she's likely about to get a quick post-graduate lesson about it. I won't even get into the reasons why this is unlikely to be defamation (parody, people, parody...), but the fact that Twitter and USD are included... is pretty nutty. Twitter will get out of the case pretty damn easily under Section 230 (does the University of San Diego law school not teach Section 230?!?). And, of course, there's this, which kind of speaks for itself:

If you can't read that, it says:

Additionally, it should be noted that Tiffany Dehen's real twitter account consists of posts supporting the elected President of the United States, not Adolf Hitler, the socialist communist dictator from Germany. The fact that John Doe used Tiffany Dehen's real name and linked the fictitious Twitter account to Tiffany Dehen's real account by retweeting Tiffany Dehen's posts shows that John Doe acted with actual malice and negligence.

Huh? I'm still so stuck on "socialist communist" that I'm having difficulty figuring out how parodying someone is proof of "actual malice and negligence."

Plaintiff requests to enjoin Twitter, Inc, jointly and severally, the social media website which allowed this disparaging speech to stay broadcast to the world, costing plaintiff potentially millions of dollars in future earnings. Twitter was put on notice on January 30, 2017, and as of Feb 1, 2017, the false twitter account was still posted, even after Tiffany Dehen put Twitter on notice. The process Twitter adheres to is absolutely ridiculous and should be looked at as well and Plaintiff claims the process Twitter has in place to review defamation is unconstitutional.

Hooo boy. Where to start? Let's just skip over the awful run-on sentences and note, again, as we did above, that Section 230 makes Twitter categorically immune from this lawsuit. I'm still at a loss as to how any lawyer today could file a lawsuit and not be aware of the basics of Section 230. Even without Section 230, Twitter would easily get out of this lawsuit. Notice that she cites no actual laws on the books or caselaw to back up this claim? She gives the company a grand total of two days' notice? Also, I wasn't aware that "absolutely ridiculous" processes (which she doesn't actually seem to understand or describe) are illegal. I'd like to know the statute that says "absolutely ridiculous" policies for dealing with parody accounts are illegal, because, man, that would be useful. Oh, and "unconstitutional." Wha....? This is just... so, so awful. The University of San Diego law school should be ashamed.

Oh, right, about USD Law. Why is it a defendant? Beats me.

Plaintiff requests to enjon University of San Diego because of the fact that as seen in Exhibits 34 and 35, it appears as though there is a high probability John Doe is an University of San Diego student or alumni since the photo used to make the swastika headband, as shown in Exhibits 3, 4, and 5, is Plaintiff's profile photograph on LinkedIn. University of San Diego should be liable as well due to a prior matter that was not resolved appropriately by University of San Diego which led to USD acting recklessly, or at the very least negligently, to allow this matter to arise.

So... it sorta feels like perhaps Ms. Dehen thinks that "enjoin" means "make a party to the case" rather than its actual meaning, which is to have the court stop a party from doing something. Is it truly possible that someone can graduate from law school without knowing what enjoin means? Also, as for the rest of that paragraph, what is even going on? I keep reading it, trying to understand why the fact that a LinkedIn photo was used somehow makes it obvious that a USD student was responsible. She doesn't explain it at all, but if you actually bother to go to the exhibits, it appears she's implying, without saying, that because LinkedIn told her that some people from USD Law School visited her profile (among other people from other places), that's her proof. That's... not quite how it works.

And... even if it is a USD student, so what? That doesn't make USD liable.

And then the unexplained "prior matter"? Who graduates from law school and thinks that's how you put something into a complaint?

Oh, and then there's this:

Further, on the way to Federal Court in Downtown Sand Diego to file this complaint, Plaintiff was involved in a collision on the I-5 Freeway headed South, which resulted in neck and back pain for which Plaintiff is now seeking medical attention. Please see Exhibit 39.

So... um... it sucks that you were in a car accident. That's no fun. But what the hell does that have to do with the lawsuit? Why is that in here? And if she was on her way to file it when the accident happened, does that mean that after the accident (in pain and all) she stopped to add this totally irrelevant paragraph to the "complaint"?

Again, I'm not even going to go into why this account almost certainly isn't defamation, but among her evidence that this meets the "statutory malicious defamation claim" (?!?!) is this:

John Doe's fault in publishing the statement amounted to substantially more than just negligence. John Doe's meticulous planning of potentially creating a fake Facebook account in which he sought to befriend Plaintiff on social media (Please see Exhibit 38) and gain access to additional information, coupled with the time involved in setting up a false Twitter account, as well as downloading, altering, and reposting plaintiff's images, shows more than just the defendant's fault in publishing the statement. John Doe's deliberate actions amounted to much more than just mere negligence, but more so proves malice, an element of criminal crimes.

That's... quite a paragraph. But I just want to point out that this is (1) a civil lawsuit and (2) she says that this is an element of "criminal crimes." Criminal. Crimes.

Finally, I'm no lawyer, but I read and write a lot about court cases, and I can't recall ever seeing a legal complaint written in this manner. It doesn't seem to match any typical legal complaint format I've ever seen. It doesn't name any laws. And, I hate to give her any ideas, but normally when people make these kinds of questionable legal attacks on parody claims, they at least try to throw in an ill-advised publicity rights claim. Perhaps that wasn't taught at USD? Anyway, the 3-page "brief" (as she calls it) is then followed by another 20 pages of "exhibits" -- mostly screenshots that she seems to think prove a point, but, as noted above, require anyone looking at them to make giant leaps and inferences to even figure out what her complaint is actually alleging.

And yet, she argues that John Doe, Twitter and USD should pay her $100 million because this parody account is "damaging to plaintiff's name, especially in this crucial juncture of her life where she is applying to California bar admittance and looking for a legal job in San Diego."

from the YOU-CAN-TRUST-US dept

Mike covered Twitter's release of two FBI NSLs it had received in the last few years -- more evidence that the USA Freedom Act, if nothing else, has made review of NSL gag orders more timely and the orders themselves more easily challenged.

Not that there hasn't been significant pushback from Twitter along the way. The social media platform sued the government in 2014, claiming that the de facto government-imposed secrecy was a violation of the company's First Amendment rights.

Each of the two new orders, known as national security letters (NSLs), specifically requests a type of data known as electronic communication transaction records, which can include some email header data and browsing history, among other information.

In doing so, the orders bolster the belief among privacy advocates that the FBI has routinely used NSLs to seek internet records beyond the limitations set down in a 2008 Justice Department legal memo, which concluded such orders should be constrained to phone billing records.

Twitter's counsel says it only hands over what the DOJ's legal guidance says it's supposed to hand over. The FBI, on the other hand, says nothing, which is pretty much how it handles all requests for comment on NSLs or on its habit of ignoring DOJ legal guidance. What the agency has said -- not directly, but via its oversight -- is that it doesn't believe the DOJ can interpret NSL statutes for it.

An FBI inspector general report from 2014 indicated that it disagreed with the memo's guidance.

The DOJ's interpretation was issued nine years ago. To this day, the FBI continues to ask for more than the DOJ says it can. And the DOJ doesn't appear to be stepping in to iron out the disagreement, much less reiterate its "phone billing only" policy.

We've seen overbroad requests before, starting with the first NSL ever released publicly. One of the NSLs sent to Yahoo asked for all of the following:

Subscriber name and related subscriber information

Account number(s)

Date the account opened or closed

Physical and or postal addresses associated with the account

Subscriber day/evening telephone numbers

Screen names or other on-line names associated with the account

All billing and method of payment related to the account including alternative billed numbers or calling cards

All e-mail addresses associated with the account to include any and all of the above information for any secondary or additional e-mail addresses and/or user names identified by you as belonging to the targeted account in this letter

The names of any and all upstream providers facilitating this account's communications

If the DOJ isn't going to do anything about this, the FBI will continue to issue thousands of letters a year asking for more than it should and hoping recipients aren't aware they don't have to hand all of this information over. It's also hoping recipients don't know they're allowed to challenge the accompanying gag orders -- or at least it was until the Internet Archive (which isn't the proper target for NSLs to begin with) publicly pointed out the FBI was still using outdated boilerplate in its demand letters.

This is, unfortunately, how law enforcement agencies tend to handle things: blow past legal guidance and civil liberties until forced to do otherwise -- whether by a court or a policy change. And when forced to do so, engage in foot-dragging and inconsistent internal communications so as to lessen the "damage" of playing by the rules. The FBI may be at the top of the law enforcement food chain, but it often operates as though it's heading up Hazzard County.

from the first-amendment? dept

In the last few months, we've seen multiple internet companies finally able to reveal National Security Letters (NSLs) they had received from the Justice Department, demanding information from the companies, while simultaneously saddling those companies with gag orders, forbidding them to speak about the orders. It started last June, when Yahoo was the first company to publicly acknowledge such an NSL. In December, Google revealed 8 NSLs around the same time that the Internet Archive was able to reveal it had received an NSL as well. Earlier this month, Cloudflare was finally able to reveal the NSL it had received (which a Senate staffer had told the company was impossible -- and the company's top lawyer was bound by the gag order, unable to correct that staffer).

If you don't recall, Twitter has been much more aggressive than basically all of the other tech companies in challenging these gag orders. Back in 2014, Twitter sued the government, claiming it was a First Amendment violation to enforce these gag orders. That was after most of the other major internet companies had come to an agreement over how and when they could report such requests. Twitter, thankfully, felt that the agreement between the DOJ and internet companies was way too stifling and has fought it:

Twitter remains unsatisfied with restrictions on our right to speak more freely about national security requests we may receive. We continue to push for the legal ability to speak more openly on this topic in our lawsuit against the U.S. government, Twitter v. Lynch.

We continue to believe that reporting in government-mandated bands does not provide meaningful transparency to the public or those using our service. However, the government argues that any numerical reporting more detailed than the bands in the USA Freedom Act would be classified and as such not protected by the First Amendment. They further argue that Twitter is not entitled to obtain information from the government about the processes followed in classifying a version of Twitter’s 2013 Transparency Report or in classifying/declassifying decisions associated with the allowed bands. We would like a meaningful opportunity to challenge government restrictions when “classification” prevents speech on issues of public importance.

Our next hearing in the Lynch case is scheduled for February 14, 2017. Concurrently, Twitter is using the statutory means provided in the USA Freedom Act to seek more transparency into similar NSL requests, and will provide updates as they become available.

That last paragraph makes it fairly clear (though it should have been obvious) that Twitter is still gagged on more NSLs. And that's kind of a key thing in all of these recent "releases" of NSLs. They're only released when the government lifts the gag orders on them -- and that's very troubling. There is a long history in this country of the government abusing its powers to spy on the public. If it alone gets to decide when to reveal the nature of its surveillance efforts, then the public really has no insight or understanding into just how widespread the practice might be.

And the most ridiculous thing in all of this is that it's hard to fathom any actual justification for this kind of thing. Yes, you can understand not necessarily revealing an ongoing investigation into a crime, but the gag orders go much further, barring companies from even admitting how many NSLs they receive. It's hard to see how revealing that kind of information -- in any way -- compromises law enforcement or intelligence investigations. The only thing it serves to do is to hide from the public the scale of the surveillance.

from the law-firms-basically-setting-up-franchises dept

So, this is how we're handling the War on Terror here on the homefront: lawsuit after lawsuit after lawsuit against social media platforms because terrorists also like to tweet and post stuff on Facebook.

The same law firm (New York's Berkman Law Office) that brought us last July's lawsuit against Facebook (because terrorist organization Hamas also uses Facebook) is now bringing one against Twitter because ISIS uses Twitter. (h/t Lawfare's Ben Wittes)

Behind the law firm are more families of victims of terrorist attacks -- this time those in Brussels and Paris. Once again, any criticism of this lawsuit (and others of its type) is not an attack on those who have lost loved ones to horrific acts of violence perpetrated by terrorist organizations.

The criticisms here are the same as they have been in any previous case: the lawsuits are useless and potentially dangerous. They attempt to hold social media platforms accountable for the actions of terrorists. At the heart of every sued company's defense is Section 230 of the CDA, which immunizes them against civil lawsuits predicated on the actions and words of the platform's users.

The lawsuits should be doomed to fail, but there's always a chance a judge will construe the plaintiffs' arguments in a way that circumvents this built-in protection or, worse, will issue a precedential ruling carving a hole in these protections.

The arguments here are identical to the other lawsuits: Twitter allegedly hasn't done enough to prevent terrorists from using its platform. Therefore, Twitter (somehow) provides material support to terrorists by not shutting down (one of) their means of communication (fast enough).

The filing [PDF] is long, containing a rather detailed history of the rise of the Islamic State, a full rundown of the attacks in Brussels and Paris, and numerous examples of social media posts by terrorists. It's rather light on legal arguments, but then it has to be, because the lawsuit works better when it tugs at the heartstrings, rather than addressing the legal issues head on.

The lawsuit even takes time to portray Twitter's shutdown of Dataminr's feed to US government surveillance agencies -- as well as its policy of notifying users of government/law enforcement demands for personal information -- as evidence of its negligence toward, if not outright support of, terrorist groups.

The problem with these lawsuits -- even without the Section 230 hurdle -- is that the only way for Twitter, Facebook, etc. to avoid being accused of "material support" for terrorism is to somehow predetermine what is or isn't terrorist-related before it's posted… or even before accounts are created. To do otherwise is to fail. Any content posted can immediately be reposted by supporters and detractors alike.

And that's another issue that isn't easily sorted out by platforms with hundreds of millions of users. Posts and tweets are just as often passed on by people who don't agree with the content, but the arguments made in these lawsuits expect social media platforms to determine intent... and take action almost immediately. Any post or account that stays "live" for too long becomes a liability, should courts find in favor of these plaintiffs. It's an impossible standard to meet.

These lawsuits ask courts to shoot the medium, rather than the messenger. They make about as much sense as suing cell phone manufacturers because they're not doing enough to prevent terrorists from buying their phones and using them to communicate.

from the officers:-are-you-tired-of-adhering-to-your-country's-Constitution? dept

Twitter has cut off another social media "surveillance" company from using its API. To date, the platform has forced third party Dataminr to cut off connections to the CIA and to DHS/law enforcement "fusion centers," and has cut off Geofeedia entirely. All of these denials of service were the result of the company's policy against use of its API for surveillance.

Very little of what was being done could truly be considered "surveillance," since Dataminr's access to basically every tweet produced did nothing but cull data from public accounts. What Twitter seemed to have more of a problem with was the marketing tactics of companies like Geofeedia, which insinuated their products were perfectly suited for keeping tabs on First Amendment-protected activity, like protests.

As for the CIA and DHS, Twitter apparently felt these government agencies were far more involved in surveillance than the FBI, which just signed a contract with Dataminr for access to its every-tweet-ever API.

Media Sonar touts its social media monitoring software and algorithms as ideal tools for police and corporations to aggregate and filter data to improve safety and protect corporate assets.

But a U.S.-based investigation turned up marketing language that ran afoul of Twitter's policies, which state that posts on the popular social network should not be mined for surveillance purposes.

Media Sonar's emails to past clients explicitly stated that the software, which allows officers to comb through publicly available posts on the likes of Twitter and Instagram, could help police search for "criminal activity" and "avoid the warrant process" when flagging people who have come under scrutiny.

I'm sure Media Sonar never expected the contents of these marketing emails to be made public, but that's a risk you take every time you send something out inviting law enforcement to use your product to avoid complying with Canadians' rights. Of course, most of what's viewed by law enforcement with tools like these wouldn't require a warrant to obtain.

The move by Twitter may be seen as noble, but it does very little to curb government agencies' monitoring of publicly-available posts. If Twitter users want to remain off the government's radar, it's on them to take more control of the visibility of their tweets. For most users, this isn't a concern, and while some may express dismay at law enforcement's use of their posts against them, this outcome is entirely preventable, even without Twitter's periodic announcements that it's cutting off another third party.

The problem isn't with the use of the API so much as it is the interpretation of the obtained data. While hashtags may make it easy to track protests and other activity deeply tied to social media interaction, more nebulous data may show correlations that aren't actually there. Overreliance on monitoring tools could result in a lot of false positives, as Canadian Internet Policy and Public Interest Clinic (CIPPIC) staff lawyer Tamir Israel points out.

Israel said most social media monitoring companies rely on algorithms to parse the vast amount of data and pull out meaningful information for clients. Those algorithms, he cautioned, can be misleading.

Israel said they often analyze posts out of context and are unable to account for slang, cultural norms or other factors that give a post meaning.

He cited a recent example of a British tourist who tweeted about his intention to "destroy the United States" on an upcoming trip. His post raised alarm bells with U.S. security, but the tourist had been trying to express his plans to party while abroad using common British slang.

In addition to these concerns, there are the privacy protections granted by Canadian law, which actually gives publicly-available social media posts more protection than US law affords the posts of US citizens.

[Citizen Lab's Chris Parsons] said law enforcement and federal agencies must demonstrate a need for mining online data, adding that they cannot look through material indiscriminately.

"Just because I say something on Twitter doesn't mean the RCMP can hoover it up," he said. "There has to be a reason, and they have to be able to articulate it."

This may be why the company stealthily sold its product on its warrant-dodging merits. Allowing a third party to sort and shape the data may allow Canadian law enforcement agencies to wash their hands of any "indiscriminate" hoovering/searching accusations. In any event, Media Sonar's product has suddenly become a lot less useful, and that's going to keep it from being a heavy hitter in the social media monitoring field.

The lawsuit, first reported by Fox News, was filed Monday in federal court in the eastern district of Michigan on behalf of the families of Tevin Crosby, Javier Jorge-Reyes and Juan Ramon Guerrero.

The lawsuit is the latest to target popular Internet services for making it too easy for the Islamic State to spread its message.

Like many similar lawsuits, this one is doomed to fail. First off, Section 230 immunizes these companies from being held responsible for third-party content. As this is certainly the first obstacle standing in the way of the suit's success, Altman has presented a very novel argument in hopes of avoiding it: ad placement is first party content, so immunity should be removed even when ads are attached to third-party content. From the filing [PDF]:

By specifically targeting advertisements based on viewers and content, Defendants are no longer simply passing through the content of third parties. Defendants are themselves creating content because Defendants exercise control over what advertisement to match with an ISIS posting. Furthermore, Defendants’ profits are enhanced by charging advertisers extra for targeting advertisements at viewers based upon knowledge of the viewer and the content being viewed.

[...]

Given that ad placement on videos requires Google’s specific approval of the video according to Google’s terms and conditions, any video which is associated with advertising has been approved by Google.

Because ads appear on the above video posted by ISIS, this means that Google specifically approved the video for monetization, Google earned revenue from each view of this video, and Google shared the revenue with ISIS. As a result, Google provides material support to ISIS.

That's the 230 dodge presented in this lawsuit. The same goes for Twitter and Facebook, which also place ads into users' streams -- although any sort of "attachment" is a matter of perception (directly preceding/following "terrorist" third-party content, but not placed on the content). YouTube ads are pre-roll and are part of an automated process. The lawsuit claims ISIS is profiting from ad revenue, but that remains to be seen. Collecting ad revenue involves a verification process which actual terrorists may not be willing to participate in.

Going beyond this, the accusations are even more nebulous. The filing asserts that each of the named companies could "easily" do more to prevent use of their platforms by terrorists. To back up this assertion, the plaintiffs quote two tech experts (while portraying their thoughts as being representative of "most" experts) that say shutting down terrorist communications would be easy.

Most technology experts agree that Defendants could and should be doing more to stop ISIS from using its social network. “When Twitter says, ‘We can’t do this,’ I don’t believe that,” said Hany Farid, chairman of the computer science department at Dartmouth College. Mr. Farid, who co-developed a child pornography tracking system with Microsoft, says that the same technology could be applied to terror content, so long as companies were motivated to do so. “There’s no fundamental technology or engineering limitation,” he said. “This is a business or policy decision. Unless the companies have decided that they just can’t be bothered.”

According to Rita Katz, the director of SITE Intelligence Group, “Twitter is not doing enough. With the technology Twitter has, they can immediately stop these accounts, but they have done nothing to stop the dissemination and recruitment of lone wolf terrorists.”

Neither expert explains how speech can so easily be determined to be terrorism, or how blanket filtering/account blocking wouldn't result in a sizable amount of collateral damage to innocent users. Mr. Farid, in particular, seems to believe sussing out terrorist-supporting speech should be as easy as flagging known child porn with distinct hashes. A tweet isn't a JPEG, and speech can't be as easily determined to be harmful. It's easier said than done, but the argument here is the same as the FBI's argument with respect to "solving" the encryption "problem": the smart people could figure this out. They're just not trying.
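A toy sketch (in Python, with placeholder data of my own invention) shows why the comparison is shakier than it sounds. Hash-based flagging works because a known image re-uploaded is the same item every time; speech has no such stable fingerprint, and the fingerprint says nothing about meaning anyway:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Exact-match fingerprint. Real systems like PhotoDNA use perceptual
    # hashes, but the matching model is the same: known item in, match out.
    return hashlib.sha256(data).hexdigest()

# A previously flagged file can be caught byte-for-byte on re-upload.
known_image = b"placeholder bytes standing in for a known, flagged image"
blocklist = {fingerprint(known_image)}
assert fingerprint(known_image) in blocklist

# Text has no stable fingerprint: a one-character change defeats the hash,
# and the hash captures nothing about context (reporting vs. endorsement).
tweet_a = b"I support the cause"
tweet_b = b"I support the cause!"
assert fingerprint(tweet_a) != fingerprint(tweet_b)
```

Matching known files is a solved problem; deciding whether a never-before-seen sentence is "terror content" is a completely different one, which is the gap the quoted experts skip over.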

Altman's suggestion is even worse: just prevent Twitter accounts from being created that use any part of the handle of a previously-blocked account.

When an account is taken down by a Defendant, assuredly all such names are tracked by Defendants. It would be trivial to detect names that appear to have the same name root with a numerical suffix which is incremented. By limiting the ability to simply create a new account by incrementing a numerical suffix to one which has been deleted, this will disrupt the ability of individuals and organizations from using Defendants networks as an instrument for conducting terrorist operations.

It's all so easy when you're in the business of holding US-based tech companies responsible for acts of worldwide terrorism. First, this solves nothing. If the incremental option goes away, new accounts will be created with other names. Pretty soon, a great deal of innocuous handles will be auto-flagged by the system, preventing users from creating accounts with the handle they'd prefer -- including users who've never had anything to do with terrorism. Seriously a stupid idea, especially since the Twitter handle used in the example is "DriftOne" -- a completely innocuous handle the plaintiffs would like to see treated as inherently suspicious.
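To make the collateral-damage point concrete, here's a minimal sketch (hypothetical handles, naive logic matching the filing's description) of the root-plus-numeric-suffix check the plaintiffs call "trivial":

```python
import re

def root_of(handle: str) -> str:
    # Strip a trailing numeric suffix to get the "name root" the filing describes.
    return re.sub(r"\d+$", "", handle).lower()

# Hypothetical: suppose a handle "DriftOne99" was suspended.
blocked_roots = {root_of("DriftOne99")}  # {"driftone"}

def is_flagged(new_handle: str) -> bool:
    return root_of(new_handle) in blocked_roots

# The scheme catches the trivial increment...
assert is_flagged("DriftOne100")
# ...but also flags any unrelated user whose name shares the root.
assert is_flagged("DriftOne")
```

Anyone who happens to want a handle sharing that root is locked out, while a determined bad actor just picks a different root, which is exactly the "solves nothing, harms bystanders" outcome described above.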

And thank your various gods this attorney isn't an elected official, a law enforcement officer, or a supervisor at an intelligence agency, because this assertion would be less ridiculous and more frightening if delivered by any of the above:

Sending out large numbers of requests to connect with friends/followers from a newly created account is also suspicious activity. As shown in the “DriftOne” example above, it is clear that this individual must be keeping track of those previously connected. When an account is taken down and then re-established, the individual then uses an automated method to send out requests to all those members previously connected. Thus, accounts for ISIS and others can quickly reconstitute after being deleted. Such activity is suspicious on its face.

We've seen a lot of ridiculous lawsuits fired off in the wake of tragedies, but this one appears to be the worst yet.

The lawsuit asks the court to sidestep Section 230 and order private companies to start restricting speech on their platforms. That's censorship and that's basically what the plaintiffs want -- along with fees, damages, etc. The lawsuit asks for an order finding that the named companies are violating the Anti-Terrorism Act and "grant other and further relief as justice requires."

The lawsuit's allegations are no more sound than the assertion of the Congressman quoted in support of the plaintiffs' extremely novel legal theories:

“Terrorists are using Twitter,” Rep. Poe added, and “[i]t seems like it’s a violation of the law.”

This basically sums up the lawsuit's allegations: this all "seems" wrong and the court needs to fix it. The shooting in Orlando was horrific and tragic. But this effort doesn't fix anything and asks for the government to step in and hold companies accountable for third-party postings under terrorism laws. Not only that, but it encourages the government to pressure these companies into proactive censorship based on little more than some half-baked assumptions about how the platforms work and what tech fixes they could conceivably apply with minimal collateral damage.

from the you-can-hear-Sen.-McCain's-teeth-gritting-from-here dept

Earlier this year, Twitter pulled the plug on some of Dataminr's customers, specifically the intelligence agencies it was selling its firehose access to. Twitter made it clear Dataminr's access to every public tweet wasn't to be repurposed into a government surveillance tool.

That being said, everything swept up by Dataminr was public. There was no access to direct messages or tweets sent from private accounts. And Twitter is seemingly doing nothing to prevent Dataminr from selling this same access to the FBI, an agency that's far more an intelligence agency than a law enforcement agency these days -- one that thinks it should be allowed to do everything the CIA does, if not more.

Presumably, the FBI pinned its law enforcement badge to its chest when hooking up with Dataminr because Twitter has had nothing to say about the partnership. And it's not as though Twitter is fine with just anyone selling analytic tools to law enforcement. It, along with Facebook, yanked Geofeedia's access to APIs simply because it didn't like how Geofeedia pitched its tweet-grabbing front end. In sales materials, the company strongly hinted that law enforcement agencies could use its software to stay "one step ahead" of citizens engaged in First Amendment-protected activity.

As of this week, Twitter has made sure that federally funded fusion centers can no longer use a powerful social media monitoring tool to spy on users. After the ACLU of California discovered the domestic spy centers had access to these tools, provided by Dataminr (a company partly owned by Twitter), Dataminr was forced to comply with Twitter’s clear rule prohibiting use of data for surveillance.

Twitter sent a letter to the ACLU of California this week confirming that Dataminr has terminated access for all fusion center accounts. The letter also makes clear that Dataminr will no longer provide social media surveillance tools to any local, state, or federal government customer.

Once again, the DHS and its local partners are still free to eyeball as many public tweets as they like, but without the robust front-end that hauls in hundreds of millions of tweets every day and sorts them into easily-surveillable categories. This is probably just as well, considering the DHS's "fusion centers" are underperforming boondoggles tasked mainly with fielding ridiculous complaints from Americans who actually believe "see something, say something™" helps the nation fight terrorism, rather than simply puts more government boots on the Bill of Rights' neck.

Twitter's statement says it will continue to "work with" Dataminr to further limit its pool of government customers. Dataminr, on the other hand, says there's really nothing to worry about. It may be directly attached to the Twitter firehose, but its customers aren't.

Dataminr’s product does not provide any government customers with their own direct firehose access or features to export data; the ability to search raw historical Tweet archives or to target or profile users; conduct geospatial analysis; or any form of surveillance.

Well, sure. Not now. From the third-hand discussions of conversations between Twitter and Dataminr, it appears the company will only be able to offer a highly-filtered version of its firehose to government end users. If this results in less lucrative contracts, so be it. After all, Dataminr did itself no favors by marketing its software to law enforcement with the same sort of pitches that ended Geofeedia's relationship with the social media company.

Through a public records request, the ACLU of California discovered that the Los Angeles area fusion center, JRIC, was using Dataminr and had access to the company’s powerful Geospatial Analysis Application that enables keyword searches and location-based tracking. Settings in the Geospatial App even allowed the government to focus on monitoring journalists and organizations. Using Dataminr, fusion centers like JRIC could search billions of real-time and historical public tweets and then potentially share information with the federal government.

None of this shows much in the way of consistency or integrity on Twitter's end. If Geofeedia's marketing materials bothered Twitter enough to completely yank its access, the sales pitches by Dataminr should have been equally concerning. But Dataminr is still hooked up to Twitter's hose and Geofeedia has been left to wander off somewhere into the software wilderness and die. Both marketed access to law enforcement using surveillance of First Amendment-protected activity, but only one is still allowed to do so.

from the wait,-what? dept

Last month, the UK moved forward with the latest version of its ridiculous "Digital Economy Bill" which will put in place mandatory porn filtering at the ISP level -- requiring service providers to block access to sites that don't do an age verification check. But it was at least somewhat vague as to which "ISPs" this covered. The bill has moved from the House of Commons over to the House of Lords, and apparently we now have at least something of an answer -- and it's that social media sites like Twitter and Facebook will be covered by this regulation.

In other words, those sites may be required to block accounts and block access to certain porn sites. That's ridiculous. This came out during the reading of the bill in the House of Lords where a question was raised about the responsibility of platforms under the bill:

Finally, I have a question for the Minister. I would like him to comment on what the expectations are for social media sites like Twitter, which can themselves host user-generated pornographic content. The expectations on commercial pornography websites are set out pretty clearly in Clause 15, but will the Minister please clarify how the Bill as drafted will impact on social media sites? Clause 22 starts to cover this with its reference to “ancillary service providers”, but in Clause 22(6) the reference is restricted to business activities so provided. Evidence from the Government to the Communications Select Committee on 29 October was as follows:

“Twitter is a user-generated uploading-content site. If there is pornography on Twitter, it will be considered covered under ancillary services”.

How does that apply to material on Twitter that is not uploaded in the course of business activities? I ask the Minister to clarify this point when he responds.

Later, Baroness Benjamin claims that it's important that they make sure that social media is included in the bill "for the children" (of course):

In seeking to protect children from stumbling upon pornography, it is particularly important that social media is covered by the Bill. That is one of the primary ways in which children are exposed to pornography. There has been some debate about the scope of Clause 15 and the ancillary service providers, but it seems clear to me that social media should be covered by this. I was particularly delighted that the noble Baroness, Lady Shields, confirmed to the Lords Communications Committee on 29 November that:

“The Bill covers ancillary services. There was a question about Twitter. Twitter is a user-generated uploading-content site. If there is pornography on Twitter, it will be considered covered under ancillary services”.

Can the Minister confirm that this will be the case and also the case for all other social media, including, Facebook, Tumblr and Instagram?

The debate over regulating Twitter got pretty silly pretty fast. At least one person noted that the UK was at risk of looking like idiots. This is from "The Earl of Erroll" (gotta love the House of Lords), who then admits he doesn't even know what's possible, but is absolutely positive that age checks on any e-commerce site are no big deal.

It is probably unrealistic to block the whole of Twitter—it would make us look like idiots. On the other hand, there are other things we can do. This brings me to the point that other noble Lords made about ancillary service complaints. If we start to make the payment service providers comply and help, they will make it less easy for those sites to make money. They will not be able to do certain things. I do not know what enforcement is possible. All these sites have to sign up to terms and conditions. Big retail websites such as Amazon sell films that would certainly come under this category. They should put an age check in front of the webpage. It is not difficult to do; they could easily comply.

Finally, Lord Ashton of Hyde admits that, yes, of course the bill will apply to social media and all those other sites, because why the fuck not?

The right reverend Prelate, the noble Baronesses, Lady Kidron and Lady Benjamin, and the noble Earl, Lord Erroll, asked a valid question about social media and Twitter. The Government believe that services, including Twitter, can be classified by regulators as ancillary service providers where they are enabling or facilitating the making available of pornographic or prohibited material. This means that they could be notified of commercial pornographers to whom they provide a service but this will not apply to material provided on a non-commercial basis.

In that same answer, he pulls the infamous "free speech is important, but..." line, which is exactly what you'd expect from someone about to censor speech:

It is a complicated area. Free speech is vital but we must protect children from harm online as well as offline. We must do more to ensure that children cannot easily access sexual content which will distress them or harm their development, as has been mentioned.

And thus you go from a system officially designed to make it hard to reach porn on the internet "for the children" to a bill that allows the UK government to force social media companies to block or kill certain accounts. That seems like a pretty big deal.