from the this-won't-end-well dept

Yesterday I wrote that people rushing to blame Facebook for the election results were being ridiculous, and it generated a fair bit of discussion (much of it on Twitter). And this was before NYMag's Max Read went out and wrote an article literally titled Donald Trump Won Because of Facebook. Here's the crux of Max's argument, which is similar to the argument many others have been making:

The most obvious way in which Facebook enabled a Trump victory has been its inability (or refusal) to address the problem of hoax or fake news. Fake news is not a problem unique to Facebook, but Facebook’s enormous audience, and the mechanisms of distribution on which the site relies — i.e., the emotionally charged activity of sharing, and the show-me-more-like-this feedback loop of the news feed algorithm — makes it the only site to support a genuinely lucrative market in which shady publishers arbitrage traffic by enticing people off of Facebook and onto ad-festooned websites, using stories that are alternately made up, incorrect, exaggerated beyond all relationship to truth, or all three. (To really hammer home the cyberdystopia aspect of this: A significant number of the sites are run by Macedonian teenagers looking to make some scratch.)

All throughout the election, these fake stories, sometimes papered over with flimsy “parody site” disclosures somewhere in small type, circulated throughout Facebook: The Pope endorses Trump. Hillary Clinton bought $137 million in illegal arms. The Clintons bought a $200 million house in the Maldives. The valiant efforts of Snopes and other debunking organizations were insufficient; Facebook’s labyrinthine sharing and privacy settings mean that fact-checks get lost in the shuffle. Often, no one would even need to click on the story for the headline to become a widely distributed talking point, repeated elsewhere online, or, sometimes, in real life.

Meanwhile Bloomberg had a big piece, saying that Facebook (and Twitter) employees are "grappling with their role" in helping to elect Trump.

Online (on Facebook, of course), current and former employees debated the company's role as an influencer. Bobby Goodlatte, a Facebook product designer from 2008 to 2012, according to his LinkedIn, today said the company's news feed was responsible for fueling “highly partisan, fact-light media outlets” that propelled Donald Trump's ascension to the presidency. “News feed optimizes for engagement,” Goodlatte wrote. “As we’ve learned in this election, bullshit is highly engaging.”

These stories sound convenient. And my Twitter feed is chock full of people -- often people in the media who are already "angry" about Facebook "stealing" their ad revenue -- making similar noises about how Facebook needs to "fix" this.

And these stories tell a neat, pre-packaged tale with a ready-made "thing" to blame. And they're all bullshit. Yes, Facebook had lots of people passing around fake or misleading news stories. And, yes, lots of people live in bubbles where they only see/read/hear stuff that they are prone to agree with. But this narrative that it was Facebook's "primed for engagement, not truth" algorithm that got people to go out and vote for Trump is both simplistic and dangerous. Let's take each problem separately.

Too Simplistic:

Blaming the Facebook algorithm for sharing fake news is too simplistic in that it gives the algorithm too much power and takes the responsibility away from human beings as living, thinking creatures. We love to blame the tools. It's practically a national pastime, searching for the moral panic du jour to blame for people doing things that some other people don't like or find problematic. It's always much easier to blame the tools than the people using them.

Even worse is that it assumes millions of people are pure idiots. And, I know, among many people this may be a popular opinion right now: that if they supported "the other side" they must be complete idiots. But that's wrong. There are idiot supporters of every candidate in this election -- and we can all highlight our favorites who somehow got onto the news. But lots and lots and lots of people who voted for Trump weren't doing so because some Facebook algorithm "tricked" them, but because they legitimately believed that the status quo wasn't working and was problematic, and an awful lot of "the establishment" wanted them to shut up about what wasn't working. You can argue that they were misled about what was and wasn't working, but again, that goes back to the idea that tens of millions of people are so stupid that they change their minds based on fake stories on Facebook.

Too Dangerous:

I write an awful lot about Section 230 of the CDA and the idea of "intermediary liability" protections and I know that some people's eyes glaze over at those terms. But there's a fundamental underlying principle behind those things and it's this: if you blame a platform for the actions of its users, you end up with massive censorship and dangerous limits on free speech and innovation.

The people calling for Facebook to "fix" this problem don't see where this leads, but it's not good. In various conversations I've had in response to yesterday's article, I keep drilling down and trying to see what people think the "solution" to this "problem" is, and it inevitably comes back to something along the lines of "well, Facebook needs to stop the fake news from spreading." If only it could. Fake news, rumors, conspiracy theories, echo chambers and "bubbles" predate Facebook by a long shot. While the musical Hamilton is reminding people that some of our founding fathers were known to fight hard against each other, not everyone is aware of the spreading of rumors and lies between Thomas Jefferson and John Adams as they campaigned for the presidency in 1800:

Jefferson secretly hired the famed pamphleteer James Callendar, who had previously seriously damaged the reputation of Adams' fellow Federalist Alexander Hamilton, to paint Adams and the Federalist party as a friend to British royalty and Adams as being bent on starting a war with France in order to further an alliance with King George. More to the point, Callender described Adams as a "hideous hermaphroditical character which has neither the force and firmness of a man, nor the gentleness and sensibility of a woman."

Adams' Federalist surrogates also brought out the proverbial long knives. A Federalist publication described Jefferson as "a mean-spirited, low-lived fellow, the son of a half-breed Indian squaw, sired by a Virginia mulatto father." Allegations were made that he cheated his British creditors, was a supporter of French radicalism and assassinations of the aristocracy, and that he made a habit out of sleeping with his female slaves.

Or read about the history of the 1828 election between Andrew Jackson and John Quincy Adams, and you might notice more than a few parallels to today -- including the spreading of fake stories about each candidate by surrogates. Here's just a snippet:

One Adams newspaper even wrote, "General Jackson's mother was a common prostitute, brought to this country by the British soldiers! She afterward married a mulatto man, with whom she had several children, of which number General Jackson is one!"

In 1876, opponents of Rutherford B. Hayes spread the rumor that he had shot his own mother. In 1928, supporters of Herbert Hoover started spreading rumors that (the Catholic) Al Smith was connecting the newly built Holland Tunnel in NY all the way to the Vatican so that the Pope would weigh in on all Presidential matters. In 1952, Dwight Eisenhower supporters distributed pamphlets claiming that his opponent, Adlai Stevenson, had once killed a young girl "in a jealous rage."

Point being: fake news has spread in basically every US presidential election in history. It didn't take Facebook's algorithms to create it, and it won't go away if Facebook's algorithms change.

In fact, changing them is likely to make things even worse. Remember the mostly made up "controversy" about Facebook suppressing conservative news? Remember the outrage it provoked (or have you already forgotten?). Just imagine what would happen if Facebook now decided that it was only going to let people share "true" news. Whoever gets to decide that kind of thing has tremendous power -- and there will immediately be claims of bias and of hiding "important" stories, even if those claims are bullshit. It will lead many of the people who are already angry about things to argue that their views are being suppressed and hidden and that they are being "censored." That's not a good recipe. And it's an especially terrible recipe if people really want to understand why so many people are so angry at the status quo.

Telling them that the news needs to be censored to "protect" them isn't going to magically turn Trump supporters into Hillary supporters. It will just convince them that they're even more persecuted.

Other than "censoring" certain content, the only other suggestion I've seriously heard is that Facebook should force-feed its users opposing views. Like that's actually going to change anyone's mind, rather than just get them pissed off again. And, once again, this seems like people failing to take responsibility for their own actions. If you don't have any friends who supported Trump, don't blame that on Facebook.

There are legitimate questions about whether you can better inform a populace. But censorship and force-feeding information are paternalistic nonsense that totally misunderstands the issue and misdiagnoses the problem. As Clay Shirky noted earlier this year, too many Hillary supporters thought that "bringing fact checkers to a culture war" would win out, when that's never going to happen. Fighting Facebook's algorithm is more of the same nonsense. It's based on the faulty belief that those who voted for "the other" are simply too dumb to understand the truth, and if they just got more truth, they'd buy it. It's not understanding why they voted the way they did. It's looking for easy scapegoats.

Facebook's algorithm is an easy target, but it's even less likely to solve a culture war than fact checkers.

from the there's-real-anger-at-the-status-quo dept

Yeah, okay, I know there are a million and one "hot takes" going on across the media about what happened yesterday and "what went wrong." I already wrote about what the election means for tech policy and civil liberties, but the trite setup of the blame game is getting really stupid, really fast. I had already started writing up a response to this silly Vox article about how "Facebook is harming our democracy" before the election (the story came out over the weekend), but now that I'm seeing more and more people (especially in the media) blaming Facebook and "algorithms" for the results of the election, I'm turning it into this post: if you're blaming Facebook for the results of this election, you're an idiot.

Facebook's algorithm and whatever "echo chamber" or "filter bubble" it may have created did not lead to this result. This was the result of a very large group of people who are quite clearly -- and reasonably -- pissed off at the status quo. Politics has been a really corrupt game basically forever, and for the past few decades, lots of people have been trying to pretend it wasn't as corrupt as it really is. The fact that Trump is likely to be at least as corrupt as -- if not more corrupt than -- those who came before him didn't matter. People were upset and voted against a candidate who, to them, basically defined the status quo and the problems with the system. This was a "throw the bums out" vote, and many of the bums deserved to be thrown out. That they voted in someone likely to be worse (especially given who he's surrounded himself with so far) wasn't the point. Just as with Brexit, this was a vote of "what we have now ain't working, let's try something different."

It's no surprise many people argued that Clinton was the wrong candidate to go against Trump. She absolutely was. She was the status quo candidate in a time when lots and lots of people didn't want the status quo.

But that's not Facebook's fault. And the idea that a better or different algorithm on Facebook would have made the results any different is just as ridiculous as the idea that newspaper endorsements or "fact checking" mattered one bit. People are angry because the system has failed them in many, many ways, and it's not because they're idiots who believed all the fake news Facebook pushed on them (even if some of them did believe it). Many people don't think Trump will be any good, but they voted for him anyway, because the status quo is broken.

There is a large slice of voters who told exit pollsters they thought Trump was dishonest, had a bad temperament, etc. -- but voted for him anyway.

The idea that people are just such suckers they believe whatever Facebook puts in front of them is silly. That's not how it works:

The fundamental problem here is that Facebook’s leadership is in denial about the kind of organization it has become. “We are a tech company, not a media company,” Zuckerberg has said repeatedly over the last few years. In the mind of the Facebook CEO, Facebook is just a “platform,” a neutral conduit for helping users share information with one another.

But that’s wrong. Facebook makes billions of editorial decisions every day. And often they are bad editorial decisions — steering people to sensational, one-sided, or just plain inaccurate stories. The fact that these decisions are being made by algorithms rather than human editors doesn’t make Facebook any less responsible for the harmful effect on its users and the broader society.

Yes, many people are falling for fake or bogus or sensationalized news -- and the Trump campaign expertly took a kernel of truth (that many mainstream media sources didn't want him to win) and spun it into the idea that no media story highlighting his flaws, lies or corruption (no matter how carefully and factually reported) could be believed. But people are believing those stories because they match with their real world experience of seeing how the system has worked (or not worked) for too long.

I've already expressed my concerns about what a Trump presidency will do for the issues that I spend my days focused on -- and it's not good. But as loyal readers here at Techdirt should know well, we've never been particularly supportive of the way things have been running in the government all along -- and that's through 10 years under Democratic presidencies and 8 years under GOP presidencies. The federal government has a long history of doing bad stuff: stomping on free speech and expanding surveillance (who cares about the 1st or 4th Amendments?), pushing policies that will harm innovators in favor of legacy industries (including in both the copyright and patent spaces) and generally disregarding what's best for the public. I fear that Trump will make things significantly worse, but I certainly recognize the need to change the status quo overall. And not because of Facebook's stupid algorithm.

from the sometimes? dept

Monopolies are one of the areas that even the most staunchly anti-regulation folks often agree there is a role for government intervention. In the world of tech, multiple big antitrust fights have broken out and continue to rage in both America and the EU — but how effective is this kind of regulation and how often should it really happen? This week, we discuss whether or not there is a role for antitrust in the world of technological innovation.

from the where-are-the-adults dept

Earlier this week, Bloomberg had a fairly revealing article about the internal digital efforts of the Donald Trump campaign, in which Bloomberg reporters embedded for a few days. The whole article is quite interesting, but one of the most stunning parts, frankly, was the Trump campaign staffers directly admitting how they are actively trying to suppress voting by African Americans. It's no secret that a variety of new voter ID laws are designed to suppress voting -- especially among minorities. When North Carolina's voter ID law was struck down by the court, the judge pointed out how the legislators who had backed it explicitly targeted rules that would suppress votes among African Americans. They had requested "racial data" concerning voter ID and then specifically targeted the types of ID more commonly used by African Americans.

In her remarkable opinion, Judge Motz strongly suggests that North Carolina’s law was indeed racist. The day following the release of Shelby County, she noted, a GOP leader in the state legislature announced his intention to write a law that the feds would have no authority to vet before it went into effect. Like laws in other Republican states, the North Carolina bill imposed a tough new photo-ID requirement. But it did much more: the law eliminated same-day voter registration and pre-registration for high-school students about to turn 18, curtailed early voting by one week and banned out-of-precinct voting.

Each of these new rules disproportionately impacted black voters seeking to exercise the franchise, as legislators in North Carolina were well aware. “[P]rior to enactment” of the law, the Fourth Circuit explained, “the legislature requested and received racial data as to usage of the practices changed by the proposed law.” Released from the obligation to clear their law with the Justice department and “with race data in hand, the legislature amended the bill to exclude many of the alternative photo IDs used by African Americans.” Photo IDs used more often by black voters, including public assistance IDs, were removed from the list of acceptable identification, while IDs issued by the Department of Motor Vehicles—which blacks are less likely to have—were retained. Cutting the first week of early voting came in reaction to data showing that the first seven days were used by large numbers of black voters, nixing one Sunday on which churches would bus “souls-to-the-polls”. Banning same-day registration, too, had an outsize effect on blacks, as did the prohibition on out-of-precinct voting: both changes made voting harder for people who had recently moved, and blacks are more itinerant than whites.

That, alone, was pretty stunning, but the law's backers still tried to pretend in public that it wasn't about suppressing the vote. However, with a Bloomberg reporter embedded, the Trump campaign flat out brags about its efforts to suppress the vote among African Americans. And it's using extreme targeting on Facebook to do so:

Instead of expanding the electorate, Bannon and his team are trying to shrink it. “We have three major voter suppression operations under way,” says a senior official. They’re aimed at three groups Clinton needs to win overwhelmingly: idealistic white liberals, young women, and African Americans. Trump’s invocation at the debate of Clinton’s WikiLeaks e-mails and support for the Trans-Pacific Partnership was designed to turn off Sanders supporters. The parade of women who say they were sexually assaulted by Bill Clinton and harassed or threatened by Hillary is meant to undermine her appeal to young women. And her 1996 suggestion that some African American males are “super predators” is the basis of a below-the-radar effort to discourage infrequent black voters from showing up at the polls—particularly in Florida.

On Oct. 24, Trump’s team began placing spots on select African American radio stations. In San Antonio, a young staffer showed off a South Park-style animation he’d created of Clinton delivering the “super predator” line (using audio from her original 1996 sound bite), as cartoon text popped up around her: “Hillary Thinks African Americans are Super Predators.” The animation will be delivered to certain African American voters through Facebook “dark posts”—nonpublic posts whose viewership the campaign controls so that, as Parscale puts it, “only the people we want to see it, see it.” The aim is to depress Clinton’s vote total. “We know because we’ve modeled this,” says the official. “It will dramatically affect her ability to turn these people out.”

Now that's... interesting (and ridiculous, but we'll leave that aside for the moment). Of course, every election cycle involves a ton of targeted "negative advertising" that is designed to suppress overall interest in a candidate. But two things are newsworthy here: (1) the fact that the Trump campaign is directly admitting to the intention behind the strategy, rather than hiding it, and (2) the ability to use Facebook to target these kinds of campaigns at a level previously not available.

Facebook, somewhat famously, allows extraordinarily targeted advertising. We've played around with it ourselves, and it's really quite incredible how granular you can go in trying to target your ads. Basically any trait or interest or demographic group that you can think of, you can put into an ad target group. At times, as you dig through the options, it almost feels like it's just Facebook showing off just how much data and insight it has into its users. It's a data nerd's dream, where you can slice and dice billions of people by basically anything.
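To make that granularity concrete, here's a minimal sketch of how attribute-based audience slicing works in principle. The user records, field names, and matching function here are invented for illustration -- this is not Facebook's actual data model or ad-targeting API, just the general idea of combining arbitrary traits into an audience filter:

```python
# Hypothetical illustration of attribute-based audience slicing.
# All records and field names are made up for this example; they do
# not reflect Facebook's actual targeting system.

users = [
    {"id": 1, "age": 34, "state": "FL", "interests": {"fishing", "politics"}},
    {"id": 2, "age": 22, "state": "OH", "interests": {"music"}},
    {"id": 3, "age": 45, "state": "FL", "interests": {"politics", "gardening"}},
]

def match_audience(users, min_age=None, max_age=None, state=None, interest=None):
    """Return users matching every supplied criterion (AND semantics)."""
    result = []
    for u in users:
        if min_age is not None and u["age"] < min_age:
            continue
        if max_age is not None and u["age"] > max_age:
            continue
        if state is not None and u["state"] != state:
            continue
        if interest is not None and interest not in u["interests"]:
            continue
        result.append(u)
    return result

# Slice: Florida users over 30 interested in politics.
audience = match_audience(users, min_age=30, state="FL", interest="politics")
print([u["id"] for u in audience])  # → [1, 3]
```

The point is that every additional attribute multiplies the number of possible slices -- which is exactly what makes the targeting both valuable to advertisers and usable for narrowly aimed "dark posts."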

Of course, it's somewhat ironic that the Trump campaign is using Facebook to suppress the vote, at the same time that Facebook is patting itself on the back for helping to get out the vote with its voter registration campaign, and, in the past has directly experimented with changing newsfeeds to encourage more voter turnout. Platforms like Facebook can be used for both good and evil.

Either way, sometimes the data nerds (and the advertising folks) have to be reminded of the law. ProPublica has a pretty damning report out today about the fact that Facebook's slicing and dicing of targeted advertising also means that you can exclude people by race. The report doesn't discuss the recent revelations about the Trump campaign's targeting, but it's pretty clear that this is how the campaign ran the suppression effort described above. It also presents a potentially serious legal problem in areas where it is illegal to discriminate based on race, such as hiring or housing. And yet, Facebook's current setup allows advertisers to do just that.

The ProPublica article quotes a civil rights lawyer who is reasonably horrified by this. But there are some big legal questions. From the data geek side of things, you can easily see how Facebook reached this point, continually slicing up data in more and more ways, without necessarily considering the consequences. But does that make Facebook legally liable for, say, violating the Fair Housing Act? That's... a much tougher question.

Facebook argues (1) that its policies say advertisers cannot discriminate in illegal ways, and anyone caught doing so will face punishment. (2) Facebook is likely protected by Section 230 of the CDA on this. I say "likely" instead of "definitely" because one of the few cases that cut through the CDA 230 protections is the famous Roommates.com case, which was explicitly about racial discrimination in housing, based on Roommates.com ads that violated the Fair Housing Act. However, Facebook has a much stronger argument than Roommates did, because part of the issue in that case was that Roommates directly asked users for a racial preference, making it content the site had designed, rather than content the user created. Facebook can (reasonably) argue that it was just offering up millions of ways to slice and dice the data, rather than explicitly calling out racial preference. (3) Facebook says the rules are not based on "race" but "racial affinity." This is a dumb argument and Facebook should not make it ever again, and possibly should apologize for even bringing up such a lame argument in the first place.

Separately, Facebook argues -- correctly -- that there are lots of cases where advertisers have perfectly legitimate reasons for targeting based on race.

Satterfield said it’s important for advertisers to have the ability to both include and exclude groups as they test how their marketing performs. For instance, he said, an advertiser “might run one campaign in English that excludes the Hispanic affinity group to see how well the campaign performs against running that ad campaign in Spanish. This is a common practice in the industry.”

That said, there's simply no reason that Facebook couldn't put in a system to recognize ads in a protected category where discrimination may be an issue, and either block such usage or at least display a strong warning to the user (and alert the Facebook team to review the ad more carefully -- since all ads are reviewed before going live). It's not clear that there's a legal mandate to do so, but it just seems like good practice in general. I've seen lots of people commenting on this story who are rightfully horrified about the potential abuse of such a tool, and they're quick to blame Facebook's "negligence." It does seem more like carelessness than negligence, in that you can see how the company got here, as it continued to allow more and more targeting attributes, which advertisers really appreciate.
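A minimal sketch of what such a safeguard might look like -- the category list, attribute names, and the rule itself are assumptions for illustration, not anything Facebook has actually implemented:

```python
# Hypothetical pre-review check: flag ads in legally protected categories
# (e.g. housing, employment, credit) whose targeting also excludes
# audiences by race-linked attributes. All names here are invented
# for illustration; this is not Facebook's actual ad-review system.

PROTECTED_CATEGORIES = {"housing", "employment", "credit"}
SENSITIVE_EXCLUSIONS = {"race", "ethnic_affinity"}

def needs_review(ad):
    """Return True if the ad should be blocked or escalated to a human reviewer."""
    in_protected = ad.get("category") in PROTECTED_CATEGORIES
    excludes_sensitive = bool(
        SENSITIVE_EXCLUSIONS & set(ad.get("excluded_attributes", []))
    )
    return in_protected and excludes_sensitive

# A housing ad that excludes an ethnic-affinity group gets flagged...
print(needs_review({"category": "housing",
                    "excluded_attributes": ["ethnic_affinity"]}))  # → True
# ...while the same exclusion on an ordinary retail ad does not.
print(needs_review({"category": "retail",
                    "excluded_attributes": ["ethnic_affinity"]}))  # → False
```

Even something this crude would catch the obvious cases at ad-submission time and route the gray areas to human review, rather than letting them through silently.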

from the bunch-of-boobs dept

Stories of Facebook's attempts at puritanical patrols of its site are legion at this point. The site has demonstrated that its prude-patrol censorship cannot distinguish parody, artwork, simple speech in the form of outrage, iconic historical photos, or sculpture from actual offensive content. As a private company, Facebook is of course allowed to follow its own whims when it comes to what is allowed on its site, but as an important tool for communication and speech in this era, the company is also a legitimate target for derision when it FUBARs things as badly and as often as it does.

So cue up the face-palming once more, as Facebook has decided to remove a video posted by a Swedish cancer charity informing women how to check for breast cancer, because the video included animated breasts, and breasts are icky icky.

Facebook has removed a video on breast cancer awareness posted in Sweden after deeming the images offensive, the Swedish Cancer Society said on Thursday. The video, displaying animated figures of women with circle-shaped breasts, was aimed at explaining to women how to check for suspicious lumps. Sweden’s Cancerfonden said it had tried in vain to contact Facebook, and had decided to appeal against the decision to remove the video.

Based on images on Cancerfonden's site, the offending breasts in question were of the stick-figure variety. Hardly tantalizing imagery; the video was instead meant to educate women on the proper method for detecting lumps that could be cancerous. Save for perhaps some minor percentage of humankind, these are not the types of images that conjure a sexual connotation. And yet Facebook took them down.

In a statement to the BBC, a spokeswoman for Facebook said the images of the Swedish campaign had now been approved.

"We're very sorry, our team processes millions of advertising images each week, and in some instances we incorrectly prohibit ads," she said. "This image does not violate our ad policies. We apologise for the error and have let the advertiser know we are approving their ads."

Which, you know, fine, but exactly how many of these types of stories must be endured before Facebook acknowledges that there is a problem with its filtering and censorship process? I don't think eliminating oversight entirely is the answer, but I would hope we could agree that if the takedown filters keep catching bronze statues and breast cancer videos in their nets, perhaps some recalibration is needed.

from the when-in-doubt,-press-repeat dept

Pam Geller has decided there's nothing like grabbing more shovels when you're already in a hole. [And that means it's time for notable "leftist publication" Techdirt to crank out another "little hit piece" filled with "hyperbole and nonsense," apparently...]

Geller doesn't like the way she's been treated by Facebook, YouTube, and Twitter and has decided the problem is Section 230 of the CDA. So, she's suing the DOJ for "enforcing" the immunity the government has granted to websites to shield them from being held responsible for user-generated content.

The DOJ responded to her lawsuit by pointing out that the DOJ doesn't ENFORCE anything. It's a defense service providers can raise when entities come after them for content posted by their users. In Geller's mind, Section 230 gives service providers the "right" to arbitrarily remove content. She's wrong, of course. It does no such thing. Instead, Section 230 prevents service providers from being held civilly liable for making "good faith" efforts to remove objectionable content. The rest of what Geller's complaining about can be traced back to each provider's terms of service and their individual interpretations of what those mean in relation to Geller's often-inflammatory content.

Geller continues to insist this is about suing Facebook, even though Facebook isn't a named party. And her response to the DOJ's motion to dismiss strongly suggests she feels she can't directly sue any service provider for taking down her content because of Section 230. This is also incorrect. She may have almost no chance of winning the suit, but nothing in Section 230 prevents service providers from being sued for allegedly discriminatory behavior. From Geller's opposition motion [PDF] (h/t Adam Steinbaugh):

By way of § 230, the Government is empowering this type of discrimination and censorship. By its own terms, § 230 permits Facebook, Twitter, and YouTube “to restrict access to or availability of material that [they] consider[] to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”

This is where Geller misreads "permits" as "orders." Section 230 does not place any content-based restrictions on speech. Instead, it immunizes service providers from civil liability for good faith content removal. Geller calls this immunization "government-sanctioned discrimination and censorship of speech" -- somehow finding a defense mechanism to be an avenue of attack. (She repeats her laughable assertion that Section 230 is a "heckler's veto" multiple times in the filing.)

From there, Geller theorizes that Section 230 would prevent Facebook, et al from being sued for violating California's anti-discrimination statutes. This theory is incorrect as well.

Nothing in this section shall be construed to prevent any State from enforcing any State law that is consistent with this section. No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.

This law immunizes Facebook from being held liable for, say, Pam Geller's controversial content -- even if a state law says otherwise. What it doesn't do is immunize Facebook from liability for violating California discrimination laws, which is where Geller has a somewhat more cognizable claim. Unfortunately for her, she's chosen to name the wrong defendants and file in the wrong jurisdiction. Continuing to misconstrue a defense as an attack, Geller insists that she has standing to sue the federal government for content removal performed by a private company.

The very reason why Facebook, Twitter, and YouTube are able to engage in their discriminatory practices with impunity is § 230. See Klayman v. Zuckerberg, 753 F.3d 1354 (D.C. Cir. 2014) (concluding that § 230 foreclosed tort liability predicated on Facebook’s decision to allow or to remove content). In other words, the Government has sanctioned these discriminatory practices by placing them above the law. Consequently, the traceability element is satisfied.

If there's anything "traceable" here, it's the California location of the entities she mentions in her lawsuit (YouTube, Facebook, Twitter) but has not named as defendants. California law is the angle she should be using to attack these companies for their allegedly "discriminatory" removal of her postings, but she has filed in federal court and named the DOJ as the defendant.

Geller notes that California law prohibits the sort of discriminatory behavior she's alleging:

Section 51 of the California Civil Code provides, in relevant part: "All persons within the jurisdiction of this state are free and equal, and no matter what their sex, race, color, religion, ancestry, national origin, disability, medical condition, genetic information, marital status, or sexual orientation are entitled to the full and equal accommodations, advantages, facilities, privileges, or services in all business establishments of every kind whatsoever."

If her allegations are true and these service providers are discriminating against her, Section 230 would not immunize them against these claims. But even if she were to raise claims solely under this law, she would likely not succeed.

The law only requires that companies provide "access." It does not demand they allow anyone to do whatever they want once they're granted access. Under this law, Facebook can't deny Geller an account simply because it doesn't like her religious views, but it is under no obligation to let her post whatever she wants. The DOJ, in its motion to dismiss, addressed this point as well (even though it was under no obligation to make California's arguments for it).

Nor is it clear how California law can require a private social media company to publish Plaintiffs’ speech, see Compl. ¶¶ 46-61, or how such a state-law requirement would be consistent with the First Amendment, which arguably protects a social media company’s editorial control or judgment from government regulation that would require publication of a certain message.

If Geller were able to prove she was denied access based on her religious beliefs (and a temporary ban doesn't cut it, legally speaking), Section 230 would not stand in the way of the civil suit Geller doesn't appear to actually want to file. All Section 230 immunizes against is holding Facebook civilly liable for content users like Pam Geller have posted. And Geller's main complaint is that Facebook keeps taking her posts down, rather than allowing them to stay up.

At best, Geller's extremely misguided lawsuit may eventually boil down to litigation directly implicating California's anti-discrimination law and how it is actually applied to service providers located in California but serving users all over the world. It may also result in a somewhat indirect challenge to that law's constitutionality. But what it won't do is make the federal government responsible for Facebook's actions. And Geller, whose popularity and following largely rely on inflammatory speech, is only shooting herself in the foot by attacking Section 230. If this immunization were not provided to social media platforms, it's highly unlikely she'd have anything more than a self-hosted personal blog for a soapbox.

The final irony is that Geller is no doubt opposed to anti-discrimination laws like California's that force private businesses to cater to customers they'd rather not serve -- perhaps even in opposition to their own religious beliefs. (See also: same-sex marriage/wedding cakes.) But she wants the government to step in, act as arbiter of private companies' terms of service, and prevent the sort of discrimination she claims is taking place.

Geofeedia itself didn't do anything illegal. It simply provided a one-stop shop for social media monitoring of public posts. It's the way it was pitched that was a problem. Rather than sell it as a way to keep law enforcement informed of criminal activity, its sales team highlighted its usefulness in monitoring protestors and other First Amendment activity.

The documents the ACLU obtained show the company paid these three social media services for "firehose" attachments -- beefed-up API calls that allowed Geofeedia to access more public posts faster than law enforcement could do on its own.

Instagram had provided Geofeedia access to the Instagram API, a stream of public Instagram user posts. This data feed included any location data associated with the posts by users. Instagram terminated this access on September 19, 2016.

Facebook had provided Geofeedia with access to a data feed called the Topic Feed API, which is supposed to be a tool for media companies and brand purposes, and which allowed Geofeedia to obtain a ranked feed of public posts from Facebook that mention a specific topic, including hashtags, events, or specific places. Facebook terminated this access on September 19, 2016.

Twitter did not provide access to its “Firehose,” but has an agreement, via a subsidiary, to provide Geofeedia with searchable access to its database of public tweets.
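The practical difference between ordinary public access and a paid firehose-style feed can be sketched roughly like this. Everything here is invented for illustration -- the function names, page size, and data are hypothetical, not the real Instagram, Facebook, or Twitter APIs:

```python
# Hypothetical sketch: rate-limited polling vs. a bulk "firehose" feed.
# All names, limits, and data below are illustrative, not real endpoints.

# A pool of public posts, each with attached location data.
PUBLIC_POSTS = [
    {"id": i, "text": f"post {i}", "geo": (41.88, -87.63)}
    for i in range(1000)
]

def poll_public_api(cursor, page_size=25):
    """What any ordinary user can do: fetch one small, rate-limited
    page of public posts per request."""
    return PUBLIC_POSTS[cursor:cursor + page_size], cursor + page_size

def firehose():
    """What a paying customer gets: the same public posts, in bulk."""
    yield from PUBLIC_POSTS

def posts_near(feed, lat, lon, radius=0.5):
    """Geofencing becomes trivial once the whole feed is in hand."""
    return [p for p in feed
            if abs(p["geo"][0] - lat) < radius
            and abs(p["geo"][1] - lon) < radius]

page, _ = poll_public_api(0)                   # one page per round trip
bulk = posts_near(firehose(), 41.88, -87.63)   # the entire feed at once
```

The point of the sketch: nothing in the bulk feed is secret, but volume and speed turn public data into a practical monitoring tool.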

Of all of these companies, only Twitter took proactive steps to prevent API calls from being used for proxy surveillance.

In February, Twitter added additional contract terms to try to further safeguard against surveillance. But our records show that as recently as July 11th, Geofeedia was still touting its product as a tool to monitor protests. After learning of this, Twitter sent Geofeedia a cease and desist letter.

Well, it's all over now. It's not just Twitter demanding Geofeedia stop turning its service into an extension of law enforcement's worst urges. It's everyone. The ACLU dumped its documents and, shortly after, the companies dumped Geofeedia.

After reviewing the report, Facebook cut off Geofeedia’s access to commercially available data from its social platform and from Instagram, which it owns.

On Tuesday, Twitter said they were also cutting off the Chicago social media company’s access.

So much for the business model. Twitter cited its long-standing policy of preventing its service from being used as a surveillance tool -- a policy it exercised earlier this year when cutting off Dataminr's access to its APIs for selling its collected communications to US intelligence agencies.

Facebook simply stated that Geofeedia's API use was "unauthorized," something it probably should have realized well before the ACLU shamed it into cutting off the company's access.

Geofeedia, meanwhile, has stated it will meet with all "stakeholders," which apparently means Twitter, Facebook, and various government agencies. Users of these services haven't been invited to do anything more than vote with their digital feet.

For all the call-and-response, the underlying fact is that Geofeedia didn't have access to anything any individual user didn't. It may have had more of it faster and a front end that made surveillance/monitoring easier, but it wasn't gathering tweets or posts from private accounts or otherwise accessing anything not already viewable by the public.

But its sales tactics were a bit concerning. The company pretty much encouraged law enforcement agencies to engage in some very questionable monitoring.

Geofeedia claims to be interested in protecting the civil liberties of Americans while at the same time nudging law enforcement agencies toward undermining those protections. Because of that, it's now probably looking at handing out some refunds, seeing as its all-seeing APIs have been cut off and all it can really offer at this point is part-owner positions in a fast-growing pariahship.

from the we're-helping! dept

Last year the Indian government forged new net neutrality rules that shut down Facebook's "Free Basics" service, which provided a Facebook-curated "light" version of the internet -- for free. And while Facebook consistently claimed its program was simply altruistic, critics (including Facebook content partners) consistently claimed that Facebook's concept gave the company too much power, potentially harmed free speech, undermined the open nature of the Internet, and provided a new, centralized repository of user data for hackers, governments and intelligence agencies.

In short, India joined Japan, The Netherlands, Chile, Norway, and Slovenia in banning zero rating entirely, based on the idea that cap exemption gives some companies and content a leg up, and unfairly distorts the inherently level internet playing field. It doesn't really matter if you're actually altruistic or just pretending to be altruistic (to oh, say, lay a branding foundation to corner the content market in developing countries in 30 years); the practice dramatically shifts access to the internet in a potentially devastating fashion that provides preferential treatment to the biggest carriers and companies.

Fast forward a year and Facebook is now considering bringing the controversial service to the United States. The company has apparently been in talks with the White House about getting the idea rolling in the U.S., without setting off the same kind of regulatory alarm bells it faced in India:

"The effort to offer a U.S.-based version of Free Basics is moving forward in fits and starts, said the people, who spoke on the condition of anonymity because the effort has not been publicly revealed. In particular, the company wants to ensure that Free Basics will be viewed favorably by the U.S. government before it launches, thus avoiding a costly repeat of its experience in India."

Again, India reacted poorly not because Facebook was giving away "free stuff," but because Facebook was trying to install itself as the '90s AOL of the modern internet. Content partners dropped out because they didn't like Facebook dictating which websites and services get to be "zero rated." Companies like Mozilla suggested that if Facebook really wants to help the world's poor, it can start by funding access to the actual Internet. Facebook, annoyed by those who don't believe it's being purely altruistic, responded by calling such critics "extremists" who are hurting the poor.

The fight comes to US shores as the country is already facing a growing array of problems thanks to zero rating. Whereas India banned the practice, the FCC passed net neutrality rules that don't ban it outright, opening the door to companies trampling net neutrality if they're just creative enough. As a result, Comcast, Verizon and AT&T all now exempt their own streaming content from caps while still penalizing Netflix. Similarly, T-Mobile and Sprint have now started throttling video, music and games unless customers pay a steep monthly premium.

So while the FCC twiddles its thumbs at what's quickly becoming a growing problem (unless you're an ISP or a deep-pocketed content company), Facebook is looking to get in on the ground floor of a concept that professes to be "helping" while dramatically changing the way access to the internet works. Amusingly, the social media giant appears to be treading so carefully that it's refusing to strike deals with big carriers, out of an obvious fear of anti-competitive criticism:

"Facebook has not attempted to strike a deal with national wireless carriers such as T-Mobile or AT&T, said the people familiar with the matter, over concerns that regulators may perceive the move as anti-competitive. Instead, it has pursued relationships with lesser-known carriers."

Again, if you want to help low-income global citizens access the internet, doesn't it just make more sense to help fund connections to the actual internet?

from the getting-more-interesting dept

So, the big story yesterday was clearly the report that Yahoo had secretly agreed to scan all email accounts for a certain character string as sent to them by the NSA (or possibly the FBI). There has been lots of parsing of the Reuters report (and every little word can make a difference), but there are still lots of really big questions about what is actually going on. One big one, of course, is whether or not other tech companies received and/or complied with similar demands. So it seems worth noting that they've basically all issued pretty direct and strenuous denials of doing anything like what Yahoo has been accused of doing.

Twitter initially gave a "federal law prohibits us from answering your question" answer -- and a reference to Twitter's well-documented lawsuit against the US government over its desire to reveal more details about government requests for info. However, it later clarified that it too was not doing what Yahoo was doing and had never received such a request. Microsoft's response was interesting in that it said it's not doing what Yahoo is, but refused to say whether it had ever received a demand to do so. Google said it had never received such a request and would refuse to comply if it had. Facebook has also denied receiving such a request and, like Google, says it would fight against complying. This still leaves lots of unanswered questions about why Yahoo gave in. Again, historically, Yahoo had been known to fight against these kinds of requests, which makes you wonder what exactly was going on here.

Former GCHQ infosecurity guy Matt Tait has one of the more interesting threads about this news, arguing (in some ways) that it's both less and more than everyone is making it out to be. His basic argument is that this is an expansion of the PRISM program to include "about" targets. This has been discussed in the past, but under PRISM, the NSA could give tech companies "selectors" in the form of specific addresses, and the companies were compelled to hand over emails "to" or "from" them -- but according to the PCLOB's report on the Section 702 program, it did not include anyone emailing "about" the selector. Upstream collections (i.e., tapping the backbones of folks like AT&T) did include "about" selectors (and this information also flowed into other areas, enabling so-called backdoor searches). And, as I speculated yesterday, Tait says that this latest news appears to be Yahoo now agreeing to use "about" selectors on its emails, which means that it's still part of PRISM, but with a massive expansion.
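The distinction Tait is drawing can be sketched in a few lines. This is purely illustrative toy logic -- not any real NSA or provider interface, and the addresses are made up:

```python
# Illustrative only: a toy model of "to/from" vs. "about" selector matching.

def matches_to_from(message, selector):
    """PRISM as described by the PCLOB: collect mail only if the tasked
    selector is the sender or a recipient."""
    return selector in (message["to"], message["from"])

def matches_about(message, selector):
    """'About' collection: also scan the message contents, sweeping in
    mail between parties who are not themselves targets."""
    return matches_to_from(message, selector) or selector in message["body"]

msg = {
    "to": "alice@example.com",
    "from": "bob@example.com",
    "body": "Has anyone heard from target@example.net recently?",
}

matches_to_from(msg, "target@example.net")  # False: neither party is the target
matches_about(msg, "target@example.net")    # True: the selector appears in the body
```

Which is why "about" collection is such a big deal: it pulls in correspondence between two people who are not targets at all, simply because a tasked selector appeared somewhere in the text.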

Tait then notes that if James Clapper wants to clear this up, he should state publicly whether or not "about" collection is a part of PRISM. And if that's the case, he should also explain when and why PRISM was expanded to include this. But, of course, Clapper and the Intelligence Community tend not to want to explain very much of anything, leaving lots of people in the dark.

And, frankly, that's stupid. The Intelligence Community thinks that this keeps "bad guys" on edge, not knowing what's safe and what's not. But that's dumb. They mostly know to use more encrypted/secret means of communication when they need to. Instead, what you end up with is keeping the public on edge and not trusting services. I can almost guarantee that one of the early comments on this post will be some of you insisting that all the companies denying doing this are flat out lying. I don't agree with that, because the companies don't have a history of outright lying on things like this, but given the way the NSA and other parts of the US government have repeatedly tried to pressure and gag them, it's much tougher to take anything at face value anymore. And that's not good for anyone.

from the DOJ-motion-tl;dr:-what-even-is-this dept

There simply aren't enough pejoratives in the dictionary to apply to Pam Geller's lawsuit against the DOJ for its "enforcement" of Section 230. Geller doesn't appear to know what she's doing, much less who she's suing. Her blog posts portray her lawsuit against the DOJ as being against Facebook. Facebook has earned the ire of Geller by enforcing its terms of use -- rules Geller clearly disagrees with.

Somehow, Geller has managed to construe the actions of a private platform as government infringement on her First Amendment rights. The connective tissue in her litigious conspiracy theory is Section 230 -- the statute that protects service providers from being sued for the actions of their users.

Considering Geller's fondness for posting inflammatory content, you'd think the last thing she'd want to attack is Section 230. A successful dismantling of this important protection would mean Geller would be even less welcome on any social media platform.

But the burning stupidity propelling Geller's white-hot hazardous waste dump of a lawsuit knows no bounds. Somehow, actual lawyers -- working in concert with Geller -- came up with this breathtakingly wrong interpretation of Section 230.

Section 230 confers broad powers of censorship, in the form of a “heckler’s veto,” upon Facebook, Twitter, and YouTube censors, who can censor constitutionally protected speech and engage in discriminatory business practices with impunity by virtue of this power conferred by the federal government.

These are the sorts of allegations the DOJ somehow must respond to, thanks to Geller suing Facebook by suing the DOJ or whatever the hell it is that's happening here.

The DOJ has responded [PDF]. It also finds the lawsuit to be a monument of mouth-breathing stupidity but is unable to say so in those exact words. Instead, it simply points out that everything about the lawsuit is wrong -- especially the parts where Geller insists the DOJ is somehow on the hook for "forcing" service providers to avail themselves of the Section 230 "heckler's veto." (h/t Adam Steinbaugh for sending over the motion to dismiss)

Plaintiffs’ alleged injury—a private social-media company’s removal of content from a particular user’s account pursuant to that company’s private terms of service—is not an action that is fairly traceable to the United States or the federal statute Plaintiffs identify in their Complaint—Section 230 of the CDA. Instead, Plaintiffs’ allegations make clear that they are aggrieved by the decisions of private third parties, whom the United States does not control and whose actions it cannot predict. Plaintiffs’ alleged injury is also not redressable by their requested relief. Plaintiffs request that the Court declare Section 230 to be unconstitutional and to enjoin the Attorney General from enforcing this provision. But the Attorney General does not enforce Section 230 against private parties. To the contrary, this provision merely provides an immunity that a private party can invoke as a defense in a private civil lawsuit. Because the Attorney General does not enforce Section 230 against anyone, an injunction prohibiting such non-existent enforcement would be meaningless and would not redress Plaintiffs’ alleged injury.

The DOJ goes on to point out that even if Geller and her lawyers could assemble a coherent claim, they're going after the wrong party. The correct target would be Facebook -- which Geller seems to believe is the entity she's actually suing -- rather than the US government, which has nothing to do with the perceived "censorship" Geller's complaining about.

[E]ven if Plaintiffs could establish Article III standing, they fail to state a cognizable constitutional claim because they do not identify any state action that could implicate the First Amendment. It is axiomatic that the First Amendment applies only to the government’s restriction of speech, and not to a private individual or entity’s decision to permit or restrict speech. Yet Plaintiffs challenge a quintessentially private decision in this case—a social media company’s control of its platform pursuant to its terms of service. Under well-established state-action principles, Plaintiffs cannot show that Section 230 caused the constitutional deprivation they allege, or that the entities causing the injury—private social media companies—are state actors. Therefore, Plaintiffs fail to state a claim under the First Amendment and judgment should be entered in favor of the United States.

That judgment is sure to follow, presumably accompanied by the judicial version of "lolwut" and a dismissal with prejudice. It's not like an amended complaint could fix this brutally misguided lawsuit. To begin with, it needs an entirely different defendant (Facebook) and the excision of anything involving Section 230, because nothing about that protection has anything to do with the issues Geller's complaining about. If Geller doesn't like the way Facebook treats her, she's free to complain directly to the company. It likely won't do her any good, but trying to take a company to court for enforcing its terms of use isn't going to go much further than suing the DOJ over service provider immunity.