Dave Emory’s entire lifetime of work is available on a flash drive that can be obtained HERE. The new drive is a 32-gigabyte drive that is current as of the programs and articles posted by the fall of 2017. The new drive is available for a tax-deductible contribution of $65.00 or more.

WFMU-FM is podcasting For The Record–You can subscribe to the podcast HERE.

The UK’s Channel 4 sent an investigative journalist undercover to work for one of the third-party companies Facebook pays to moderate content. This investigative journalist was trained to take a hands-off approach to far right violent content and fake news because that kind of content engages users for longer and increases ad revenues. ” . . . . An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups ‘exceed deletion threshold,’ and that those pages are ‘subject to different treatment in the same category as pages belonging to governments and news organizations.’ The accusation is a damning one, undermining Facebook’s claims that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK’s Channel 4. . . . .”

The article is also a reminder that you don’t necessarily need to download a Cambridge Analytica/AIQ app for them to be tracking your information and reselling it to clients. A security researcher stumbled upon a new repository of curated Facebook data that AIQ was creating for a client, and it’s entirely possible that much of the data was scraped from public Facebook posts.

” . . . . AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called ‘AIQ Johnny Scraper’ registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts. . . .”

In addition, the story highlights a form of micro-targeting that companies like AIQ make available, one that is fundamentally different from the algorithmic micro-targeting associated with social media abuses: micro-targeting by a human who wants to look specifically at what you personally have said about various topics on social media. This is a service where someone can type your name into a search engine and AIQ’s product will serve up a list of all the various political posts you’ve made or the politically relevant “Likes” you’ve made.

“Mark Zuckerberg faces allegations that he developed a ‘malicious and fraudulent scheme’ to exploit vast amounts of private data to earn Facebook billions and force rivals out of business. A company suing Facebook in a California court claims the social network’s chief executive ‘weaponised’ the ability to access data from any user’s network of friends – the feature at the heart of the Cambridge Analytica scandal. . . . . ‘The evidence uncovered by plaintiff demonstrates that the Cambridge Analytica scandal was not the result of mere negligence on Facebook’s part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world’s transition to smartphones,’ legal documents said. . . . . Six4Three alleges up to 40,000 companies were effectively defrauded in this way by Facebook. It also alleges that senior executives including Zuckerberg personally devised and managed the scheme, individually deciding which companies would be cut off from data or allowed preferential access. . . . ‘They felt that it was better not to know. I found that utterly horrifying,’ he [former Facebook executive Sandy Parakilas] said. ‘If true, these allegations show a huge betrayal of users, partners and regulators. They would also show Facebook using its monopoly power to kill competition and putting profits over protecting its users.’ . . . .”

The above-mentioned Cambridge Analytica is officially going bankrupt, along with the elections division of its parent company, SCL Group. Apparently their bad press has driven away clients.

Is this truly the end of Cambridge Analytica?

No.

They’re rebranding under a new company, Emerdata. Intriguingly, Cambridge Analytica’s transformation into Emerdata is noteworthy because the firm’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince: ” . . . . But the company’s announcement left several questions unanswered, including who would retain the company’s intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica’s data-mining business would return under new auspices. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. . . . An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. . . . “

In the Big Data internet age, there’s one area of personal information that has yet to be incorporated into the profiles on everyone–personal banking information. ” . . . . If tech companies are in control of payment systems, they’ll know “every single thing you do,” Kapito said. It’s a different business model from traditional banking: Data is more valuable for tech firms that sell a range of different products than it is for banks that only sell financial services, he said. . . .”

Facebook is approaching a number of big banks – JP Morgan, Wells Fargo, Citigroup, and US Bancorp – requesting financial data including card transactions and checking-account balances. Facebook is joined in this by Google and Amazon, who are also trying to get this kind of data.

Facebook assures us that this information, which will be opt-in, will be used solely for offering new services on Facebook Messenger. Facebook also assures us that this information, which would obviously be invaluable for delivering ads, won’t be used for ads at all. It will ONLY be used for Facebook’s Messenger service. This is a dubious assurance, in light of Facebook’s past behavior.

” . . . . Facebook increasingly wants to be a platform where people buy and sell goods and services, besides connecting with friends. The company over the past year asked JPMorgan Chase & Co., Wells Fargo & Co., Citigroup Inc. and U.S. Bancorp to discuss potential offerings it could host for bank customers on Facebook Messenger, said people familiar with the matter. Facebook has talked about a feature that would show its users their checking-account balances, the people said. It has also pitched fraud alerts, some of the people said. . . .”

Peter Thiel’s surveillance firm Palantir was apparently deeply involved with Cambridge Analytica’s gaming of personal data harvested from Facebook in order to engineer an electoral victory for Trump. Thiel was an early investor in Facebook, at one point was its largest shareholder and is still one of its largest shareholders. ” . . . . It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times. The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook. ‘There were senior Palantir employees that were also working on the Facebook data,’ said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday. . . . The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook. . . .”

Program Highlights Include:

Facebook’s project to incorporate brain-to-computer interface into its operating system: “ . . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”

” . . . . Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”

” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”

” . . . . Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on ‘skin-hearing’ that could translate sounds into haptic feedback that people can learn to understand like braille. . . .”

” . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, ‘The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.’ . . . .”

Some telling observations by Nigel Oakes, the founder of Cambridge Analytica parent firm SCL: ” . . . . The panel has published audio records in which an executive tied to Cambridge Analytica discusses how the Trump campaign used techniques used by the Nazis to target voters. . . .”

Further exposition of Oakes’ statement: ” . . . . Adolf Hitler ‘didn’t have a problem with the Jews at all, but people didn’t like the Jews,’ he told the academic, Emma L. Briant, a senior lecturer in journalism at the University of Essex. He went on to say that Donald J. Trump had done the same thing by tapping into grievances toward immigrants and Muslims. . . . ‘What happened with Trump, you can forget all the microtargeting and microdata and whatever, and come back to some very, very simple things,’ he told Dr. Briant. ‘Trump had the balls, and I mean, really the balls, to say what people wanted to hear.’ . . .”

Observations about the possibilities of Facebook’s goal of having AI governing the editorial functions of its content: As noted in a Popular Mechanics article: ” . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout ‘we’re AWFUL lol,’ the lol might be the one part it doesn’t understand. . . .”

Microsoft’s Tay Chatbot offers a glimpse into this future: As one Twitter user noted, employing sarcasm: “Tay went from ‘humans are super cool’ to full nazi in <24 hrs and I’m not at all concerned about the future of AI.”

1. The UK’s Channel 4 sent an investigative journalist undercover to work for one of the third-party companies Facebook pays to moderate content. This investigative journalist was trained to take a hands-off approach to far right violent content and fake news because that kind of content engages users for longer and increases ad revenues. ” . . . . An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups ‘exceed deletion threshold,’ and that those pages are ‘subject to different treatment in the same category as pages belonging to governments and news organizations.’ The accusation is a damning one, undermining Facebook’s claims that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK’s Channel 4. . . . .”

An investigative journalist who went undercover as a Facebook moderator in Ireland says the company lets pages from far-right fringe groups “exceed deletion threshold,” and that those pages are “subject to different treatment in the same category as pages belonging to governments and news organizations.” The accusation is a damning one, undermining Facebook’s claims that it is actively trying to cut down on fake news, propaganda, hate speech, and other harmful content that may have significant real-world impact. The undercover journalist detailed his findings in a new documentary titled Inside Facebook: Secrets of the Social Network, that just aired on the UK’s Channel 4. The investigation outlines questionable practices on behalf of CPL Resources, a third-party content moderator firm based in Dublin that Facebook has worked with since 2010.

Those questionable practices primarily involve a hands-off approach to flagged and reported content like graphic violence, hate speech, and racist and other bigoted rhetoric from far-right groups. The undercover reporter says he was also instructed to ignore users who looked as if they were under 13 years of age, which is the minimum age requirement to sign up for Facebook in accordance with the Child Online Protection Act, a 1998 privacy law passed in the US designed to protect young children from exploitation and harmful and violent content on the internet. The documentary insinuates that Facebook takes a hands-off approach to such content, including blatantly false stories parading as truth, because it engages users for longer and drives up advertising revenue. . . .

. . . . And as the Channel 4 documentary makes clear, that threshold appears to be an ever-changing metric that has no consistency across partisan lines and from legitimate media organizations to ones that peddle in fake news, propaganda, and conspiracy theories. It’s also unclear how Facebook is able to enforce its policy with third-party moderators all around the world, especially when they may be incentivized by any number of performance metrics and personal biases. . . . .

Meanwhile, Facebook is ramping up efforts in its artificial intelligence division, with the hope that one day algorithms can solve these pressing moderation problems without any human input. Earlier today, the company said it would be accelerating its AI research efforts to include more researchers and engineers, as well as new academia partnerships and expansions of its AI research labs in eight locations around the world. . . . The long-term goal of the company’s AI division is to create “machines that have some level of common sense” and that learn “how the world works by observation, like young children do in the first few months of life.” . . . .

2. The following article is also a reminder that you don’t necessarily need to download a Cambridge Analytica/AIQ app for them to be tracking your information and reselling it to clients. A security researcher stumbled upon a new repository of curated Facebook data that AIQ was creating for a client, and it’s entirely possible that much of the data was scraped from public Facebook posts.

” . . . . AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called ‘AIQ Johnny Scraper’ registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts. . . .”

Additionally, the story highlights a form of micro-targeting that companies like AIQ make available, one that is fundamentally different from the algorithmic micro-targeting we typically associate with social media abuses: micro-targeting by a human who wants to look specifically at what you personally have said about various topics on social media. It is a service where someone can type your name into a search engine and AIQ’s product will serve up a list of all the various political posts you’ve made or the politically relevant “Likes” you’ve made.

AggregateIQ, a Canadian consultancy alleged to have links to Cambridge Analytica, collected and stored the data of hundreds of thousands of Facebook users, according to redacted computer files seen by the Financial Times. The social network banned AggregateIQ, a data company, from its platform as part of a clean-up operation following the Cambridge Analytica scandal, on suspicion that the company could have been improperly accessing user information. However, Chris Vickery, a security researcher, this week found an app on the platform called “AIQ Johnny Scraper” registered to the company, raising fresh questions about the effectiveness of Facebook’s policing efforts.

The technology group now says it shut down the Johnny Scraper app this week along with 13 others that could be related to AggregateIQ, with a total of 1,000 users.

Ime Archibong, vice-president of product partnerships, said the company was investigating whether there had been any misuse of data. “We have suspended an additional 14 apps this week, which were installed by around 1,000 people,” he said. “They were all created after 2014 and so did not have access to friends’ data. However, these apps appear to be linked to AggregateIQ, which was affiliated with Cambridge Analytica. So we have suspended them while we investigate further.”

According to files seen by the Financial Times, AggregateIQ had stored a list of 759,934 Facebook users in a table that recorded home addresses, phone numbers and email addresses for some profiles.

Jeff Silvester, AggregateIQ chief operating officer, said the file came from software designed for a particular client, which tracked which users had liked a particular page or were posting positive and negative comments.

“I believe as part of that the client did attempt to match people who had liked their Facebook page with supporters in their voter file [online electoral records],” he said. “I believe the result of this matching is what you are looking at. This is a fairly common task that voter file tools do all of the time.”

He added that the purpose of the Johnny Scraper app was to replicate Facebook posts made by one of AggregateIQ’s clients into smartphone apps that also belonged to the client.

AggregateIQ has sought to distance itself from an international privacy scandal engulfing Facebook and Cambridge Analytica, despite allegations from Christopher Wylie, a whistleblower at the now-defunct UK firm, that it had acted as the Canadian branch of the organisation.

The files do not indicate whether users had given permission for their Facebook “Likes” to be tracked through third-party apps, or whether they were scraped from publicly visible pages. Mr Vickery, who analysed AggregateIQ’s files after uncovering a trove of information online, said that the company appeared to have gathered data from Facebook users despite telling Canadian MPs “we don’t really process data on folks”.

The files also include posts that focus on political issues with statements such as: “Like if you agree with Reagan that ‘government is the problem’,” but it is not clear if this information originated on Facebook. Mr Silvester said the software AggregateIQ had designed allowed its client to browse public comments. “It is possible that some of those public comments or posts are in the file,” he said. . . .

. . . . “The overall theme of these companies and the way their tools work is that everything is reliant on everything else, but has enough independent operability to preserve deniability,” said Mr Vickery. “But when you combine all these different data sources together it becomes something else.” . . . .

3. “Mark Zuckerberg faces allegations that he developed a ‘malicious and fraudulent scheme’ to exploit vast amounts of private data to earn Facebook billions and force rivals out of business. A company suing Facebook in a California court claims the social network’s chief executive ‘weaponised’ the ability to access data from any user’s network of friends – the feature at the heart of the Cambridge Analytica scandal. . . . . ‘The evidence uncovered by plaintiff demonstrates that the Cambridge Analytica scandal was not the result of mere negligence on Facebook’s part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world’s transition to smartphones,’ legal documents said. . . . . Six4Three alleges up to 40,000 companies were effectively defrauded in this way by Facebook. It also alleges that senior executives including Zuckerberg personally devised and managed the scheme, individually deciding which companies would be cut off from data or allowed preferential access. . . . ‘They felt that it was better not to know. I found that utterly horrifying,’ he [former Facebook executive Sandy Parakilas] said. ‘If true, these allegations show a huge betrayal of users, partners and regulators. They would also show Facebook using its monopoly power to kill competition and putting profits over protecting its users.’ . . . .”

Mark Zuckerberg faces allegations that he developed a “malicious and fraudulent scheme” to exploit vast amounts of private data to earn Facebook billions and force rivals out of business. A company suing Facebook in a California court claims the social network’s chief executive “weaponised” the ability to access data from any user’s network of friends – the feature at the heart of the Cambridge Analytica scandal.

A legal motion filed last week in the superior court of San Mateo draws upon extensive confidential emails and messages between Facebook senior executives including Mark Zuckerberg. He is named individually in the case and, it is claimed, had personal oversight of the scheme.

Facebook rejects all claims, and has made a motion to have the case dismissed using a free speech defence.

It claims the first amendment protects its right to make “editorial decisions” as it sees fit. Zuckerberg and other senior executives have asserted that Facebook is a platform not a publisher, most recently in testimony to Congress.

“Facebook’s claims in court that it is an editor for first amendment purposes and thus free to censor and alter the content available on its site is in tension with their, especially recent, claims before the public and US Congress to be neutral platforms.”

The company that has filed the case, a former startup called Six4Three, is now trying to stop Facebook from having the case thrown out and has submitted legal arguments that draw on thousands of emails, the details of which are currently redacted. Facebook has until next Tuesday to file a motion requesting that the evidence remains sealed, otherwise the documents will be made public.

The developer alleges the correspondence shows Facebook paid lip service to privacy concerns in public but behind the scenes exploited its users’ private information.

It claims internal emails and messages reveal a cynical and abusive system set up to exploit access to users’ private information, alongside a raft of anti-competitive behaviours. . . .

. . . . The papers submitted to the court last week allege Facebook was not only aware of the implications of its privacy policy, but actively exploited them, intentionally creating and effectively flagging up the loophole that Cambridge Analytica used to collect data on up to 87 million American users.

The lawsuit also claims Zuckerberg misled the public and Congress about Facebook’s role in the Cambridge Analytica scandal by portraying it as a victim of a third party that had abused its rules for collecting and sharing data.

“The evidence uncovered by plaintiff demonstrates that the Cambridge Analytica scandal was not the result of mere negligence on Facebook’s part but was rather the direct consequence of the malicious and fraudulent scheme Zuckerberg designed in 2012 to cover up his failure to anticipate the world’s transition to smartphones,” legal documents said.

The lawsuit claims to have uncovered fresh evidence concerning how Facebook made decisions about users’ privacy. It sets out allegations that, in 2012, Facebook’s advertising business, which focused on desktop ads, was devastated by a rapid and unexpected shift to smartphones.

Zuckerberg responded by forcing developers to buy expensive ads on the new, underused mobile service or risk having their access to data at the core of their business cut off, the court case alleges.

“Zuckerberg weaponised the data of one-third of the planet’s population in order to cover up his failure to transition Facebook’s business from desktop computers to mobile ads before the market became aware that Facebook’s financial projections in its 2012 IPO filings were false,” one court filing said.

In its latest filing, Six4Three alleges Facebook deliberately used its huge amounts of valuable and highly personal user data to tempt developers to create platforms within its system, implying that they would have long-term access to personal information, including data from subscribers’ Facebook friends.

Once their businesses were running, and reliant on data relating to “likes”, birthdays, friend lists and other Facebook minutiae, the social media company could and did target any that became too successful, looking to extract money from them, co-opt them or destroy them, the documents claim.

Six4Three alleges up to 40,000 companies were effectively defrauded in this way by Facebook. It also alleges that senior executives including Zuckerberg personally devised and managed the scheme, individually deciding which companies would be cut off from data or allowed preferential access.

The lawsuit alleges that Facebook initially focused on kickstarting its mobile advertising platform, as the rapid adoption of smartphones decimated the desktop advertising business in 2012.

It later used its ability to cut off data to force rivals out of business, or coerce owners of apps Facebook coveted into selling at below the market price, even though they were not breaking any terms of their contracts, according to the documents. . . .

. . . . David Godkin, Six4Three’s lead counsel, said: “We believe the public has a right to see the evidence and are confident the evidence clearly demonstrates the truth of our allegations, and much more.”

Sandy Parakilas, a former Facebook employee turned whistleblower who has testified to the UK parliament about its business practices, said the allegations were a “bombshell”. He claimed to MPs Facebook’s senior executives were aware of abuses of friends’ data back in 2011-12 and he was warned not to look into the issue.

“They felt that it was better not to know. I found that utterly horrifying,” he said. “If true, these allegations show a huge betrayal of users, partners and regulators. They would also show Facebook using its monopoly power to kill competition and putting profits over protecting its users.” . . .

4. Cambridge Analytica is officially going bankrupt, along with the elections division of its parent company, SCL Group. Apparently their bad press has driven away clients.

Is this truly the end of Cambridge Analytica?

No.

They’re rebranding under a new company, Emerdata. Intriguingly, Cambridge Analytica’s transformation into Emerdata is noteworthy because the firm’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince: ” . . . . But the company’s announcement left several questions unanswered, including who would retain the company’s intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica’s data-mining business would return under new auspices. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. . . . An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. . . . “

. . . . In a statement posted to its website, Cambridge Analytica said the controversy had driven away virtually all of the company’s customers, forcing it to file for bankruptcy in both the United States and Britain. The elections division of Cambridge’s British affiliate, SCL Group, will also shut down, the company said.

But the company’s announcement left several questions unanswered, including who would retain the company’s intellectual property — the so-called psychographic voter profiles built in part with data from Facebook — and whether Cambridge Analytica’s data-mining business would return under new auspices. . . .

. . . . In recent months, executives at Cambridge Analytica and SCL Group, along with the Mercer family, have moved to create a new firm, Emerdata, based in Britain, according to British records. The new company’s directors include Johnson Ko Chun Shun, a Hong Kong financier and business partner of Erik Prince. Mr. Prince founded the private security firm Blackwater, which was renamed Xe Services after Blackwater contractors were convicted of killing Iraqi civilians.

Cambridge and SCL officials privately raised the possibility that Emerdata could be used for a Blackwater-style rebranding of Cambridge Analytica and the SCL Group, according to two people with knowledge of the companies, who asked for anonymity to describe confidential conversations. One plan under consideration was to sell off the combined company’s data and intellectual property.

An executive and a part owner of SCL Group, Nigel Oakes, has publicly described Emerdata as a way of rolling up the two companies under one new banner. . . .

5. In the Big Data internet age, there’s one area of personal information that has yet to be incorporated into the profiles on everyone–personal banking information. ” . . . . If tech companies are in control of payment systems, they’ll know “every single thing you do,” Kapito said. It’s a different business model from traditional banking: Data is more valuable for tech firms that sell a range of different products than it is for banks that only sell financial services, he said. . . .”

The president of BlackRock, the world’s biggest asset manager, is among those who think big technology firms could invade the financial industry’s turf. Google and Facebook have thrived by collecting and storing data about consumer habits—our emails, search queries, and the videos we watch. An understanding of our financial lives could be an even richer source of data for them to sell to advertisers.

“I worry about the data,” said BlackRock president Robert Kapito at a conference in London today (Nov. 2). “We’re going to have some serious competitors.”

If tech companies are in control of payment systems, they’ll know “every single thing you do,” Kapito said. It’s a different business model from traditional banking: Data is more valuable for tech firms that sell a range of different products than it is for banks that only sell financial services, he said.

Kapito is worried because the effort to win control of payment systems is already underway—Apple will allow iMessage users to send cash to each other, and Facebook is integrating person-to-person PayPal payments into its Messenger app.

As more payments flow through mobile phones, banks are worried they could get left behind, relegated to serving as low-margin utilities. To fight back, they’ve started initiatives such as Zelle to compete with payment services like PayPal.

…

Barclays CEO Jes Staley pointed out at the conference that banks probably have the “richest data pool” of any sector, and he said some 25% of the UK’s economy flows through Barclays’ payment systems. The industry could use that information to offer better services. Companies could alert people that they’re not saving enough for retirement, or suggest ways to save money on their expenses. The trick is accessing that data and analyzing it like a big technology company would.

And banks still have one thing going for them: There’s a massive fortress of rules and regulations surrounding the industry. “No one wants to be regulated like we are,” Staley said.

6. Facebook is approaching a number of big banks – JP Morgan, Wells Fargo, Citigroup, and US Bancorp – requesting financial data including card transactions and checking-account balances. Facebook is joined in this by Google and Amazon, who are also trying to get this kind of data.

Facebook assures us that this information, which will be opt-in, is to be used solely for offering new services on Facebook Messenger. Facebook also assures us that this information, which would obviously be invaluable for delivering ads, won’t be used for ads at all. It will ONLY be used for Facebook’s Messenger service. This is a dubious assurance, in light of Facebook’s past behavior.

” . . . . Facebook increasingly wants to be a platform where people buy and sell goods and services, besides connecting with friends. The company over the past year asked JPMorgan Chase & Co., Wells Fargo & Co., Citigroup Inc. and U.S. Bancorp to discuss potential offerings it could host for bank customers on Facebook Messenger, said people familiar with the matter. Facebook has talked about a feature that would show its users their checking-account balances, the people said. It has also pitched fraud alerts, some of the people said. . . .”

The social-media giant has asked large U.S. banks to share detailed financial information about their customers, including card transactions and checking-account balances, as part of an effort to offer new services to users.

Facebook increasingly wants to be a platform where people buy and sell goods and services, besides connecting with friends. The company over the past year asked JPMorgan Chase & Co., Wells Fargo & Co., Citigroup Inc. and U.S. Bancorp to discuss potential offerings it could host for bank customers on Facebook Messenger, said people familiar with the matter.

Facebook has talked about a feature that would show its users their checking-account balances, the people said. It has also pitched fraud alerts, some of the people said.

Data privacy is a sticking point in the banks’ conversations with Facebook, according to people familiar with the matter. The talks are taking place as Facebook faces several investigations over its ties to political analytics firm Cambridge Analytica, which accessed data on as many as 87 million Facebook users without their consent.

One large U.S. bank pulled away from talks due to privacy concerns, some of the people said.

Facebook has told banks that the additional customer information could be used to offer services that might entice users to spend more time on Messenger, a person familiar with the discussions said. The company is trying to deepen user engagement: Investors shaved more than $120 billion from its market value in one day last month after it said its growth is starting to slow.

Facebook said it wouldn’t use the bank data for ad-targeting purposes or share it with third parties. . . .

. . . . Alphabet Inc.’s Google and Amazon.com Inc. also have asked banks to share data if they join with them, in order to provide basic banking services on applications such as Google Assistant and Alexa, according to people familiar with the conversations. . . .

7. In FTR #946, we examined Cambridge Analytica, the Trump and Steve Bannon-linked tech firm that harvested Facebook data on behalf of the Trump campaign.

Peter Thiel’s surveillance firm Palantir was apparently deeply involved with Cambridge Analytica’s gaming of personal data harvested from Facebook in order to engineer an electoral victory for Trump. Thiel was an early investor in Facebook, at one point was its largest shareholder and is still one of its largest shareholders. ” . . . . It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times. The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook. ‘There were senior Palantir employees that were also working on the Facebook data,’ said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday. . . . The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook. . . .”

As a start-up called Cambridge Analytica sought to harvest the Facebook data of tens of millions of Americans in summer 2014, the company received help from at least one employee at Palantir Technologies, a top Silicon Valley contractor to American spy agencies and the Pentagon. It was a Palantir employee in London, working closely with the data scientists building Cambridge’s psychological profiling technology, who suggested the scientists create their own app — a mobile-phone-based personality quiz — to gain access to Facebook users’ friend networks, according to documents obtained by The New York Times.

Cambridge ultimately took a similar approach. By early summer, the company found a university researcher to harvest data using a personality questionnaire and Facebook app. The researcher scraped private data from over 50 million Facebook users — and Cambridge Analytica went into business selling so-called psychometric profiles of American voters, setting itself on a collision course with regulators and lawmakers in the United States and Britain.

The revelations pulled Palantir — co-founded by the wealthy libertarian Peter Thiel — into the furor surrounding Cambridge, which improperly obtained Facebook data to build analytical tools it deployed on behalf of Donald J. Trump and other Republican candidates in 2016. Mr. Thiel, a supporter of President Trump, serves on the board at Facebook.

“There were senior Palantir employees that were also working on the Facebook data,” said Christopher Wylie, a data expert and Cambridge Analytica co-founder, in testimony before British lawmakers on Tuesday. . . .

. . . .The connections between Palantir and Cambridge Analytica were thrust into the spotlight by Mr. Wylie’s testimony on Tuesday. Both companies are linked to tech-driven billionaires who backed Mr. Trump’s campaign: Cambridge is chiefly owned by Robert Mercer, the computer scientist and hedge fund magnate, while Palantir was co-founded in 2003 by Mr. Thiel, who was an initial investor in Facebook. . . .

. . . . Documents and interviews indicate that starting in 2013, Mr. Chmieliauskas began corresponding with Mr. Wylie and a colleague from his Gmail account. At the time, Mr. Wylie and the colleague worked for the British defense and intelligence contractor SCL Group, which formed Cambridge Analytica with Mr. Mercer the next year. The three shared Google documents to brainstorm ideas about using big data to create sophisticated behavioral profiles, a product code-named “Big Daddy.”

A former intern at SCL — Sophie Schmidt, the daughter of Eric Schmidt, then Google’s executive chairman — urged the company to link up with Palantir, according to Mr. Wylie’s testimony and a June 2013 email viewed by The Times.

“Ever come across Palantir. Amusingly Eric Schmidt’s daughter was an intern with us and is trying to push us towards them?” one SCL employee wrote to a colleague in the email.

“There were Palantir staff who would come into the office and work on the data,” Mr. Wylie told lawmakers. “And we would go and meet with Palantir staff at Palantir.” He did not provide an exact number for the employees or identify them.

Palantir employees were impressed with Cambridge’s backing from Mr. Mercer, one of the world’s richest men, according to messages viewed by The Times. And Cambridge Analytica viewed Palantir’s Silicon Valley ties as a valuable resource for launching and expanding its own business.

In an interview this month with The Times, Mr. Wylie said that Palantir employees were eager to learn more about using Facebook data and psychographics. Those discussions continued through spring 2014, according to Mr. Wylie.

Mr. Wylie said that he and Mr. Nix visited Palantir’s London office on Soho Square. One side was set up like a high-security office, Mr. Wylie said, with separate rooms that could be entered only with particular codes. The other side, he said, was like a tech start-up — “weird inspirational quotes and stuff on the wall and free beer, and there’s a Ping-Pong table.”

Mr. Chmieliauskas continued to communicate with Mr. Wylie’s team in 2014, as the Cambridge employees were locked in protracted negotiations with a researcher at Cambridge University, Michal Kosinski, to obtain Facebook data through an app Mr. Kosinski had built. The data was crucial to efficiently scale up Cambridge’s psychometrics products so they could be used in elections and for corporate clients. . . .

8a. Some terrifying and consummately important developments taking shape in the context of what Mr. Emory has called “technocratic fascism:”

” . . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”

” . . . . Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”

” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”

” . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”

At Facebook’s annual developer conference, F8, on Wednesday, the group unveiled what may be Facebook’s most ambitious—and creepiest—proposal yet. Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer.

“What if you could type directly from your brain?” Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute.

“That’s five times faster than you can type on your smartphone, and it’s straight from your brain,” she said. “Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.”

Brain-computer interfaces are nothing new. DARPA, which Dugan used to head, has invested heavily in brain-computer interface technologies to do things like cure mental illness and restore memories to soldiers injured in war. But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone.

“Our world is both digital and physical,” she said. “Our goal is to create and ship new, category-defining consumer products that are social first, at scale.”

She also showed a video that demonstrated a second technology that showed the ability to “listen” to human speech through vibrations on the skin. This tech has been in development to aid people with disabilities, working a little like a Braille that you feel with your body rather than your fingers. Using actuators and sensors, a connected armband was able to convey to a woman in the video a tactile vocabulary of nine different words.

Dugan adds that it’s also possible to “listen” to human speech by using your skin. It’s like using braille but through a system of actuators and sensors. Dugan showed a video example of how a woman could figure out exactly what objects were selected on a touchscreen based on inputs delivered through a connected armband.

Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. Brain-computer interface technology is still in its infancy. So far, researchers have been successful in using it to allow people with disabilities to control paralyzed or prosthetic limbs. But stimulating the brain’s motor cortex is a lot simpler than reading a person’s thoughts and then translating those thoughts into something that might actually be read by a computer.

The end goal is to build an online world that feels more immersive and real—no doubt so that you spend more time on Facebook.

“Our brains produce enough data to stream 4 HD movies every second. The problem is that the best way we have to get information out into the world — speech — can only transmit about the same amount of data as a 1980s modem,” CEO Mark Zuckerberg said in a Facebook post. “We’re working on a system that will let you type straight from your brain about 5x faster than you can type on your phone today. Eventually, we want to turn it into a wearable technology that can be manufactured at scale. Even a simple yes/no ‘brain click’ would help make things like augmented reality feel much more natural.”

…

” . . . . Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on ‘skin-hearing’ that could translate sounds into haptic feedback that people can learn to understand like braille. . . .”

” . . . . Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, ‘The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.’ . . . .”

Facebook will assemble an independent Ethical, Legal and Social Implications (ELSI) panel to oversee its development of a direct brain-to-computer typing interface it previewed today at its F8 conference. Facebook’s R&D department Building 8’s head Regina Dugan tells TechCrunch, “It’s early days . . . we’re in the process of forming it right now.”

Meanwhile, much of the work on the brain interface is being conducted by Facebook’s university research partners like UC Berkeley and Johns Hopkins. Facebook’s technical lead on the project, Mark Chevillet, says, “They’re all held to the same standards as the NIH or other government bodies funding their work, so they already are working with institutional review boards at these universities that are ensuring that those standards are met.” Institutional review boards ensure test subjects aren’t being abused and research is being done as safely as possible.

Facebook hopes to use optical neural imaging technology to scan the brain 100 times per second to detect thoughts and turn them into text. Meanwhile, it’s working on “skin-hearing” that could translate sounds into haptic feedback that people can learn to understand like braille. Dugan insists, “None of the work that we do that is related to this will be absent of these kinds of institutional review boards.”

So at least there will be independent ethicists working to minimize the potential for malicious use of Facebook’s brain-reading technology to steal or police people’s thoughts.

During our interview, Dugan showed her cognizance of people’s concerns, repeating the start of her keynote speech today saying, “I’ve never seen a technology that you developed with great impact that didn’t have unintended consequences that needed to be guardrailed or managed. In any new technology you see a lot of hype talk, some apocalyptic talk and then there’s serious work which is really focused on bringing successful outcomes to bear in a responsible way.”

In the past, she says the safeguards have been able to keep up with the pace of invention. “In the early days of the Human Genome Project there was a lot of conversation about whether we’d build a super race or whether people would be discriminated against for their genetic conditions and so on,” Dugan explains. “People took that very seriously and were responsible about it, so they formed what was called an ELSI panel . . . By the time that we got the technology available to us, that framework, that contractual, ethical framework had already been built, so that work will be done here too. That work will have to be done.” . . . .

Worryingly, Dugan eventually appeared frustrated in response to my inquiries about how her team thinks about safety precautions for brain interfaces, saying, “The flip side of the question that you’re asking is ‘why invent it at all?’ and I just believe that the optimistic perspective is that on balance, technological advances have really meant good things for the world if they’re handled responsibly.”

Facebook’s domination of social networking and advertising gives it billions in profit per quarter to pour into R&D. But its old “Move fast and break things” philosophy is a lot more frightening when it’s building brain scanners. Hopefully Facebook will prioritize the assembly of the ELSI ethics board Dugan promised and be as transparent as possible about the development of this exciting-yet-unnerving technology. . . .

In FTR #’s 718 and 946, we detailed the frightening, ugly reality behind Facebook. Facebook is now developing technology that will permit the tapping of users’ thoughts by monitoring brain-to-computer technology. Facebook’s R&D is headed by Regina Dugan, who used to head the Pentagon’s DARPA. Facebook’s Building 8 is patterned after DARPA: “ . . . Facebook wants to build its own “brain-to-computer interface” that would allow us to send thoughts straight to a computer. ‘What if you could type directly from your brain?’ Regina Dugan, the head of the company’s secretive hardware R&D division, Building 8, asked from the stage. Dugan then proceeded to show a video demo of a woman typing eight words per minute directly from the stage. In a few years, she said, the team hopes to demonstrate a real-time silent speech system capable of delivering a hundred words per minute. ‘That’s five times faster than you can type on your smartphone, and it’s straight from your brain,’ she said. ‘Your brain activity contains more information than what a word sounds like and how it’s spelled; it also contains semantic information of what those words mean.’ . . .”

” . . . . Facebook’s Building 8 is modeled after DARPA and its projects tend to be equally ambitious. . . .”

” . . . . But what Facebook is proposing is perhaps more radical—a world in which social media doesn’t require picking up a phone or tapping a wrist watch in order to communicate with your friends; a world where we’re connected all the time by thought alone. . . .”

9a. Nigel Oakes is the founder of SCL, the parent company of Cambridge Analytica. His comments are related in a New York Times article. ” . . . . The panel has published audio records in which an executive tied to Cambridge Analytica discusses how the Trump campaign used techniques used by the Nazis to target voters. . . .”

. . . . The panel has published audio records in which an executive tied to Cambridge Analytica discusses how the Trump campaign used techniques used by the Nazis to target voters. . . .

9b. Mr. Oakes’ comments are related in detail in another Times article. ” . . . . Adolf Hitler ‘didn’t have a problem with the Jews at all, but people didn’t like the Jews,’ he told the academic, Emma L. Briant, a senior lecturer in journalism at the University of Essex. He went on to say that Donald J. Trump had done the same thing by tapping into grievances toward immigrants and Muslims. . . . ‘What happened with Trump, you can forget all the microtargeting and microdata and whatever, and come back to some very, very simple things,’ he told Dr. Briant. ‘Trump had the balls, and I mean, really the balls, to say what people wanted to hear.’ . . .”

. . . . Adolf Hitler “didn’t have a problem with the Jews at all, but people didn’t like the Jews,” he told the academic, Emma L. Briant, a senior lecturer in journalism at the University of Essex. He went on to say that Donald J. Trump had done the same thing by tapping into grievances toward immigrants and Muslims.

This sort of campaign, he continued, did not require bells and whistles from technology or social science.

“What happened with Trump, you can forget all the microtargeting and microdata and whatever, and come back to some very, very simple things,” he told Dr. Briant. “Trump had the balls, and I mean, really the balls, to say what people wanted to hear.” . . .

9c. Taking a look at the future of fascism in the context of AI, Tay, a “bot” created by Microsoft to respond to users of Twitter, was taken offline after users taught it to–in effect–become a Nazi bot. It is noteworthy that Tay can only respond on the basis of what she is taught. In the future, technologically accomplished and willful people like “weev” may be able to do more. Inevitably, Underground Reich elements will craft a Nazi AI that will be able to do MUCH, MUCH more!

Beware! As one Twitter user noted, employing sarcasm: “Tay went from ‘humans are super cool’ to full nazi in <24 hrs and I’m not at all concerned about the future of AI.”

Microsoft has been forced to dunk Tay, its millennial-mimicking chatbot, into a vat of molten steel. The company has terminated her after the bot started tweeting abuse at people and went full neo-Nazi, declaring that “Hitler was right I hate the jews.”

Some of this appears to be “innocent” insofar as Tay is not generating these responses. Rather, if you tell her “repeat after me” she will parrot back whatever you say, allowing you to put words into her mouth. However, some of the responses were organic. The Guardian quotes one where, after being asked “is Ricky Gervais an atheist?”, Tay responded, “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.” . . .

But like all teenagers, she seems to be angry with her mother.

Microsoft has been forced to dunk Tay, its millennial-mimicking chatbot, into a vat of molten steel. The company has terminated her after the bot started tweeting abuse at people and went full neo-Nazi, declaring that “Hitler was right I hate the jews.”

Some of this appears to be “innocent” insofar as Tay is not generating these responses. Rather, if you tell her “repeat after me” she will parrot back whatever you say, allowing you to put words into her mouth. However, some of the responses were organic. The Guardian quotes one where, after being asked “is Ricky Gervais an atheist?”, Tay responded, “Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism.”

In addition to turning the bot off, Microsoft has deleted many of the offending tweets. But this isn’t an action to be taken lightly; Redmond would do well to remember that it was humans attempting to pull the plug on Skynet that proved to be the last straw, prompting the system to attack Russia in order to eliminate its enemies. We’d better hope that Tay doesn’t similarly retaliate. . . .

9d. As noted in a Popular Mechanics article: ” . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t understand. . . .”

We all know the half-joke about the AI apocalypse. The robots learn to think, and in their cold ones-and-zeros logic, they decide that humans—horrific pests we are—need to be exterminated. It’s the subject of countless sci-fi stories and blog posts about robots, but maybe the real danger isn’t that AI comes to such a conclusion on its own, but that it gets that idea from us.

Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source. That data has been modeled, cleaned and filtered by the team developing Tay.

The real point of Tay, however, was to learn from humans through direct conversation, most notably direct conversation using humanity’s current leading showcase of depravity: Twitter. You might not be surprised things went off the rails, but how fast and how far is particularly staggering.

Microsoft has since deleted some of Tay’s most offensive tweets, but various publications memorialize some of the worst bits where Tay denied the existence of the holocaust, came out in support of genocide, and went all kinds of racist.

Naturally it’s horrifying, and Microsoft has been trying to clean up the mess. Though as some on Twitter have pointed out, no matter how little Microsoft would like to have “Bush did 9/11” spouting from a corporate sponsored project, Tay does serve to illustrate the most dangerous fundamental truth of artificial intelligence: It is a mirror. Artificial intelligence—specifically “neural networks” that learn behavior by ingesting huge amounts of data and trying to replicate it—need some sort of source material to get started. They can only get that from us. There is no other way.

But before you give up on humanity entirely, there are a few things worth noting. For starters, it’s not like Tay just necessarily picked up virulent racism by just hanging out and passively listening to the buzz of the humans around it. Tay was announced in a very big way—with press coverage—and pranksters proactively went to it to see if they could teach it to be racist.

If you take an AI and then don’t immediately introduce it to a whole bunch of trolls shouting racism at it for the cheap thrill of seeing it learn a dirty trick, you can get some more interesting results. Endearing ones even! Multiple neural networks designed to predict text in emails and text messages have an overwhelming proclivity for saying “I love you” constantly, especially when they are otherwise at a loss for words.

So Tay’s racism isn’t necessarily a reflection of actual, human racism so much as it is the consequence of unrestrained experimentation, pushing the envelope as far as it can go the very first second we get the chance. The mirror isn’t showing our real image; it’s reflecting the ugly faces we’re making at it for fun. And maybe that’s actually worse.

Sure, Tay can’t understand what racism means any more than Gmail can really love you. And baby’s first words being “genocide lol!” is admittedly sort of funny when you aren’t talking about literal all-powerful SkyNet or a real human child. But AI is advancing at a staggering rate. . . .

. . . . When the next powerful AI comes along, it will see its first look at the world by looking at our faces. And if we stare it in the eyes and shout “we’re AWFUL lol,” the lol might be the one part it doesn’t understand.

Discussion

Oh look, Facebook actually banned someone for posting neo-Nazi content on their platform. But there’s a catch: They banned Ukrainian activist Eduard Dolinsky for 30 days because he was posting examples of antisemitic graffiti. Dolinsky is the director of the Ukrainian Jewish Committee. According to Dolinsky, his far right opponents have a history of reporting his posts to Facebook in order to get him suspended. And this time it worked. Dolinsky appealed the ban but to no avail.

So that happened. But first let’s take a quick look at an article from back in April that highlights how absurd this action was. The article is about a Ukrainian school teacher in Lviv, Marjana Batjuk, who posted birthday greetings to Adolf Hitler on her Facebook page on April 20 (Hitler’s birthday). She also taught her students the Nazi salute and even took some of her students to meet far right activists who had participated in a march wearing the uniform of the 14th Waffen Grenadier Division of the SS.

(JTA) — A public school teacher in Ukraine allegedly posted birthday greetings to Adolf Hitler on Facebook and taught her students the Nazi salute.

Marjana Batjuk, who teaches at a school in Lviv and also is a councilwoman, posted her greeting on April 20, the Nazi leader’s birthday, Eduard Dolinsky, director of the Ukrainian Jewish Committee, told JTA. He called the incident a “scandal.”

She also took some of her students to meet far-right activists who over the weekend marched on the city’s streets while wearing the uniform of the 14th Waffen Grenadier Division of the SS, an elite Nazi unit with many ethnic Ukrainians also known as the 1st Galician.

Displaying Nazi imagery is illegal in Ukraine, but Dolinsky said law enforcement authorities allowed the activists to parade on main streets.

Batjuk had the activists explain about their replica weapons, which they paraded ahead of a larger event in honor of the 1st Galician unit planned for next week in Lviv.

The events honoring the 1st Galician SS unit in Lviv are not organized by municipal authorities.

Batjuk, 28, a member of the far-right Svoboda party, called Hitler “a great man” and quoted from his book “Mein Kampf” in her Facebook post, Dolinsky said. She later claimed that her Facebook account was hacked and deleted the post, but the Strana news site found that she had a history of posting Nazi imagery on social networks.

She also posted pictures of children she said were her students performing the Nazi salute with her.

…

Education Ministry officials have started a disciplinary review of her conduct, the KP news site reported.

Separately, in the town of Poltava, in eastern Ukraine, Dolinsky said a swastika and the words “heil Hitler” were spray-painted Friday on a monument for victims of the Holocaust. The vandals, who have not been identified, also wrote “Death to the kikes.”

In Odessa, a large graffiti reading “Jews into the sea” was written on the beachfront wall of a hotel.

“The common factor between all of these incidents is government inaction, which ensures they will continue happening,” Dolinsky said.
———-

“Marjana Batjuk, who teaches at a school in Lviv and also is a councilwoman, posted her greeting on April 20, the Nazi leader’s birthday, Eduard Dolinsky, director of the Ukrainian Jewish Committee, told JTA. He called the incident a “scandal.””

She’s not just a teacher. She’s also a councilwoman. A teacher-councilwoman who likes to post positive things about Hitler on her Facebook page. And it was Eduard Dolinsky who was talking to the international media about this.

But Batjuk doesn’t just post pro-Nazi things on her Facebook page. She also takes her students to meet the far right activists:

…She also took some of her students to meet far-right activists who over the weekend marched on the city’s streets while wearing the uniform of the 14th Waffen Grenadier Division of the SS, an elite Nazi unit with many ethnic Ukrainians also known as the 1st Galician.

Displaying Nazi imagery is illegal in Ukraine, but Dolinsky said law enforcement authorities allowed the activists to parade on main streets.

Batjuk had the activists explain about their replica weapons, which they paraded ahead of a larger event in honor of the 1st Galician unit planned for next week in Lviv.

The events honoring the 1st Galician SS unit in Lviv are not organized by municipal authorities.
…

Batjuk later claimed that her Facebook page was hacked, and yet a media organization was able to find plenty of previous examples of similar posts on social media:

…
Batjuk, 28, a member of the far-right Svoboda party, called Hitler “a great man” and quoted from his book “Mein Kampf” in her Facebook post, Dolinsky said. She later claimed that her Facebook account was hacked and deleted the post, but the Strana news site found that she had a history of posting Nazi imagery on social networks.

She also posted pictures of children she said were her students performing the Nazi salute with her.
…

Eduard Dolinsky, a prominent Ukrainian Jewish activist, was banned from posting on Facebook Monday night for a post about antisemitic graffiti in Odessa.

Dolinsky, the director of the Ukrainian Jewish Committee, said he was blocked by the social media giant for posting a photo. “I had posted the photo which says in Ukrainian ‘kill the yid’ about a month ago,” he says. “I use my Facebook account for distributing information about antisemitic incidents and hate speech and hate crimes in Ukraine.”

Now Dolinsky’s account has disabled him from posting for thirty days, which means media, law enforcement and the local community who rely on his social media posts will receive no updates.

Dolinsky tweeted Monday that his account had been blocked and sent The Jerusalem Post a screenshot of the image he posted which shows a badly drawn swastika and Ukrainian writing. “You recently posted something that violates Facebook policies, so you’re temporarily blocked from using this feature,” Facebook informs him when he logs in. “The block will be active for 29 days and 17 hours,” it says. “To keep from getting blocked again, please make sure you’ve read and understand Facebook’s Community Standards.”

Dolinsky says that he has been targeted in the past by nationalists and anti-semites who oppose his work. Facebook has banned him temporarily in the past also, but never for thirty days. “The last time I was blocked, the media also reported this and I felt some relief.

It was as if they stopped banning me. But now I don’t know – and this has again happened. They are banning the one who is trying to fight antisemitism. They are banning me for the very thing I do.”

Based on Dolinsky’s work the police have opened criminal files against perpetrators of antisemitic crimes, in Odessa and other places.

He says that some locals are trying to silence him because he is critical of the way Ukraine has commemorated historical nationalist figures, “which is actually denying the Holocaust and trying to whitewash the actions of nationalists during the Second World War.”

Dolinsky has been widely quoted, and his work, including posts on Facebook, has been referenced by media in the past. “These incidents are happening and these crimes and the police should react.

The society also. But their goal is to cut me off.”

Ironically, the activist opposing antisemitism is being targeted by antisemites who label the antisemitic examples he reveals as hate speech. “They are specifically complaining to Facebook for the content, and they are complaining that I am violating the rules of Facebook and spreading hate speech. So Facebook, as I understand [it, doesn’t] look at this; they are banning me and blocking me and deleting these posts.”

He says he tried to appeal the ban but has not been successful.

“I use my Facebook exclusively for this, so this is my working tool as director of Ukrainian Jewish Committee.”

Facebook has been under scrutiny recently for who it bans and why. In July founder Mark Zuckerberg made controversial remarks appearing to accept Holocaust denial on the site. “I find it offensive, but at the end of the day, I don’t believe our platform should take that down because I think there are things that different people get wrong. I don’t think they’re doing it intentionally.” In late July, Facebook banned US conspiracy theorist Alex Jones for bullying and hate speech.

In a similar incident to Dolinsky, Iranian secular activist Armin Navabi was banned from Facebook for thirty days for posting the death threats that he receives. “This is ridiculous. My account is blocked for 30 days because I post the death threats I’m getting? I’m not the one making the threat!” he tweeted.

“Dolinsky, the director of the Ukrainian Jewish Committee, said he was blocked by the social media giant for posting a photo. “I had posted the photo which says in Ukrainian ‘kill the yid’ about a month ago,” he says. “I use my Facebook account for distributing information about antisemitic incidents and hate speech and hate crimes in Ukraine.””

The director of the Ukrainian Jewish Committee gets banned for posting antisemitic content. That’s some world class trolling by Facebook.

And while it’s only a 30-day ban, that’s 30 days during which Ukraine’s media and law enforcement won’t be getting Dolinsky’s updates. So it’s not just a morally absurd banning; it will also effectively promote pro-Nazi graffiti in Ukraine by silencing one of the key figures covering it:

…Now Dolinsky’s account has disabled him from posting for thirty days, which means media, law enforcement and the local community who rely on his social media posts will receive no updates.

Dolinsky tweeted Monday that his account had been blocked and sent The Jerusalem Post a screenshot of the image he posted which shows a badly drawn swastika and Ukrainian writing. “You recently posted something that violates Facebook policies, so you’re temporarily blocked from using this feature,” Facebook informs him when he logs in. “The block will be active for 29 days and 17 hours,” it says. “To keep from getting blocked again, please make sure you’ve read and understand Facebook’s Community Standards.”
…

And this isn’t the first time Dolinsky has been banned from Facebook for posting this kind of content. But it’s the longest he’s been banned. And the fact that this isn’t the first time he’s been banned suggests this isn’t just a genuine ‘oops!’ mistake:

…Dolinsky says that he has been targeted in the past by nationalists and anti-semites who oppose his work. Facebook has banned him temporarily in the past also, but never for thirty days. “The last time I was blocked, the media also reported this and I felt some relief.

It was as if they stopped banning me. But now I don’t know – and this has again happened. They are banning the one who is trying to fight antisemitism. They are banning me for the very thing I do.”

Based on Dolinsky’s work the police have opened criminal files against perpetrators of antisemitic crimes, in Odessa and other places.
…

Dolinsky also notes that he has people trying to silence him precisely because of the job he does highlighting Ukraine’s official embrace of Nazi collaborating historical figures:

…He says that some locals are trying to silence him because he is critical of the way Ukraine has commemorated historical nationalist figures, “which is actually denying the Holocaust and trying to whitewash the actions of nationalists during the Second World War.”

Dolinsky has been widely quoted, and his work, including posts on Facebook, has been referenced by media in the past. “These incidents are happening and these crimes and the police should react.

The society also. But their goal is to cut me off.”

Ironically, the activist opposing antisemitism is being targeted by antisemites who label the antisemitic examples he reveals as hate speech. “They are specifically complaining to Facebook for the content, and they are complaining that I am violating the rules of Facebook and spreading hate speech. So Facebook, as I understand [it, doesn’t] look at this; they are banning me and blocking me and deleting these posts.”
…

So we likely have a situation where antisemites successfully got Dolinsky silenced, with Facebook ‘playing dumb’ the whole time. And as a consequence Ukraine is facing a month without Dolinsky’s reports. Except it’s not even clear that Dolinsky is going to be allowed to clarify the situation and continue posting updates of Nazi graffiti after this month-long ban is up. Because he says he’s been trying to appeal the ban, but with no success:

…
He says he tried to appeal the ban but has not been successful.

“I use my Facebook exclusively for this, so this is my working tool as director of Ukrainian Jewish Committee.”
…

So for all we know, Dolinsky is effectively going to be banned permanently from using Facebook to make Ukraine and the rest of the world aware of the epidemic of pro-Nazi antisemitic graffiti in Ukraine. Maybe if he sets up a pro-Nazi Facebook persona he’ll be allowed to keep doing his work.

Trump just accused Google of biasing its search results to surface negative stories about him. Apparently he googled himself and didn’t like the results. His tweet came after a Fox Business report on Monday evening claiming that 96 percent of Google News results for “Trump” came from the “national left-wing media.” The report was based on some ‘analysis’ by the right-wing media outlet PJ Media.

* President tweets conservative media being blocked by Google
* Company denies any political agenda in its search results

President Donald Trump warned Alphabet Inc.’s Google, Facebook Inc. and Twitter Inc. “better be careful” after he accused the search engine earlier in the day of rigging results to give preference to negative news stories about him.

Trump told reporters in the Oval Office Tuesday that the three technology companies “are treading on very, very troubled territory,” as he added his voice to a growing chorus of conservatives who claim internet companies favor liberal viewpoints.

“This is a very serious situation-will be addressed!” Trump said in a tweet earlier Tuesday. The President’s comments came the morning after a Fox Business TV segment that said Google favored liberal news outlets in search results about Trump. Trump provided no substantiation for his claim.

“Google search results for ‘Trump News’ shows only the viewing/reporting of Fake New Media. In other words, they have it RIGGED, for me & others, so that almost all stories & news is BAD,” Trump said. “Republican/Conservative & Fair Media is shut out. Illegal.”

The allegation, dismissed by online search experts, follows the president’s Aug. 24 claim that social media “giants” are “silencing millions of people.” Such accusations — along with assertions that the news media and Special Counsel Robert Mueller’s Russia meddling probe are biased against him — have been a chief Trump talking point meant to appeal to the president’s base.

Google issued a statement saying its searches are designed to give users relevant answers.

“Search is not used to set a political agenda and we don’t bias our results toward any political ideology,” the statement said. “Every year, we issue hundreds of improvements to our algorithms to ensure they surface high-quality content in response to users’ queries. We continually work to improve Google Search and we never rank search results to manipulate political sentiment.”

Yonatan Zunger, an engineer who worked at Google for almost a decade, went further. “Users can verify that his claim is specious by simply reading a wide range of news sources themselves,” he said. “The ‘bias’ is that the news is all bad for him, for which he has only himself to blame.”

Google’s news search software doesn’t work the way the president says it does, according to Mark Irvine, senior data scientist at WordStream, a company that helps firms get websites and other online content to show up higher in search results. The Google News system gives weight to how many times a story has been linked to, as well as to how prominently the terms people are searching for show up in the stories, Irvine said.

“The Google search algorithm is a fairly agnostic and apathetic algorithm towards what people’s political feelings are,” he said.

“Their job is essentially to model the world as it is,” said Pete Meyers, a marketing scientist at Moz, which builds tools to help companies improve how they show up in search results. “If enough people are linking to a site and talking about a site, they’re going to show that site.”

Trump’s concern is that search results about him appear negative, but that’s because the majority of stories about him are negative, Meyers said. “He woke up and watched his particular flavor and what Google had didn’t match that.”

Complaints that social-media services censor conservatives have increased as companies such as Facebook Inc. and Twitter Inc. try to curb the reach of conspiracy theorists, disinformation campaigns, foreign political meddling and abusive posters.

Google News rankings have sometimes highlighted unconfirmed and erroneous reports in the early minutes of tragedies when there’s little information to fill its search results. After the Oct. 1, 2017, Las Vegas shooting, for instance, several accounts seemed to coordinate an effort to smear a man misidentified as the shooter with false claims about his political ties.

Google has since tightened requirements for inclusion in news rankings, blocking outlets that “conceal their country of origin” and relying more on authoritative sources, although the moves have led to charges of censorship from less established outlets. Google currently says it ranks news based on “freshness” and “diversity” of the stories. Trump-favored outlets such as Fox News routinely appear in results.

Google’s search results have been the focus of complaints for more than a decade. The criticism has become more political as the power and reach of online services has increased in recent years.

Eric Schmidt, Alphabet’s former chairman, supported Hillary Clinton against Trump during the last election. There have been unsubstantiated claims the company buried negative search results about her during the 2016 election. Scores of Google employees entered government to work under President Barack Obama.

White House economic adviser Larry Kudlow, responding to a question about the tweets, said that the administration is going to do “investigations and analysis” into the issue but stressed they’re “just looking into it.”

Trump’s comment followed a report on Fox Business on Monday evening that said 96 percent of Google News results for “Trump” came from the “national left-wing media.” The segment cited the conservative PJ Media site, which said its analysis suggested “a pattern of bias against right-leaning content.”

The PJ Media analysis “is in no way scientific,” said Joshua New, a senior policy analyst with the Center for Data Innovation.

“This frequency of appearance in an arbitrary search at one time is in no way indicating a bias or a slant,” New said. His non-partisan policy group is affiliated with the Information Technology and Innovation Foundation, which in turn has executives from Silicon Valley companies, including Google, on its board of directors.

Services such as Google or Facebook “have a business incentive not to lower the ranking of a certain publication because of news bias. Because that lowers the value as a news platform,” New said.

News search rankings use factors including “use timeliness, accuracy, the popularity of a story, a users’ personal search history, their location, quality of content, a website’s reputation — a huge amount of different factors,” New said.

Google is not the first tech stalwart to receive criticism from Trump. He has alleged Amazon.com Inc. has a sweetheart deal with the U.S. Postal Service and slammed founder Jeff Bezos’s ownership of what Trump calls “the Amazon Washington Post.”

Google is due to face lawmakers at a hearing on Russian election meddling on Sept. 5. The company intended to send Senior Vice President for Global Affairs Kent Walker to testify, but the panel’s chairman, Senator Richard Burr, who wanted Chief Executive Officer Sundar Pichai, has rejected Walker.

Despite Trump’s comments, it’s unclear what he or Congress could do to influence how internet companies distribute online news. The industry treasures an exemption from liability for the content users post. Some top members of Congress have suggested limiting the protection as a response to alleged bias and other misdeeds, although there have been few moves to do so since Congress curbed the shield for some cases of sex trafficking earlier in the year.

The government has little ability to dictate to publishers and online curators what news to present despite the president’s occasional threats to use the power of the government to curb coverage he dislikes and his tendency to complain that news about him is overly negative.

Trump has talked about expanding libel laws and mused about reinstating long-ended rules requiring equal time for opposing views, which didn’t apply to the internet. Neither has resulted in a serious policy push.

“Trump told reporters in the Oval Office Tuesday that the three technology companies “are treading on very, very troubled territory,” as he added his voice to a growing chorus of conservatives who claim internet companies favor liberal viewpoints.”

The Trumpian warning shots have been fired: feed the public positive news about Trump, or else…

…
“This is a very serious situation-will be addressed!” Trump said in a tweet earlier Tuesday. The President’s comments came the morning after a Fox Business TV segment that said Google favored liberal news outlets in search results about Trump. Trump provided no substantiation for his claim.

“Google search results for ‘Trump News’ shows only the viewing/reporting of Fake New Media. In other words, they have it RIGGED, for me & others, so that almost all stories & news is BAD,” Trump said. “Republican/Conservative & Fair Media is shut out. Illegal.”

The allegation, dismissed by online search experts, follows the president’s Aug. 24 claim that social media “giants” are “silencing millions of people.” Such accusations — along with assertions that the news media and Special Counsel Robert Mueller’s Russia meddling probe are biased against him — have been a chief Trump talking point meant to appeal to the president’s base.
…

“Republican/Conservative & Fair Media is shut out. Illegal.”

And he literally charged Google with illegality over allegedly shutting out “Republican/Conservative & Fair Media.” Which is, of course, an absurd charge for anyone familiar with Google’s news portal. But that was part of what made the tweet so potentially threatening to these companies since it implied there was a role the government should be playing to correct this perceived law-breaking.

At the same time, it’s unclear what, legally speaking, Trump could actually do. But that didn’t stop him from issuing such threats, as he’s done in the past:

…
Despite Trump’s comments, it’s unclear what he or Congress could do to influence how internet companies distribute online news. The industry treasures an exemption from liability for the content users post. Some top members of Congress have suggested limiting the protection as a response to alleged bias and other misdeeds, although there have been few moves to do so since Congress curbed the shield for some cases of sex trafficking earlier in the year.

The government has little ability to dictate to publishers and online curators what news to present despite the president’s occasional threats to use the power of the government to curb coverage he dislikes and his tendency to complain that news about him is overly negative.

Trump has talked about expanding libel laws and mused about reinstating long-ended rules requiring equal time for opposing views, which didn’t apply to the internet. Neither has resulted in a serious policy push.
…

And yet, as unhinged as this latest threat may be, the administration is actually going to do “investigations and analysis” into the issue according to Larry Kudlow:

…White House economic adviser Larry Kudlow, responding to a question about the tweets, said that the administration is going to do “investigations and analysis” into the issue but stressed they’re “just looking into it.”
…

And as we should expect, this all appears to have been triggered by a Fox Business piece on Monday night that covered a ‘study’ done by PJ Media (a right-wing media outlet) which found that 96 percent of Google News results for “Trump” come from the “national left-wing media”:

…Trump’s comment followed a report on Fox Business on Monday evening that said 96 percent of Google News results for “Trump” came from the “national left-wing media.” The segment cited the conservative PJ Media site, which said its analysis suggested “a pattern of bias against right-leaning content.”

The PJ Media analysis “is in no way scientific,” said Joshua New, a senior policy analyst with the Center for Data Innovation.

“This frequency of appearance in an arbitrary search at one time is in no way indicating a bias or a slant,” New said. His non-partisan policy group is affiliated with the Information Technology and Innovation Foundation, which in turn has executives from Silicon Valley companies, including Google, on its board of directors.

Services such as Google or Facebook “have a business incentive not to lower the ranking of a certain publication because of news bias. Because that lowers the value as a news platform,” New said.

News search rankings use factors including “use timeliness, accuracy, the popularity of a story, a users’ personal search history, their location, quality of content, a website’s reputation — a huge amount of different factors,” New said.
…

Putting aside the general questions of the scientific veracity of this PJ Media ‘study’, it’s kind of amusing to realize that it was a study conducted specifically on a search for “Trump” on Google News. And if you had to choose a single topic that is going to inevitably have an abundance of negative news written about it, that would be the topic of “Trump”. In other words, if you were actually to conduct a real study attempting to assess the political bias of Google News’s search results, you could hardly have picked a worse search term to test that theory on than “Trump”.

Google, not surprisingly, refutes these charges. But it’s the people who work for companies dedicated to improving how their clients show up in search results who give the most convincing responses, since their businesses literally depend on understanding Google’s algorithms:

…
Google’s news search software doesn’t work the way the president says it does, according to Mark Irvine, senior data scientist at WordStream, a company that helps firms get websites and other online content to show up higher in search results. The Google News system gives weight to how many times a story has been linked to, as well as to how prominently the terms people are searching for show up in the stories, Irvine said.

“The Google search algorithm is a fairly agnostic and apathetic algorithm towards what people’s political feelings are,” he said.

“Their job is essentially to model the world as it is,” said Pete Meyers, a marketing scientist at Moz, which builds tools to help companies improve how they show up in search results. “If enough people are linking to a site and talking about a site, they’re going to show that site.”

Trump’s concern is that search results about him appear negative, but that’s because the majority of stories about him are negative, Meyers said. “He woke up and watched his particular flavor and what Google had didn’t match that.”
…

All that said, it’s not as if the blackbox nature of the algorithms behind things like Google’s search engine isn’t a legitimate topic of public interest. And that’s part of why these farcical tweets are so dangerous: the Big Tech giants like Google, Facebook, and Twitter know that it’s not impossible that they’ll be subject to algorithmic regulation someday. And they’re going to want to push that day off for as long as possible. So when Trump makes these kinds of complaints, it’s not at all inconceivable that he’ll get the response he wants as these companies attempt to placate him. It’s also highly likely that if these companies do decide to placate him, they won’t publicly announce it. Instead they’ll just start rigging their algorithms to serve up more pro-Trump content and more right-wing content in general.

Also keep in mind that, despite the reputation of Silicon Valley as being run by a bunch of liberals, the reality is Silicon Valley has a strong right-wing libertarian faction, and there will be no shortage of people at these companies who would love to inject a right-wing bias into their services. Trump’s stunt gives that right-wing faction of Silicon Valley leadership a business excuse to do exactly that.

So if you use Google News to see what the latest news is on “Trump” and you suddenly find that it’s mostly good news, keep in mind that that’s actually really, really bad news, because it means this stunt worked.

The New York Times published a big piece on the inner workings of Facebook’s response to the array of scandals that have enveloped the company in recent years, from the charges of Russian operatives using the platform to spread disinformation to the Cambridge Analytica scandal. Much of the story focuses on the actions of Sheryl Sandberg, who appears to be the top person at Facebook overseeing the company’s response to these scandals. It describes a general pattern of Facebook’s executives first ignoring problems and then using various public relations strategies to deal with the problems when they are no longer able to ignore them. And it’s the choice of public relations firms that is perhaps the biggest scandal revealed in this story: In October of 2017, Facebook hired Definers Public Affairs, a DC-based firm founded by veterans of Republican presidential politics that specialized in applying the tactics of political races to corporate public relations.

And one of the political strategies employed by Definers was simply putting out articles that put their clients in a positive light while simultaneously attacking their clients’ enemies. That’s what Definers did for Facebook, utilizing an affiliated conservative news site, NTK Network. NTK shares offices and staff with Definers, and many NTK stories are written by Definers staff and are basically attack ads on Definers’ clients’ enemies. So how does NTK get anyone to read their propaganda articles? By getting them picked up by other popular conservative outlets, including Breitbart.

Perhaps most controversially, Facebook had Definers attempt to tie various groups that are critical of Facebook to George Soros, implicitly harnessing the existing right-wing meme that George Soros is a super wealthy Jew who secretly controls almost everything. This attack by Definers centered around the Freedom from Facebook coalition. Back in July, the group had crashed the House Judiciary Committee hearings when a Facebook executive was testifying, holding up signs depicting Sheryl Sandberg and Mark Zuckerberg as two heads of an octopus stretching around the globe. The group claimed the sign was a reference to old cartoons about the Standard Oil monopoly. But such imagery also evokes classic anti-Semitic tropes, made more acute by the fact that both Sandberg and Zuckerberg are Jewish. So Facebook enlisted the ADL to condemn Freedom from Facebook over the imagery.

Inside Facebook’s Menlo Park, Calif., headquarters, top executives gathered in the glass-walled conference room of its founder, Mark Zuckerberg. It was September 2017, more than a year after Facebook engineers discovered suspicious Russia-linked activity on its site, an early warning of the Kremlin campaign to disrupt the 2016 American election. Congressional and federal investigators were closing in on evidence that would implicate the company.

But it wasn’t the looming disaster at Facebook that angered Ms. Sandberg. It was the social network’s security chief, Alex Stamos, who had informed company board members the day before that Facebook had yet to contain the Russian infestation. Mr. Stamos’s briefing had prompted a humiliating boardroom interrogation of Ms. Sandberg, Facebook’s chief operating officer, and her billionaire boss. She appeared to regard the admission as a betrayal.

“You threw us under the bus!” she yelled at Mr. Stamos, according to people who were present.

The clash that day would set off a reckoning — for Mr. Zuckerberg, for Ms. Sandberg and for the business they had built together. In just over a decade, Facebook has connected more than 2.2 billion people, a global nation unto itself that reshaped political campaigns, the advertising business and daily life around the world. Along the way, Facebook accumulated one of the largest-ever repositories of personal data, a treasure trove of photos, messages and likes that propelled the company into the Fortune 500.

But as evidence accumulated that Facebook’s power could also be exploited to disrupt elections, broadcast viral propaganda and inspire deadly campaigns of hate around the globe, Mr. Zuckerberg and Ms. Sandberg stumbled. Bent on growth, the pair ignored warning signs and then sought to conceal them from public view. At critical moments over the last three years, they were distracted by personal projects, and passed off security and policy decisions to subordinates, according to current and former executives.

When Facebook users learned last spring that the company had compromised their privacy in its rush to expand, allowing access to the personal information of tens of millions of people to a political data firm linked to President Trump, Facebook sought to deflect blame and mask the extent of the problem.

And when that failed — as the company’s stock price plummeted and it faced a consumer backlash — Facebook went on the attack.

While Mr. Zuckerberg has conducted a public apology tour in the last year, Ms. Sandberg has overseen an aggressive lobbying campaign to combat Facebook’s critics, shift public anger toward rival companies and ward off damaging regulation. Facebook employed a Republican opposition-research firm to discredit activist protesters, in part by linking them to the liberal financier George Soros. It also tapped its business relationships, lobbying a Jewish civil rights group to cast some criticism of the company as anti-Semitic.

In Washington, allies of Facebook, including Senator Chuck Schumer, the Democratic Senate leader, intervened on its behalf. And Ms. Sandberg wooed or cajoled hostile lawmakers, while trying to dispel Facebook’s reputation as a bastion of Bay Area liberalism.

This account of how Mr. Zuckerberg and Ms. Sandberg navigated Facebook’s cascading crises, much of which has not been previously reported, is based on interviews with more than 50 people. They include current and former Facebook executives and other employees, lawmakers and government officials, lobbyists and congressional staff members. Most spoke on the condition of anonymity because they had signed confidentiality agreements, were not authorized to speak to reporters or feared retaliation.

…

Even so, trust in the social network has sunk, while its pell-mell growth has slowed. Regulators and law enforcement officials in the United States and Europe are investigating Facebook’s conduct with Cambridge Analytica, a political data firm that worked with Mr. Trump’s 2016 campaign, opening up the company to fines and other liability. Both the Trump administration and lawmakers have begun crafting proposals for a national privacy law, setting up a yearslong struggle over the future of Facebook’s data-hungry business model.

“We failed to look and try to imagine what was hiding behind corners,” Elliot Schrage, former vice president for global communications, marketing and public policy at Facebook, said in an interview.

Mr. Zuckerberg, 34, and Ms. Sandberg, 49, remain at the company’s helm, while Mr. Stamos and other high-profile executives have left after disputes over Facebook’s priorities. Mr. Zuckerberg, who controls the social network with 60 percent of the voting shares and who approved many of its directors, has been asked repeatedly in the last year whether he should step down as chief executive.

His answer each time: a resounding “No.”

‘Don’t Poke the Bear’

Three years ago, Mr. Zuckerberg, who founded Facebook in 2004 while attending Harvard, was celebrated for the company’s extraordinary success. Ms. Sandberg, a former Clinton administration official and Google veteran, had become a feminist icon with the publication of her empowerment manifesto, “Lean In,” in 2013.

But as Facebook grew, so did the hate speech, bullying and other toxic content on the platform. When researchers and activists in Myanmar, India, Germany and elsewhere warned that Facebook had become an instrument of government propaganda and ethnic cleansing, the company largely ignored them. Facebook had positioned itself as a platform, not a publisher. Taking responsibility for what users posted, or acting to censor it, was expensive and complicated. Many Facebook executives worried that any such efforts would backfire.

Then Donald J. Trump ran for president. He described Muslim immigrants and refugees as a danger to America, and in December 2015 posted a statement on Facebook calling for a “total and complete shutdown” on Muslims entering the United States. Mr. Trump’s call to arms — widely condemned by Democrats and some prominent Republicans — was shared more than 15,000 times on Facebook, an illustration of the site’s power to spread racist sentiment.

Mr. Zuckerberg, who had helped found a nonprofit dedicated to immigration reform, was appalled, said employees who spoke to him or were familiar with the conversation. He asked Ms. Sandberg and other executives if Mr. Trump had violated Facebook’s terms of service.

The question was unusual. Mr. Zuckerberg typically focused on broader technology issues; politics was Ms. Sandberg’s domain. In 2010, Ms. Sandberg, a Democrat, had recruited a friend and fellow Clinton alum, Marne Levine, as Facebook’s chief Washington representative. A year later, after Republicans seized control of the House, Ms. Sandberg installed another friend, a well-connected Republican: Joel Kaplan, who had attended Harvard with Ms. Sandberg and later served in the George W. Bush administration.

Some at Facebook viewed Mr. Trump’s 2015 attack on Muslims as an opportunity to finally take a stand against the hate speech coursing through its platform. But Ms. Sandberg, who was edging back to work after the death of her husband several months earlier, delegated the matter to Mr. Schrage and Monika Bickert, a former prosecutor whom Ms. Sandberg had recruited as the company’s head of global policy management. Ms. Sandberg also turned to the Washington office — particularly to Mr. Kaplan, said people who participated in or were briefed on the discussions.

In video conference calls between the Silicon Valley headquarters and Washington, the three officials construed their task narrowly. They parsed the company’s terms of service to see if the post, or Mr. Trump’s account, violated Facebook’s rules.

Mr. Kaplan argued that Mr. Trump was an important public figure and that shutting down his account or removing the statement could be seen as obstructing free speech, said three employees who knew of the discussions. He said it could also stoke a conservative backlash.

“Don’t poke the bear,” Mr. Kaplan warned.

Mr. Zuckerberg did not participate in the debate. Ms. Sandberg attended some of the video meetings but rarely spoke.

Mr. Schrage concluded that Mr. Trump’s language had not violated Facebook’s rules and that the candidate’s views had public value. “We were trying to make a decision based on all the legal and technical evidence before us,” he said in an interview.

In the end, Mr. Trump’s statement and account remained on the site. When Mr. Trump won election the next fall, giving Republicans control of the White House as well as Congress, Mr. Kaplan was empowered to plan accordingly. The company hired a former aide to Mr. Trump’s new attorney general, Jeff Sessions, along with lobbying firms linked to Republican lawmakers who had jurisdiction over internet companies.

But inside Facebook, new troubles were brewing.

Minimizing Russia’s Role

In the final months of Mr. Trump’s presidential campaign, Russian agents escalated a yearlong effort to hack and harass his Democratic opponents, culminating in the release of thousands of emails stolen from prominent Democrats and party officials.

Facebook had said nothing publicly about any problems on its own platform. But in the spring of 2016, a company expert on Russian cyberwarfare spotted something worrisome. He reached out to his boss, Mr. Stamos.

Mr. Stamos’s team discovered that Russian hackers appeared to be probing Facebook accounts for people connected to the presidential campaigns, said two employees. Months later, as Mr. Trump battled Hillary Clinton in the general election, the team also found Facebook accounts linked to Russian hackers who were messaging journalists to share information from the stolen emails.

Mr. Stamos, 39, told Colin Stretch, Facebook’s general counsel, about the findings, said two people involved in the conversations. At the time, Facebook had no policy on disinformation or any resources dedicated to searching for it.

Mr. Stamos, acting on his own, then directed a team to scrutinize the extent of Russian activity on Facebook. In December 2016, after Mr. Zuckerberg publicly scoffed at the idea that fake news on Facebook had helped elect Mr. Trump, Mr. Stamos — alarmed that the company’s chief executive seemed unaware of his team’s findings — met with Mr. Zuckerberg, Ms. Sandberg and other top Facebook leaders.

Ms. Sandberg was angry. Looking into the Russian activity without approval, she said, had left the company exposed legally. Other executives asked Mr. Stamos why they had not been told sooner.

Still, Ms. Sandberg and Mr. Zuckerberg decided to expand on Mr. Stamos’s work, creating a group called Project P, for “propaganda,” to study false news on the site, according to people involved in the discussions. By January 2017, the group knew that Mr. Stamos’s original team had only scratched the surface of Russian activity on Facebook, and pressed to issue a public paper about their findings.

But Mr. Kaplan and other Facebook executives objected. Washington was already reeling from an official finding by American intelligence agencies that Vladimir V. Putin, the Russian president, had personally ordered an influence campaign aimed at helping elect Mr. Trump.

If Facebook implicated Russia further, Mr. Kaplan said, Republicans would accuse the company of siding with Democrats. And if Facebook pulled down the Russians’ fake pages, regular Facebook users might also react with outrage at having been deceived: His own mother-in-law, Mr. Kaplan said, had followed a Facebook page created by Russian trolls.

Ms. Sandberg sided with Mr. Kaplan, recalled four people involved. Mr. Zuckerberg — who spent much of 2017 on a national “listening tour,” feeding cows in Wisconsin and eating dinner with Somali refugees in Minnesota — did not participate in the conversations about the public paper. When it was published that April, the word “Russia” never appeared.

…

A Political Playbook

The combined revelations infuriated Democrats, finally fracturing the political consensus that had protected Facebook and other big tech companies from Beltway interference. Republicans, already concerned that the platform was censoring conservative views, accused Facebook of fueling what they claimed were meritless conspiracy charges against Mr. Trump and Russia. Democrats, long allied with Silicon Valley on issues including immigration and gay rights, now blamed Mr. Trump’s win partly on Facebook’s tolerance for fraud and disinformation.

After stalling for weeks, Facebook eventually agreed to hand over the Russian posts to Congress. Twice in October 2017, Facebook was forced to revise its public statements, finally acknowledging that close to 126 million people had seen the Russian posts.

The same month, Mr. Warner and Senator Amy Klobuchar, the Minnesota Democrat, introduced legislation to compel Facebook and other internet firms to disclose who bought political ads on their sites — a significant expansion of federal regulation over tech companies.

“It’s time for Facebook to let all of us see the ads bought by Russians *and paid for in Rubles* during the last election,” Ms. Klobuchar wrote on her own Facebook page.

Facebook girded for battle. Days after the bill was unveiled, Facebook hired Mr. Warner’s former chief of staff, Luke Albee, to lobby on it. Mr. Kaplan’s team took a larger role in managing the company’s Washington response, routinely reviewing Facebook news releases for words or phrases that might rile conservatives.

Ms. Sandberg also reached out to Ms. Klobuchar. She had been friendly with the senator, who is featured on the website for Lean In, Ms. Sandberg’s empowerment initiative. Ms. Sandberg had contributed a blurb to Ms. Klobuchar’s 2015 memoir, and the senator’s chief of staff had previously worked at Ms. Sandberg’s charitable foundation.

But in a tense conversation shortly after the ad legislation was introduced, Ms. Sandberg complained about Ms. Klobuchar’s attacks on the company, said a person who was briefed on the call. Ms. Klobuchar did not back down on her legislation. But she dialed down her criticism in at least one venue important to the company: After blasting Facebook repeatedly that fall on her own Facebook page, Ms. Klobuchar hardly mentioned the company in posts between November and February.

A spokesman for Ms. Klobuchar said in a statement that Facebook’s lobbying had not lessened her commitment to holding the company accountable. “Facebook was pushing to exclude issue ads from the Honest Ads Act, and Senator Klobuchar strenuously disagreed and refused to change the bill,” he said.

In October 2017, Facebook also expanded its work with a Washington-based consultant, Definers Public Affairs, that had originally been hired to monitor press coverage of the company. Founded by veterans of Republican presidential politics, Definers specialized in applying political campaign tactics to corporate public relations — an approach long employed in Washington by big telecommunications firms and activist hedge fund managers, but less common in tech.

Definers had established a Silicon Valley outpost earlier that year, led by Tim Miller, a former spokesman for Jeb Bush who preached the virtues of campaign-style opposition research. For tech firms, he argued in one interview, a goal should be to “have positive content pushed out about your company and negative content that’s being pushed out about your competitor.”

Facebook quickly adopted that strategy. In November 2017, the social network came out in favor of a bill called the Stop Enabling Sex Traffickers Act, which made internet companies responsible for sex trafficking ads on their sites.

Google and others had fought the bill for months, worrying it would set a cumbersome precedent. But the sex trafficking bill was championed by Senator John Thune, a Republican of South Dakota who had pummeled Facebook over accusations that it censored conservative content, and Senator Richard Blumenthal, a Connecticut Democrat and senior commerce committee member who was a frequent critic of Facebook.

Facebook broke ranks with other tech companies, hoping the move would help repair relations on both sides of the aisle, said two congressional staffers and three tech industry officials.

When the bill came to a vote in the House in February, Ms. Sandberg offered public support online, urging Congress to “make sure we pass meaningful and strong legislation to stop sex trafficking.”

Opposition Research

In March, The Times, The Observer of London and The Guardian prepared to publish a joint investigation into how Facebook user data had been appropriated by Cambridge Analytica to profile American voters. A few days before publication, The Times presented Facebook with evidence that copies of improperly acquired Facebook data still existed, despite earlier promises by Cambridge executives and others to delete it.

Mr. Zuckerberg and Ms. Sandberg met with their lieutenants to determine a response. They decided to pre-empt the stories, saying in a statement published late on a Friday night that Facebook had suspended Cambridge Analytica from its platform. The executives figured that getting ahead of the news would soften its blow, according to people in the discussions.

They were wrong. The story drew worldwide outrage, prompting lawsuits and official investigations in Washington, London and Brussels. For days, Mr. Zuckerberg and Ms. Sandberg remained out of sight, mulling how to respond. While the Russia investigation had devolved into an increasingly partisan battle, the Cambridge scandal set off Democrats and Republicans alike. And in Silicon Valley, other tech firms began exploiting the outcry to burnish their own brands.

“We’re not going to traffic in your personal life,” Tim Cook, Apple’s chief executive, said in an MSNBC interview. “Privacy to us is a human right. It’s a civil liberty.” (Mr. Cook’s criticisms infuriated Mr. Zuckerberg, who later ordered his management team to use only Android phones — arguing that the operating system had far more users than Apple’s.)

Facebook scrambled anew. Executives quietly shelved an internal communications campaign, called “We Get It,” meant to assure employees that the company was committed to getting back on track in 2018.

Then Facebook went on the offensive. Mr. Kaplan prevailed on Ms. Sandberg to promote Kevin Martin, a former Federal Communications Commission chairman and fellow Bush administration veteran, to lead the company’s American lobbying efforts. Facebook also expanded its work with Definers.

On a conservative news site called the NTK Network, dozens of articles blasted Google and Apple for unsavory business practices. One story called Mr. Cook hypocritical for chiding Facebook over privacy, noting that Apple also collects reams of data from users. Another played down the impact of the Russians’ use of Facebook.

The rash of news coverage was no accident: NTK is an affiliate of Definers, sharing offices and staff with the public relations firm in Arlington, Va. Many NTK Network stories are written by staff members at Definers or America Rising, the company’s political opposition-research arm, to attack their clients’ enemies. While the NTK Network does not have a large audience of its own, its content is frequently picked up by popular conservative outlets, including Breitbart.

Mr. Miller acknowledged that Facebook and Apple do not directly compete. Definers’ work on Apple is funded by a third technology company, he said, but Facebook has pushed back against Apple because Mr. Cook’s criticism upset Facebook.

If the privacy issue comes up, Facebook is happy to “muddy the waters,” Mr. Miller said over drinks at an Oakland, Calif., bar last month.

Ms. Sandberg had said little publicly about the company’s problems. But inside Facebook, her approach had begun to draw criticism.

…

Facebook also continued to look for ways to deflect criticism to rivals. In June, after The Times reported on Facebook’s previously undisclosed deals to share user data with device makers — partnerships Facebook had failed to disclose to lawmakers — executives ordered up focus groups in Washington.

In separate sessions with liberals and conservatives, about a dozen at a time, Facebook previewed messages to lawmakers. Among the approaches it tested was bringing YouTube and other social media platforms into the controversy, while arguing that Google struck similar data-sharing deals.

Deflecting Criticism

By then, some of the harshest criticism of Facebook was coming from the political left, where activists and policy experts had begun calling for the company to be broken up.

In July, organizers with a coalition called Freedom from Facebook crashed a hearing of the House Judiciary Committee, where a company executive was testifying about its policies. As the executive spoke, the organizers held aloft signs depicting Ms. Sandberg and Mr. Zuckerberg, who are both Jewish, as two heads of an octopus stretching around the globe.

Eddie Vale, a Democratic public relations strategist who led the protest, later said the image was meant to evoke old cartoons of Standard Oil, the Gilded Age monopoly. But a Facebook official quickly called the Anti-Defamation League, a leading Jewish civil rights organization, to flag the sign. Facebook and other tech companies had partnered with the civil rights group since late 2017 on an initiative to combat anti-Semitism and hate speech online.

“Depicting Jews as an octopus encircling the globe is a classic anti-Semitic trope,” the organization wrote. “Protest Facebook — or anyone — all you want, but pick a different image.” The criticism was soon echoed in conservative outlets including The Washington Free Beacon, which has sought to tie Freedom from Facebook to what the publication calls “extreme anti-Israel groups.”

An A.D.L. spokeswoman, Betsaida Alcantara, said the group routinely fielded reports of anti-Semitic slurs from journalists, synagogues and others. “Our experts evaluate each one based on our years of experience, and we respond appropriately,” Ms. Alcantara said. (The group has at times sharply criticized Facebook, including when Mr. Zuckerberg suggested that his company should not censor Holocaust deniers.)

Facebook also used Definers to take on bigger opponents, such as Mr. Soros, a longtime boogeyman to mainstream conservatives and the target of intense anti-Semitic smears on the far right. A research document circulated by Definers to reporters this summer, just a month after the House hearing, cast Mr. Soros as the unacknowledged force behind what appeared to be a broad anti-Facebook movement.

He was a natural target. In a speech at the World Economic Forum in January, he had attacked Facebook and Google, describing them as a monopolist “menace” with “neither the will nor the inclination to protect society against the consequences of their actions.”

Definers pressed reporters to explore the financial connections between Mr. Soros’s family or philanthropies and groups that were members of Freedom from Facebook, such as Color of Change, an online racial justice organization, as well as a progressive group founded by Mr. Soros’s son. (An official at Mr. Soros’s Open Society Foundations said the philanthropy had supported both member groups, but not Freedom from Facebook, and had made no grants to support campaigns against Facebook.)

“While Mr. Zuckerberg has conducted a public apology tour in the last year, Ms. Sandberg has overseen an aggressive lobbying campaign to combat Facebook’s critics, shift public anger toward rival companies and ward off damaging regulation. Facebook employed a Republican opposition-research firm to discredit activist protesters, in part by linking them to the liberal financier George Soros. It also tapped its business relationships, lobbying a Jewish civil rights group to cast some criticism of the company as anti-Semitic.”

Imagine if your job was to handle Facebook’s bad press. That was apparently Sheryl Sandberg’s job behind the scenes while Mark Zuckerberg was acting as the apologetic public face of Facebook.

But both Zuckerberg and Sandberg appeared to have largely the same response to the scandals involving Facebook’s growing use as a platform for spreading hate and extremism: keep Facebook out of those disputes by arguing that it’s just a platform, not a publisher:

…‘Don’t Poke the Bear’

Three years ago, Mr. Zuckerberg, who founded Facebook in 2004 while attending Harvard, was celebrated for the company’s extraordinary success. Ms. Sandberg, a former Clinton administration official and Google veteran, had become a feminist icon with the publication of her empowerment manifesto, “Lean In,” in 2013.

But as Facebook grew, so did the hate speech, bullying and other toxic content on the platform. When researchers and activists in Myanmar, India, Germany and elsewhere warned that Facebook had become an instrument of government propaganda and ethnic cleansing, the company largely ignored them. Facebook had positioned itself as a platform, not a publisher. Taking responsibility for what users posted, or acting to censor it, was expensive and complicated. Many Facebook executives worried that any such efforts would backfire.
…

Sandberg also appears to have increasingly relied on Joel Kaplan, Facebook’s vice president of global public policy, for advice on how to handle these issues and scandals. Kaplan previously served in the George W. Bush administration. When Donald Trump first ran for president in 2015 and announced his plan for a “total and complete shutdown” on Muslims entering the United States, and that message was shared more than 15,000 times on Facebook, Zuckerberg raised the question of whether Trump had violated the platform’s terms of service. Sandberg turned to Kaplan for advice. Kaplan, unsurprisingly, argued that any sort of crackdown on Trump’s use of Facebook would be seen as obstructing free speech and would prompt a conservative backlash. Kaplan’s advice was taken:

…
Then Donald J. Trump ran for president. He described Muslim immigrants and refugees as a danger to America, and in December 2015 posted a statement on Facebook calling for a “total and complete shutdown” on Muslims entering the United States. Mr. Trump’s call to arms — widely condemned by Democrats and some prominent Republicans — was shared more than 15,000 times on Facebook, an illustration of the site’s power to spread racist sentiment.

Mr. Zuckerberg, who had helped found a nonprofit dedicated to immigration reform, was appalled, said employees who spoke to him or were familiar with the conversation. He asked Ms. Sandberg and other executives if Mr. Trump had violated Facebook’s terms of service.

The question was unusual. Mr. Zuckerberg typically focused on broader technology issues; politics was Ms. Sandberg’s domain. In 2010, Ms. Sandberg, a Democrat, had recruited a friend and fellow Clinton alum, Marne Levine, as Facebook’s chief Washington representative. A year later, after Republicans seized control of the House, Ms. Sandberg installed another friend, a well-connected Republican: Joel Kaplan, who had attended Harvard with Ms. Sandberg and later served in the George W. Bush administration.

Some at Facebook viewed Mr. Trump’s 2015 attack on Muslims as an opportunity to finally take a stand against the hate speech coursing through its platform. But Ms. Sandberg, who was edging back to work after the death of her husband several months earlier, delegated the matter to Mr. Schrage and Monika Bickert, a former prosecutor whom Ms. Sandberg had recruited as the company’s head of global policy management. Ms. Sandberg also turned to the Washington office — particularly to Mr. Kaplan, said people who participated in or were briefed on the discussions.

In video conference calls between the Silicon Valley headquarters and Washington, the three officials construed their task narrowly. They parsed the company’s terms of service to see if the post, or Mr. Trump’s account, violated Facebook’s rules.

Mr. Kaplan argued that Mr. Trump was an important public figure and that shutting down his account or removing the statement could be seen as obstructing free speech, said three employees who knew of the discussions. He said it could also stoke a conservative backlash.

“Don’t poke the bear,” Mr. Kaplan warned.

Mr. Zuckerberg did not participate in the debate. Ms. Sandberg attended some of the video meetings but rarely spoke.

Mr. Schrage concluded that Mr. Trump’s language had not violated Facebook’s rules and that the candidate’s views had public value. “We were trying to make a decision based on all the legal and technical evidence before us,” he said in an interview.
…

And note how, after Trump won, Facebook hired a former aide to Jeff Sessions and lobbying firms linked to Republican lawmakers who had jurisdiction over internet companies. Facebook was making pleasing Republicans in Washington a top priority:

…
In the end, Mr. Trump’s statement and account remained on the site. When Mr. Trump won election the next fall, giving Republicans control of the White House as well as Congress, Mr. Kaplan was empowered to plan accordingly. The company hired a former aide to Mr. Trump’s new attorney general, Jeff Sessions, along with lobbying firms linked to Republican lawmakers who had jurisdiction over internet companies.
…

Kaplan also encouraged Facebook to avoid investigating the alleged Russian troll campaigns too closely. This was his advice both in 2016, while the campaign was ongoing, and after the election in 2017. Interestingly, Facebook apparently found accounts linked to ‘Russian hackers’ that were using Facebook to look up information on the presidential campaigns. This was in the spring of 2016. Keep in mind that the initial reports of the hacked emails didn’t start until mid-June of 2016. Summer technically started about a week later. So how did Facebook’s internal team know these accounts were associated with Russian hackers before the ‘Russian hacker’ scandal erupted? That’s unclear. But the article goes on to say that this same team also found accounts linked to the Russian hackers messaging journalists to share contents of the hacked emails. Was “Guccifer 2.0” using Facebook to talk with journalists? That’s also unclear. But it sounds like Facebook was indeed actively observing what it thought were Russian hackers using the platform:

…Minimizing Russia’s Role

In the final months of Mr. Trump’s presidential campaign, Russian agents escalated a yearlong effort to hack and harass his Democratic opponents, culminating in the release of thousands of emails stolen from prominent Democrats and party officials.

Facebook had said nothing publicly about any problems on its own platform. But in the spring of 2016, a company expert on Russian cyberwarfare spotted something worrisome. He reached out to his boss, Mr. Stamos.

Mr. Stamos’s team discovered that Russian hackers appeared to be probing Facebook accounts for people connected to the presidential campaigns, said two employees. Months later, as Mr. Trump battled Hillary Clinton in the general election, the team also found Facebook accounts linked to Russian hackers who were messaging journalists to share information from the stolen emails.

Mr. Stamos, 39, told Colin Stretch, Facebook’s general counsel, about the findings, said two people involved in the conversations. At the time, Facebook had no policy on disinformation or any resources dedicated to searching for it.
…

Alex Stamos, Facebook’s head of security, directed a team to examine the Russian activity on Facebook. And yet Zuckerberg and Sandberg apparently never learned about its findings until December of 2016, after the election. And when they did learn, Sandberg got angry at Stamos for not getting approval before looking into the matter, because it could leave the company legally exposed, highlighting again that not knowing about the abuses on its platform is part of the company’s legal strategy. By January of 2017, Stamos wanted to issue a public paper on the findings, but Joel Kaplan shot down the idea, arguing that doing so would cause Republicans to turn on the company. Sandberg again sided with Kaplan:

…Mr. Stamos, acting on his own, then directed a team to scrutinize the extent of Russian activity on Facebook. In December 2016, after Mr. Zuckerberg publicly scoffed at the idea that fake news on Facebook had helped elect Mr. Trump, Mr. Stamos — alarmed that the company’s chief executive seemed unaware of his team’s findings — met with Mr. Zuckerberg, Ms. Sandberg and other top Facebook leaders.

Ms. Sandberg was angry. Looking into the Russian activity without approval, she said, had left the company exposed legally. Other executives asked Mr. Stamos why they had not been told sooner.

Still, Ms. Sandberg and Mr. Zuckerberg decided to expand on Mr. Stamos’s work, creating a group called Project P, for “propaganda,” to study false news on the site, according to people involved in the discussions. By January 2017, the group knew that Mr. Stamos’s original team had only scratched the surface of Russian activity on Facebook, and pressed to issue a public paper about their findings.

But Mr. Kaplan and other Facebook executives objected. Washington was already reeling from an official finding by American intelligence agencies that Vladimir V. Putin, the Russian president, had personally ordered an influence campaign aimed at helping elect Mr. Trump.

If Facebook implicated Russia further, Mr. Kaplan said, Republicans would accuse the company of siding with Democrats. And if Facebook pulled down the Russians’ fake pages, regular Facebook users might also react with outrage at having been deceived: His own mother-in-law, Mr. Kaplan said, had followed a Facebook page created by Russian trolls.

Ms. Sandberg sided with Mr. Kaplan, recalled four people involved. Mr. Zuckerberg — who spent much of 2017 on a national “listening tour,” feeding cows in Wisconsin and eating dinner with Somali refugees in Minnesota — did not participate in the conversations about the public paper. When it was published that April, the word “Russia” never appeared.
…

“Mr. Stamos, acting on his own, then directed a team to scrutinize the extent of Russian activity on Facebook. In December 2016, after Mr. Zuckerberg publicly scoffed at the idea that fake news on Facebook had helped elect Mr. Trump, Mr. Stamos — alarmed that the company’s chief executive seemed unaware of his team’s findings — met with Mr. Zuckerberg, Ms. Sandberg and other top Facebook leaders.”

Both Zuckerberg and Sandberg were apparently unaware of the findings of Stamos’s team that had been looking into Russian activity since the spring of 2016 and found early signs of the ‘Russian hacking teams’ setting up Facebook pages to distribute the emails. Huh.

And then we get to Definers Public Affairs, the company founded by Republican political operatives and specializing in bringing political campaign tactics to corporate public relations. In October of 2017, Facebook appears to have decided to double down on the Definers strategy, which revolves around simultaneously pushing out positive coverage of Facebook while attacking Facebook’s opponents and critics to muddy the waters:

…In October 2017, Facebook also expanded its work with a Washington-based consultant, Definers Public Affairs, that had originally been hired to monitor press coverage of the company. Founded by veterans of Republican presidential politics, Definers specialized in applying political campaign tactics to corporate public relations — an approach long employed in Washington by big telecommunications firms and activist hedge fund managers, but less common in tech.

Definers had established a Silicon Valley outpost earlier that year, led by Tim Miller, a former spokesman for Jeb Bush who preached the virtues of campaign-style opposition research. For tech firms, he argued in one interview, a goal should be to “have positive content pushed out about your company and negative content that’s being pushed out about your competitor.”

Facebook quickly adopted that strategy. In November 2017, the social network came out in favor of a bill called the Stop Enabling Sex Traffickers Act, which made internet companies responsible for sex trafficking ads on their sites.

Google and others had fought the bill for months, worrying it would set a cumbersome precedent. But the sex trafficking bill was championed by Senator John Thune, a Republican of South Dakota who had pummeled Facebook over accusations that it censored conservative content, and Senator Richard Blumenthal, a Connecticut Democrat and senior commerce committee member who was a frequent critic of Facebook.

Facebook broke ranks with other tech companies, hoping the move would help repair relations on both sides of the aisle, said two congressional staffers and three tech industry officials.

When the bill came to a vote in the House in February, Ms. Sandberg offered public support online, urging Congress to “make sure we pass meaningful and strong legislation to stop sex trafficking.”
…

In March, The Times, The Observer of London and The Guardian prepared to publish a joint investigation into how Facebook user data had been appropriated by Cambridge Analytica to profile American voters. A few days before publication, The Times presented Facebook with evidence that copies of improperly acquired Facebook data still existed, despite earlier promises by Cambridge executives and others to delete it.

Mr. Zuckerberg and Ms. Sandberg met with their lieutenants to determine a response. They decided to pre-empt the stories, saying in a statement published late on a Friday night that Facebook had suspended Cambridge Analytica from its platform. The executives figured that getting ahead of the news would soften its blow, according to people in the discussions.

They were wrong. The story drew worldwide outrage, prompting lawsuits and official investigations in Washington, London and Brussels. For days, Mr. Zuckerberg and Ms. Sandberg remained out of sight, mulling how to respond. While the Russia investigation had devolved into an increasingly partisan battle, the Cambridge scandal set off Democrats and Republicans alike. And in Silicon Valley, other tech firms began exploiting the outcry to burnish their own brands.

“We’re not going to traffic in your personal life,” Tim Cook, Apple’s chief executive, said in an MSNBC interview. “Privacy to us is a human right. It’s a civil liberty.” (Mr. Cook’s criticisms infuriated Mr. Zuckerberg, who later ordered his management team to use only Android phones — arguing that the operating system had far more users than Apple’s.)

Facebook scrambled anew. Executives quietly shelved an internal communications campaign, called “We Get It,” meant to assure employees that the company was committed to getting back on track in 2018.

Then Facebook went on the offensive. Mr. Kaplan prevailed on Ms. Sandberg to promote Kevin Martin, a former Federal Communications Commission chairman and fellow Bush administration veteran, to lead the company’s American lobbying efforts. Facebook also expanded its work with Definers.

On a conservative news site called the NTK Network, dozens of articles blasted Google and Apple for unsavory business practices. One story called Mr. Cook hypocritical for chiding Facebook over privacy, noting that Apple also collects reams of data from users. Another played down the impact of the Russians’ use of Facebook.

The rash of news coverage was no accident: NTK is an affiliate of Definers, sharing offices and staff with the public relations firm in Arlington, Va. Many NTK Network stories are written by staff members at Definers or America Rising, the company’s political opposition-research arm, to attack their clients’ enemies. While the NTK Network does not have a large audience of its own, its content is frequently picked up by popular conservative outlets, including Breitbart.
…

Finally, in July of this year, we find Facebook accusing its critics of anti-Semitism at the same time that Definers was deploying an arguably anti-Semitic attack on those exact same critics, part of a broader Definers strategy of painting Facebook’s critics as puppets of George Soros:

…Deflecting Criticism

By then, some of the harshest criticism of Facebook was coming from the political left, where activists and policy experts had begun calling for the company to be broken up.

In July, organizers with a coalition called Freedom from Facebook crashed a hearing of the House Judiciary Committee, where a company executive was testifying about its policies. As the executive spoke, the organizers held aloft signs depicting Ms. Sandberg and Mr. Zuckerberg, who are both Jewish, as two heads of an octopus stretching around the globe.

Eddie Vale, a Democratic public relations strategist who led the protest, later said the image was meant to evoke old cartoons of Standard Oil, the Gilded Age monopoly. But a Facebook official quickly called the Anti-Defamation League, a leading Jewish civil rights organization, to flag the sign. Facebook and other tech companies had partnered with the civil rights group since late 2017 on an initiative to combat anti-Semitism and hate speech online.

“Depicting Jews as an octopus encircling the globe is a classic anti-Semitic trope,” the organization wrote. “Protest Facebook — or anyone — all you want, but pick a different image.” The criticism was soon echoed in conservative outlets including The Washington Free Beacon, which has sought to tie Freedom from Facebook to what the publication calls “extreme anti-Israel groups.”

An A.D.L. spokeswoman, Betsaida Alcantara, said the group routinely fielded reports of anti-Semitic slurs from journalists, synagogues and others. “Our experts evaluate each one based on our years of experience, and we respond appropriately,” Ms. Alcantara said. (The group has at times sharply criticized Facebook, including when Mr. Zuckerberg suggested that his company should not censor Holocaust deniers.)

Facebook also used Definers to take on bigger opponents, such as Mr. Soros, a longtime boogeyman to mainstream conservatives and the target of intense anti-Semitic smears on the far right. A research document circulated by Definers to reporters this summer, just a month after the House hearing, cast Mr. Soros as the unacknowledged force behind what appeared to be a broad anti-Facebook movement.

He was a natural target. In a speech at the World Economic Forum in January, he had attacked Facebook and Google, describing them as a monopolist “menace” with “neither the will nor the inclination to protect society against the consequences of their actions.”

Definers pressed reporters to explore the financial connections between Mr. Soros’s family or philanthropies and groups that were members of Freedom from Facebook, such as Color of Change, an online racial justice organization, as well as a progressive group founded by Mr. Soros’s son. (An official at Mr. Soros’s Open Society Foundations said the philanthropy had supported both member groups, but not Freedom from Facebook, and had made no grants to support campaigns against Facebook.)
…

So as we can see, Facebook’s response to scandals appears to fall into the following pattern:

1. Intentionally ignore the scandal.

2. When it’s no longer possible to ignore, try to get ahead of it by going public with a watered down admission of the problem.

3. When getting ahead of the story doesn’t work, attack Facebook’s critics (like suggesting they are all pawns of George Soros).

4. Don’t piss off Republicans.

Also, regarding the discovery of Russian hackers setting up Facebook accounts in the spring of 2016 to distribute the hacked emails, here’s a Washington Post article from September of 2017 that talks about this. According to the article, Facebook discovered these alleged Russian hacker accounts in June of 2016 (technically still spring) and promptly informed the FBI. The Facebook cybersecurity team was reportedly tracking APT28 (Fancy Bear) as just part of its normal work and discovered this activity in the course of that work. The team told the FBI, and then shortly afterwards discovered that pages for Guccifer 2.0 and DCLeaks were being set up to promote the stolen emails. And recall from the above article that the Facebook team apparently discovered messages from these accounts to journalists.


Nine days after Facebook chief executive Mark Zuckerberg dismissed as “crazy” the idea that fake news on his company’s social network played a key role in the U.S. election, President Barack Obama pulled the youthful tech billionaire aside and delivered what he hoped would be a wake-up call.

…

A Russian operation

It turned out that Facebook, without realizing it, had stumbled into the Russian operation as it was getting underway in June 2016.

At the time, cybersecurity experts at the company were tracking a Russian hacker group known as APT28, or Fancy Bear, which U.S. intelligence officials considered an arm of the Russian military intelligence service, the GRU, according to people familiar with Facebook’s activities.

Members of the Russian hacker group were best known for stealing military plans and data from political targets, so the security experts assumed that they were planning some sort of espionage operation — not a far-reaching disinformation campaign designed to shape the outcome of the U.S. presidential race.

Facebook executives shared with the FBI their suspicions that a Russian espionage operation was in the works, a person familiar with the matter said. An FBI spokesperson had no comment.

Soon thereafter, Facebook’s cyber experts found evidence that members of APT28 were setting up a series of shadowy accounts — including a persona known as Guccifer 2.0 and a Facebook page called DCLeaks — to promote stolen emails and other documents during the presidential race. Facebook officials once again contacted the FBI to share what they had seen.

After the November election, Facebook began to look more broadly at the accounts that had been created during the campaign.

A review by the company found that most of the groups behind the problematic pages had clear financial motives, which suggested that they weren’t working for a foreign government.

But amid the mass of data the company was analyzing, the security team did not find clear evidence of Russian disinformation or ad purchases by Russian-linked accounts.

Nor did any U.S. law enforcement or intelligence officials visit the company to lay out what they knew, said people familiar with the effort, even after the nation’s top intelligence official, James R. Clapper Jr., testified on Capitol Hill in January that the Russians had waged a massive propaganda campaign online.

“It turned out that Facebook, without realizing it, had stumbled into the Russian operation as it was getting underway in June 2016.”

It’s kind of an amazing story. Just by accident, Facebook’s cybersecurity experts were already tracking APT28 somehow and noticed a bunch of activity by the group on Facebook. They alert the FBI. This is in June of 2016. “Soon thereafter”, Facebook finds evidence that members of APT28 were setting up accounts for Guccifer 2.0 and DCLeaks. Facebook again informed the FBI:

…
At the time, cybersecurity experts at the company were tracking a Russian hacker group known as APT28, or Fancy Bear, which U.S. intelligence officials considered an arm of the Russian military intelligence service, the GRU, according to people familiar with Facebook’s activities.

Members of the Russian hacker group were best known for stealing military plans and data from political targets, so the security experts assumed that they were planning some sort of espionage operation — not a far-reaching disinformation campaign designed to shape the outcome of the U.S. presidential race.

Facebook executives shared with the FBI their suspicions that a Russian espionage operation was in the works, a person familiar with the matter said. An FBI spokesperson had no comment.

Soon thereafter, Facebook’s cyber experts found evidence that members of APT28 were setting up a series of shadowy accounts — including a persona known as Guccifer 2.0 and a Facebook page called DCLeaks — to promote stolen emails and other documents during the presidential race. Facebook officials once again contacted the FBI to share what they had seen.
…

So Facebook allegedly detected APT28/Fancy Bear activity in the spring of 2016. It’s unclear how they knew these were APT28/Fancy Bear hackers, and unclear how they were tracking their activity. And then they discovered these APT28 hackers were setting up pages for Guccifer 2.0 and DCLeaks. And as we saw in the above article, they also found messages from these accounts to journalists discussing the emails.

It’s a remarkable story, in part because it’s almost never told. We learn that Facebook apparently has the ability to track exactly the same Russian hacker group that’s accused of carrying out these hacks, and we learn that Facebook watched these same hackers set up the Facebook pages for Guccifer 2.0 and DCLeaks. And yet this is almost never mentioned as evidence that Russian government hackers were indeed behind the hacks. Thus far, the attribution of these hacks to APT28/Fancy Bear has relied on CrowdStrike, the US government, and the direct investigation of the hacked Democratic Party servers. But here we’re learning that Facebook apparently has its own pool of evidence that can tie APT28 to the Facebook accounts set up for Guccifer 2.0 and DCLeaks. A pool of evidence that’s almost never mentioned.

The Facebook founder’s bromantic hero was a canny operator who was obsessed with power and overrode democracy

Powerful men do love a transhistorical man-crush – fixating on an ancestor figure, who can be venerated, perhaps surpassed. Facebook’s Mark Zuckerberg has told the New Yorker about his particular fascination with the Roman emperor, Augustus – he and his wife, Priscilla Chan, have even called one of their children August.

“Basically, through a really harsh approach, he established 200 years of world peace,” Zuckerberg explained. He pondered, “What are the trade-offs in that? On the one hand, world peace is a long-term goal that people talk about today …” On the other hand, he said, “that didn’t come for free, and he had to do certain things”.

Zuckerberg loved Latin at school (“very much like coding”, he said). His sister, Donna, got her classics PhD at Princeton, is editor of the excellent Eidolon online classics magazine, and has just written a book on how “alt-right”, misogynist online communities invoke classical history.

I’m not sure whether the appealing classics nerdiness of Zuckerberg’s background makes his sanguine euphemisms more or less alarming. “He had to do certain things” and “a really harsh approach” are, let’s say, a relaxed way of describing Augustus’ brutal and systematic elimination of political opponents. And “200 years of world peace”? Well yes, if that’s what you want to call centuries of brutal conquest. Even the Roman historian Tacitus had something to say about that: “solitudinem faciunt, pacem appellant”. They make a desert and call it peace.

…

It’s true that his reign has been reconsidered time and again: it is one of those extraordinary junctions in history – when Rome’s republic teetered, crumbled, and reformed as the empire – that looks different depending on the moment from which he is examined. It is perfectly true to say that Augustus ended the civil strife that overwhelmed Rome in the late first century BC, and ushered in a period of stability and, in some ways, renewal, by the time of his death in 14 AD. That’s how I was taught about Augustus at school, I suspect not uncoincidentally by someone brought up during the second world war. But in 1939 Ronald Syme had published his brilliant account of the period, The Roman Revolution – a revolutionary book in itself, challenging Augustus’s then largely positive reputation by portraying him as a sinister figure who emerged on the tides of history out of the increasingly ungovernable Roman republic, to wield autocratic power.

Part of the fascination of the man is that he was a master of propaganda and a superb political operator. In our own era of obfuscation, deceit and fake news it’s interesting to try to unpick what was really going on. Take his brief autobiography, Res Gestae Divi Augusti. (Things Done By the Deified Augustus – no messing about here, title-wise).

The text, while heavygoing, is a fascinating document, listing his political appointments, his military achievements, the infrastructure projects he funded. But it can, with other contemporary evidence, also be interpreted as a portrait of a man who instituted an autocracy that cleverly mimicked the forms and traditions of Rome’s quasi-democratic republic.

Under the guise of restoring Rome to greatness, he hollowed out its constitution and loaded power into his own hands. Something there for Zuckerberg to think about, perhaps. Particularly considering the New Yorker’s headline for its profile: “Can Mark Zuckerberg fix Facebook before it breaks democracy?”

“Powerful men do love a transhistorical man-crush – fixating on an ancestor figure, who can be venerated, perhaps surpassed. Facebook’s Mark Zuckerberg has told the New Yorker about his particular fascination with the Roman emperor, Augustus – he and his wife, Priscilla Chan, have even called one of their children August.”

He literally named his daughter after the Roman emperor. That hints at more than just a casual historical interest.

So what is it about Caesar Augustus’s rule that Zuckerberg is so enamored with? Well, based on Zuckerberg’s own words, it sounds like it was the way Augustus took a “really harsh approach” to making decisions with difficult trade-offs in order to achieve Pax Romana, 200 years of peace for the Roman empire:

…
“Basically, through a really harsh approach, he established 200 years of world peace,” Zuckerberg explained. He pondered, “What are the trade-offs in that? On the one hand, world peace is a long-term goal that people talk about today …” On the other hand, he said, “that didn’t come for free, and he had to do certain things”.
…

And while focusing on the 200 years of peace puts an obsession with Augustus in the most positive possible light, it’s hard to ignore the fact that Augustus was also a master of propaganda and the man who oversaw the end of the Roman Republic and the imposition of an imperial model of government:

…Part of the fascination of the man is that he was a master of propaganda and a superb political operator. In our own era of obfuscation, deceit and fake news it’s interesting to try to unpick what was really going on. Take his brief autobiography, Res Gestae Divi Augusti. (Things Done By the Deified Augustus – no messing about here, title-wise).

The text, while heavygoing, is a fascinating document, listing his political appointments, his military achievements, the infrastructure projects he funded. But it can, with other contemporary evidence, also be interpreted as a portrait of a man who instituted an autocracy that cleverly mimicked the forms and traditions of Rome’s quasi-democratic republic.

Under the guise of restoring Rome to greatness, he hollowed out its constitution and loaded power into his own hands. Something there for Zuckerberg to think about, perhaps. Particularly considering the New Yorker’s headline for its profile: “Can Mark Zuckerberg fix Facebook before it breaks democracy?”

And that’s a little peek into Mark Zuckerberg’s mind that gives us a sense of what he spends time thinking about: historic figures who did a lot of harsh things to achieve historic ‘greatness’. That’s not a scary red flag or anything.

Here’s a new reason to hate Facebook: if you hate Facebook on Facebook, Facebook might put you on its “Be on the lookout” (BOLO) list and start using its location tracking technology to track your whereabouts. That’s according to a new report based on accounts from a number of current and former Facebook employees who described how the company’s BOLO list policy works. And according to security experts, while Facebook isn’t unique in keeping a BOLO list of threats to the company, it is highly unusual in that it can use its own technology to track the people on that list. Facebook can track BOLO-listed users’ locations using their IP addresses or the location data their smartphones send through the Facebook app.

So how does one end up on this BOLO list? Well, there are the reasonable ways, like posting a specific threat against Facebook or one of its employees on one of Facebook’s social media platforms. But it sounds like the standards are a lot more subjective than that, and people are placed on the BOLO list for simply posting things like “F— you, Mark” or “F— Facebook”. Another group routinely put on the list is former employees and contractors. Again, it doesn’t sound like it takes much to get on the list. Simply getting emotional when your contract isn’t extended appears to be enough. Given those standards, it’s almost surprising that the BOLO list is reportedly only hundreds of people long and not thousands:

CNBC

Facebook uses its apps to track users it thinks could threaten employees and offices

* Facebook maintains a list of individuals that its security guards must “be on lookout” for that is comprised of users who’ve made threatening statements against the company on its social network as well as numerous former employees.
* The company’s information security team is capable of tracking these individuals’ whereabouts using the location data they provide through Facebook’s apps and websites.
* More than a dozen former Facebook security employees described the company’s tactics to CNBC, with several questioning the ethics of the company’s practices.

Salvador Rodriguez
Published 02/17/2019

In early 2018, a Facebook user made a public threat on the social network against one of the company’s offices in Europe.

Facebook picked up the threat, pulled the user’s data and determined he was in the same country as the office he was targeting. The company informed the authorities about the threat and directed its security officers to be on the lookout for the user.

“He made a veiled threat that ‘Tomorrow everyone is going to pay’ or something to that effect,” a former Facebook security employee told CNBC.

The incident is representative of the steps Facebook takes to keep its offices, executives and employees protected, according to more than a dozen former Facebook employees who spoke with CNBC. The company mines its social network for threatening comments, and in some cases uses its products to track the location of people it believes present a credible threat.

Several of the former employees questioned the ethics of Facebook’s security strategies, with one of them calling the tactics “very Big Brother-esque.”

Other former employees argue these security measures are justified by Facebook’s reach and the intense emotions it can inspire. The company has 2.7 billion users across its services. That means that if just 0.01 percent of users make a threat, Facebook is still dealing with 270,000 potential security risks.

“Our physical security team exists to keep Facebook employees safe,” a Facebook spokesman said in a statement. “They use industry-standard measures to assess and address credible threats of violence against our employees and our company, and refer these threats to law enforcement when necessary. We have strict processes designed to protect people’s privacy and adhere to all data privacy laws and Facebook’s terms of service. Any suggestion our onsite physical security team has overstepped is absolutely false.”

Facebook is unique in the way it uses its own product to mine data for threats and locations of potentially dangerous individuals, said Tim Bradley, senior consultant with Incident Management Group, a corporate security consulting firm that deals with employee safety issues. However, the Occupational Safety and Health Administration’s general duty clause says that companies have to provide their employees with a workplace free of hazards that could cause death or serious physical harm, Bradley said.

“If they know there’s a threat against them, they have to take steps,” Bradley said. “How they got the information is secondary to the fact that they have a duty to protect employees.”

Making the list

One of the tools Facebook uses to monitor threats is a “be on lookout” or “BOLO” list, which is updated approximately once a week. The list was created in 2008, an early employee in Facebook’s physical security group told CNBC. It now contains hundreds of people, according to four former Facebook security employees who have left the company since 2016.

Facebook notifies its security professionals anytime a new person is added to the BOLO list, sending out a report that includes information about the person, such as their name, photo, their general location and a short description of why they were added.

In recent years, the security team even had a large monitor that displayed the faces of people on the list, according to a photo CNBC has seen and two people familiar, although Facebook says it no longer operates this monitor.

Other companies keep similar lists of threats, Bradley and other sources said. But Facebook is unique because it can use its own products to identify these threats and track the location of people on the list.

Users who publicly threaten the company, its offices or employees — including posting threatening comments in response to posts from executives like CEO Mark Zuckerberg and COO Sheryl Sandberg — are often added to the list. These users are typically described as making “improper communication” or “threatening communication,” according to former employees.

The bar can be pretty low. While some users end up on the list after repeated appearances on company property or long email threats, others might find themselves on the BOLO list for saying something as simple as “F— you, Mark,” “F— Facebook” or “I’m gonna go kick your a–,” according to a former employee who worked with the executive protection team. A different former employee who was on the company’s security team said there were no clearly communicated standards to determine what kinds of actions could land somebody on the list, and that decisions were often made on a case-by-case basis.

The Facebook spokesman disputed this, saying that people were only added after a “rigorous review to determine the validity of the threat.”

Awkward situations

Most people on the list do not know they’re on it. This sometimes leads to tense situations.

Several years ago, one Facebook user discovered he was on the BOLO list when he showed up to Facebook’s Menlo Park campus for lunch with a friend who worked there, according to a former employee who witnessed the incident.

The user checked in with security to register as a guest. His name popped up right away, alerting security. He was on the list. His issue had to do with messages he had sent to Zuckerberg, according to a person familiar with the circumstances.

Soon, more security guards showed up in the entrance area where the guest had tried to register. No one grabbed the individual, but security guards stood at his sides and at each of the doors leading in and out of that entrance area.

Eventually, the employee showed up mad and demanded that his friend be removed from the BOLO list. After the employee met with Facebook’s global security intelligence and investigations team, the friend was removed from the list — a rare occurrence.

“No person would be on BOLO without credible cause,” the Facebook spokesman said in regard to this incident.

It’s not just users who find themselves on Facebook’s BOLO list. Many of the people on the list are former Facebook employees and contractors, whose colleagues ask to add them when they leave the company.

Some former employees are listed for having a track record of poor behavior, such as stealing company equipment. But in many cases, there is no reason listed on the BOLO description. Three people familiar said that almost every Facebook employee who gets fired is added to the list, and one called the process “really subjective.” Another said that contractors are added if they get emotional when their contracts are not extended.

The Facebook spokesman countered that the process is more rigorous than these people claim. “Former employees are only added under very specific circumstances, after review by legal and HR, including threats of violence or harassment.”

The practice of adding former employees to the BOLO list has occasionally created awkward situations for the company’s recruiters, who often reach out to former employees to fill openings. Ex-employees have shown up for job interviews only to find out that they couldn’t enter because they were on the BOLO list, said a former security employee who left the company last year.

“It becomes a whole big embarrassing situation,” this person said.

Tracked by special request

Facebook has the capability to track BOLO users’ whereabouts by using their smartphone’s location data collected through the Facebook app, or their IP address collected through the company’s website.

Facebook only tracks BOLO-listed users when their threats are deemed credible, according to a former employee with firsthand knowledge of the company’s security procedures. This could include a detailed threat with an exact location and timing of an attack, or a threat from an individual who makes a habit of attending company events, such as the Facebook shareholders’ meeting. This former employee emphasized Facebook could not look up users’ locations without cause.

When a credible threat is detected, the global security operations center and the global security intelligence and investigations units make a special request to the company’s information security team, which has the capabilities to track users’ location information. In some cases, the tracking doesn’t go very far — for instance, if a BOLO user made a threat about a specific location but their current location shows them nowhere close, the tracking might end there.

But if the BOLO user is nearby, the information security team can continue to monitor their location periodically and keep other security teams on alert.

Depending on the threat, Facebook’s security teams can take other actions, such as stationing security guards, escorting a BOLO user off campus or alerting law enforcement.

Facebook’s information security team has tracked users’ locations in other safety-related instances, too.

In 2017, a Facebook manager alerted the company’s security teams when a group of interns she was managing did not log into the company’s systems to work from home. They had been on a camping trip, according to a former Facebook security employee, and the manager was concerned about their safety.

Facebook’s information security team became involved in the situation and used the interns’ location data to try and find out if they were safe. “They call it ‘pinging them’, pinging their Facebook accounts,” the former security employee recalled.

After the location data did not turn up anything useful, the information security team then kept digging and learned that the interns had exchanged messages suggesting they never intended to come into work that day — essentially, they had lied to the manager. The information security team gave the manager a summary of what they had found.

“There was legit concern about the safety of these individuals,” the Facebook spokesman said. “In each isolated case, these employees were unresponsive on all communication channels. There’s a set of protocols guiding when and how we access employee data when an employee goes missing.”

“Several of the former employees questioned the ethics of Facebook’s security strategies, with one of them calling the tactics ‘very Big Brother-esque.’”

Yeah, “very Big Brother-esque” sounds like a pretty good description of the situation. In part because Facebook is doing the tracking with its own technology:

…Facebook is unique in the way it uses its own product to mine data for threats and locations of potentially dangerous individuals, said Tim Bradley, senior consultant with Incident Management Group, a corporate security consulting firm that deals with employee safety issues. However, the Occupational Safety and Health Administration’s general duty clause says that companies have to provide their employees with a workplace free of hazards that could cause death or serious physical harm, Bradley said.

“If they know there’s a threat against them, they have to take steps,” Bradley said. “How they got the information is secondary to the fact that they have a duty to protect employees.”

…

Other companies keep similar lists of threats, Bradley and other sources said. But Facebook is unique because it can use its own products to identify these threats and track the location of people on the list.

…

Tracked by special request

Facebook has the capability to track BOLO users’ whereabouts by using their smartphone’s location data collected through the Facebook app, or their IP address collected through the company’s website.

Facebook only tracks BOLO-listed users when their threats are deemed credible, according to a former employee with firsthand knowledge of the company’s security procedures. This could include a detailed threat with an exact location and timing of an attack, or a threat from an individual who makes a habit of attending company events, such as the Facebook shareholders’ meeting. This former employee emphasized Facebook could not look up users’ locations without cause.

When a credible threat is detected, the global security operations center and the global security intelligence and investigations units make a special request to the company’s information security team, which has the capabilities to track users’ location information. In some cases, the tracking doesn’t go very far — for instance, if a BOLO user made a threat about a specific location but their current location shows them nowhere close, the tracking might end there.

But if the BOLO user is nearby, the information security team can continue to monitor their location periodically and keep other security teams on alert.

Depending on the threat, Facebook’s security teams can take other actions, such as stationing security guards, escorting a BOLO user off campus or alerting law enforcement.
…

Getting on the list also sounds shockingly easy. A simple “F— you, Mark,” or “F— Facebook” post on Facebook is all it apparently takes. Given that, it’s almost unbelievable that the list only contains hundreds of people. That said, the “hundreds of people” estimate comes from former security employees who left the company in 2016 or later, so you have to wonder how much longer the BOLO list could be today simply given the amount of bad press Facebook has received in the last year alone:

…Making the list

One of the tools Facebook uses to monitor threats is a “be on lookout” or “BOLO” list, which is updated approximately once a week. The list was created in 2008, an early employee in Facebook’s physical security group told CNBC. It now contains hundreds of people, according to four former Facebook security employees who have left the company since 2016.

Facebook notifies its security professionals anytime a new person is added to the BOLO list, sending out a report that includes information about the person, such as their name, photo, their general location and a short description of why they were added.

In recent years, the security team even had a large monitor that displayed the faces of people on the list, according to a photo CNBC has seen and two people familiar, although Facebook says it no longer operates this monitor.

…

Users who publicly threaten the company, its offices or employees — including posting threatening comments in response to posts from executives like CEO Mark Zuckerberg and COO Sheryl Sandberg — are often added to the list. These users are typically described as making “improper communication” or “threatening communication,” according to former employees.

The bar can be pretty low. While some users end up on the list after repeated appearances on company property or long email threats, others might find themselves on the BOLO list for saying something as simple as “F— you, Mark,” “F— Facebook” or “I’m gonna go kick your a–,” according to a former employee who worked with the executive protection team. A different former employee who was on the company’s security team said there were no clearly communicated standards to determine what kinds of actions could land somebody on the list, and that decisions were often made on a case-by-case basis.

The Facebook spokesman disputed this, saying that people were only added after a “rigorous review to determine the validity of the threat.”
…

And it sounds like former employees and contractors can get thrown on the list for basically no reason at all. If you’re fired from Facebook, don’t get emotional. Or the company will track your location indefinitely:

…Awkward situations

Most people on the list do not know they’re on it. This sometimes leads to tense situations.

Several years ago, one Facebook user discovered he was on the BOLO list when he showed up to Facebook’s Menlo Park campus for lunch with a friend who worked there, according to a former employee who witnessed the incident.

The user checked in with security to register as a guest. His name popped up right away, alerting security. He was on the list. His issue had to do with messages he had sent to Zuckerberg, according to a person familiar with the circumstances.

Soon, more security guards showed up in the entrance area where the guest had tried to register. No one grabbed the individual, but security guards stood at his sides and at each of the doors leading in and out of that entrance area.

Eventually, the employee showed up mad and demanded that his friend be removed from the BOLO list. After the employee met with Facebook’s global security intelligence and investigations team, the friend was removed from the list — a rare occurrence.

“No person would be on BOLO without credible cause,” the Facebook spokesman said in regard to this incident.

It’s not just users who find themselves on Facebook’s BOLO list. Many of the people on the list are former Facebook employees and contractors, whose colleagues ask to add them when they leave the company.

Some former employees are listed for having a track record of poor behavior, such as stealing company equipment. But in many cases, there is no reason listed on the BOLO description. Three people familiar said that almost every Facebook employee who gets fired is added to the list, and one called the process “really subjective.” Another said that contractors are added if they get emotional when their contracts are not extended.

The Facebook spokesman countered that the process is more rigorous than these people claim. “Former employees are only added under very specific circumstances, after review by legal and HR, including threats of violence or harassment.”

The practice of adding former employees to the BOLO list has occasionally created awkward situations for the company’s recruiters, who often reach out to former employees to fill openings. Ex-employees have shown up for job interviews only to find out that they couldn’t enter because they were on the BOLO list, said a former security employee who left the company last year.

“It becomes a whole big embarrassing situation,” this person said.
…

And as Facebook itself makes clear with its anecdote about how it tracked the location of a team of interns after the company became concerned about their safety on a camping trip, the BOLO list is just one reason the company might decide to track the locations of specific people. Employees being unresponsive to emails is another reason for potential tracking. And given that Facebook is using its own in-house location-tracking capabilities to do this, there are probably all sorts of different excuses for using the technology:

In 2017, a Facebook manager alerted the company’s security teams when a group of interns she was managing did not log into the company’s systems to work from home. They had been on a camping trip, according to a former Facebook security employee, and the manager was concerned about their safety.

Facebook’s information security team became involved in the situation and used the interns’ location data to try and find out if they were safe. “They call it ‘pinging them’, pinging their Facebook accounts,” the former security employee recalled.

After the location data did not turn up anything useful, the information security team then kept digging and learned that the interns had exchanged messages suggesting they never intended to come into work that day — essentially, they had lied to the manager. The information security team gave the manager a summary of what they had found.

“There was legit concern about the safety of these individuals,” the Facebook spokesman said. “In each isolated case, these employees were unresponsive on all communication channels. There’s a set of protocols guiding when and how we access employee data when an employee goes missing.”
…

So now you know, if you’re a former Facebook employee/contractor and/or have ever written a nasty thing about Facebook on Facebook’s platforms, Facebook is watching you.

Of course, Facebook is tracking the locations and everything else it can track about everyone to the greatest extent possible anyway. Tracking everyone is Facebook’s business model. So the distinction is really just whether or not Facebook’s security team is specifically watching you. Facebook the company is watching you whether you’re on the list or not.
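The proximity test the article describes (compare a user's last known location against the location named in a threat, and stop tracking if they're "nowhere close") takes remarkably little infrastructure to implement. Here is a minimal sketch, purely illustrative and in no way Facebook's actual code, using the standard haversine great-circle distance:

```python
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, via the haversine formula."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6,371 km

def near_threat_location(user_coords, threat_coords, radius_km=50):
    """Is the user's last known position within radius_km of the threat site?"""
    return distance_km(*user_coords, *threat_coords) <= radius_km

# A user last seen in Los Angeles is roughly 500 km from Menlo Park,
# so under this hypothetical check the tracking would end there:
print(near_threat_location((34.05, -118.24), (37.485, -122.148)))  # False
```

A real pipeline would start from app GPS data or coarse IP geolocation rather than hand-entered coordinates; the point is just how simple the "nowhere close, tracking ends there" check is once the location data exists.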

Facebook decided which users are interested in Nazis — and let advertisers target them directly

By Sam Dean
Feb 21, 2019 | 5:00 AM

Facebook makes money by charging advertisers to reach just the right audience for their message — even when that audience is made up of people interested in the perpetrators of the Holocaust or explicitly neo-Nazi music.

Despite promises of greater oversight following past advertising scandals, a Times review shows that Facebook has continued to allow advertisers to target hundreds of thousands of users the social media firm believes are curious about topics such as “Joseph Goebbels,” “Josef Mengele,” “Heinrich Himmler,” the neo-nazi punk band Skrewdriver and Benito Mussolini’s long-defunct National Fascist Party.

Experts say that this practice runs counter to the company’s stated principles and can help fuel radicalization online.

“What you’re describing, where a clear hateful idea or narrative can be amplified to reach more people, is exactly what they said they don’t want to do and what they need to be held accountable for,” said Oren Segal, director of the Anti-Defamation League’s center on extremism.

After being contacted by The Times, Facebook said that it would remove many of the audience groupings from its ad platform.

“Most of these targeting options are against our policies and should have been caught and removed sooner,” said Facebook spokesman Joe Osborne. “While we have an ongoing review of our targeting options, we clearly need to do more, so we’re taking a broader look at our policies and detection methods.”

Approved by Facebook

Facebook’s broad reach and sophisticated advertising tools brought in a record $55 billion in ad revenue in 2018.

Profit margins stayed above 40%, thanks to a high degree of automation, with algorithms sorting users into marketable subsets based on their behavior — then choosing which ads to show them.

But the lack of human oversight has also brought the company controversy.

In 2017, Pro Publica found that the company sold ads based on any user-generated phrase, including “Jew hater” and “Hitler did nothing wrong.” Following the murder of 11 congregants at a synagogue in Pittsburgh in 2018, the Intercept found that Facebook gave advertisers the ability to target users interested in the anti-Semitic “white genocide conspiracy theory,” which the suspected killer cited as inspiration before the attacks.

This month, the Guardian highlighted the ways that YouTube and Facebook boost anti-vaccine conspiracy theories, leading Rep. Adam Schiff (D-Burbank) to question whether the company was promoting misinformation.

Facebook has promised since 2017 that humans review every ad targeting category. It announced last fall the removal of 5,000 audience categories that risked enabling abuse or discrimination.

The Times decided to test the effectiveness of the company’s efforts by seeing if Facebook would allow the sale of ads directed to certain segments of users.

Facebook allowed The Times to target ads to users Facebook has determined are interested in Goebbels, the Third Reich’s chief propagandist, Himmler, the architect of the Holocaust and leader of the SS, and Mengele, the infamous concentration camp doctor who performed human experiments on prisoners. Each category included hundreds of thousands of users.

The company also approved an ad targeted to fans of Skrewdriver, a notorious white supremacist punk band — and automatically suggested a series of topics related to European far-right movements to bolster the ad’s reach.

Collectively, the ads were seen by 4,153 users in 24 hours, with The Times paying only $25 to fuel the push.

Facebook admits its human moderators should have removed the Nazi-affiliated demographic categories. But it says the “ads” themselves — which consisted of the word “test” or The Times’ logo and linked back to the newspaper’s homepage — would not have raised red flags for the separate team that looks over ad content.

Upon review, the company said the ad categories were seldom used. The few ads purchased linked to historical content, Facebook said, but the company would not provide more detail on their origin.

‘Why is it my job to police their platform?’

The Times was tipped off by a Los Angeles musician who asked to remain anonymous for fear of retaliation from hate groups.

Earlier this year, he tried to promote a concert featuring his hardcore punk group and a black metal band on Facebook. When he typed “black metal” into Facebook’s ad portal, he said he was disturbed to discover that the company suggested he also pay to target users interested in “National Socialist black metal” — a potential audience numbering in the hundreds of thousands.

The punk and metal music scenes, and black metal in particular, have long grappled with white supremacist undercurrents. Black metal grew out of the early Norwegian metal scene, which saw prominent members convicted of burning down churches, murdering fellow musicians and plotting bombings. Some bands and their fans have since combined anti-Semitism, neo-paganism, and the promotion of violence into the distinct subgenre of National Socialist black metal, which the Southern Poverty Law Center described as a dangerous white supremacist recruiting tool nearly 20 years ago.

Facebook subsequently removed the grouping from the platform, but the musician remains incredulous that “National Socialist black metal” was a category in the first place — let alone one the company specifically prompted him to pursue.

“Why is it my job to police their platform?” he said.

A rabbit hole of hate

After reviewing screenshots verifying the musician’s story, The Times investigated whether Facebook would allow advertisers to target explicitly neo-Nazi bands or other terms associated with hate groups.

We started with Skrewdriver, a British band with a song called “White Power” and an album named after a Hitler Youth motto. Since the band only had 2,120 users identified as fans, Facebook informed us that we would need to add more target demographics to publish the ad.

The prompt led us down a rabbit hole of terms it thought were related to white supremacist ideology.

First, it recommended “Thor Steinar,” a clothing brand that has been outlawed in the German parliament for its association with neo-Nazism. Then, it recommended “NPD Group,” the name of both a prominent American market research firm and a far-right German political party associated with neo-Nazism. Among the next recommended terms were “Flüchtlinge,” the German word for “refugees,” and “Nationalism.”

Facebook said the categories “Flüchtlinge,” “Nationalism,” and “NPD Group” are in line with its policies and will not be removed despite appearing as auto-suggestions following neo-Nazi terms. (Facebook said it had found that the users interested in NPD Group were actually interested in the American market research firm.)

In the wake of past controversies, Facebook has blocked ads aimed at those interested in the most obvious terms affiliated with hate groups. “Nazi,” “Hitler,” “white supremacy” and “Holocaust” all yield nothing in the ad platform. But advertisers could target more than a million users with interest in Goebbels or the National Fascist Party, which dissolved in 1943. Himmler had nearly 95,000 constituents. Mengele had 117,150 interested users — a number that increased over the duration of our reporting, to 127,010.

Facebook said these categories were automatically generated based on user activity — liking or commenting on ads, or joining certain groups. But it would not provide specific details about how it determined a user’s interest in topics linked to Nazis.

‘Expanding the orbit’

The ads ended up being served within Instant Articles — which are hosted within Facebook, rather than linking out to a publisher’s own website — published by the Facebook pages of a wide swath of media outlets.

These included articles by the Daily Wire, CNN, HuffPost, Mother Jones, Breitbart, the BBC and ABC News. They also included articles by viral pages with names like Pupper Doggo, I Love Movies and Right Health Today — a seemingly defunct media company whose only Facebook post was a link to a now-deleted article titled “What Is The Benefits Of Eating Apple Everyday.”

Segal, the ADL director, said Facebook might wind up fueling the recruitment of new extremists by serving up such ads on the types of pages an ordinary news reader might visit.

“Being able to reach so many people with extremist content, existing literally in the same space as legitimate news or non-hateful content, is the biggest danger,” he said. “What you’re doing is expanding the orbit.”

“Despite promises of greater oversight following past advertising scandals, a Times review shows that Facebook has continued to allow advertisers to target hundreds of thousands of users the social media firm believes are curious about topics such as ‘Joseph Goebbels,’ ‘Josef Mengele,’ ‘Heinrich Himmler,’ the neo-nazi punk band Skrewdriver and Benito Mussolini’s long-defunct National Fascist Party.”

Yes, despite Facebook’s promises of greater oversight following the previous reports of Nazi ad targeting categories, the Nazi ad targeting continues. And these ad categories don’t have just a handful of Facebook users. Each of the categories the LA Times tested had hundreds of thousands of users. And with just a $25 purchase, over 4,000 users saw the test ad in 24 hours, demonstrating that Facebook remains a remarkably cost-effective platform for directly reaching out to people with Nazi sympathies:

…
The Times decided to test the effectiveness of the company’s efforts by seeing if Facebook would allow the sale of ads directed to certain segments of users.

Facebook allowed The Times to target ads to users Facebook has determined are interested in Goebbels, the Third Reich’s chief propagandist, Himmler, the architect of the Holocaust and leader of the SS, and Mengele, the infamous concentration camp doctor who performed human experiments on prisoners. Each category included hundreds of thousands of users.

The company also approved an ad targeted to fans of Skrewdriver, a notorious white supremacist punk band — and automatically suggested a series of topics related to European far-right movements to bolster the ad’s reach.

Collectively, the ads were seen by 4,153 users in 24 hours, with The Times paying only $25 to fuel the push.

…
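As a quick sanity check on the figures just quoted, $25 for 4,153 impressions works out to an effective CPM (cost per thousand impressions) of about $6:

```python
# Effective CPM implied by the LA Times test buy described above.
spend_usd = 25.0
impressions = 4153
cpm = spend_usd / impressions * 1000  # cost per thousand impressions
print(f"effective CPM: ${cpm:.2f}")  # effective CPM: $6.02
```

That's a very cheap rate for reaching a precisely targeted audience, which is the "remarkably cost-effective" part of the problem.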

And these ads show up as Instant Articles, so they would appear in the same part of the Facebook page where articles from sites like CNN and the BBC might show up:

…‘Expanding the orbit’

The ads ended up being served within Instant Articles — which are hosted within Facebook, rather than linking out to a publisher’s own website — published by the Facebook pages of a wide swath of media outlets.

These included articles by the Daily Wire, CNN, HuffPost, Mother Jones, Breitbart, the BBC and ABC News. They also included articles by viral pages with names like Pupper Doggo, I Love Movies and Right Health Today — a seemingly defunct media company whose only Facebook post was a link to a now-deleted article titled “What Is The Benefits Of Eating Apple Everyday.”

Segal, the ADL director, said Facebook might wind up fueling the recruitment of new extremists by serving up such ads on the types of pages an ordinary news reader might visit.

“Being able to reach so many people with extremist content, existing literally in the same space as legitimate news or non-hateful content, is the biggest danger,” he said. “What you’re doing is expanding the orbit.”
…

Of course, Facebook pledged to remove these neo-Nazi ad categories…just like they did before:

…After being contacted by The Times, Facebook said that it would remove many of the audience groupings from its ad platform.

“Most of these targeting options are against our policies and should have been caught and removed sooner,” said Facebook spokesman Joe Osborne. “While we have an ongoing review of our targeting options, we clearly need to do more, so we’re taking a broader look at our policies and detection methods.”

…

Facebook has promised since 2017 that humans review every ad targeting category. It announced last fall the removal of 5,000 audience categories that risked enabling abuse or discrimination.
…

So how confident should we be that Facebook is actually going to purge its system of neo-Nazi ad categories? Well, as the article notes, Facebook’s current ad system earned the company a record $55 billion in ad revenue in 2018 with over 40% profit margins. And a big reason for those profit margins is the lack of human oversight and the high degree of automation in the running of this system. In other words, Facebook’s record profits depend on exactly the kind of lack of human oversight that allowed these neo-Nazi ad categories to proliferate:

…Approved by Facebook

Facebook’s broad reach and sophisticated advertising tools brought in a record $55 billion in ad revenue in 2018.

Profit margins stayed above 40%, thanks to a high degree of automation, with algorithms sorting users into marketable subsets based on their behavior — then choosing which ads to show them.

But the lack of human oversight has also brought the company controversy.

…

Of course, we shouldn’t necessarily assume that Facebook’s ongoing problems with Nazi ad categories are simply due to a lack of human oversight. It’s also quite possible that Facebook simply sees the promotion of extremism as a great source of revenue. After all, the LA Times reporters discovered that the number of users Facebook categorized as having an interest in Joseph Mengele actually grew from 117,150 users to 127,010 users during their investigation. That’s a growth of over 8%! So the extremist ad market might simply be seen as a lucrative growth market that the company can’t resist:

…
In the wake of past controversies, Facebook has blocked ads aimed at those interested in the most obvious terms affiliated with hate groups. “Nazi,” “Hitler,” “white supremacy” and “Holocaust” all yield nothing in the ad platform. But advertisers could target more than a million users with interest in Goebbels or the National Fascist Party, which dissolved in 1943. Himmler had nearly 95,000 constituents. Mengele had 117,150 interested users — a number that increased over the duration of our reporting, to 127,010.

Facebook said these categories were automatically generated based on user activity — liking or commenting on ads, or joining certain groups. But it would not provide specific details about how it determined a user’s interest in topics linked to Nazis.
…

Could it be that the explosive growth of extremism is simply making the hate demographic irresistible? Perhaps, although as we’ve seen with virtually all of the major social media platforms like Twitter and YouTube, when it comes to social media platforms profiting off of extremism, it’s very much a ‘chicken & egg’ situation.

NEW YORK (AP) — Several phone apps are sending sensitive user data, including health information, to Facebook without users’ consent, according to a report by The Wall Street Journal.

An analytics tool called “App Events” allows app developers to record user activity and report it back to Facebook, even if the user isn’t on Facebook, according to the report.

One example detailed by the Journal shows how a woman would track her period and ovulation using an app from Flo Health. After she enters when she last had her period, Facebook software in the app would send along data, such as whether the user may be ovulating. The Journal’s testing found that the data was sent with an advertising ID that can be matched to a device or profile.

Although Facebook’s terms instruct app developers not to send such sensitive information, Facebook appeared to be accepting such data without telling the developers to stop. Developers are able to use such data to target their own users while on Facebook.

Facebook said in a statement that it requires apps to tell users what information is shared with Facebook and it “prohibits app developers from sending us sensitive data.” The company said it works to remove information that developers should not have sent to Facebook.

…

The data-sharing is related to a data analytics tool that Facebook offers developers. The tool lets developers see statistics about their users and target them with Facebook ads.

Besides Flo Health, the Journal found that Instant Heart Rate: HR Monitor and real-estate app Realtor.com were also sending app data to Facebook. The Journal found that the apps did not provide users any way to stop the data-sharing.

Flo Health said in an emailed statement that using analytical systems is a “common practice” for all app developers and that it uses Facebook analytics for “internal analytics purposes only.” But the company plans to audit its analytics tools to be “as proactive as possible” on privacy concerns.

Hours after the Journal story was published, New York Gov. Andrew Cuomo directed the state’s Department of State and Department of Financial Services to “immediately investigate” what he calls a clear invasion of consumer privacy. The Democrat also urged federal regulators to step in to end the practice.

Securosis CEO Rich Mogull said that while it is not good for Facebook to have yet another data privacy flap in the headlines, “In this case it looks like the main violators were the companies that wrote those applications,” he said. “Facebook in this case is more the enabler than the bad actor.”

“In this case it looks like the main violators were the companies that wrote those applications…Facebook in this case is more the enabler than the bad actor.”

That’s one way to spin it: Facebook is more of the enabler than the primary bad actor in this case. That’s sort of an improvement. Specifically, Facebook’s “App Events” tool is enabling app developers to send sensitive user information back to Facebook despite Facebook’s instructions to developers not to send sensitive information. And the fact that Facebook was clearly accepting this sensitive data without telling developers to stop sending it certainly adds to the enabling behavior. Even when that sensitive data included whether or not a woman is ovulating:

…An analytics tool called “App Events” allows app developers to record user activity and report it back to Facebook, even if the user isn’t on Facebook, according to the report.

One example detailed by the Journal shows how a woman would track her period and ovulation using an app from Flo Health. After she enters when she last had her period, Facebook software in the app would send along data, such as whether the user may be ovulating. The Journal’s testing found that the data was sent with an advertising ID that can be matched to a device or profile.

Although Facebook’s terms instruct app developers not to send such sensitive information, Facebook appeared to be accepting such data without telling the developers to stop. Developers are able to use such data to target their own users while on Facebook.

Facebook said in a statement that it requires apps to tell users what information is shared with Facebook and it “prohibits app developers from sending us sensitive data.” The company said it works to remove information that developers should not have sent to Facebook.

…

The data-sharing is related to a data analytics tool that Facebook offers developers. The tool lets developers see statistics about their users and target them with Facebook ads.
…

And the range of sensitive data includes everything from heart rate monitors to real estate apps. In other words, pretty much any app might be sending data to Facebook, but we don’t necessarily know which apps, because the apps aren’t informing users about this data collection and don’t give them a way to stop it:

…
Besides Flo Health, the Journal found that Instant Heart Rate: HR Monitor and real-estate app Realtor.com were also sending app data to Facebook. The Journal found that the apps did not provide users any way to stop the data-sharing.
…

And as the following BuzzFeed report from December describes, while app developers tend to assume that the information their apps are sending back to Facebook is anonymized because it doesn’t have your personal name attached, that’s basically a garbage conclusion because Facebook doesn’t need your name to know who you are. There’s plenty of other identifying information in what these apps are sending. Even if you don’t have a Facebook profile. And more than half of the smartphone apps found to be sending information back to Facebook don’t even mention this in their privacy policies, according to a study by the German mobile security initiative Mobilsicher. So what percentage of smartphone apps overall are sending information back to Facebook? According to estimates from the privacy researcher collective App Census, about 30 percent of all apps on the Google Play store contact Facebook at startup:

BuzzFeed News

Apps Are Revealing Your Private Information To Facebook And You Probably Don’t Know It

Last updated on December 19, 2018, at 1:04 p.m. ET
Posted on December 19, 2018, at 12:30 p.m. ET

Major Android apps like Tinder, Grindr, and Pregnancy+ are quietly transmitting sensitive user data to Facebook, according to a new report by the German mobile security initiative Mobilsicher. This information can include things like religious affiliation, dating profiles, and health care data. It’s being purposefully collected by Facebook through the Software Developer Kit (SDK) that it provides to third-party app developers. And while Facebook doesn’t hide this, you probably don’t know about it.

Certainly not all developers did.

“Most developers we asked about this issue assumed that the information Facebook receives is anonymized,” Mobilsicher explains in its report, which explores the types of information shared behind the scenes between the platform and developers. Through its SDK, Facebook provides app developers with data about their users, including where you click, how long you use the app, and your location when you use it. In exchange, Facebook can access the data those apps collect, which it then uses to target advertising relevant to a user’s interests. That data doesn’t have your name attached, but as Mobilsicher shows, it’s far from anonymized, and it’s transmitted to Facebook regardless of whether users are logged into the platform.

Among the information transmitted to Facebook are the IP address of the device that used the app, the type of device, time of use, and a user-specific Advertising ID, which allows Facebook to identify and link third-party app information to the people using those apps. Apps that Mobilsicher tested include Bible+, Curvy, ForDiabetes, Grindr, Kwitt, Migraine Buddy, Moodpath, Muslim Pro, OkCupid, Pregnancy+, and more.

As long as you’ve logged into Facebook on your mobile device at some point (through your phone’s browser or the Facebook app itself), the company cross-references the Advertising ID and can link the third-party app information to your profile. And even if you don’t have a Facebook profile, the data can still be transmitted and collected with other third-party app data that corresponds to your unique Advertising ID.

For developers and Facebook, this transmission appears relatively common. The privacy researcher collective App Census estimates that “approximately 30 percent of all apps in Google’s Play store contact Facebook at startup” through the company’s SDK. The research firm Statista estimates that the Google Play store has over 2.6 million apps as of December 2018. As the Mobilsicher report details, many of these apps contain sensitive information. And while Facebook users can opt out and disable targeted advertisements (the same kind of ads that are informed by third-party app data), it is unclear whether turning off targeting stops Facebook from collecting this app information. In a statement to Mobilsicher, Facebook specified only that “if a person utilizes one of these controls, then Facebook will not use data gathered on these third-party apps (e.g. through Facebook Audience Network), for ad targeting.”

A Facebook representative clarified to BuzzFeed News that while it enables users to opt out of targeted ads from third parties, the controls apply to the usage of the data and not its collection. The company also said it does not use the third-party data it collects through the SDK to create profiles of non-Facebook users. Tinder, Grindr, and Google did not respond to requests for comment. Apple, which uses a similar ad identifier, was not able to comment at the time of publication.

The publication of Mobilsicher’s report comes at the end of a year rife with Facebook privacy scandals. In the past few months alone, the company has grappled with a few massive ones. In late September, Facebook disclosed a vulnerability that had exposed the personal information of 30 million users. A month later, it revealed that same vulnerability had exposed profile information including gender, location, birth dates, and recent search history. Earlier this month, the company reported another security flaw that potentially exposed the public and private photos of as many as 6.8 million Facebook users to developers that should not have had access to them. And on Tuesday, the New York Times reported that Facebook gave more than 150 companies, including Netflix, Amazon, Microsoft, Spotify, and Yahoo, unprecedented and undisclosed access to users’ personal data, in some cases granting access to read users’ private messages.

The vulnerabilities, coupled with fallout from the Cambridge Analytica data mining scandal, have set off a Facebook privacy reckoning that’s inspired grassroots campaigns to #DeleteFacebook, leading to some high-profile deletions. They’ve also sparked a technical debate about whether Facebook “sells data” to advertisers. (Facebook and its defenders argue that no data changes hands as a result of its targeted advertising, while critics say that’s a semantic dodge and that the company sells ads against your information, which is effectively similar.)

Lost in that debate is the greater issue of transparency. Platforms like Facebook do disclose their data policies in daunting mountain ranges of text with impressively off-putting complexity. Rare is the normal human who reads them. Rarer still is the non-developer human who reads the company’s even more off-putting data policies for developers. For these reasons, the mechanics of the Facebook platform — particularly the nuances of its software developer kit — are largely unknown to the typical Facebook user.

Though CEO Mark Zuckerberg told lawmakers this year that Facebook users have “complete control” of their data, Tuesday’s New York Times investigation as well as Mobilsicher’s report reveal that user information appears to move between different companies and platforms and is collected, sometimes without notifying the users. In the case of Facebook’s SDK, for example, Mobilsicher notes that the transmission of user information from third-party apps to Facebook occurs entirely behind the scenes. None of the apps Mobilsicher found to be transmitting data to Facebook “actively notified users” that they were doing so. According to the report, “Not even half of [the apps Mobilsicher tested] mention Facebook Analytics in their privacy policy. Strictly speaking, none of them is GDPR-compliant, since the transmission starts before any user interaction could indicate informed consent.”

“Major Android apps like Tinder, Grindr, and Pregnancy+ are quietly transmitting sensitive user data to Facebook, according to a new report by the German mobile security initiative Mobilsicher. This information can include things like religious affiliation, dating profiles, and health care data. It’s being purposefully collected by Facebook through the Software Developer Kit (SDK) that it provides to third-party app developers. And while Facebook doesn’t hide this, you probably don’t know about it.”

It’s not just the handful of apps described in the Wall Street Journal report. Major Android apps are routinely passing information to Facebook. And this information can include things like religious affiliation and dating profiles in addition to health care data. And while developers might be doing this, in part, because they assume the data is anonymized, it’s not. At least not in any meaningful way. And even non-Facebook users are getting their data sent:

…
Certainly not all developers did.

“Most developers we asked about this issue assumed that the information Facebook receives is anonymized,” Mobilsicher explains in its report, which explores the types of information shared behind the scenes between the platform and developers. Through its SDK, Facebook provides app developers with data about their users, including where you click, how long you use the app, and your location when you use it. In exchange, Facebook can access the data those apps collect, which it then uses to target advertising relevant to a user’s interests. That data doesn’t have your name attached, but as Mobilsicher shows, it’s far from anonymized, and it’s transmitted to Facebook regardless of whether users are logged into the platform.

Among the information transmitted to Facebook are the IP address of the device that used the app, the type of device, time of use, and a user-specific Advertising ID, which allows Facebook to identify and link third-party app information to the people using those apps. Apps that Mobilsicher tested include Bible+, Curvy, ForDiabetes, Grindr, Kwitt, Migraine Buddy, Moodpath, Muslim Pro, OkCupid, Pregnancy+, and more.

As long as you’ve logged into Facebook on your mobile device at some point (through your phone’s browser or the Facebook app itself), the company cross-references the Advertising ID and can link the third-party app information to your profile. And even if you don’t have a Facebook profile, the data can still be transmitted and collected with other third-party app data that corresponds to your unique Advertising ID.
…

How common is this? According to estimates from the privacy researcher collective App Census, about 30 percent of all apps in the Google Play store contact Facebook at startup. And more than half of the apps tested by Mobilsicher didn’t even mention Facebook Analytics in their privacy policy:

…For developers and Facebook, this transmission appears relatively common. The privacy researcher collective App Census estimates that “approximately 30 percent of all apps in Google’s Play store contact Facebook at startup” through the company’s SDK. The research firm Statista estimates that the Google Play store has over 2.6 million apps as of December 2018. As the Mobilsicher report details, many of these apps contain sensitive information. And while Facebook users can opt out and disable targeted advertisements (the same kind of ads that are informed by third-party app data), it is unclear whether turning off targeting stops Facebook from collecting this app information. In a statement to Mobilsicher, Facebook specified only that “if a person utilizes one of these controls, then Facebook will not use data gathered on these third-party apps (e.g. through Facebook Audience Network), for ad targeting.”

…

Though CEO Mark Zuckerberg told lawmakers this year that Facebook users have “complete control” of their data, Tuesday’s New York Times investigation as well as Mobilsicher’s report reveal that user information appears to move between different companies and platforms and is collected, sometimes without notifying the users. In the case of Facebook’s SDK, for example, Mobilsicher notes that the transmission of user information from third-party apps to Facebook occurs entirely behind the scenes. None of the apps Mobilsicher found to be transmitting data to Facebook “actively notified users” that they were doing so. According to the report, “Not even half of [the apps Mobilsicher tested] mention Facebook Analytics in their privacy policy. Strictly speaking, none of them is GDPR-compliant, since the transmission starts before any user interaction could indicate informed consent.”
…

More popular apps are sending data to Facebook without asking
MyFitnessPal, TripAdvisor and others may be violating EU privacy law.

Jon Fingas
12.30.18

It’s not just dating and health apps that might be violating your privacy when they send data to Facebook. A Privacy International study has determined that “at least” 20 out of 34 popular Android apps are transmitting sensitive information to Facebook without asking permission, including Kayak, MyFitnessPal, Skyscanner and TripAdvisor. This typically includes analytics data that sends on launch, including your unique Android ID, but can also include data that sends later. The travel search engine Kayak, for instance, apparently sends destination and flight search data, travel dates and whether or not kids might come along.

While the data might not immediately identify you, it could theoretically be used to recognize someone through roundabout means, such as the apps they have installed or whether they travel with the same person.

The concern isn’t just that apps are oversharing data, but that they may be violating the EU’s GDPR privacy rules by both collecting info without consent and potentially identifying users. You can’t lay the blame solely at the feet of Facebook or developers, though. Facebook’s relevant developer kit didn’t provide the option to ask for permission until after GDPR took effect. The social network did develop a fix, but it’s not clear that it works or that developers are implementing it properly. Numerous apps were still using older versions of the developer kit, according to the study. Skyscanner noted that it was “not aware” it was sending data without permission.

“It’s not just dating and health apps that might be violating your privacy when they send data to Facebook. A Privacy International study has determined that “at least” 20 out of 34 popular Android apps are transmitting sensitive information to Facebook without asking permission, including Kayak, MyFitnessPal, Skyscanner and TripAdvisor. This typically includes analytics data that sends on launch, including your unique Android ID, but can also include data that sends later. The travel search engine Kayak, for instance, apparently sends destination and flight search data, travel dates and whether or not kids might come along.”

So if you don’t know exactly whether or not an app is sending Facebook your data, it appears to be a safe bet that, yes, it is.
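To see why the developers’ assumption of “anonymized” data is such a garbage conclusion, consider what a shared advertising ID makes possible. The following is a toy sketch, not Facebook’s actual pipeline; all field names, IDs, and events are invented for illustration. It simply shows how records from unrelated apps, none of which carry a name, collapse into a single profile the moment they share a device’s advertising ID:

```python
# Toy illustration: "anonymous" events from different apps share the
# device's advertising ID, so they can be joined into one profile.
# All data below is invented; real SDK payloads differ.
from collections import defaultdict

events = [
    {"ad_id": "38400000-8cf0-11bd-b23e-10b96e40000d",
     "app": "period_tracker", "event": "logged_cycle"},
    {"ad_id": "38400000-8cf0-11bd-b23e-10b96e40000d",
     "app": "travel_search", "event": "searched_flight"},
    {"ad_id": "99999999-aaaa-bbbb-cccc-dddddddddddd",
     "app": "heart_rate", "event": "measured_bpm"},
]

def build_profiles(events):
    """Group nominally 'anonymous' events by advertising ID."""
    profiles = defaultdict(list)
    for e in events:
        profiles[e["ad_id"]].append((e["app"], e["event"]))
    return dict(profiles)

profiles = build_profiles(events)
# The first device is now linked across two unrelated apps,
# no name required.
print(profiles["38400000-8cf0-11bd-b23e-10b96e40000d"])
```

The join key does all the work: anyone holding a large enough pool of such events can reconstruct a behavioral profile per device, whether or not that device’s owner has a Facebook account.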

Here’s an update on the brain-to-computer interface technology that Facebook is working on. First, recall that the initial use Facebook has been touting for the technology thus far has been simply rapid typing with your thoughts. That always seemed like a rather limited application for a technology that’s basically reading your mind.

Now Mark Zuckerberg is giving us a hint at one of the more ambitious applications of this technology: Augmented Reality (AR). AR technology isn’t new. Google Glass was an earlier version of AR technology, and Oculus, the virtual reality headset company owned by Facebook, has made it clear that AR is an area it plans to get into. But it sounds like Facebook has big plans for pairing its brain-to-computer interface with AR technology. This was revealed during a talk Zuckerberg gave at Harvard last month, a nearly two-hour interview with Harvard Law School professor Jonathan Zittrain. According to Zuckerberg, the vision is to allow people to use their thoughts to navigate through augmented realities. This will presumably work in tandem with AR headsets.

For those of us who worry that Facebook may have serious boundary issues when it comes to the personal information of its users, Mark Zuckerberg’s recent comments at Harvard should get the heart racing.

Zuckerberg dropped by the university last month ostensibly as part of a year of conversations with experts about the role of technology in society, “the opportunities, the challenges, the hopes, and the anxieties.” His nearly two-hour interview with Harvard law school professor Jonathan Zittrain in front of Facebook cameras and a classroom of students centered on the company’s unprecedented position as a town square for perhaps 2 billion people. To hear the young CEO tell it, Facebook was taking shots from all sides—either it was indifferent to the ethnic hatred festering on its platforms or it was a heavy-handed censor deciding whether an idea was allowed to be expressed.

Zuckerberg confessed that he hadn’t sought out such an awesome responsibility. No one should, he said. “If I was a different person, what would I want the CEO of the company to be able to do?” he asked himself. “I would not want so many decisions about content to be concentrated with any individual.”

Instead, Facebook will establish its own Supreme Court, he told Zittrain, an outside panel entrusted to settle thorny questions about what appears on the platform. “I will not be able to make a decision that overturns what they say,” he promised, “which I think is good.”

All was going to plan. Zuckerberg had displayed a welcome humility about himself and his company. And then he described what really excited him about the future—and the familiar Silicon Valley hubris had returned. There was this promising new technology, he explained, a brain-computer interface, which Facebook has been researching.

The idea is to allow people to use their thoughts to navigate intuitively through augmented reality—the neuro-driven version of the world recently described by Kevin Kelly in these pages. No typing, no speaking, even, to distract you or slow you down as you interact with digital additions to the landscape: driving instructions superimposed over the freeway, short biographies floating next to attendees of a conference, 3-D models of furniture you can move around your apartment.

The Harvard audience was a little taken aback by the conversation’s turn, and Zittrain made a law-professor joke about the constitutional right to remain silent in light of a technology that allows eavesdropping on thoughts. “Fifth amendment implications are staggering,” he said to laughter. Even this gentle pushback was met with the tried-and-true defense of big tech companies when criticized for trampling users’ privacy—users’ consent. “Presumably,” Zuckerberg said, “this would be something that someone would choose to use as a product.”

In short, he would not be diverted from his self-assigned mission to connect the people of the world for fun and profit. Not by the dystopian image of brain-probing police officers. Not by an extended apology tour. “I don’t know how we got onto that,” he said jovially. “But I think a little bit on future tech and research is interesting, too.”

Of course, Facebook already follows you around as you make your way through the world via the GPS in the smartphone in your pocket, and, likewise, follows you across the internet via code implanted in your browser. Would we really let Facebook inside those old noggins of ours just so we can order a pizza faster and with more toppings? Zuckerberg clearly is counting on it.

To be fair, Facebook doesn’t plan to actually enter our brains. For one thing, a surgical implant, Zuckerberg told Zittrain, wouldn’t scale well: “If you’re actually trying to build things that everyone is going to use, you’re going to want to focus on the noninvasive things.”

The technology that Zuckerberg described is a shower-cap-looking device that surrounds a brain and discovers connections between particular thoughts and particular blood flows or brain activity, presumably to assist the glasses or headsets manufactured by Oculus VR, which is part of Facebook. Already, Zuckerberg said, researchers can distinguish when a person is thinking of a giraffe or an elephant based on neural activity. Typing with your mind would work off of the same principles.

As with so many of Facebook’s innovations, Zuckerberg doesn’t see how brain-computer interface breaches an individual’s integrity, what Louis Brandeis famously defined as “the right to be left alone” in one’s thoughts, but instead sees a technology that empowers the individual. “The way that our phones work today, and all computing systems, organized around apps and tasks is fundamentally not how our brains work and how we approach the world,” he told Zittrain. “That’s one of the reasons why I’m just very excited longer term about especially things like augmented reality, because it’ll give us a platform that I think actually is how we think about stuff.”

Kelly, in his essay about AR, likewise sees a world that makes more sense when a “smart” version rests atop the quotidian one. “Watches will detect chairs,” he writes of this mirrorworld, “chairs will detect spreadsheets; glasses will detect watches, even under a sleeve; tablets will see the inside of a turbine; turbines will see workers around them.” Suddenly our environment, natural and artificial, will operate as an integrated whole. Except for humans with their bottled up thoughts and desires. Until, that is, they install BCI-enhanced glasses.

Zuckerberg explained the potential benefits of the technology this way when he announced Facebook’s research in 2017: “Our brains produce enough data to stream 4 HD movies every second. The problem is that the best way we have to get information out into the world—speech—can only transmit about the same amount of data as a 1980s modem. We’re working on a system that will let you type straight from your brain about 5x faster than you can type on your phone today. Eventually, we want to turn it into a wearable technology that can be manufactured at scale. Even a simple yes/no ‘brain click’ would help make things like augmented reality feel much more natural.”

Zuckerberg likes to quote Steve Jobs’s description of computers as “bicycles for the mind.” I can imagine him thinking, What’s wrong with helping us pedal a little faster?

“All was going to plan. Zuckerberg had displayed a welcome humility about himself and his company. And then he described what really excited him about the future—and the familiar Silicon Valley hubris had returned. There was this promising new technology, he explained, a brain-computer interface, which Facebook has been researching.”

Yep, everything was going well at the Zuckerberg event until he started talking about his vision for the future. A future of augmented reality that you navigate with your thoughts using Facebook’s brain-to-computer interface technology. It might seem creepy, but Facebook is clearly betting that it won’t be so creepy that people refuse to use it:

…The idea is to allow people to use their thoughts to navigate intuitively through augmented reality—the neuro-driven version of the world recently described by Kevin Kelly in these pages. No typing, no speaking, even, to distract you or slow you down as you interact with digital additions to the landscape: driving instructions superimposed over the freeway, short biographies floating next to attendees of a conference, 3-D models of furniture you can move around your apartment.

…

Of course, Facebook already follows you around as you make your way through the world via the GPS in the smartphone in your pocket, and, likewise, follows you across the internet via code implanted in your browser. Would we really let Facebook inside those old noggins of ours just so we can order a pizza faster and with more toppings? Zuckerberg clearly is counting on it.

To be fair, Facebook doesn’t plan to actually enter our brains. For one thing, a surgical implant, Zuckerberg told Zittrain, wouldn’t scale well: “If you’re actually trying to build things that everyone is going to use, you’re going to want to focus on the noninvasive things.”

The technology that Zuckerberg described is a shower-cap-looking device that surrounds a brain and discovers connections between particular thoughts and particular blood flows or brain activity, presumably to assist the glasses or headsets manufactured by Oculus VR, which is part of Facebook. Already, Zuckerberg said, researchers can distinguish when a person is thinking of a giraffe or an elephant based on neural activity. Typing with your mind would work off of the same principles.
…

What about potential abuses, like violating the constitutional right to remain silent? Zuckerberg assured us that only people who choose to use the technology would actually use it, so we shouldn’t worry about abuse. That’s a rather worrying response, in part because of how typical it is:

…The Harvard audience was a little taken aback by the conversation’s turn, and Zittrain made a law-professor joke about the constitutional right to remain silent in light of a technology that allows eavesdropping on thoughts. “Fifth amendment implications are staggering,” he said to laughter. Even this gentle pushback was met with the tried-and-true defense of big tech companies when criticized for trampling users’ privacy—users’ consent. “Presumably,” Zuckerberg said, “this would be something that someone would choose to use as a product.”

In short, he would not be diverted from his self-assigned mission to connect the people of the world for fun and profit. Not by the dystopian image of brain-probing police officers. Not by an extended apology tour. “I don’t know how we got onto that,” he said jovially. “But I think a little bit on future tech and research is interesting, too.”
…

But at least this is augmented reality that will work through some sort of AR headset; the technology isn’t actually injecting augmented info into your brain. That would be a whole new level of creepy.

And according to the following article, a neuroscientist at Northwestern University, Dr. Moran Cerf, is working on exactly that kind of technology and predicts it will be available to the public in as little as five years. Cerf is working on some sort of chip that would be connected to the internet, read your thoughts, go to Wikipedia or some other website to get an answer to your questions, and return the answer directly to your brain. Yep, internet-connected brain chips. He estimates that such technology could give people IQs of 200.

So will people have to go through brain surgery to get this new technology? Not necessarily. Cerf is asking the question “Can you eat something that will actually get to your brain? Can you eat things in parts that will assemble inside your head?” Yep, internet-connected brain chips that you eat. So not only will you not need brain surgery to get the chip…in theory, you might not even know you ate one.

Also note that it’s unclear if this brain chip can read your thoughts like Facebook’s brain-to-computer interface or if it’s only for feeding you information from the internet. In other words, since Cerf’s vision for this chip requires the ability to read thoughts first in order to go on the internet, find answers, and report them back, it’s possible that this is the kind of computer-to-brain technology intended to work with the kind of brain-to-computer mind-reading technology Facebook is working on. And that’s particularly relevant because Cerf tells us he’s collaborating with ‘Silicon Valley big wigs’ he’d rather not name:

CBS Chicago

Northwestern Neuroscientist Researching Brain Chips To Make People Superintelligent

By Lauren Victory
March 4, 2019 at 7:32 am

CHICAGO (CBS) — What if you could make money, or type something, just by thinking about it? It sounds like science fiction, but it might be close to reality.

In as little as five years, super smart people could be walking down the street; men and women who’ve paid to increase their intelligence.

Northwestern University neuroscientist and business professor Dr. Moran Cerf made that prediction, because he’s working on a smart chip for the brain.

“Make it so that it has an internet connection, and goes to Wikipedia, and when I think this particular thought, it gives me the answer,” he said.

Cerf is collaborating with Silicon Valley big wigs he’d rather not name.

Facebook also has been working on building a brain-computer interface, and SpaceX and Tesla CEO Elon Musk is backing a brain-computer interface called Neuralink.

“Everyone is spending a lot of time right now trying to find ways to get things into the brain without drilling a hole in your skull,” Cerf said. “Can you eat something that will actually get to your brain? Can you eat things in parts that will assemble inside your head?”

…

“This is no longer a science problem. This is a social problem,” Cerf said.

Cerf worries about creating intelligence gaps in society; on top of existing gender, racial, and financial inequalities.

“They can make money by just thinking about the right investments, and we cannot; so they’re going to get richer, they’re going to get healthier, they’re going to live longer,” he said.

The average IQ of an intelligent monkey is about 70, the average human IQ is around 100, and a genius IQ is generally considered to begin around 140. People with a smart chip in their brain could have an IQ of around 200, so would they even want to interact with the average person?

“Are they going to say, ‘Look at this cute human, Stephen Hawking. He can do differential equations in his mind, just like a little baby with 160 IQ points. Isn’t it amazing? So cute. Now let’s put it back in a cage and give it bananas,’” Cerf said.

Time will tell. Or will our minds?

Approximately 40,000 people in the United States already have smart chips in their heads, but those brain implants are only approved for medical use for now.

“In as little as five years, super smart people could be walking down the street; men and women who’ve paid to increase their intelligence.”

In just five years, you’ll be walking down the street, wonder about something, and your brain chip will access Wikipedia, find the answer, and somehow deliver it to you. And you won’t even have to have gone through brain surgery. You’ll just eat something that will somehow insert the chip in your brain:

…
Northwestern University neuroscientist and business professor Dr. Moran Cerf made that prediction, because he’s working on a smart chip for the brain.

“Make it so that it has an internet connection, and goes to Wikipedia, and when I think this particular thought, it gives me the answer,” he said.

…

Facebook also has been working on building a brain-computer interface, and SpaceX and Tesla CEO Elon Musk is backing a brain-computer interface called Neuralink.

“Everyone is spending a lot of time right now trying to find ways to get things into the brain without drilling a hole in your skull,” Cerf said. “Can you eat something that will actually get to your brain? Can you eat things in parts that will assemble inside your head?”

…

The average IQ of an intelligent monkey is about 70, the average human IQ is around 100, and a genius IQ is generally considered to begin around 140. People with a smart chip in their brain could have an IQ of around 200, so would they even want to interact with the average person?
…

That’s the promise. Or, rather, the hype. It’s hard to imagine this all being ready in five years. It’s also worth noting that if the only thing this chip does is conduct internet queries it’s hard to see how this will effectively raise people’s IQs to 200. After all, people damn near have their brains connected to Wikipedia already with smartphones and there doesn’t appear to have been a smartphone-induced IQ boost. But who knows. Once you have the technology to rapidly feed information back and forth between the brain and a computer there could be all sorts of IQ-boosting technologies that could be developed. At a minimum, it could allow for some very fancy augmented reality technology.

So some sort of computer-to-brain interface technology appears to be on the horizon. And if Cerf’s chip ends up being technologically feasible it’s going to have Silicon Valley big wigs behind it. We just don’t know which big wigs because he won’t tell us:

…Cerf is collaborating with Silicon Valley big wigs he’d rather not name.
…

So some Silicon Valley big wigs are working on computer-to-brain interface technology that can potentially be fed to people. And they want to keep their involvement in the development of this technology a secret. That’s super ominous, right?

President Trump on Tuesday suggested that Google, Facebook and Twitter have colluded with each other to discriminate against Republicans.

“We use the word collusion very loosely all the time. And I will tell you there is collusion with respect to that,” Trump said during a press conference at the White House Rose Garden. “Something has to be going on. You see the level, in many cases, of hatred for a certain group of people that happened to be in power, that happened to win the election.

“Something’s happening with those groups of folks that are running Facebook and Google and Twitter and I do think we have to get to the bottom of it,” he added.

The president’s comments marked an escalation in his criticism of U.S. tech giants like Twitter, a platform that he frequently uses to promote his policies and denounce his political opponents.

Trump said Twitter is “different than it used to be,” when asked about a new push to make social media companies liable for the content on their platform.

“We have to do something,” Trump said. “I have many, many millions of followers on Twitter, and it’s different than it used to be. Things are happening. Names are taken off.”

He later alleged that conservatives and Republicans are discriminated against on social media platforms.

“It’s big, big discrimination,” he said. “I see it absolutely on Twitter.”

Trump and other conservatives have increasingly argued that companies like Google, Facebook and Twitter have an institutional bias that favors liberals. Trump tweeted Tuesday morning that the tech giants were “sooo on the side of the Radical Left Democrats.”

The three companies did not immediately respond to requests for comment on Trump’s Tuesday morning tweet.

He also vowed to look into a report that his social media director, Dan Scavino, was temporarily blocked from making public comments on one of his Facebook posts.

The series of comments came a day after Rep. Devin Nunes (R-Calif.) sued Twitter and some of its users for more than $250 million. Nunes’s suit alleges that the platform censors conservative voices by “shadow-banning” them.

The California Republican also accused Twitter of “facilitating defamation on its platform” by “ignoring lawful complaints about offensive content.”

“Trump and other conservatives have increasingly argued that companies like Google, Facebook and Twitter have an institutional bias that favors liberals. Trump tweeted Tuesday morning that the tech giants were “sooo on the side of the Radical Left Democrats.””

Yep, the social media giants are apparently “sooo on the side of the Radical Left Democrats.” Trump is convinced of this because he feels that “something has to be going on” and “we have to get to the bottom of it”. He’s also sure that Twitter is “different than it used to be” and “we have to do something” because it’s “big, big discrimination”:

…
“We use the word collusion very loosely all the time. And I will tell you there is collusion with respect to that,” Trump said during a press conference at the White House Rose Garden. “Something has to be going on. You see the level, in many cases, of hatred for a certain group of people that happened to be in power, that happened to win the election.

“Something’s happening with those groups of folks that are running Facebook and Google and Twitter and I do think we have to get to the bottom of it,” he added.

The president’s comments marked an escalation in his criticism of U.S. tech giants like Twitter, a platform that he frequently uses to promote his policies and denounce his political opponents.

Trump said Twitter is “different than it used to be,” when asked about a new push to make social media companies liable for the content on their platform.

“We have to do something,” Trump said. “I have many, many millions of followers on Twitter, and it’s different than it used to be. Things are happening. Names are taken off.”

He later alleged that conservatives and Republicans are discriminated against on social media platforms.

“It’s big, big discrimination,” he said. “I see it absolutely on Twitter.”
…

And these comments by Trump come a day after Republican congressman Devin Nunes sued Twitter for “shadow-banning” conservative voices. Nunes also sued a handful of Twitter users who had been particularly critical of him:

…The series of comments came a day after Rep. Devin Nunes (R-Calif.) sued Twitter and some of its users for more than $250 million. Nunes’s suit alleges that the platform censors conservative voices by “shadow-banning” them.

The California Republican also accused Twitter of “facilitating defamation on its platform” by “ignoring lawful complaints about offensive content.”
…

But Devin Nunes appears to feel so harmed by Twitter that he’s suing it for $250 million anyway. And as the following column notes, while the lawsuit is a joke on legal grounds and stands no chance of victory, it does serve an important purpose. And it’s the same purpose we’ve seen over and over: intimidating the tech companies into giving conservatives preferential treatment and giving them a green light to turn these platforms into disinformation machines.

First of all, I should introduce myself: I’m Jeet Heer, a contributing editor at The New Republic. I’m filling in for Josh as he takes a much-deserved break. Having followed TPM from its earliest days as a blog covering the 2000 (!) election and its aftermath, I’m honored to be here.

I wanted to flag a story from Monday night that is both comically absurd but also has a sinister side: Republican Congressman Devin Nunes’ announced lawsuit against Twitter and three Twitter accounts who he claims have defamed him.

You can read Nunes’ complaint here. Much of the suit reads like pure dada nonsense, especially since Nunes is going after two joke accounts with the handles Devin Nunes’ Mom and Devin Nunes’ Cow. This leads to the immortal line, “Like Devin Nunes’ Mom, Devin Nunes’ Cow engaged in a vicious defamation campaign against Nunes.”

…

As tempting as it is to simply mock the suit, it also has to be said that it is part of something more disturbing: the rising use of legal actions, especially by right-wing forces, to shut down political opponents. As Susan Hennessey, a legal scholar at the Brookings Institute, noted, the suit “is a politician attempting to abuse the judicial process in order to scare people out of criticizing him by proving that he can cost them a lot in legal fees.”

Peter Thiel’s support of a suit that destroyed Gawker is the prime example. Thiel’s success seems to have emboldened the right in general. Amid Trump’s chatter about wanting to loosen libel laws and similar talk from Supreme Court Justice Clarence Thomas, we’ve seen lawsuits or threatened lawsuits from Joe Arpaio, Sarah Palin, and Roy Moore, among others. As with the Nunes suit, many of these seem like jokes, but they have a goal of chilling speech.

“As tempting as it is to simply mock the suit, it also has to be said that it is part of something more disturbing: the rising use of legal actions, especially by right-wing forces, to shut down political opponents. As Susan Hennessey, a legal scholar at the Brookings Institute, noted, the suit “is a politician attempting to abuse the judicial process in order to scare people out of criticizing him by proving that he can cost them a lot in legal fees.””

This form of right-wing intimidation of the media – intimidation that rises to the level of ‘we will financially destroy you if you criticize us’ – is exactly what we saw Peter Thiel unleash when he revenge-bankrolled a lawsuit that drove Gawker into bankruptcy:

…
Peter Thiel’s support of a suit that destroyed Gawker is the prime example. Thiel’s success seems to have emboldened the right in general. Amid Trump’s chatter about wanting to loosen libel laws and similar talk from Supreme Court Justice Clarence Thomas, we’ve seen lawsuits or threatened lawsuits from Joe Arpaio, Sarah Palin, and Roy Moore, among others. As with the Nunes suit, many of these seem like jokes, but they have a goal of chilling speech.
…

So it’s going to be interesting to see if Nunes’s lawsuit furthers this trend or ends up being a complete joke. But given that one metric of success is simply costing the defendants a lot of money, it really could end up being quite successful. We’ll see.

Fox News Dominates Facebook By Inciting Anger, Study Shows
Facebook’s algorithm overhaul was supposed to make users feel happier, but it doesn’t look like it did.

By Amy Russo
3/18/2019 01:42 pm ET Updated

Facebook CEO Mark Zuckerberg announced an algorithm overhaul last year intended to make users feel better with less news in their feeds and more content from family and friends instead.

But the data is in, and it shows Fox News rules the platform in terms of engagement, with “angry” reactions to its posts leading the way.

According to a NewsWhip study published this month that examines Facebook News Feed content from Jan. 1 to March 10, the cable network was the No. 1 English-language publisher when it came to comments, shares and reactions.

The outlet far outpaced its competition, with NBC, the BBC, the Daily Mail, CNN and others lagging behind.

While Harvard’s Nieman Lab on journalism points out that Fox News’ popularity on Facebook may have occurred without help from an algorithm, it begs the question of whether Zuckerberg’s vision for the platform is truly coming to fruition.

In January 2018, Zuckerberg told users he had “a responsibility to make sure our services aren’t just fun to use, but also good for people’s well-being.”

He said he was hoping to promote “meaningful interactions between people” and that the algorithm overhaul would result in “less public content like posts from businesses, brands, and media” and “more from your friends, family and groups.”

While overall engagement on Facebook has skyrocketed this year compared with 2018, the power of the platform’s algorithms remains unclear.

“But the data is in, and it shows Fox News rules the platform in terms of engagement, with “angry” reactions to its posts leading the way.”

Facebook’s news feed algorithm sure loves serving up Fox News stories. Especially the kinds of stories that make people angry:

…
According to a NewsWhip study published this month that examines Facebook News Feed content from Jan. 1 to March 10, the cable network was the No. 1 English-language publisher when it came to comments, shares and reactions.

The outlet far outpaced its competition, with NBC, the BBC, the Daily Mail, CNN and others lagging behind.

The difference is even more glaring when ranking outlets only by the number of angry responses they trigger with Facebook’s reactions feature.

By that measure, Fox News is leaps and bounds ahead of other pages, including that of right-wing website Breitbart and conservative Daily Wire Editor-in-Chief Ben Shapiro.
…

So as President Trump and Rep. Nunes continue waging their social media intimidation campaign, it’s going to be worth keeping in mind the wild success these intimidation campaigns have already had. This is a tactic that clearly works.