Data from 50 million Facebook users ended up in the hands of Cambridge Analytica, the data firm that engineered Donald Trump’s social media campaign blitz.

Jessica Rich was head of the Federal Trade Commission’s Bureau of Consumer Protection from 2013 to 2017. In 2011 she helped construct the consent decree against Facebook for sharing users’ data without their knowledge or consent.

The order from the FTC against Facebook required that Facebook obtain affirmative consent from users before it could override privacy settings. The order required that Facebook submit a plan for compliance with the order within 90 days, and described penalties for violating the order. The order does not expire until 2032.

NPR reporter Ailsa Chang interviewed Rich Tuesday, and asked Rich what she thought upon hearing that Facebook had given detailed, private data on users to Cambridge Analytica in 2014.

RICH: “Well, like many people, my reaction was are you kidding? The facts here of allowing third parties to have unfettered access to user data and not exercising the kind of care for Facebook users that they should were the exact same facts that drove us to take action against them in 2011 and that led to the order they're now under.”

CHANG: “And the penalties for violating that 2011 consent decree, they're huge. It's $40,000…”

RICH: “Per violation.”

CHANG: “So if you multiply that across 50 million users, we're talking about billions of dollars, potentially, that Facebook faces in fines. Why would a number like that, billions of dollars of potential fines, not serve as enough of a deterrent for a company like Facebook?”

RICH: “I really can't answer that question. It must be lack of proper compliance procedures or literally a culture that is not one that really cares about its users.”

Tim Wu, a former senior official at the Federal Trade Commission, explained in an interview with Noel King of NPR’s Morning Edition, also on Tuesday, that Facebook’s continuing violations of its users’ privacy are fundamental to its business model.

WU: “I think the problem lies here. It's actually a very fundamental one, which is Facebook is always in the position of serving two masters. If its actual purpose was just trying to connect friends and family, and it didn't have a secondary motive of trying to also prove to another set of people that it could gather as much data as possible and make it possible to manipulate or influence or persuade people, then it wouldn't be a problem. For example, if they were a nonprofit, it wouldn't be a problem. But they…”

KING: “But they're not, right?” (Laughter).

WU: “They're not. But, well, that's the problem. I think there's a sort of intrinsic problem with having for-profit entities with this business model in this position of so much public trust because they're always at the edge because their profitability depends on it.”

King asked Wu if the FTC could seek damages if Facebook is found to have violated the order.

WU: “Yes, they can. Every single violation is punishable by a $40,000 fine. So it could be billions [Editor’s note - actually, trillions] of dollars in damages if the FTC decides to police this very aggressively.”

Carole Cadwalladr and Emma Graham-Harrison of the British newspaper The Guardian/Observer published an article last week with new information about Cambridge Analytica, the shadowy data firm that worked closely with the Trump campaign in 2016. Christopher Wylie, a data scientist formerly with the company, stated in an interview:

“We exploited Facebook to harvest millions of people’s profiles. And built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on.”

In late 2015, Facebook found out that information from more than 50 million individuals, including private details and personality profiles, had been harvested without permission. The company did not alert users and took only limited steps to protect their data.

British public television station Channel 4 went undercover to glean information about how Cambridge Analytica, a company started in 2013 as an offshoot of SCL Group, uses data to influence elections throughout the world. The results of the investigation are shocking. Some of the revelations, by key members of the company:

“We’re used to operating through different vehicles, in the shadows, and I look forward to building a very long-term and secretive relationship with you.” Alexander Nix, CEO, to a reporter he thought was a Sri Lankan businessman working for a prospective political candidate.

“We just put information into the bloodstream of the internet, and then, and then watch it grow, give it a little push every now and again… like a remote control. It has to happen without anyone thinking, ‘that’s propaganda’, because the moment you think ‘that’s propaganda,’ the next question is, ‘who’s put that out?’...So we have to be very subtle.” Mark Turnbull, managing director of Cambridge Analytica Political Global.

Since the exposé aired last week, Alexander Nix has been suspended by the company. However, a few days ago Nix set up a new data company with Rebekah Mercer called Emerdata, which is described in this article from Business Insider.

Theresa Hong, a key insider from the Trump campaign data operation who says she wrote Trump's Facebook posts, was interviewed last year by the BBC. She brags about the role Cambridge Analytica played in the campaign.

Twitter recently announced that it will let nearly 700,000 users know that they interacted with accounts identified as part of a propaganda campaign financed by the Russian government in 2016.

Unless you have seen the process taking place in real time, it is difficult to understand how bots (automated social media accounts), trolls (real individuals aggressively attempting to sow division or discredit experts), and sock puppets (persona accounts run by a real person who is not the person portrayed in the account profile) can spread divisive messages, silence debate, and use abuse and harassment to push political issues in a particular direction.

Once you have seen it in real time, it is clear how influential Russian government-financed troll, bot, and sock puppet activity split American voters into angry, isolated camps with little inclination or ability to bridge divides and find consensus.

A team of researchers at the University of Washington examined Twitter discourse related to #BlackLivesMatter and police-related shooting events in 2016, and their discoveries have broad and important implications for how Americans can combat the onslaught of divisive messaging and disinformation. One important discovery is that the groups on the two “sides” of the issue were quite separate, with almost no overlap in retweets or conversation. The researchers found that Russian accounts identified as originating at the Internet Research Agency in St. Petersburg - a Russian government-financed troll farm - were clearly working in both pro-Black Lives Matter and anti-Black Lives Matter groups.

The diagrams tell the story. The first shows the isolation of the two groups - people in each group interact primarily with others who express the same views. The second diagram shows Russian-Internet Research Agency identified accounts in orange. It is clear that Russian paid accounts were very active in the separate groups, amplifying divisive messages and increasing division and disagreement.

The researchers conclude:

“In this paper, we have located RU-IRA-affiliated troll accounts in the retweet network of a politically polarized conversation surrounding race and shootings in the United States. Our findings suggest that troll accounts contributed content to polarized information networks, likely serving to accentuate disagreement and foster division. Furthermore, our findings imply that the troll accounts gained a platform in a domestic conversation, suggesting a calculated form of media manipulation that exploits the crowd-sourced nature of social media.”

Two articles from the University of Washington research group can be found here.
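The pattern the researchers describe - two insular retweet clusters, with troll accounts amplified inside both - can be sketched with a toy retweet network. Everything below is a synthetic illustration: the account names, group labels, and retweet pairs are invented for the example and are not data from the study.

```python
from collections import Counter

# Hypothetical annotations: which "side" each account belongs to.
# "ira_*" accounts stand in for suspected troll accounts.
group = {
    "a1": "pro", "a2": "pro", "a3": "pro",
    "b1": "anti", "b2": "anti", "b3": "anti",
    "ira_x": "troll", "ira_y": "troll",
}

# (retweeter, original_author) pairs - the edges of the retweet network.
retweets = [
    ("a1", "a2"), ("a2", "a3"), ("a3", "a1"),   # pro cluster retweets itself
    ("b1", "b2"), ("b2", "b3"), ("b3", "b1"),   # anti cluster retweets itself
    ("a1", "ira_x"), ("a2", "ira_x"),           # trolls amplified by the pro side
    ("b1", "ira_y"), ("b3", "ira_y"),           # trolls amplified by the anti side
]

# Count retweet edges by (retweeter's group, author's group).
edge_counts = Counter((group[rt], group[au]) for rt, au in retweets)

# Almost no overlap between the two sides...
cross_cluster = edge_counts[("pro", "anti")] + edge_counts[("anti", "pro")]

# ...but the troll accounts get retweeted from within both groups.
troll_reach = {g for (g, author) in edge_counts if author == "troll"}

print(cross_cluster)        # 0: the two sides do not retweet each other
print(sorted(troll_reach))  # ['anti', 'pro']: trolls embedded in both camps
```

The same counting approach scales to a real retweet dataset once accounts have been assigned to clusters (for example, by a community-detection algorithm).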

Thanks to Kate Starbird of the University of Washington for sharing this research on Twitter.

Both the Senate and the House Intelligence Committees held hearings November 1 to investigate Russian interference in the 2016 presidential election, which may be continuing to this day. The House Democrats released a list of more than 2700 social media accounts that Twitter has identified as linked to Russian intelligence agencies.

Many Twitter users believe that the influence of Russian intelligence on the Twitter platform is much more widespread than the company has admitted. Bots, which are computer-driven accounts used to boost popularity of content, and trolls, which have a real person behind them (although often not the person represented in the profile) and can criticize, mock, or report targeted accounts for suspension, are common.

Bots, which are computer-generated accounts, can appear to be real on casual inspection. They can answer simple questions and give general support or criticism. The comments can be activated by certain words or hashtags.

Trolls come in a variety of forms. Some of them pretend to be Americans but are actually working from another country. An account can have a profile describing a "Christian grandmother with fibromyalgia" when it is actually run by a 20-year-old man at a troll farm in Saint Petersburg. A false identity like this is called a persona.

Bot and troll accounts, which can work in coordination to target an account, can foment discord in several ways. Here are a few examples:

The troll makes an outrageous or inflammatory claim or comment, and multiple bots "like" the post, raising its popularity and visibility; real human users then respond, arguing with the original troll, programmed bots, or other real humans who agree with the original inflammatory post.

A troll account can have many bot account followers, making the troll appear to be popular.

Multiple troll accounts can attack a real human user, effectively silencing the real human. While a user might be able to shake off criticism from one persona, human psychology makes people particularly susceptible to coordinated criticism from multiple personas.

My own experience is that reporting accounts that violated the Twitter terms of service got no response until November 3 - two days after the Intelligence Committee hearings. I got no responses to the numerous reports of harassment I made between November 2016 and November 2017. Then, in the week of November 1, I received notices that 7 accounts I had reported had been suspended for violating the terms of service.

Twitter deleted the accounts on the list they provided House Intelligence, so interested individuals cannot examine these accounts to see their characteristics.

I ran User ID numbers close in sequence to those of the Russia-linked accounts through tweeterid.com​, a reverse look-up service. I then searched Twitter for the handles associated with those ID numbers. Most of the accounts were "blue eggs" - accounts with generic names, no photos, and no profiles. Most had never tweeted, or had tweeted only a few times, and most had been created in 2009 or 2012. A collage of screen captures of some of the accounts is shown above.

What does this show? These accounts appear to be dormant. They were created years ago, and nothing has been done with them yet. Are these nefarious, Russian intelligence accounts? There is no way for me to tell, but it seems that Twitter did not search very hard to identify accounts that are not "authentic" - their word for accounts associated with real people.

These may be innocent accounts made by individuals who never used them. They may also be silent accounts, waiting for someone to activate them and adopt personas that boost content or bully real people. They may also be part of a botnet, a group of bots used to boost the popularity of troll accounts.
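The "dormant account" pattern described above - old creation date, generic blue-egg profile, little or no activity - can be expressed as a simple screening heuristic. This is an illustrative sketch with invented records; the field names are chosen for readability and are not a real Twitter API schema.

```python
from datetime import date

# Hypothetical account records, patterned on the observations above.
accounts = [
    {"handle": "blue_egg_1", "created": date(2009, 5, 2),
     "tweets": 0, "has_photo": False, "has_bio": False},
    {"handle": "blue_egg_2", "created": date(2012, 3, 14),
     "tweets": 3, "has_photo": False, "has_bio": False},
    {"handle": "real_person", "created": date(2016, 1, 9),
     "tweets": 4200, "has_photo": True, "has_bio": True},
]

def looks_dormant(acct, today=date(2017, 11, 9), max_tweets=10):
    """Flag old accounts with generic 'blue egg' profiles and
    little or no activity - the pattern described above."""
    age_years = (today - acct["created"]).days / 365.25
    return (age_years >= 5
            and acct["tweets"] <= max_tweets
            and not acct["has_photo"]
            and not acct["has_bio"])

dormant = [a["handle"] for a in accounts if looks_dormant(a)]
print(dormant)  # ['blue_egg_1', 'blue_egg_2']
```

A heuristic like this cannot distinguish an abandoned personal account from a pre-positioned sleeper account - which is exactly the ambiguity described above - but it can narrow a large list of IDs down to accounts worth a closer look.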

More information is needed, and Twitter is likely not finished testifying to Congress.

In the words of Representative Adam Schiff, ranking member on the House Intelligence Committee:

"Russia exploited real vulnerabilities that exist across online platforms and we must identify, expose, and defend ourselves against similar covert influence operations in the future. The companies here today must play a central role as we seek to better protect legitimate political expression, while preventing cyberspace from being misused by our adversaries."

Members of both committees - Republican, Democrat, and Independent - expressed disappointment that the social media companies represented (Twitter, Google, and Facebook) sent their attorneys to the hearings rather than their CEOs.

Politico reported that Senator Mark Warner said Twitter's "actions have not matched their words in terms of grasping the seriousness of the threat ... I'm more than a bit surprised, in light of all the public interest from the subject, that anyone from the Twitter team would think that the presentation made to Senate staff today even began to answer the kinds of questions that we'd asked," he told reporters after the staff briefing. "So there's a lot more work they have to do."

Jack Dorsey, CEO of Twitter, said Thursday, November 9, that he is willing to testify to Congress, but that he had not been invited. This is inaccurate - according to Recode, "lawmakers previously and repeatedly called on Dorsey and other tech executives to make the trip to Capitol Hill — and they’ve apparently declined."

Amy Mek is a prolific tweeter. Her account was opened on the morning of November 17, 2012. She has tweeted 45,693 times as of this evening. That means she has tweeted an average of 25 times per day for the past five years. Her Twitter profile says she is a vegan psychotherapist in New York City. She has more than 199,000 followers, including former national security advisor Michael Flynn, and lots of users with blank profile banners, no faces, and suspiciously bot-like activity. She is a staunch supporter of Donald Trump and the NRA, and an opponent of immigration. Some of the most common words in her recent tweets, according to the Twitter tracking website foller.me, are “media,” “muslims,” “terrorist,” “attack,” “slaughtered,” “islamic,” “nyc,” “muslim,” “mueller,” “mosque,” “jihad,” and “fbi.”
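The tweets-per-day figure is easy to verify; a quick sanity computation, assuming "this evening" falls around mid-November 2017:

```python
from datetime import date

# Figures from the text; the end date is an assumption (roughly the
# fifth anniversary of the account).
created = date(2012, 11, 17)
today = date(2017, 11, 17)
total_tweets = 45_693

days_active = (today - created).days
per_day = total_tweets / days_active
print(round(per_day))  # 25 tweets per day, matching the estimate above
```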

But Amy Mek may not really exist.

In March 2017 Maureen Erwin reported in The San Francisco Examiner that she had been searching for evidence that “Amy” exists. (She didn’t find any.) “Amy” had been quoted in The New York Times in May 2016, before the primary season was over, in an article titled “The Women Who Like Donald Trump.” The article mentions another Twitter star, Linda Suhler, PhD, a molecular biologist who interacts with Amy Mek. She has lots of followers, an average of 26 tweets per day, and grandchildren, and the article suggests that she has made real-life friendships with other Trump-supporting women she has met on Twitter who are on the #TrumpTrain.

Neither of these women’s profile pics connects to any other images on Google Reverse Image Search. Neither seems to have a curriculum vitae or other professional identity on the Internet. They do, however, both appear as “experts” on a website, agilience.com, which originates from France and seems to be a conduit for funneling Internet users toward Internet influencers. “Amy” is listed as a “top authority on Islamic terrorism,” and “Linda” is listed as a “top expert in American politics.”

Are these women real? Maybe. Or, perhaps, they are personas created to boost Donald Trump’s popularity among women during the primary and general election seasons.

Amy Mek’s early tweets, from the first half of 2013 (a few months after the account was opened), don’t look very personal. In her early Twitter life, “Amy” may have been a bot. “Linda” began a bit earlier on Twitter, but her activity in the first half of 2013 appears very similar. (No wonder they got to be friends over Trump.)

Russian intelligence may have created these accounts long before the 2016 presidential season, used them as automated accounts (or accounts run by employees in a troll farm), and activated them to become content pushers as needed.

Alternatively, there may be automated accounts formed by U.S. operatives, which were activated by U.S. Trump supporters when needed for the campaign.

It is also possible that these two personas are real people, with professional careers, who have no online presence except as Twitter influencers and experts on a French website.

I will update as this story develops.

An investigation by the Russian media organization RBC identified numerous social media accounts on Facebook, Twitter, and Instagram that were run by government operatives within Russia. The article is in Russian, but several graphics from the investigation are included at the end of this article.

Russian news outlet Meduza published an article with details about the St. Petersburg troll factory previously reported on by the New York Times and The Atlantic.

Molly McKew, a lobbyist who has worked extensively in former Soviet countries and is familiar with techniques used by the Russian government to influence public opinion, warns that we are likely only seeing what the Kremlin wants us to see, and that there are many more operatives who worked and are still working to influence current events in the U.S. She posted on Twitter regarding the Meduza​ article: "Read this for tools/tactics. But keep in mind what PURPOSE info has. 90 paid interns not whole effort. There's more."

Graphics from the RBC article are included below. They show some of the accounts that have been traced to Russian operatives in St. Petersburg. The bright blue circles represent Facebook accounts, the lighter blue represent Twitter accounts, and the red circles represent Instagram. The size of the circle indicates the level of activity of the account.