Facebook has good news and bad news about its data breach

The good news about the Facebook data breach is that it affected fewer people than previously believed — a rarity in the cybersecurity realm. The bad news is that the types of data stolen were quite personal, and could have bad consequences for the 14 million people most affected.

According to today’s statement, the hackers stole access tokens for 30 million accounts (revised down from an initial estimate of 50 million), allowing them to gain complete access to the profiles. Of those 30 million, the hackers accessed basic contact information (name and either email or phone number) for 15 million accounts, and additional information including gender, religion, location, device information, and the 15 most recent searches for another 14 million accounts. No information was accessed for the remaining one million accounts.

“We take these incidents really, really seriously,” Guy Rosen, Facebook’s vice president of product management, told reporters in a call afterward.

You can check to see whether your account was affected here. (Mine wasn’t, depriving me of a crucial opportunity to post aggrieved tweets about the situation. Fortunately, it seems that every other tech reporter that I follow had their information compromised.)

This, it bears repeating, is a privacy disaster. The ripple effects may go unnoticed for weeks or months, but as long as users’ deeply personal information is floating around the internet, it is exposed and open to misuse. And what recourse do people have to reclaim that information? Two-factor authentication, for example, will now be much harder for users who’ve had their email address and phone number compromised by the attack. As Slate’s Will Oremus noted, unlike a password, location histories and search histories aren’t things you can change. “If your password is stolen, you change your password. The damage is done and you move on. But if all your identifying personal information is stolen? You can’t change that. It could haunt you for the rest of your life,” he tweeted.

Some reporters have called on Facebook to offer free credit monitoring to breach victims; the company has so far been mum on the subject. The FBI is investigating, and has asked Facebook not to tell us who the company suspects is responsible for the attack.

Sarah Frier notes that the worst of the damage will likely be felt by a subset of 400,000 people, who served as an entry point for the attackers. (You’ll recall that they were able to exploit a series of bugs to view profiles as if they were the person who owned them.) Those people, in addition to worrying about profile data like their hometowns, also have to worry about hackers seeing their timeline posts and the names of recent Messenger conversations. Notably, the attack affected even users who had enabled two-factor authentication on their accounts.

What to make of all this? Weeks after Facebook revealed that the breach had happened, I’m still not sure that there is a smarter take than the extremely obvious and oft-stated one: it’s another blow to the trust that people have in Facebook, at a time when (1) that trust is already at a low, and (2) when the company is asking us to trust it more than ever.

Over a long enough time span, all data is liable to be breached. It’s why some security researchers call on companies to store as little data about their customers as possible, to minimize the damage when the inevitable happens. As an advertising company, Facebook cannot easily adopt such an approach. But it could modulate the other ways in which it asks us for our trust — perhaps deciding, as Google did, to leave the camera out of its home speaker; or not to put on stage an executive soliciting our most personal information, however well anonymized, while the investigation into a data breach affecting millions is still underway.

Instead, it’s full speed ahead.

Perhaps Facebook will shrug off this breach, as it has so many privacy flaps before it. But credibility, once lost, is hard to regain. Facebook has been appropriately open and straightforward about the breach, in ways that could have rebuilt trust with its user base. But the story of this week has been how efforts elsewhere in the company have continually undermined them.

Ryan Gallagher gets a hold of Google CEO Sundar Pichai’s letter to a bipartisan group of six senators on the subject of Project Dragonfly:

Pichai did not answer nine specific questions the senators asked, including, “Which ‘blacklist’ of censored searches and websites are you using? Are there any phrases or words that Google is refusing to censor?”

Instead, Pichai wrote, “Google has been open about our desire to increase our ability to serve users in China and other countries. We are thoughtfully considering a variety of options for how to offer services in China in a way that is consistent with our mission. … [W]e can confirm that our work will continue to reflect our best assessment of how best to serve people around the world, as set forth in our mission and our code of conduct. Of course, should we have something to announce in the future, we would be more than happy to brief you and your staff on those plans.”

Andrew Liptak talks to New America fellow P. W. Singer about his new book LikeWar: The Weaponization of Social Media, which covers how bad actors gamed social media platforms to spread discord and misinformation:

SINGER: The telegraph and then the telephone allowed us to connect personally from a distance at a speed not previously possible. Radio and then TV allowed one to broadcast out to many. What social media has done is combine the two, allowing simultaneous personal connection as never before, but also the ability to reach out to the entire world. The challenge is that this connection has been both liberating and disruptive. It has freed communication, but it has also been co-opted to aid the vile parts of it as well. The speed and scale have allowed these vile parts to escape many of the firebreaks that society had built up to protect itself. Indeed, I often think about a quote in the book from a retired US Army officer, who described how every village once had an idiot. And now, the internet has brought them all together and made them more powerful than ever before.

Does using Facebook make you more informed about political issues? Here’s a paper from Sangwon Lee and Michael Xenos of the University of Wisconsin at Madison that investigated that question using surveys and statistical analyses. Ultimately, the authors conclude that frequent Facebook usage does little to improve your political knowledge — and that the illusion of feeling informed may discourage users from seeking news elsewhere, leaving them worse off. (The paper is scheduled to be published in the January issue of Computers in Human Behavior.)

These psychological mechanisms suggest that frequent exposure to news content may not lead to actual knowledge gain. This circumstance is especially relevant for news posts on Facebook as users increasingly encounter political content when visiting the site — either news articles shared by traditional news media outlets or user-generated content shared by members of the users’ unique social networks. This might lead to the misperception that Facebook helps them stay updated, even if they are not actively seeking news elsewhere. Indeed, what may be happening is that users gain only a little knowledge from Facebook, because most of them skim the political content rather than devote much cognitive processing to it. Perhaps even worse, this misunderstanding of knowledge gain may discourage users from seeking news elsewhere or from paying attention to the news in general, negatively affecting their political knowledge.

With talk swirling about privacy regulation in the wake of California passing a strict privacy law of its own, Satya Nadella called for a single federal law this week:

“We hope that there’s more of a national privacy law,” Nadella said in an interview Wednesday with Bloomberg News at the U.S. Naval Academy in Annapolis, Maryland, where he discussed leadership with midshipmen.

Sankalp Phartiyal goes on the road with the actors performing skits about fake news and WhatsApp in India — and there’s video of the skits! This story is a blessing.

The campaign is not entirely altruistic. It is being run in conjunction with Reliance Jio, the fast-growing telecom carrier controlled by billionaire Mukesh Ambani that recently made WhatsApp available on its $20 JioPhone.

Instructions on how to install and use the app on the JioPhone, which has connected tens of millions of low-income Indians to the Internet for the first time, are also a part of the 10-city roadshow.

Daniel Funke checks in with the state of misinformation in Brazil amid its current presidential election:

Among the misinformation is a false photo claiming that left-wing presidential candidate Fernando Haddad had received nearly 10,000 votes from only 777 people, which Lupa debunked the day after the election. Another photo falsely claimed that a former finance minister said ballot boxes were ordered to defraud the election, which Aos Fatos also debunked Oct. 7. There are even voting machine fraud memes.

Why? Nalon said hoaxers are trying to delegitimize the election.

“It’s the story of the election, actually — it’s the most relevant hoax, I think,” she said. “People are trying to attack the electronic polling system in order to delegitimize whoever is the winner of the election. It is something that was built by far-right influencers.”

YouTube is trying to reduce the uploading of spammy content by preventing creators from monetizing it:

A few days ago, in a post on its help forum, YouTube gave an explanation to people who may have been removed from its Partner Program for “duplicative content,” which appears to have less to do with fair use and copyright and more to do with videos that don’t add value. For YouTube, that means anything that “appears to be automatically generated,” anything “pulled from third-party sources with no content or narrative added by the creator,” stuff that’s been “uploaded many times by multiple users” if you’re not the original uploader, or content that’s been “uploaded in a way that is trying to get around our copyright tools.”

YouTubers are posting heartfelt videos about mental-health issues and then directing fans to a shady online counselor:

Eighty-six users have filed complaints about the app with the Better Business Bureau, a nonprofit aimed at holding businesses accountable for bad practices. In a Reddit thread, several users describe being charged excessive fees (likely due to the fact that they didn’t realize the plan they purchased charged the full annual fee up front), and claim the counselors on the app were unresponsive, unhelpful, or refused them treatment.

As Polygon points out, BetterHelp’s terms of service state that the company can’t guarantee a qualified professional. “We do not control the quality of the Counselor Services and we do not determine whether any Counselor is qualified to provide any specific service as well as whether a Counselor is categorized correctly or matched correctly to you,” the terms of service read. “The Counselor Services are not a complete substitute for a face-to-face examination and/or session by a licensed qualified professional.”

This time, instead of exposing users’ data, a Facebook bug erased it. A previously undisclosed Facebook glitch caused it to delete some users’ Live videos if they tried to post them to their Story and the News Feed after finishing their broadcast. Facebook wouldn’t say how many users or livestreams were impacted, but said the bug was intermittent and affected a minority of all Live videos. It’s since patched the bug and restored some of the videos, but is notifying some users with an apology that their Live videos have been deleted permanently.

Nick Tabor takes a measured but skeptical look at the Summit Learning Program, which Mark Zuckerberg supercharged by lending it a bunch of engineers, and which led to a parent revolt in Cheshire, CT. As Tabor notes, hundreds of other communities have happily adopted Summit’s learning technology. But general skepticism around Zuckerberg and Facebook threatens to poison the program in the eyes of the public. (I wrote about Summit when Zuckerberg first got involved; students I spoke with really liked it.)

Have you ever suffered a thumb injury from scrolling through feeds for several hours a day? If so, you’ll be excited to check out this test, in which you can simply mash your paw on the screen to load fresh content until your phone dies.

Outvote is a friend-to-friend texting app for political campaigns. Download it and it can help you find your friends that are registered to vote, and offer scripts provided by the campaign to reach out and get them to the polls.

“Election Bundle” is kind of a funny name for a pair of browser extensions that (1) prevent Facebook from tracking you around the web for advertising purposes and (2) gather political ads in your News Feed and send them to ProPublica. But that’s what they’re calling it!

Greg Sargent tells publishers to stop doing the thing where they tweet something like, “President Trump says the FBI is run by lizard people who feast on the blood of our young,” without explicitly reminding everyone that he is making that up.

“When people see stuff on social media, what they often see is only the headlines,” Silverman said. “If you are restating claims that are false or misleading in headlines, you are spreading misinformation. And social media is pouring gasoline on that fire.”

This is a crucial insight, and while things have gotten better in recent months, the problem remains one that plenty of traditional journalists and news organizations still refuse to take seriously enough. You constantly see headlines on news organizations’ websites that blare forth a politician’s false, dubious or unsupported claims without informing readers that those claims are, well, false or dubious or unsupported. Often it requires reading deep into a story to discover a corrective, if there is a corrective at all.

Max Read looks at Thursday’s summit between Donald Trump and Kanye West in the context of recent discussions around social media and mental illness:

The connection between eccentricity, erratic behavior, celebrity, and attention is not, obviously, a new dynamic — think of Tom Cruise or Charlie Sheen. But social media, and the news its dominance incentivizes, has created an environment in which the quickest and surest way toward blanket coverage of you and your output is acting in a way consistent with mental illness, regardless of whether or not you would be diagnosed as ill in a clinical setting. This is as true in business, where erratic behavior and market manipulation are two sides of the same coin — just ask Elon Musk — or in politics, where a particularly obsessive set of theories about Donald Trump can net you tens of thousands of followers, as it is in entertainment. What’s necessary to succeed in an economy where attention is the reserve currency is a set of attributes that appear with no small frequency in the DSM.

So far, the discussion of “deepfakes” — videos altered to make it look like real people are doing things they never actually did — has focused mostly on their potential to usher in the apocalypse. Less discussed has been what they can do for music videos. Here James Vincent talks to Ryan Staake, who used the technology to make the recent video “1999” with Charli XCX:

The “1999” video is a perfect use case for deepfakes. In it, Charli and singer Troye Sivan pay homage to various 1990s touchstones, like Steve Jobs, TLC’s “Waterfalls” music video, Titanic, the Nokia 3310, The Sims, and so on. At two points, the creators of the video used the same basic deepfakes algorithms to paste Charli and Sivan’s faces onto dancers imitating the Spice Girls and the Backstreet Boys.

“When you start to think about the complexity of getting them in and out of wardrobe and makeup for each of those characters, it would take five times longer,” says Staake. “So in a way, it was a pragmatic solution. But then, we also started playing off the bizarreness and aesthetics of it. It’s one of those things where part of the excitement is just trying to see if it works. Like, can we use this weird fake celeb porn tool in a legit music video?”

Y'know, I’ve thought about it and on balance I still find this deeply disturbing.