In 2016, Tristan Harris, whose job title at Google was “design ethicist,” left the company to focus on a new nonprofit he called Time Well Spent. The goal of Time Well Spent is to reverse what it calls “the digital attention crisis,” in which the brilliant minds at Google, Apple, Facebook, and elsewhere “hijack our minds” through ever-more sophisticated manipulation techniques delivered through our smartphones. Harris has emerged as a vocal critic of Facebook, appearing on NBC this week to call the company “a living, breathing crime scene.”

You might expect that Facebook, which derives its profits from the amount of time people spend interacting with the advertisements in its apps, would reject the Time Well Spent thesis. Instead, the company co-opted it. In a Jan. 11th post, Mark Zuckerberg invoked the initiative by name. “By focusing on bringing people closer together — whether it’s with family and friends, or around important moments in the world — we can help make sure that Facebook is time well spent,” he wrote.

Today, one of Harris’ collaborators returned the volley. In a pair of closely argued essays on Medium, Joe Edelman — who says he coined the term “time well spent” with Harris five years ago — lays out a suggested path forward for Facebook.

“It’s possible (but very tricky) to design software so as to address the users’ sense of meaning,” Edelman wrote in the first essay. “But it requires profound changes to how software gets made! These changes make others your company has gone through (such as the adoption of machine learning, the transition from web to mobile) look easy.”

Less than a month into the new year, “time well spent” promises to become the “fake news” of 2018: a term overused into oblivion by partisans of every stripe. To Zuckerberg, “time well spent” means independent research showing that people value the time they spent on Facebook, and feel better about themselves afterward. To Harris, it represents a shift away from measuring comments and shares to emphasizing companies’ positive contributions to users’ lives. There’s overlap there, but there are also some fundamental differences. In 2018, the battle will play out.

Social software simplifies and expedites certain social relationships, and certain actions, at the expense of others. And if the simplified actions and relationships weren’t designed with a user’s particular values in mind, then using the software can make living by those values more difficult, which leaves them feeling like their time was not well spent.

For example, it may be harder to live by the value of honesty on Instagram, if honest posts get fewer likes. Similarly, a courageous statement on Twitter could lead to harassing replies. On every platform, a person who wants to be attentive to their friends can find themselves in a state of frazzled distraction.

As users, we end up acting and socializing in ways we don’t believe in, and later regret. We act against our values: by procrastinating from work, by avoiding our feelings, by pandering to other people’s opinions, by participating in a hateful mob reacting to the news, and so on.

Edelman notes that social software controls the nature of our actions to a far greater degree than anything we experience offline. Even at schools with dress codes, teens find ways to push against norms with their choices of shoes, socks, or backpacks. On Facebook and other social platforms, a one-size-fits-all approach means we’re locked into their peculiar modes of interaction. We chafe against them, and feel bad about ourselves afterward.

In his second essay, Edelman attempts to chart a path forward. It’s more esoteric than his first essay, and largely aimed at software designers. But it’s well worth reading for anyone wondering what the next generation of social software might look like — how it might avoid triggering the ennui that today’s platforms do. (Or at the very least, trigger new and different forms of ennui!)

Edelman suggests that designers of social software begin by asking themselves what their users’ actual values are, and then reflect on how they can let users live out their values through software:

For example, if an Instagram user valued being creative or being honest or connecting adventurously, then designers would need to ask: what kinds of social environments make it easier to be creative, to be honest, or to connect adventurously? They could make a list of places where people find these things easier: camping trips, open-mics, writing groups, and so on.

Next, the designers would ask which features of these environments make them good or bad practice spaces. For instance, do mechanisms for showing relative status (like follower counts) help or hurt when someone is trying to be creative? How about when they want to connect adventurously? Is it easier to be creative with a small group of close connections or a large group of distant ones? And so on.

This isn’t the only approach to time well spent available to companies like Facebook: Edelman’s essays were published on a day when Facebook’s own announcement on the matter was that it would allow users to watch videos at the same time, and comment together.

But it is a more deeply considered one than what the company has announced so far. And while the tug-of-war over “time well spent” is likely to continue, the moral advantage still belongs to the folks who coined the term.

Much of the testimony focused on how the companies use artificial intelligence to detect and remove terrorist content. Facebook’s head of product policy and counterterrorism, Monika Bickert, said that the company is able to automatically remove 99 percent of ISIS and Al Qaeda content before it’s flagged, although she admitted that humans were still necessary to detect nuances in who shared the content. YouTube and Twitter also trumpeted some of their successes with machine detection.

Still, the companies did not escape tough questions from some members of the committee. Thune asked YouTube about a how-to bomb-making video, which had reportedly been re-uploaded several times. “How is it possible for that to happen?” he asked, as YouTube responded that it had been able to take down the re-uploads quickly. Sen. Wicker (R-MS) pushed Twitter on its position not to cooperate with providing data to law enforcement for surveillance operations, a position that the company defended.

The company said before that it hadn’t seen any “significant” meddling. But now it’s taking another look:

Facebook is launching a new investigation into whether Russian propagandists coordinated a disinformation campaign around the 2016 Brexit vote. In a letter to the head of a parliamentary committee on digital affairs, UK policy director Simon Milner says Facebook will conduct “detailed analysis of historic data” over a period of several weeks. It’s looking for accounts that it may have missed in a December probe, which found little evidence of potential Russian interference — a conclusion that the committee head, MP Damian Collins, disputed.

Milner writes that the earlier investigation focused on the Internet Research Agency, a Kremlin-linked company that’s allegedly behind many US-based fake accounts. In response to Collins’ request for closer investigation, Facebook “can confirm that our investigatory team is now looking to see if we can identify other similar clusters engaged in coordinated activity around the Brexit referendum that was not identified previously.” Milner also requested any information, “including intelligence assessments or reports,” that might help the new hunt.

Noah Kulwin takes a look at how WhatsApp has taken over Brazil, and become a primary channel for propagating hoaxes along the way. As always, it’s complicated:

The creation and spread of fake news has alarmed governments the world over, but few countries have expressed as much concern as Brazil. And with good reason, as multiple reports have concluded that fake news stories regularly outperformed real stories in Brazil in 2016. The problem has grown so rampant that Brazil’s national police recently announced plans to find and “punish the authors of ‘fake news’” ahead of this year’s high-stakes presidential elections.

But WhatsApp adds another complication to Brazil’s fake news quandary: secrecy. Unlike the fully public Twitter, or Facebook, where posts are somewhat public and can more easily be tracked and analyzed by independent parties, WhatsApp is a closed, peer-to-peer messaging service. On WhatsApp, the most toxic aspects of fake news multiply: The platform exacerbates pockets of powerful echo chambers in a political environment already deeply polarized and makes tracking the reach and origins of disinformation particularly difficult for researchers, journalists, and, in Brazil’s case, the federal police.

I actually think there would be some value in privately shaming people who shared content from some of these Russia-linked accounts. Think before you retweet, y'all:

Twitter Inc. will inform users who may have seen posts covertly crafted by Russians during the 2016 presidential campaign, the company’s director of U.S. public policy told a Senate committee Wednesday.

The company is “working to identify and inform individually the users” who could have come across accounts linked to the Internet Research Agency, which aided Russian efforts to meddle in the race, Carlos Monje told the Commerce, Science and Technology Committee.

There are more than 1.5 million nonprofits on Facebook, and many of them have built presences there at the platform’s encouragement. Now, they’re headed into a period of uncertainty inspired by News Feed changes that could diminish a critical tool they use to reach people interested in supporting their causes.

Discord is used by millions of gamers every day to chat, relax, and coordinate while playing their game of choice. But this multimillion-dollar messaging platform has another ballooning user base: revenge-porn offenders.

The Daily Beast has found hundreds of images, almost all of women, and many apparently shared without consent across a spread of Discord servers. Many of the Discord users originate from infamous revenge-porn site Anon-IB, and have set up these chatrooms with the deliberate intent of more easily sharing and finding new images.

“All Discord does is remove the users. They just make new ones,” Mia Landsem, a professional taekwondo athlete and activist who infiltrates the offending Discord communities, told The Daily Beast. Landsem also shared screenshots indicating that other Discord servers are focused specifically on spreading videos of rape and serious abuse.

Vine shut down a year ago today. Julia Alexander looks at what happened when it shut down and its biggest stars fled to YouTube:

The growing disdain original YouTubers had for Viners came to a head in June 2017, during an annual YouTube convention known as VidCon. Logan Paul announced he had hidden $3,000 somewhere around the convention center. When Paul went down into the middle of it, he was ambushed by fans, breaking the convention’s rules and creating a very serious safety hazard. The footage Paul uploaded to YouTube was mocked by just about every major YouTube commentator, but it also solidified that the Viners had invaded, and that they had arrived.

Google had a surprise viral hit with its newish Arts & Culture app, which has a feature that attempts to match your face to a work of art in its database. The results have taken over social media, but not in Texas or Illinois:

The reason? Laws in those states ban the collection of biometric data, including a record of “face geometry,” without a user’s consent. Google, a unit of Alphabet Inc., is blocking the selfie service in its arts and culture app in Illinois and Texas because of the privacy laws, according to a person familiar with the company.

Google Arts & Culture, which became the No. 1 free app on the Apple Inc. and Google Play app stores over the weekend, is among a growing wave of tech products using software that can recognize faces—from doorbells that identify guests to security cameras that recognize shoplifters to iPhones that unlock with a glance.

When the Dolan Twins, brothers with over 5 million followers on YouTube, tried to organize an impromptu meet-up in November in London’s Hyde Park, things quickly spiraled out of control. They were forced to cancel the appearance before they even arrived due to lack of crowd control. And despite the cancellation, thousands of teens still descended on the park, wreaking havoc and reportedly trampling each other.

Here’s a direct consequence of the decline in news video being shared on Facebook. NowThis, which made lots of generic videos consisting of little more than captions over stock footage, used to exist almost entirely on Facebook. Now, to give itself a more stable platform, it has begun posting videos to — gasp! — its own website. Blogging is back!

The nature of Snapchat makes it difficult to track these scams. Unlike other social media platforms, Snapchat is a closed (not to mention purposefully ephemeral) system. Users exist in their own private spheres of content, which makes it hard to track posts as an outsider. Everything that is shared expires within 24 hours.

Facebook today began testing Watch Party, a new feature available to groups that lets members watch videos at the same time while continuing to comment and interact. Any public Facebook video can be included in a Watch Party, whether it’s live or prerecorded. The goal is to make watching videos a fun, social experience rather than a passive one, the company said.

“As we think about video on Facebook, we’re focused on creating experiences that bring people closer together and inspire human connection instead of passive consumption,” said Fidji Simo, vice president of product at Facebook, in a blog post.

The feature is not very different from what can already be done in Instagram. In order to achieve the same thing now, just snap a pic, click on the pen tool at the top, select a color, and then press and hold on the screen to create a blank surface to write on. Additionally, in the new implementation, it appears the default background is a bright ombré, giving it the flair of Facebook statuses with colorful backdrops. Mashable says users will also have the option of using a photo for the background. Creating a shortcut to share text-based content within Stories is a useful tool that caters to something people already do on the platform, just with an extra step or two. It’s unclear if Instagram will be rolling Type out in a more widespread manner.

Facebook’s stated reasoning for this change only heightens these contradictions: if indeed Facebook as-is harms some users, fixing that is a good thing. And yet the same criticism becomes even more urgent: should the personal welfare of 2 billion people be Mark Zuckerberg’s personal responsibility?

After speaking to Time Well Spent’s Tristan Harris, Farhad Manjoo has some practical ideas on how Apple, alone among tech companies, could encourage less smartphone dependence:

Mr. Harris suggested several ideas for Apple to make a less-addictive smartphone. For starters, Apple could give people a lot more feedback about how they’re using their devices.

Imagine if, once a week, your phone gave you a report on how you spent your time, similar to how your activity tracker tells you how sedentary you were last week. It could also needle you: “Farhad, you spent half your week scrolling through Twitter. Do you really feel proud of that?” It could offer to help: “If I notice you spending too much time on Snapchat next week, would you like me to remind you?”

Another idea is to let you impose more fine-grained controls over notifications. Today, when you let an app send you mobile alerts, it’s usually an all-or-nothing proposition — you say yes to letting it buzz you, and suddenly it’s buzzing you all the time.

What we already know about how fake news spreads on Facebook makes this a scary proposition. Here’s how: A group wants to spread a particular piece of misinformation or propaganda for whatever reason. They do this by paying Facebook to show this content to people who are likely to share it. Those people are shown these paid-for posts, and they then start spreading it around their network.

Examples of this tend to center on politics and elections, but there are other types of scams circulating on Facebook. Right now, bitcoin and cryptocurrency are particularly hot.

The meme of the year so far is people eating Tide Pods — or pretending to on social media — because they look bizarrely delicious. This has become enough of a thing that YouTube is now having to scramble the moderation jets, lest a rash of deaths result and the company wind up with Tide on its hands:

YouTube has released a statement in response to a viral trend in which people are posting videos of themselves purposely ingesting laundry detergent. The so-called “Tide Pod challenge,” which reportedly began as a joke, has become enough of a thing that it has garnered responses from a government watchdog, poison control centers, and Tide’s parent company, Procter & Gamble. Now YouTube is weighing in too:

“YouTube’s Community Guidelines prohibit content that’s intended to encourage dangerous activities that have an inherent risk of physical harm. We work to quickly remove flagged videos that violate our policies.”

That said, if you have eaten Tide Pods, we will publish recipes in this newsletter, because YouTube does not tell us what to do!