Kevin Systrom very nearly tells the truth

The greatest mystery in all of social media is why Kevin Systrom and Mike Krieger left Instagram. I understand that there are mysteries that are more consequential. I understand that there are mysteries that are more mysterious. But if you want to know where Facebook is going, and where Instagram is going, it seems to me that you would very much want to know exactly what happened on or around September 24th, when Systrom and Krieger hurriedly left.

Today we had a chance to see Systrom answer that question live. At a conference celebrating Wired’s 25th birthday, Systrom took the stage with interviewer Lauren Goode to talk about what had happened, and what he plans next. He did not, sadly, tell us exactly what happened on September 24th. But Goode, who did an excellent job gently nudging Systrom toward telling the truth, kept pushing. And Systrom said this:

“When you leave anything, there are obviously reasons for leaving. No one ever leaves a job because everything’s awesome, right? Work’s hard.”

As for what wasn’t so awesome about life at Facebook, Systrom wouldn’t say. Instead, he talked about how unusual it was for him and Krieger to have stayed at Facebook so long after the acquisition, about how well positioned Instagram was to succeed in the future, and about how happy it would make him if it did. (“If this thing triples in size and becomes the most important thing in the world, that would be an awesome outcome for me, even if I’m not running it.”)

Systrom is extremely charming, and he practically glided through the interview, making the audience laugh more than any other presenter on stage Monday. He said he planned to pursue another company someday, probably with Krieger, but not on any particular time frame. Maybe it would be in social media, he suggested, or maybe space or music. (These last two seemed to be mostly rhetorical examples, but who knows.)

The mask slipped exactly once, in the most tantalizing way, when Goode asked him about his farewell blog post. She referred to it as his “exit statement,” which made him laugh. And then he described it this way: “It was me writing very, very quickly.”

It was written very, very quickly, because something happened. I suspect we’ll know exactly what someday. (Good follow-up question for the Wall Street Journal, which is scheduled to have Systrom on stage in November: why were you writing so quickly?)

Meanwhile, Jeff Bezos made a surprise appearance at the conference to talk about his space company, Blue Origin. But as is now fashionable for CEOs of non-social media companies, Bezos took time to criticize social networks for promoting tribalism.

"The internet in its current incarnation is a confirmation bias machine,” Bezos said. “If your news feed is showing you things, it’s showing you things that confirm your point of view. By and large, having a technology that increases confirmation bias probably isn’t good. It is going to lead to more tribalism.”

The silver lining, Bezos said, is that humans have always managed to correct themselves after unleashing terrible new technologies upon the world. “We don’t know the solutions to these problems yet, but we’ll figure them out,” he said. “I worry some of these technologies will be very useful for autocratic regimes to enforce their will. A lot of things are going to happen that we’re not going to like that come out of technology. But that’s not new. That’s always been the case. We’ll figure it out.”

This is another notable step down Facebook’s road toward being a reluctant arbiter of truth. The company now says it will ban false information about voting requirements or reports of violence or long lines at polling stations in the run-up to and during next month’s U.S. midterm elections. Recall that until very recently the preferred move would have been to simply down-rank these posts in the News Feed.

Facebook instituted a global ban on false information about when and where to vote in 2016, but Monday’s move goes further, including posts about exaggerated identification requirements, or statements about conditions at polling sites that might deter people from showing up to vote, such as hours-long waits or violence.

Paul Mozur writes about a division of the Myanmar military that used Facebook to turn its own people against each other, sparking waves of violence. As Kevin Roose notes, a modern ethnic cleansing campaign begins in a way that is quite similar to a digital media startup. A damning and necessary read:

While Facebook took down the official accounts of senior Myanmar military leaders in August, the breadth and details of the propaganda campaign — which was hidden behind fake names and sham accounts — went undetected. The campaign, described by five people who asked for anonymity because they feared for their safety, included hundreds of military personnel who created troll accounts and news and celebrity pages on Facebook and then flooded them with incendiary comments and posts timed for peak viewership.

Working in shifts out of bases clustered in foothills near the capital, Naypyidaw, officers were also tasked with collecting intelligence on popular accounts and criticizing posts unfavorable to the military, the people said. So secretive were the operations that all but top leaders had to check their phones at the door.

A group of anonymous Microsoft employees posted an open letter today calling on their company not to bid on Project JEDI, a $10 billion cloud computing project that they worry would use Microsoft’s AI to build weapons and otherwise cause harm:

The contract is massive in scope and shrouded in secrecy, which makes it nearly impossible to know what we as workers would be building. At an industry day for JEDI, DoD Chief Management Officer John H. Gibson II explained the program’s impact, saying, “We need to be very clear. This program is truly about increasing the lethality of our department.”

Many Microsoft employees don’t believe that what we build should be used for waging war. When we decided to work at Microsoft, we were doing so in the hopes of “empowering every person on the planet to achieve more,” not with the intent of ending lives and enhancing lethality.

Exploring Online Hate is a joint project from the New America Foundation and the Anti-Defamation League that attempts to monitor trends in the spread of hateful ideologies on Twitter, Colin Lecher reports:

The dashboard looks for trends in hateful activity through a sample of 1,000 Twitter accounts that the groups say show “hateful content directed against protected groups.” According to the project methodology, researchers selected 40 accounts that showed hateful conduct, and then algorithmically generated a larger dataset of related accounts.

By crunching data from the Twitter accounts, the groups say, researchers can take the pulse of online hate in real time, then relay information on trending topics and sources of discussion. As of Monday afternoon, trending hashtags included both the innocuous, like #midterms2018, and the troubling, like references to the conspiracy #qanon. Top keyword terms seemed to focus on Sen. Elizabeth Warren’s DNA ancestry test. The top source referenced in the tweets was YouTube.
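For the curious, the seed-and-expand methodology the researchers describe resembles what network analysts call snowball sampling: start from a small set of hand-vetted accounts and walk the interaction graph outward until the dataset reaches the target size. The sketch below is purely illustrative — the function, the toy graph, and the size cap are my assumptions, not the project’s actual pipeline:

```python
from collections import deque

def snowball_sample(seed_accounts, interaction_graph, max_size):
    """Expand a small, hand-selected seed set of accounts into a larger
    dataset by walking the interaction graph breadth-first."""
    selected = list(seed_accounts)
    seen = set(seed_accounts)
    queue = deque(seed_accounts)
    while queue and len(selected) < max_size:
        account = queue.popleft()
        for neighbor in interaction_graph.get(account, []):
            if neighbor not in seen:
                seen.add(neighbor)
                selected.append(neighbor)
                queue.append(neighbor)
                if len(selected) >= max_size:
                    break
    return selected

# Toy graph: each account maps to accounts it interacts with.
graph = {
    "seed1": ["a", "b"],
    "seed2": ["b", "c"],
    "a": ["d"],
}
sample = snowball_sample(["seed1", "seed2"], graph, max_size=6)
```

In practice the expansion step would use real signals (retweets, follows, shared hashtags) pulled from an API, and the resulting accounts would still need human review before being labeled hateful.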

Here’s an investigation into how people come to embrace fascism that finds that 39 of 75 people studied say that the internet led them there, with YouTube as the most-cited source:

Thirty-nine of the 75 fascist activists we studied credit the Internet with their red-pilling. YouTube seems to be the single most frequently discussed website. The specific videos credited, however, span a multitude of creators, from British YouTuber Sargon of Akkad (Carl Benjamin) to Infowars founder Alex Jones.

When Facebook deletes pages that are found to have participated in coordinated inauthentic behavior, as it did last week, it typically doesn’t tell us the names of those pages. There’s growing pressure on the company to disclose more about who ran these pages and what they were doing, in part to serve as an accountability check on Facebook to make sure it’s not purging good pages by accident. Anyway, this site is building up a list of those caught up in last week’s purge.

A majority of Americans can’t tell social media bots from human beings, and most think bots are bad, according to a new study from Pew Research Center. (Here’s some more good evidence on bots becoming more believable, in relation to Intercom’s new customer service bots.) Shannon Liao has the Pew story:

Only 47 percent of Americans are somewhat confident they can identify social media bots from real humans. In contrast, most Americans surveyed in a study about fake news were confident they could identify false stories.

The Pew study is an uncommon look at what the average person thinks about these automated accounts that plague social media platforms. After surveying over 4,500 adults in the US, Pew found that most people actually don’t know much about bots. Two-thirds of Americans have at least heard of social media bots, but only 16 percent say they’ve heard a lot about them, while 34 percent say they’ve never heard of them at all. The knowledgeable tend to be younger, and men are more likely than women (by 22 percentage points) to say they’ve heard of bots. Since the survey results are self-reported, there’s a chance people are overstating or understating their knowledge of bots.

Facebook employees Kendra Sinclair and Jared Vengrin tell the inspiring story of how they left the Palo Alto home they owned, along with the secondary apartment they kept in the city for fun, and managed to find the perfect $7,600 a month apartment in New York City. Generally I encourage tech employees to always tell us what’s on their minds, but there is really never any good reason to tell anyone this:

Katie Shannon brought down LinkedIn one time and she feels really bad about it. I’m here to tell Katie to relax. What she describes here was a gift to the internet:

Hundreds of LinkedIn engineers were left with the question, “How did that just happen?” That site issue was the worst I’ve ever seen at LinkedIn in my 3-and-a-half years at the company. No one was able to reach any URL that had linkedin.com in it for 1 hour and 12 minutes, resulting in many of our millions of users unable to reach the site. After a few hours of investigation, we realized what the root cause of the issue was: me.

Brian Merchant attends the Magic Leap developer conference and concludes that its hype was undeserved:

This week, the company held its inaugural developer conference, to try to entice third-party creators, and to introduce the gear to a wider audience. In many ways, this is the final stage of the product’s public debut. In 2015, Abovitz told Wired, “When we launch it, it is going to be huge.” After spending two days at LEAPcon, I feel it is my duty—in the name of instilling a modicum of sanity into an age where a company that has never actually sold a product to a consumer can be worth a billion dollars more than the entire GDP of Fiji—to inform you that it is not.

I talked with Houseparty’s Ben Rubin about Facemail, his company’s first move into asynchronous communication, and about how things are going generally for the company:

How’s Houseparty doing? Rubin acknowledged that the company is growing more slowly than other social networks at their peak. But the unchecked growth of social networks has had serious consequences around the world, he said. “We have millions of people come every day and have fun conversations with people they care about and have a good time,” Rubin said. “Shouldn’t we just continue to build?”

Houseparty’s initial growth was sufficient to spook Facebook, which built a clone called Bonfire that it began testing in the Netherlands in September 2017. But more than a year later, it has yet to launch in the United States. An upside of Houseparty’s relatively slow growth is that giants are less likely to come after it.

Eve Peyser writes about how Twitter encourages us to adopt a single virtual identity, which we then have to shed as we grow up and change:

Translating the essence of who you are into a digestible product is a strange way to live, especially when you’re a young adult and your sense of self is in flux. It was never my main intention to peddle my personality for a living, but in the era of social media, the personal brand reigns supreme; self-commodification was an inevitable outcome for a young writer like myself—extremely online, comfortable with confessing her most deranged impulses to a large audience, and looking for affirmation and love.

Translating the ups and downs of my existence into my personal brand was a way of life for me. The more I viewed my life as something to be consumed by other people, capitalizing on all the pain and pleasure and resentment and fear that come along with being alive, the more compulsively I posted. My way of being online was always unsustainable, and each time I couldn’t sustain it any longer, I shed my skin, and evolved into a slightly more adept version of myself.

And here is a very long thread from a guy who worked on Google+ who did not enjoy working on Google+. It has a strong start!

Now that Google+ has been shuttered, I should air my dirty laundry on how awful the project and exec team was. I'm still pissed about the bait and switch they pulled by telling me I'd be working on Chrome, then putting me on this god forsaken piece of shit on day one.

“If you’d like to ask Tony to ‘please frost my flakes, daddy,’ you’re going to have to send him a letter now like everyone else,” says Ashley Feinberg, in this hilarious (and also chilling!) account of how furries adopted Tony the Tiger as a mascot, and sent him incredibly inappropriate replies to every tweet until he had no choice but to delete his Twitter: