How Young Mark Zuckerberg Paved the Way for the Russians

In 2005, Mark Zuckerberg showed up to the offices of MarketWatch to sit for an online video interview with reporter Bambi Francisco. Zuckerberg, who looked like he had just woken up, was dressed like he was on his way to the beach: black basketball shorts, flip-flops, and a red T-shirt with the words “My mom thinks I’m cool” emblazoned across the front. Zuckerberg, after all, was only 21 years old at the time. But aside from his youthfulness, the truly fascinating element of this time-capsule video, like other interviews he gave during the early days of Facebook, is how naïve he appears about his own brainchild.

In his conversation with MarketWatch, Zuckerberg nonchalantly explained his early decision to localize the way people used Facebook—to allow only people from specific colleges to sign up for the service and to restrict their network to people from that college. This limiting factor, he explained, “made it so people were comfortable sharing information that they probably would not have otherwise.” These privacy settings, in other words, made people feel safe enough to share personal and private things with Facebook.

A year later, however, Facebook abandoned the college-only rule, and made it so that anyone could sign up for an account. There was, of course, an uproar. Nevertheless, more people signed up. And they continued to sign up. Now, a quarter of the planet is on the service.

What’s staggering, in retrospect, is that a 21-year-old kid in flip-flops and surf shorts was making decisions back then that would set the stage for Russians to hack our election process a decade later. It’s stupefying to see how our eroded privacy (which, indeed, we all happily signed up for), coupled with targeting tools built by Facebook, became the printing press for fake news, which was then aimed at the most insecure voters in the country. It is also worth noting that there are similar interviews from the same time period, in which a young Jack Dorsey—a nose ring cupping his nostril, his hair scraggly—talked with his co-founders about Twttr (yes, there was a pre-vowels era) and all the innocent ways his service could be used to chat with your friends and make party plans.

While it would be easy to blame the younger iterations of these C.E.O.s for the world we now find ourselves in (I do think the venture capitalists, specifically people like Peter Thiel, who initially funded Facebook, and Marc Andreessen, who sits on the board, deserve some blame for not questioning the negative possibilities of these decisions earlier), a more apt question is how we can stop these things, or likely worse scenarios, from happening in the future—not only at these specific companies, but also with other technologies being built by young engineers in Silicon Valley today.

For example, we’re barreling headfirst into a world of driverless cars, which will have innumerable positive impacts on society, including reducing the car accidents and fatalities that result from human error. But what are people doing to ensure that these cars can’t be hacked and turned into weapons, used to kill people who are protesting, or even walking down the street minding their own business? Or what if the Russians one day hacked into a driverless-car network and sent all of the Democratic voters to the wrong precinct, making them unable to vote? It doesn’t take much of an imagination to envision how a sky full of delivery drones, upcoming audio and video technologies, and a slew of other start-up ideas could be used against us.

Facebook and Twitter have both come around to acknowledging the ways in which their platforms have been hijacked by bad actors, but the roots of these problems were largely financial. Facebook currently spends more than $3 million a quarter on lobbying efforts in Washington, but back in 2011, the company lobbied the F.E.C. to block rules that would have required displaying the sponsor of a political ad. Facebook argued that the ads were so small they were almost like “bumper stickers, which don’t require any kind of disclosure.” In hindsight, that doesn’t seem to have been such a good idea. When I was at an event at Harvard University shortly after the election, Donald Trump’s senior campaign staffers were bragging that they didn’t have the kind of money that Hillary Clinton had at her disposal, so they were forced to use Facebook, which even back then, they knew, was drastically more effective than a TV ad. Twitter didn’t eradicate its trolls; doing so would have diminished its already paltry user growth. Again, good for the company’s bottom line, bad for our democracy.

When I bring this recent history up to people in Silicon Valley, they defend these companies (and the people who build them), saying that technology is just a tool that can be used for good or evil. Most people use a hammer, after all, to build a house, or to hang a picture frame on the wall. A small number of bad people use them as weapons. I understand this argument, but technology also allows its founders to prevent their brainchild from being weaponized. Additionally, these “tools” aren’t just dumb objects, like a hammer. They are smart, with algorithms that can do things humans can do (and, in some instances, cannot), such as helping find like-minded people on a network, or seeing what those people are interested in, or what they fear.

Which of course brings us to the scariest technology of all, the one everyone in Silicon Valley sees as the next big thing: artificial intelligence. There are plenty of incredibly smart people who have sounded the alarm about what could go wrong with A.I., including Elon Musk and Stephen Hawking, and yet people are building it as you read these words, without oversight and possibly even without understanding. Clearly, Vladimir Putin has seen the power of technology, given the influence he exerted on both the U.S. election and Brexit in the United Kingdom. And he knows full well that A.I. isn’t just a tool, but rather, as he noted earlier this month, “the future, not only for Russia, but for all humankind,” and that “whoever becomes the leader in this sphere will become the ruler of the world.” Of course, it’s the C.E.O. of one of the most powerful companies on the planet, Mark Zuckerberg, who doesn’t think it’s a threat at all.