Google Cofounder Sergey Brin Warns of AI's Dark Side

Artificial intelligence is a recurring theme in recent remarks by top executives at Alphabet. The company’s latest Founders’ Letter, penned by Sergey Brin, is no exception—but he also finds time to namecheck possible downsides around safety, jobs, and fairness.

The company has issued a Founders’ Letter—usually penned by Brin, cofounder Larry Page, or both—every year, beginning with the letter that accompanied Google’s 2004 IPO. Machine learning and artificial intelligence have been mentioned before. But this year Brin expounds at length on a recent boom in AI development that he describes as a “renaissance.”

“The new spring in artificial intelligence is the most significant development in computing in my lifetime,” Brin writes—no small statement from a man whose company has already wrought great changes in how people and businesses use computers.

When Google was founded in 1998, Brin writes, the machine learning technique known as artificial neural networks, invented in the 1940s and loosely inspired by studies of the brain, was “a forgotten footnote in computer science.” Today the method is the engine of the recent surge in excitement and investment around artificial intelligence. The letter unspools a partial list of where Alphabet uses neural networks, for tasks such as enabling self-driving cars to recognize objects, translating languages, adding captions to YouTube videos, diagnosing eye disease, and even creating better neural networks.

Brin nods to the gains in computing power that have made this possible. He says the custom AI chip running inside some Google servers is more than a million times more powerful than the Pentium II chips in Google’s first servers. In a flash of math humor, he says that Google’s quantum computing chips might one day offer jumps in speed over existing computers that can only be described with the number that gave Google its name, a googol, or a 1 followed by 100 zeroes.

As you might expect, Brin expects Alphabet and others to find more uses for AI. But he also acknowledges that the technology brings possible downsides. “Such powerful tools also bring with them new questions and responsibilities,” he writes.

AI tools might change the nature and number of jobs, or be used to manipulate people, Brin says—a line that may prompt readers to think of concerns around political manipulation on Facebook. Safety worries range from “fears of sci-fi style sentience to the more near-term questions such as validating the performance of self-driving cars,” Brin writes.

All that might sound like a lot for Google and the tech industry to contemplate while also working at full speed to squeeze profits from new AI technology. Even some Google employees aren’t sure the company is on the right track—thousands signed a letter protesting the company’s contract with the Pentagon to apply machine learning to video from drones.

Brin doesn’t mention that challenge, and wraps up his discussion of AI’s downsides on a soothing note. His letter points to the company’s membership in the industry group Partnership on AI, and Alphabet’s research in areas such as how to make learning software that doesn’t cheat, and AI software whose decisions are more easily understood by humans. “I expect machine learning technology to continue to evolve rapidly and for Alphabet to continue to be a leader — in both the technological and ethical evolution of the field,” Brin writes.