The Week in Tech: YouTube Fined $170 Million Over Child Privacy Violations

Each week, we review the news, offering analysis of the most important developments in the tech industry.

Hello, my name is Cade Metz. I cover artificial intelligence, self-driving cars and other “emerging technologies” for The New York Times. Today I’m here to give you the lowdown on the week’s tech news.

It was another black eye for the giants of the industry. On Wednesday, Google agreed to pay a record $170 million fine. Regulators said YouTube, the company’s enormously popular video site, had illegally collected personal information from children without the consent of their parents and used it to make millions of dollars from targeted ads.

Natasha Singer and Kate Conger reported in The Times that Google would pay $136 million to the Federal Trade Commission and $34 million to New York after a joint investigation found that YouTube had violated the federal Children’s Online Privacy Protection Act, known as COPPA.

That $136 million is the largest civil penalty laid down by the F.T.C. in a children’s privacy case — the previous record was $5.7 million — but critics said it was a pittance for a tech giant like Google.

The settlement is another indication that the Trump administration is willing to take action against Big Tech. In July, Facebook agreed to pay $5 billion to settle a privacy case with the trade commission. But on Vox, Peter Kafka wrote that both the Google and Facebook settlements showed that the government wasn’t ready to regulate the internet, arguing that there was no guarantee the companies would obey the law in the future.

“In both cases, the government is relying on the five-person F.T.C. to rein in the most powerful forces on the internet by asking them to interpret and enforce laws that are, in internet terms, prehistoric,” Mr. Kafka wrote.

The settlement requires YouTube to ask people who upload videos whether they are sharing content for children and to change its data-gathering and ad-targeting behavior accordingly. But Mr. Kafka — and an F.T.C. commissioner, Rebecca Kelly Slaughter, who voted against the settlement — believes the company should be required to proactively identify content that children will view.

“A cynical observer might wonder whether in the wake of this order YouTube will be even more inclined to turn a blind eye to inaccurate designations of child-directed content in order to maximize its profit,” Ms. Slaughter wrote in her dissent.

British court allows live facial recognition

Don’t misbehave in Britain anytime soon — at least not in public. Adam Satariano reported that a British court had ruled that governments could use live facial recognition technology without violating human rights. In other words, the police are free to use cameras in public spaces to identify people in real time.

Facial recognition technology is improving rapidly, and many legal and ethical questions loom as police departments and other government organizations increase their use of it in countries across the globe, from Britain to the United States to China.

Brought by a resident of South Wales, where the police have deployed live facial recognition, the High Court case is one of the first of its kind. The South Wales police and crime commissioner hailed the court’s decision. Ed Bridges, the man who brought the suit, vowed to appeal.

“This sinister technology undermines our privacy, and I will continue to fight against its unlawful use to ensure our rights are protected and we are free from disproportionate government surveillance,” Mr. Bridges said.

Big Tech talks election security with F.B.I.

Election Day is still 14 months away. And Big Tech is already planning security. Bloomberg reported that Facebook, Google, Twitter and Microsoft met with government officials in Silicon Valley on Wednesday with an eye to reducing online disinformation and foreign interference in the run-up to the next American presidential election.

Security officers from the companies gathered at Facebook headquarters in Menlo Park, Calif., with representatives of the F.B.I., the Office of the Director of National Intelligence and the Department of Homeland Security. They want to facilitate coordination between industry and government in ways that didn’t happen during the 2016 election. That could involve building threat models and, ultimately, sharing information about the real threats.

Yes, it’s early. But after the 2016 election, it’s never too early.

Some stories you shouldn’t miss

The chief executive of a company called Basecamp said Google search ads were a “shakedown.” This is not an original thought, but amid the talk of antitrust action, it sounded a little different from before.

In The Atlantic, Zachary Fryer-Biggs explored the future of warfare and “robots that can kill.” Though the United States military says it has no plans to take humans out of the loop, automated technology is improving.

Casper Klynge is the world’s first foreign ambassador to the technology industry. But Silicon Valley hasn’t exactly embraced Mr. Klynge, a Danish diplomat, as Adam Satariano noted in a profile. Don’t miss the moment when a Silicon Valley executive bad-mouths European tech policy before handing Mr. Klynge a goody bag.

India is planning to land a rover on the moon on Saturday. That would make it the fourth nation to get there, after Russia, China, and the United States.

Hackers backed by the Chinese government have been burrowing into telecom networks in other parts of Asia in an effort to track Uighurs, China’s mostly Muslim ethnic minority. So said Reuters.

YouTube said it was removing five times as many videos for violating its hate speech rules. Wired said this missed the point.

More than half of all adults in the United States believe law enforcement will use facial recognition responsibly. Do they say the same for the tech giants? Not so much, according to a new Pew Research Center survey.

My colleague Mike Isaac’s Uber book came out. It’s called “Super Pumped,” and you can hear him talk about it on “The Daily.”