According to the New York Times article that helped expose these malicious videos and holes in the YouTube Kids filter, the videos “are another example of the potential for abuse on digital media platforms that rely on computer algorithms, rather than humans, to police the content that appears in front of people — in this case, very young people.”

Wojcicki reports that machine learning “is helping our human reviewers remove nearly five times as many videos” as they could previously. Encouraged by those results, YouTube is now applying the technique to other enforcement areas, including child safety and hate speech.

ADL has long partnered with YouTube and other online platforms to help devise strategies for combating online hate and extremism. For a massive video platform like YouTube, deploying cutting-edge artificial intelligence and machine learning holds real promise, and YouTube is reporting what appear to be positive results. But to ensure that parents can trust that an app like YouTube Kids is safe, and for platforms generally to excise the hate and racism that have made so many of them inhospitable, trained human reviewers must continue to play a significant role in content moderation.
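To make that balance concrete, here is a minimal, hypothetical sketch of a human-in-the-loop triage pipeline of the kind this approach implies: a model scores each video, only the most confident predictions are acted on automatically, and everything ambiguous is routed to a trained reviewer. The names, thresholds, and scores are all illustrative assumptions, not a description of YouTube’s actual systems.

```python
# Hypothetical sketch of human-in-the-loop content triage.
# All names, scores, and thresholds are illustrative assumptions;
# this is not YouTube's system.

from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    REMOVE = "remove"              # high-confidence violation, removed automatically
    HUMAN_REVIEW = "human_review"  # ambiguous, routed to a trained reviewer
    ALLOW = "allow"                # low risk, left up


@dataclass
class Video:
    video_id: str
    risk_score: float  # hypothetical classifier output in [0, 1]


def triage(video: Video,
           remove_threshold: float = 0.95,
           review_threshold: float = 0.5) -> Decision:
    """Route a video based on a model's risk score.

    Only the most confident predictions are acted on automatically;
    everything uncertain goes to a human reviewer, reflecting the
    balance between automation and human judgment.
    """
    if video.risk_score >= remove_threshold:
        return Decision.REMOVE
    if video.risk_score >= review_threshold:
        return Decision.HUMAN_REVIEW
    return Decision.ALLOW


if __name__ == "__main__":
    queue = [
        Video("a1", 0.98),  # clear violation -> removed automatically
        Video("b2", 0.70),  # ambiguous -> human reviewer decides
        Video("c3", 0.10),  # benign -> allowed
    ]
    for v in queue:
        print(v.video_id, triage(v).value)
```

Where a platform sets those thresholds determines how much of the workload reaches human reviewers, which is exactly the balance at issue.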

With Wojcicki’s recent announcement, it will be worth watching what steps YouTube takes toward finding the right balance.