Thu, 03/21/2019
Though Facebook's AI-powered censors managed to "mistakenly" flag Zero Hedge as a repeat violator of the social network's "community standards", the company is still working out the kinks in its ability to immediately identify and remove livestreams depicting extreme violence, like the video published Friday by the Christchurch shooter.
In a blog post mea culpa published Thursday, Facebook's VP of Integrity Guy Rosen explained why the company failed to immediately remove the horrifying livestream of the attacks. The shooter, a 28-year-old Australian who also published a manifesto laying out his violent, Islamophobic ideology, posted the video to Facebook Live, where it was viewed 4,000 times before being taken down.

According to Rosen, one reason the video lingered for so long on the platform - Facebook didn't remove it until police responding to the incident reached out to the company, despite the video having been reported multiple times - was that it wasn't prioritized for immediate review by the company's staff. As it stands, Facebook prioritizes only reported livestreams tagged as suicide or self-harm for immediate review.
To rectify this, the company says it is "reexamining its reporting logic" and will likely expand the list of report categories prioritized for immediate review.