AI Failure to Detect NZ Attack Video Could See Facebook Use 'Terrorism' Tag

In the latest update on the terrorist attack in New Zealand, Facebook has vowed to stop the spread of viral videos like the footage filmed by the shooter and to use artificial intelligence to recognize banned material faster.

When the 17-minute live video, filmed by attacker Brenton Tarrant as he walked into a mosque in Christchurch and shot dozens of people, appeared on Facebook, the first user report came only 29 minutes after the broadcast began.

Artificial Intelligence

Addressing the widespread concern over how the social media giant plans to keep extremist content out of online circulation, Guy Rosen, Facebook's vice president of integrity, said that Facebook will work on enhancing the AI algorithms that proactively detect malicious content, but warned that AI is "not perfect."

Despite being trained to detect content such as terrorist propaganda and graphic violence, Facebook's AI systems failed to detect the Christchurch mosque shooting footage for a number of reasons, which Rosen outlined.

"To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare. Another challenge is to automatically discern this content from visually similar, innocuous content — for example if thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground," the VP said.

In the first 24 hours after the horrific attack, which saw 50 people murdered, Facebook said it had blocked more than 1.2 million uploads of the carnage and removed 300,000 additional copies after they were posted.

"In total, we found and blocked over 800 visually-distinct variants of the video that were circulating. This is different from official terrorist propaganda from organizations such as ISIS [Daesh] — which while distributed to a hard core set of followers, is not rebroadcast by mainstream media organizations and is not re-shared widely by individuals," Rosen said.

Interestingly, no users reported the video during the live broadcast, and Facebook received its first complaint only 12 minutes after the stream ended. Attempting to explain user behaviour and what triggers a review, Rosen argued that Facebook's reporting options may not have accounted for the more accurate and specific reasons users could give in their reports.

"In this report, and a number of subsequent reports, the video was reported for reasons other than suicide and as such it was handled according to different procedures. As a learning from this, we are re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review," he said.

Some took his words to mean that in addition to existing categories — such as "nudity," "hate speech," "spam," "harassment," "violence," "unauthorized sales," "suicide or self-injury," "gross content" and "other" — Facebook will introduce new tags, like "murder" or "terrorism."
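The reporting-logic gap Rosen describes can be pictured as a simple routing rule: only reports filed under certain categories reach accelerated review, so a report filed under, say, "violence" would have been queued for standard handling. The function and category set below are illustrative assumptions, not Facebook's actual code:

```python
# Hypothetical sketch of the report routing Rosen describes. The
# category names come from the article; the routing logic itself is
# an assumption for illustration, not Facebook's implementation.

ACCELERATED_CATEGORIES = {"suicide or self-injury"}

def route_report(category):
    """Return the review queue a user report is sent to."""
    if category in ACCELERATED_CATEGORIES:
        return "accelerated"
    return "standard"

print(route_report("suicide or self-injury"))  # accelerated
print(route_report("violence"))                # standard: the gap Rosen cites
```

Expanding the categories eligible for accelerated review, as Rosen proposes for live and recently live videos, would amount to widening that set.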


Overall, the video was shared by users for a variety of reasons, said Rosen.

"Some intended to promote the killer's actions, others were curious, and others actually intended to highlight and denounce the violence. Distribution was further propelled by broad reporting of the existence of a video, which may have prompted people to seek it out and to then share it further with their friends."

Matching Technology

Following the attack on 15 March, the New Zealand Police urged those affected by the circulating footage of the attack to seek appropriate help.

"Police is aware there are distressing materials related to this event circulating widely online. We would urge anyone who has been affected by seeing these materials to seek appropriate support," the statement read.

"You can't have something so graphic and it not [have an impact]… and that's why it's so important it's removed," New Zealand Prime Minister Jacinda Ardern said.

In its further steps towards tackling extremist content, Facebook promised to improve its "matching technology so that we can stop the spread of viral videos of this nature, regardless of how they were originally produced."
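Facebook has not published how its matching technology works. As a rough illustration, near-duplicate detection is often built on perceptual hashes, under which re-encoded or slightly altered copies of a frame hash to nearly the same value while unrelated content does not. The sketch below uses a simplified difference hash over toy grayscale data; the frames, threshold, and function names are hypothetical:

```python
# Illustrative sketch only: Facebook has not published its matching
# algorithm. A common approach to near-duplicate detection is a
# perceptual "difference hash" (dHash): altered copies of a frame hash
# to values a few bits apart, unrelated frames do not. Toy data below.

def dhash(gray_rows):
    """Hash a grayscale frame: one bit per horizontal brightness gradient."""
    bits = 0
    for row in gray_rows:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if right > left else 0)
    return bits

def hamming(a, b):
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")

THRESHOLD = 2  # hypothetical: hashes within 2 bits count as the same video

frame = [[10, 20, 30, 40], [40, 30, 20, 10]]    # original frame
variant = [[10, 20, 19, 40], [40, 30, 20, 10]]  # slightly altered copy
unrelated = [[90, 10, 85, 5], [5, 85, 10, 90]]  # different content

print(hamming(dhash(frame), dhash(variant)) <= THRESHOLD)    # True
print(hamming(dhash(frame), dhash(unrelated)) <= THRESHOLD)  # False
```

A production system would hash many frames per video and tune the threshold to trade misses against false matches; the 800 "visually-distinct variants" Rosen cites are copies altered enough to evade whatever matching the real system applied.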
