Google's upgraded technology will use more efficient video analysis models, building on those the company has deployed within the past six months. The added engineering will include new “content classifiers” to more quickly identify and remove extremist and terrorist content.

Rather than relying on the video models alone, Google has vowed to add more members to YouTube’s Trusted Flagger programme, using human reviewers to identify problematic videos.

Google’s tougher video standards include demonetizing videos containing supremacist religious views and no longer promoting them across YouTube. These videos usually aren’t flagged immediately, since they don’t directly violate YouTube policies. With more experts on hand, the company will track down extremist videos that typically escape flagging and flag them as it sees appropriate.

To expand its counter-extremism efforts, Google will put out more targeted online advertisements that raise viewers’ awareness by redirecting them to anti-terrorism videos. Google will also team up with counter-extremist groups to help further identify radical content.

Google is also collaborating with other big Internet names, such as Facebook, Microsoft, and Twitter, to broaden the scope of its approach to tackling online terrorism.

Regarding past efforts, Walker said, “While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now.”