Reports indicate that the European Commission will also undertake an impact assessment to study possible binding legislation. And in October, a German law entered into force promising staggering fines of up to €50 million for social media firms that do not remove hate speech within set timeframes.

Collectively, this fresh political impetus to regulate decisions that have so far been left to the companies is an indication of the industry's general failure to show the committed action repeatedly called for by the public and policymakers.

Despite some positive steps, efforts have broadly been found wanting, and it is abundantly clear that concerns over the potential impact on profits and existing business models have trumped the moral obligation to stem the proliferation of radicalising content that inspires deadly violence.

At the recent third high-level meeting of the EU Internet Forum, 'Big Tech' participants had a prominent stage from which to present clear and concrete measures for aggressively and systematically thwarting the spread of such content and disrupting the flow of terrorist communications.

Instead, the gathering concluded with general assertions to step up efforts but no apparent detailed plans or deadlines agreed for implementing targeted measures that could have a robust impact.

As commissioner for home affairs Dimitris Avramopoulos noted, "this is progress, but it is not yet enough to turn the tide."

More should be required of tech firms with three key objectives in mind: speed, consistency and transparency.

Speedy removal

Recent research by the Counter Extremism Project shows that certain platforms consistently lag behind in removing terrorist content, much of it official and unofficial Islamic State propaganda.

In all but a few instances identified, links to terrorist content remained live on Google Drive, Google Photos or both many hours after other platforms had removed it. In some cases, the content was still available days or even weeks after initial upload.

Google was not alone in its sluggish response, however. Online storage unicorn Dropbox and the Internet Archive also routinely failed to expeditiously remove terrorist videos, as did file hosting and sharing services like US-based MediaFire and New Zealand-based Mega.

WordPress, the world's leading blogging platform, continues to provide support for a banned neo-Nazi group's online front and several terrorist websites, some of which were first flagged by CEP more than a year ago.

Step-by-step bomb-making guide

Promptly removing this propaganda matters.

In late November, a notorious bomb-making video reappeared on Google and other platforms, providing step-by-step instructions for constructing a volatile explosive that has been used in multiple attacks this year alone, including at Manchester Arena in May.

Other content has expressly incited jihad against innocent civilians and offered guidance on how to individually carry out knife and vehicle attacks.

Disrupting the spread of such terrorist content is essential. It requires a systematic approach and consistent enforcement, not only internally at the likes of corporate giants Google and Facebook but throughout the wider internet industry.

YouTube's removal last month of thousands of videos promoting the terrorist propaganda of al-Qaeda operative Anwar al-Awlaki set an important example that should be replicated by other platforms big and small.

Yet it is not clear that the video-sharing website's newly adopted zero-tolerance policy will permeate to parent company Google's other services, which have proven to be equally important havens for terrorist material.

Awlaki is also not the only jihadist with a strong virtual following. Disturbing content continues to circulate from other terrorist actors such as Yusuf al-Qaradawi, Turki al-Binali and Abdullah Faisal. The action taken against Mr. Awlaki's content should extend as well to these individuals and others known to be promoting violence and terrorism.

Internet companies should be forthright about any procedures implemented in accordance with published policies and Terms of Service, in a manner that is clear and transparent to the public.

Earlier this month, for example, Google announced that it would hire more than 10,000 human reviewers by 2018 to help remove policy-violating content. However, this announcement came months after YouTube claimed to be removing extremist material faster than ever, and after multiple reassurances to the public, lawmakers and advertisers that the company was doing the best it could to eliminate this content. Facebook made a similar pledge in May.

After all this time, tech giants have still not provided a clear picture of the scope of the problem and what exactly they are doing to solve it.

Missing the data?

Specific benchmarks for performance, in particular as it relates to content removal, are nowhere to be found, which is strange coming from an industry that prides itself on data collection and analysis.

The grave and very real risk to the security of communities in Europe and around the world makes the elimination of the worst terrorist and extremist content critical.

The burgeoning trend toward regulating the treatment of this content will likely intensify so long as internet companies continue to shun their responsibility to permanently eliminate terrorist propaganda on the web and do their part to help stop radicalisation.

EU home affairs commissioner Dimitris Avramopoulos congratulates some media platforms on their efforts to take down jihadist terrorist content at the EU Internet Forum - but warns it is 'not yet enough to turn the tide'.