Facebook, YouTube, Twitter and Microsoft have created a joint forum to counter terrorism following years of criticisms that the technology corporations have failed to block violent extremists and propaganda on their platforms.

The Silicon Valley companies announced the Global Internet Forum to Counter Terrorism on Monday, saying the collaboration would focus on technological solutions, research and partnerships with governments and civic groups.

The tech firms have long struggled to balance their missions of supporting free speech with the need to remove and prevent the spread of terrorist content. The companies have faced intense scrutiny over the way terrorist groups have used their platforms for recruitment and for spreading hateful and violent messages.

As part of the new forum, the companies said they would share best practices regarding “content detection and classification techniques using machine learning” and “define standard transparency reporting methods for terrorist content removals”. Through a partnership with a United Nations counter-terrorism committee and a range of organizations, the tech firms said they would also “identify how best to counter extremism and online hate, while respecting freedom of expression and privacy”.

In December, Google, Facebook, Twitter and Microsoft unveiled a similar information-sharing initiative, pledging to work together to create a database of unique digital fingerprints known as “hashes” for videos and images that promote terrorism. That means when one firm flags and removes a piece of content that features violent terrorist imagery or a recruitment video, the other companies can use the hash to identify and take down the same content on their platforms.
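The mechanics of such a shared hash database can be sketched in a few lines. This is a simplified illustration, not the companies' actual system: production systems use perceptual fingerprints (such as Microsoft's PhotoDNA) that survive re-encoding and cropping, whereas plain SHA-256 here stands in as a stand-in fingerprint and only matches byte-identical files.

```python
import hashlib

# A stand-in for the industry-shared database of known-bad fingerprints.
shared_hash_db = set()

def fingerprint(content: bytes) -> str:
    """Compute a fingerprint for a piece of media.

    SHA-256 is used here for simplicity; real systems use perceptual
    hashes that tolerate re-encoding, resizing and minor edits.
    """
    return hashlib.sha256(content).hexdigest()

def flag_and_share(content: bytes) -> None:
    """One platform removes a piece of content and shares its hash."""
    shared_hash_db.add(fingerprint(content))

def is_known(content: bytes) -> bool:
    """Another platform checks an upload against the shared hashes."""
    return fingerprint(content) in shared_hash_db

# Platform A removes a propaganda video and contributes its hash...
flag_and_share(b"<bytes of a removed propaganda video>")

# ...so Platform B can detect the identical upload without re-reviewing it.
print(is_known(b"<bytes of a removed propaganda video>"))  # True
print(is_known(b"<bytes of some unrelated video>"))        # False
```

The design lets companies cooperate on detection without sharing the underlying media itself: only the fingerprints travel between platforms.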

Internal Facebook documents recently obtained by the Guardian provided a window into the complex rules and methods behind the social media corporation’s moderation of terrorist content. The guidelines for moderators revealed that the company requires them to learn the names and faces of more than 600 terrorist leaders, for example. The leaked documents also revealed that Facebook identified more than 1,300 posts on the site as “credible terrorist threats” in a single month and argued that the information uncovered had been “a massive help on identifying new terrorist organisations/leaders”.

While governments have urged companies like Facebook to do more, the social network has also faced backlash for ethically questionable censorship of non-terrorist content under the guise of countering propaganda. Facebook sparked controversy last year when it censored academics, journalists and others following the death of a high-profile Kashmiri separatist militant who was labeled a terrorist by Indian authorities, but considered a freedom fighter by many Kashmiris and Pakistanis.