Facebook, Amazon, Google, Twitter and Microsoft will develop shared tools to detect and remove terrorist or extremist content among other measures.

The pledge was made at a Paris summit, called after the terror attack in Christchurch, New Zealand, that left 51 people dead.

The March attack was live-streamed on social media.

French President Emmanuel Macron hosted the event with New Zealand Prime Minister Jacinda Ardern, who had been calling on technology executives to sign the pledge, known as the “Christchurch Call”.

What was pledged?

The firms said they would update their terms of use to “expressly prohibit the distribution of terrorist and violent extremist content”.

They will also develop crisis protocols to respond to emerging or active events, such as a terror attack.

The companies said they would also commit to publishing “transparency reports” on the detection and removal of terror or violent extremist content.

Live-stream limits

Ahead of the summit, Facebook announced curbs on its live-streaming feature.

The tech giant said there would be a “one-strike policy” banning those who violate new Facebook Live rules.

In a statement, Facebook said that anyone sharing “violating content” — such as a statement from a terrorist group posted without context — would be blocked from using Facebook Live for a set period, for example 30 days.