Facebook considered shutting down livestreaming in response to March 15

Facebook decided against shutting down its livestreaming service after the terror attack on March 15 because "evil" people will simply use a different platform.

Nick Clegg, Facebook's VP of global policy and communications, told the Herald that the company had looked at several ways to make its services safer after the Christchurch terror attack.

"Given evil people will want to do heinous things ... we've looked at this very carefully, including sort of disabling it altogether.

"We don't actually think that in itself would stop the motivation, certainly, and the ability of a terrorist to livestream in a coordinated way, which appears to be the case [on March 15].

"Even if you remove the product, there'd be plenty of other ways to livestream."

Clegg was representing Facebook in Paris at the Christchurch Call summit, where 17 countries, the European Commission and several tech companies signed the Call to Action to work together to eliminate terrorist and violent extremist content online.

In what is believed to be a world-first, major tech companies Microsoft, Twitter, Facebook, Google and Amazon released a joint statement saying they would set out concrete steps to address the abuse of technology to spread terrorist content.

Facebook boss Mark Zuckerberg, who was not in Paris for the summit, has said that putting a delay on livestreaming as an added safeguard would fundamentally break the service.

Clegg echoed this sentiment, saying it was "something that is used by millions of people for good, decent, happy, innocent reasons - it's part of the suite of things we provide so people can express themselves as fully and freely as they like".

Yesterday, Facebook also said it would change its policy and restrict more users who have broken certain rules from livestreaming.

Clegg, the UK's former deputy prime minister, said if the changes had been in place on March 15, the gunman would not have been able to livestream his video.

He said the risk of another March 15 could never be fully eliminated in a "free country", so the question should be how to minimise it.

"We're trying to narrow the funnel so if someone has contravened our rules, that can lead to them being disabled from using live altogether."

In the Call to Action, tech companies agreed to share the outcomes of their algorithms and counter the drivers of terrorism, including the potential of algorithms to lead users down an online rabbit hole to radicalisation.

But Clegg said that did not happen at Facebook.

"We do not design an algorithm to send people down a rabbit hole of ever-more extreme material."

He couldn't speak for other online platforms, but he said Facebook's algorithms filtered content down to what users wanted to see, based on online behaviour such as the groups they follow and the posts they engage with.

"Would you produce a newspaper that actively has material that no one wants to read? Why would we produce a product no one wants to use?

"We have already taken the most advanced step in lifting the veil on all of this by introducing a feature which is 'Why am I seeing this?' You can go to what you're looking at and it will explain to you why you're seeing this."

He said Facebook also used algorithms to diminish or delete violent and hateful content, and yesterday it announced US$7.5 million in funding to develop that technology.