On Wednesday, French President Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern, along with other leaders and high-profile executives from Google, Facebook and Twitter, endorsed non-binding commitments — dubbed the "Christchurch Call" after the March 15 terror attack — aimed at curbing the spread of terrorist material on the web.

The proposals were presented exactly two months after the terror attack in which 51 people were murdered at two mosques in New Zealand, in shootings livestreamed on Facebook by the alleged gunman, who had also posted a hate-filled manifesto online. Both the video of the attack and the manifesto were circulated widely online, despite efforts to remove them from platforms like Facebook's Instagram and Google's YouTube.

“Our objective was simple: that what happened … never happens again,” Macron said at a press conference at the Elysée Palace in Paris alongside Ardern.

“The social media dimension of this attack was unprecedented and our response today … is unprecedented," Ardern told the press. "Never before have countries and tech companies come together and committed to an action plan to develop new technologies to make our communities safer.”

The announcement came amid growing calls from politicians worldwide for social media companies to do more to tackle the hate speech, disinformation and terrorist material that now proliferates online.

But amid mounting demands for new legislation, policymakers, tech executives and freedom of speech campaigners have yet to decide how best to protect people online while not harming their right to freedom of speech.

In support of the Christchurch Call, Microsoft, Twitter, Facebook, Google and Amazon committed to a nine-point action plan to tackle the spread of terrorist content online.

Some of the actions repeat the Christchurch Call, including cross-industry cooperation, an update of community standards to prohibit extremist content and regular transparency reports.

The tech companies also pledged to "identify appropriate checks" to ensure that livestreaming a terror attack is no longer possible. Earlier Wednesday, Facebook announced that users who break the site’s rules would be barred from using Facebook Live for a certain amount of time.

The leaders and the text of the call did not specify an implementation mechanism. Macron said that would be discussed in a meeting in Aqaba, Jordan in June between tech CTOs and government technical advisers. He also said the first results of the call would be shared during the United Nations General Assembly in September in New York.

“In September we will list the concrete actions taken on the basis of this call and the content taken down based on this call, and we will attempt to bring more companies and countries into the call,” he said.

While the leaders and tech executives gathered in Paris renewed their calls for action — countries like France, Germany and the U.K. already have laws or proposals to limit the posting of extremist content online — the push announced Wednesday still does not grapple with the underlying problem: how to regulate the global internet when few policymakers can agree on legally binding rules to stop the worst content from being shared online.

"The future of how we decide what is legitimate and non-legitimate speech is at stake," said David Kaye, a United Nations special rapporteur for freedom of expression and digital communications, who recently published a book on global internet governance.

"These are matters of public debate and governments have a right to play a role," he added. "My concern is about the approach that they decide to take."

'Christchurch Call'

These concerns were not publicly addressed in Paris when government leaders and tech executives announced their latest proposals.

Based on previous discussions between Macron and Ardern about curbing hate speech, the commitments quickly garnered support from many of the world's largest tech companies, which have embraced pro-regulation stances after years of fighting legislation.

Nick Clegg, a former British deputy prime minister and now Facebook's chief lobbyist, was joined by other senior tech executives, including Twitter's Chief Executive Jack Dorsey, in backing the new efforts to clamp down on digital extremist material. Leaders from eight countries including Indonesia, Norway and Canada, as well as the European Union's Jean-Claude Juncker, threw their support behind the proposals.

The United States, which is home to wide freedom of expression protections and the world’s most important tech companies, was notably not among the endorsers.

“While the United States is not currently in a position to join the endorsement, we continue to support the overall goals reflected in the Call,” the White House’s Office of Science and Technology Policy said in a press release. “We will continue to engage governments, industry, and civil society to counter terrorist content on the internet.”

Ardern downplayed the U.S. absence, underlining instead that the White House statement “demonstrates broad and unquestionable support for the call.” Macron was less enthusiastic, saying “we will do everything so that we are able to get a concrete and more formal commitment” from the U.S.

Nevertheless, he described the U.S. statement as “positive” and chose to highlight the presence of Canadian Prime Minister Justin Trudeau as a sign of the “North American world moving.”

“The internet is global and online threats have no borders," said Theresa May, the U.K. prime minister, ahead of the announcement. "Companies should be held to consistent international standards, so their customers enjoy the same level of protection wherever they live."

As part of the commitments, the governments promised to enforce existing rules that limit the spread of terrorist material, as well as consider potential new regulation or standards to reduce the dissemination of such content as long as it does not interfere with a so-called open internet.

The companies, all of which already have strict — if patchy — rules on what can be posted on their platforms, agreed to enforce their existing community standards to remove hateful or extreme material, and ensure that their algorithms do not help the spread of violent content by promoting such material on their sites.

Both governments and tech companies further agreed to work together with local law enforcement to investigate illegal behavior, as well as to cooperate in responding to future terrorist attacks, particularly when extremist material is shared online.

“We share the commitment of world leaders to curb the spread of terrorism and extremism online,” Facebook's Clegg said in a statement ahead of the official announcement. "We are committed to working with world leaders, governments, industry and safety experts."

Time running out

Global leaders and tech executives put on a unified front on Wednesday, highlighting how both sides are eager to find ways to clamp down on the worst forms of online speech.

But behind such glad-handing, politicians are quickly running out of patience with social media companies, particularly when terrorist material, including the livestreamed attack on the Christchurch mosques, is still available on parts of the internet.

Already, governments in London, Paris and Berlin are preparing, or have passed, new legislation to force tech companies to better monitor what is posted on their networks. That includes hefty fines of up to €50 million in Germany for the failure to remove digital hate speech within 24 hours, although so far, no tech company has been penalized.

EU officials and American lawmakers are similarly discussing whether to reopen the debate about making tech companies legally liable for content on their platforms (currently, they are not) — a move that would likely lead to significant lobbying in both Brussels and Washington.

Despite the participation of many of the world's largest tech companies in the so-called Christchurch Call, the absence of smaller platforms — most notably the likes of 4chan and Discord, where much of the extremist material still circulates — leaves a significant hole in governments' ability to track and remove the most extreme content from the internet.

The likes of Google and Facebook claim that it is not their responsibility to police the internet, and that it is often difficult to draw a line between illegal material online that should be removed from content that, although unpalatable to most, should be protected under the right to freedom of speech.

“It’s clear that something has to be done to combat harmful content,” said Martin Moore, director of the Centre for the Study of Media, Communication and Power at King’s College London. “But we quickly descend into arguments about what that harm is. The devil is in the detail.”