The leaders of Britain, France and Italy are setting an ambitious goal for tech companies to tackle online postings that promote terrorism: Take them down within an hour or two.

Convening world and tech leaders Wednesday at the United Nations, British Prime Minister Theresa May said that internet companies are making progress but need to go “further and faster” to keep violent extremist material from spreading online.

The average lifetime of Islamic State extremists’ online propaganda shrank from six days to 36 hours in the first six months of this year, May said.

“That is still 36 hours too long,” she said.

French President Emmanuel Macron and Italian Prime Minister Paolo Gentiloni joined May in leading what she called a first-of-its-kind session on the sidelines of the annual U.N. General Assembly meeting of global leaders.

It comes as internet services are facing increasing pressure to rid themselves of messages that, authorities say, provide inspiration and instructions for militant attacks. With potential legal consequences looming - May and Macron have suggested their countries could impose legal liability and fines if tech companies don’t do enough to deal with extremist material - online giants are eager to show they’re taking the issue seriously.

This summer, Facebook, Microsoft, Twitter and Google-owned YouTube launched a joint counterterrorism initiative to collaborate on technology and work with experts. Menlo Park, California-based Facebook announced it had started using its artificial intelligence capabilities to find and remove extremist content, as it does to block child pornography. The company now has 150 engineers, content reviewers, language specialists, academics and former law enforcement figures focused on counterterrorism, the company's global policy and counterterrorism head, Monika Bickert, told the U.N. gathering Wednesday.

San Francisco-based Twitter recently said it suspended 300,000 accounts for promoting terrorism in the first six months of this year alone, the vast majority flagged by its own internal tools before they had posted anything. YouTube has more than doubled the number of violent extremist videos removed in recent months, Google Senior Vice President Kent Walker said Wednesday as he announced the Mountain View, California-based company would commit millions of dollars to research on combating extremist content online.

“Removing all of this content within a few hours, or even stopping it from getting there in the first place, poses an enormous technological and scientific challenge that we continue to undertake,” he told the world leaders. “The haystacks here are unimaginably large, and the needles are both really small and constantly changing.”

Another challenge: taking on extremist postings without impinging on free speech. Walker acknowledged "we still don't always get this right": YouTube's machine learning systems recently removed activists' videos from Syria's civil war while searching for graphic or pro-terrorist material, for example. The company said it would restore any videos improperly taken down, and at least some have already been returned.

There are other issues at play, as well: “We all know there are economic interests there, there are privacy problems,” Gentiloni said. But “we can’t reduce our ambition because of the difficulties.”