recruit more people to act as “trusted flaggers” who could make final decisions about videos its software struggled to classify

take a tougher stance on videos that violated YouTube policies

expand the work it did to help counter-radicalisation efforts

More work

In addition, it said, it would work with Facebook, Microsoft and Twitter to establish an industry body that would produce technology other smaller companies could use to police problematic content.

“Extremists and terrorists seek to attack and erode not just our security, but also our values; the very things that make our societies open and free,” wrote Kent Walker, Google’s general counsel. “We must not let them.”

Labour MP Yvette Cooper said Google’s pledge to take action was “welcome”.

As chair of the House of Commons Home Affairs Select Committee, Ms Cooper oversaw a report that was heavily critical of social networks and the efforts they took to root out illegal content.

“The select committee recommended that they should be more proactive in searching for – and taking down – illegal and extremist content, and to invest more of their profits in moderation,” she said.

“News that Google will now proactively scan content and fund the trusted flaggers who were helping to moderate their own site is therefore important and welcome, though there is still more to do,” she added.

Google’s announcement comes a few days after Facebook made a similar pledge, which would involve it deploying artificial intelligence software to police what people post.