Facebook Messenger used to fight extremism

By Catrin Nye and William Kremer, BBC Victoria Derbyshire programme and World Hacks


Facebook Messenger has been used to try to deradicalise extremists in a pilot project funded entirely by the company.

People posting extreme far-right and Islamist content in the UK were identified and contacted in an attempt to challenge their views.

Of the 569 people contacted, 76 had a conversation of five or more messages and eight showed signs it had a positive impact, researchers claim.

Privacy campaigners say it means Facebook is straying into surveillance.

Technology companies have been urged to do more to stop extremist material littering their sites following a series of cases involving people who were radicalised online.

This pilot was led by the counter-extremism organisation Institute for Strategic Dialogue (ISD), which says it was trying to mimic extremists’ own recruitment methods.

It told the BBC’s Victoria Derbyshire programme and BBC World Service’s World Hacks that it used software to scan several far-right and Islamist pages on Facebook for targets. It then manually reviewed their profiles for instances of violent, dehumanising and hateful language.

Terrorism survivors

It employed 11 “intervention providers” – former extremists, survivors of terrorism or trained counsellors – who were paid £25 per hour for eight hours’ work a week.

One was Colin Bidwell, who was caught up in the Tunisia terror attack in 2015.

Under a fake profile, he spoke to people who appeared to support Islamist extremism, including some who appeared to support the Tunisia gunman, and was tasked with challenging their views through chatty conversation and questions.

“I think I’m entitled to ask those questions after what I’ve been through,” he explained. “If there’s the smallest chance that I could make some form of difference or awareness, for me I’m in.”


Many did not respond, but some entered into long conversations. Mr Bidwell would talk a little about religion, about the effect the attack has had on his wife and how he worries for the future of his children in “such a violent world”.

“One of the things I would say is, ‘You can have your extreme beliefs, but when it gets to the extreme violence – that’s the bit I don’t understand’,” he said.

Other intervention providers would use different tactics depending on their background – a former extremist targeted young women, telling them she used to think as they did, but that violence was not the answer.

‘Back from the edge’

Roughly half the people they chose to try to chat with had shown support for Islamist extremism and half had far-right sympathies. The group was also split evenly between men and women.

The aim was to “walk them back from the edge, potentially, of violence”, said Sasha Havlicek, the chief executive of the ISD.

“We were trying to fill a really big gap in responses to online recruitment and radicalisation and that gap is in the direct messaging space.

Image caption: Sasha Havlicek believes direct messaging can be used to counter extremism

“There’s quite a lot of work being done to counter general propaganda with counter-speech and the removal of content, but we know that extremists are very effective in direct messaging,” she explained.

“And yet there’s no systematic work being done to reach out on that direct engagement basis with individuals being drawn into these groups.”

Privacy campaigners are concerned about the project, especially that Facebook funded something that broke its own rules by creating fake profiles.

Image caption: Millie Graham Wood says any posts found promoting extremism should have been taken down

Millie Graham Wood, a solicitor at the Privacy International charity, said: “If there’s stuff that they’re identifying that shouldn’t be there, Facebook should be taking it down.

“Even if the organisation [ISD] itself may have been involved in doing research over many years, that does not mean that they’re qualified to carry out this sort of… surveillance role.”

‘Really authentic’

Facebook funded the initiative but would not disclose how much it had spent. It said it did not give ISD special access to its users’ profiles.

Its public policy manager, Karim Palant, said the company does not allow the creation of fake profiles – which the project relied on – and that the research was done without Facebook interference.

“The research techniques and exactly what they did was a matter for them,” he said.

During conversations, the intervention providers did not volunteer the fact that they were working for the ISD, unless asked directly. This happened seven times during the project, and on those occasions the conversation ended, sometimes after a row.

Overall, researchers claim that eight of the 569 people contacted showed signs, in their conversations, of rethinking their views.

Despite the small numbers involved, the ISD argues the pilot showed that online counter-extremism conversations can make a difference.

It now wants to explore how the approach could be expanded both in the UK and overseas, and how a similar method could be used on platforms such as Instagram, Reddit and Twitter.