Facebook has launched a UK arm to its international fact-checking initiative following more than two years of criticism about how the social network has handled the spread of misinformation on its platform.

Full Fact, a fact-checking charity founded in 2010, will review stories, images and videos which have been flagged by users and rate them based on their accuracy.

The charity's efforts will focus on misinformation it perceives to be the most damaging, such as fake medical information, false stories around terror attacks and hoaxes around elections.

HOW DOES FACEBOOK'S FAKE NEWS FACT-CHECKING WORK?

Under the new measures, Facebook users will be able to report posts they fear may be inaccurate for Full Fact to review, while other suspicious posts will be identified by Facebook technology.

Posts will then be labelled as true, not true or a mixture when users share them.

If a piece of content is proven to be false, it will appear lower in Facebook's News Feed, but will not be deleted.

Facebook's leadership has been repeatedly criticised by politicians in recent years as problems of misinformation and foreign interference have plagued elections around the world.

The Brexit referendum and 2017 general election were both found to have been tarnished by so-called fake news, while online mistruths around the NHS and immigration have been blamed for stoking division in nations around the world.

Social media companies have faced the threat of regulation if they fail to act on false information on their platforms, and Facebook has been called to answer questions from lawmakers in numerous countries on the subject.

In a highly publicised evidence session before the US Congress in April, founder Mark Zuckerberg addressed the company's failings on false information and the data scandal involving Cambridge Analytica.

However, he failed to appear when called to the UK Parliament's inquiry into fake news, prompting MPs to leave an empty chair for him during a session with vice-president Richard Allan in November.

Claire Wardle, executive director of First Draft, which worked with Full Fact on the 2017 general election, said the biggest problem is that Facebook holds all the information about the project, making it almost impossible for independent auditors to see whether it is working.

'Facebook has this global database of online misinformation and that is something that should be available to researchers and the public,' said Dr Wardle.

'The first concern is to protect free speech and people's ability to say what they want,' said Will Moy, director of Full Fact, adding that the main problem on social media is often that 'it is harder and harder to know what to trust'.

Rather than the 'nuanced political fact-checking' on topics such as Brexit and immigration often found on Full Fact's website, Mr Moy predicted misinformation around health will be one of the biggest issues his team will be tackling.

Facebook first launched its fact-checking initiative in December 2016, after concerns were raised about hoaxes and propaganda spread around the election of Donald Trump.

The social network now works with fact-checkers in more than 20 countries to review content on its platform but studies disagree as to whether their efforts have been effective.

Full Fact will publish all its fact-checks on its website, Mr Moy said, as well as quarterly reports reviewing the relationship with Facebook.

Sarah Brown, training and news literacy manager, EMEA at Facebook, said in a statement: 'People don't want to see false news on Facebook, and nor do we.

'We're delighted to be working with an organisation as reputable and respected as Full Fact to tackle this issue.

'By combining technology with the expertise of our factchecking partners, we're working continuously to reduce the spread of misinformation on our platform.'

WHAT DO FACEBOOK'S GUIDELINES FOR CONTENT SAY?

Facebook has disclosed its rules and guidelines for deciding what its 2.2 billion users can post on the social network.

The full guidelines are published on Facebook's website. Below is a summary of what they say:

1. Credible violence

Facebook says it considers the language, context and details in order to distinguish casual statements from content that constitutes a credible threat to public or personal safety.

2. Dangerous individuals and organisations

Facebook does not allow any organisations or individuals engaged in terrorist activity, organised hate, mass or serial murder, human trafficking, organised violence or criminal activity.

3. Promoting or publicising crime

Facebook says it prohibits people from promoting or publicising violent crime, theft and fraud. It does not allow people to depict criminal activity or admit to crimes they or their associates have committed.

4. Coordinating harm

The social network says people can draw attention to harmful activity that they may witness or experience as long as they do not advocate for or coordinate harm.

5. Regulated goods

The site prohibits attempts to purchase, sell or trade non-medical drugs, pharmaceutical drugs and marijuana, as well as firearms.

6. Suicide and self-injury

The rules for 'credible violence' also apply to suicide and self-injury.

7. Child nudity and sexual exploitation of children

Facebook does not allow content that sexually exploits or endangers children. When it becomes aware of apparent child exploitation, it reports it to the National Center for Missing and Exploited Children (NCMEC).

8. Sexual exploitation of adults

The site removes images that depict incidents of sexual violence and intimate images shared without permission from the people pictured.

9. Bullying

Facebook removes content that purposefully targets private individuals with the intention of degrading or shaming them.

10. Harassment

Facebook's harassment policy applies to both public and private individuals.

It says that context and intent matter, and that the site will allow people to share and re-share posts if it is clear that something was shared in order to condemn or draw attention to harassment.

11. Privacy breaches and image privacy rights

Users should not post personal or confidential information about others without first getting their consent, says Facebook.

12. Hate speech

Facebook does not allow hate speech on its platform because, it says, it creates an environment of intimidation and exclusion and in some cases may promote real-world violence.

13. Graphic violence

Facebook will remove content that glorifies violence or celebrates the suffering or humiliation of others.

It will, however, allow graphic content (with some limitations) to help people raise awareness about issues.

14. Adult nudity and sexual activity

The site restricts the display of nudity or sexual activity.

It will also default to removing sexual imagery to prevent the sharing of non-consensual or underage content.

15. Cruel and insensitive

Facebook says it has higher expectations for content that it defines as cruel and insensitive.

It defines this as content that targets victims of serious physical or emotional harm.