Facebook is expanding its fake news spotting systems to include photos and videos as part of its ongoing battle to halt the spread of misinformation on its service.

Following successful trials in France, India, and Mexico, the company said it will now roll out the system in 17 countries worldwide in a bid to stem what it has branded ‘misinformation in these new visual formats.’

The Artificial Intelligence (AI) system feeds potentially fake content to human fact-checkers, who use visual verification techniques such as reverse image searching and analysing image metadata to check the veracity of photos and videos.

Previously, the company’s efforts to tackle misinformation had been focused on rooting out false articles and webpage links.

Russian agents and other malicious groups seeking to influence democratic elections in the US and elsewhere have repeatedly used images and video.

These carry more visual appeal than text or false articles and are also harder to spot using fake news tracking software, which typically hunts for keywords in text.


Pictured is a hoax video (left) and article (right) posted to Facebook. The fake news stories claimed Nasa had confirmed the Earth will go dark for several days. Facebook is expanding its fake news tracking software to include photos and videos

HOW IS FACEBOOK TRACKING DOWN FAKE PHOTOS AND VIDEO?

Facebook uses AI to track down potentially false photos and videos.

This machine learning software uses various signals, including feedback from Facebook users, to identify false content.

The company then sends these photos and videos to human fact-checkers for review, much like its fake news systems for misleading articles.

Fact-checkers use ‘visual verification techniques’ to rate whether or not an image is fake.

These include reverse image searching and analysing when and where the photo or video was taken.

Once something has been flagged as fake news, a warning pops up on the site labelling it as such.
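The flag-then-review flow described above can be loosely sketched in code. Facebook has not published its actual signals, weights or thresholds, so every field name and number below is invented purely for illustration:

```python
# Toy sketch of a flag-and-review queue: weighted user-feedback
# signals produce a "potentially false" score, and high-scoring
# posts are routed to human fact-checkers. All signal names and
# weights here are illustrative, not Facebook's real system.

# Hypothetical engagement signals for two posts.
posts = [
    {"id": "photo-1", "false_reports": 40, "angry_reacts": 120, "shares": 900},
    {"id": "photo-2", "false_reports": 1,  "angry_reacts": 3,   "shares": 20},
]

WEIGHTS = {"false_reports": 5.0, "angry_reacts": 0.5, "shares": 0.1}
REVIEW_THRESHOLD = 100.0  # invented cut-off for human review


def suspicion_score(post):
    """Weighted sum of engagement signals (illustrative only)."""
    return sum(WEIGHTS[key] * post[key] for key in WEIGHTS)


# Posts above the threshold go into the fact-checkers' queue.
review_queue = [p["id"] for p in posts if suspicion_score(p) >= REVIEW_THRESHOLD]
print(review_queue)  # → ['photo-1']
```

In a real pipeline the scoring would be a trained model rather than a fixed weighted sum, and — as the article notes — fact-checkers' ratings would feed back to improve it.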

Facebook said it has been testing the image fact-checks since the spring, beginning with a trial alongside French news agency AFP.

Now, it will send disputed photographs and videos to 27 fact-checking organisations in 17 countries to verify the flagged content.

The company has remained tight-lipped on the criteria it employs to evaluate photos and videos and how much an image can be edited before it is ruled fake.

‘We know that this kind of sharing is particularly compelling because it’s visual,’ said Facebook product manager Antonia Woodford, announcing the roll-out in a blog post.

‘It also creates an easy opportunity for manipulation by bad actors.

‘We have built a machine learning model that uses various engagement signals, including feedback from people on Facebook, to identify potentially false content.



‘We then send those photos and videos to fact-checkers for their review, or fact-checkers can surface content on their own.’

Ms Woodford said Facebook’s fact-checkers and algorithms are searching for three types of fake news commonly spread through images and video.

These include content that has been ‘manipulated or fabricated’, used out of context, or combined with text or audio that makes false claims.

Facebook’s algorithms are searching for three types of fake news commonly spread through images and video. These include content that has been ‘manipulated or fabricated’ (left), used out of context (centre), or combined with text or audio that makes false claims (right)

Fact-checkers use visual verification techniques, such as reverse image searching and analysing when and where the photo or video was taken.

The teams combine this work with input from experts, academics and government agencies, Facebook claimed.
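One simple check of this kind is comparing when a photo was actually taken with when the caption claims its events happened. The sketch below simulates that idea with hard-coded dates — real fact-checkers would read the capture date from the image file’s EXIF metadata using an image library:

```python
from datetime import date

# Toy "metadata check": compare a photo's (simulated) EXIF capture
# date with the date of the event its caption claims it shows.
# The metadata here is hard-coded for illustration only.


def capture_matches_claim(exif_capture: date, claimed_event: date,
                          tolerance_days: int = 2) -> bool:
    """True if the capture date falls within `tolerance_days`
    of the claimed event date."""
    return abs((exif_capture - claimed_event).days) <= tolerance_days


# A photo shot in December 2016 but recaptioned as an April 2018
# event would fail the check; a one-day gap would pass.
print(capture_matches_claim(date(2016, 12, 14), date(2018, 4, 7)))  # → False
print(capture_matches_claim(date(2018, 4, 7), date(2018, 4, 8)))    # → True
```

Metadata can of course be stripped or forged, which is why such checks are only one signal alongside reverse image searching and human judgement.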

‘As we get more ratings from fact-checkers on photos and videos, we will be able to improve the accuracy of our machine learning model,’ Ms Woodford said.

‘We are also leveraging other technologies to better recognise false or misleading content.

Facebook is upgrading its fake news spotting software to scan photos and videos in its fight to cull the spread of misinformation on its service. Pictured is Facebook CEO Mark Zuckerberg at the firm’s F8 developer conference in May, where the spread of fake news was a key topic

‘For example, we use optical character recognition (OCR) to extract text from photos and compare that text to headlines from fact-checkers’ articles.

‘We are also working on new ways to detect if a photo or video has been manipulated.’
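The OCR comparison Ms Woodford describes can be approximated with a crude string-similarity check. In this sketch the “OCR output” is hard-coded (a real system would run an OCR engine over the image) and the similarity measure is Python’s `difflib` ratio, not whatever matching Facebook actually uses:

```python
import difflib

# Toy version of the OCR comparison step: text pulled out of an
# image is matched against headlines of already-debunked stories.
# Both the headlines and the "extracted" text are illustrative.

debunked_headlines = [
    "nasa confirms earth will go dark for six days",
    "bbc names world's most corrupt prime ministers",
]


def best_debunk_match(ocr_text: str):
    """Return (headline, similarity in 0..1) of the closest
    debunked headline, using difflib's ratio as a rough measure."""
    scored = [(h, difflib.SequenceMatcher(None, ocr_text.lower(), h).ratio())
              for h in debunked_headlines]
    return max(scored, key=lambda pair: pair[1])


headline, score = best_debunk_match("NASA confirms Earth will go dark for 6 days")
print(headline, round(score, 2))  # matches the Nasa hoax headline
```

A production system would use more robust fuzzy matching and handle OCR noise, but the principle — route image text through the same headline-matching used for article fact-checks — is the same.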

Russia has repeatedly been accused of using memes and other viral images to influence western elections.

WHAT TYPES OF FAKE PHOTOS AND VIDEO IS FACEBOOK SEARCHING FOR?

Facebook’s fact-checkers and algorithms are searching for three types of fake news commonly spread through images and video.

1) Manipulated or fabricated: Content that has been edited or doctored to spread fake news.

Facebook gives an example in which the face of Mexican politician Ricardo Anaya was photoshopped onto a US Green Card ahead of a key election.

The photo was created to make people believe he was from Atlanta, Georgia, despite running for election in Mexico.

2) Out of context: Facebook posts that take images out of their original context to spread misinformation.

An example given by Facebook shows a user claiming a Syrian girl seen in several photos is an ‘actor’ used as part of a western propaganda campaign.

The post appears to suggest the injured child was spotted in photos of three ‘attacks’ carried out by the forces of Putin-backed Bashar Hafez al-Assad.

Facebook’s fake-news system was able to confirm that the photos posted were from the same attack on the Syrian city of Aleppo.

3) Text or audio claim: Facebook photo or video that is layered with text or audio that contains fake news.

A photo posted with a hoax caption picked out by Facebook claimed that Indian Prime Minister Narendra Modi was rated by BBC ‘researchers’ as 2018’s seventh ‘most corrupt prime minister in the world’.

Edited photos and strong visuals were commonly spread on Facebook by Russian agents attempting to interfere with the 2016 US presidential election.

Facebook is better prepared to defend against efforts to manipulate the platform to influence elections, according to CEO Mark Zuckerberg.

‘We’ve found and taken down foreign influence campaigns from Russia and Iran attempting to interfere in the US, UK, Middle East, and elsewhere – as well as groups in Mexico and Brazil that have been active in their own country.’

Zuckerberg repeated his admission that Facebook was ill-prepared for the vast influence efforts on social media in the 2016 US election.

But he added that ‘today, Facebook is better prepared for these kinds of attacks.’

The billionaire also warned that the task is difficult because ‘we face sophisticated, well-funded adversaries. They won’t give up, and they will keep evolving.’

WHAT HAS FACEBOOK DONE TO TACKLE FAKE NEWS?

Following the shock November 2016 US election result, Mark Zuckerberg claimed: ‘Of all the content on Facebook, more than 99 per cent of what people see is authentic’.

He also cautioned that the company should not rush into fact-checking.

But Zuckerberg soon came under fire after it emerged fake news had helped sway the election results.

In response, the company rolled out a ‘Disputed’ flagging system that it announced in a December 2016 post.

The system meant users were responsible for flagging items that they believed were fake, rather than the company.

In April 2017, Facebook suggested the system had been a success.

It said that ‘overall false news has decreased on Facebook’ – but did not provide any proof.

‘It’s hard for us to measure because we can’t read everything that gets posted’, it said.

But it soon emerged that Facebook was not providing the full story.

In July 2017, Oxford researchers found that ‘computational propaganda is one of the most powerful tools against democracy,’ and Facebook was playing a major role in spreading fake news.

In response, Facebook said in August 2017 that it would ban pages that post hoax stories from advertising.

In September 2017, Facebook finally admitted during congressional questioning that a Russian propaganda mill had placed adverts on Facebook to sway voters around the 2016 campaign.

In December 2017, Facebook admitted that its flagging system for fake news was a failure.

Since then, it has used third-party fact-checkers to identify hoaxes, and then given such stories less prominence in the Facebook News Feed when people share links to them.

In January, Zuckerberg said Facebook would prioritise ‘trustworthy’ news by using member surveys to identify high-quality outlets.

Facebook has now quietly begun ‘fact-checking’ photos and videos to reduce fake news stories. However, the details of how it is doing this remain unclear.