The company has been accused of allowing foreign intervention in the 2016 presidential election in the United States, and has been asked to give assurances it will fend off abuse in the May European elections.

Concerns about foreign interference in our own upcoming election grew in the wake of a cyber attack on federal parliament in February.

This week Facebook founder and CEO Mark Zuckerberg said the company had cleaned up its platform at a number of elections since 2016.

But he stressed: “I don’t think anyone can guarantee in a world where you have nation states that are trying to interfere in elections, there’s no single thing we can do and say OK, we’ve now solved the issue.”


For Australia, the company will “combat foreign interference” with a ban on foreign political ads starting the day after Prime Minister Scott Morrison calls an election.

“The restriction will take effect the day after the election is called and will apply to ads we determine to be coming from foreign entities that are of an electoral nature, meaning they contain references to politicians, parties or election suppression,” Ms Garlick said today.

“We also won’t allow foreign ads that include political slogans and party logos.”

Greater transparency will also be applied to advertisements, with Facebook updating its Ad Library feature “to make it easier to learn about all ads on Facebook and the Pages that run them”.

“The Ad Library now includes all active ads any Page is running, along with more Page information such as creation date, name changes, Page merges, and primary country location of people who manage Pages with large audiences,” she said.

“Shining a brighter light on advertising and Pages makes both Facebook and advertisers more accountable, which is good for people and good for democracy.”

Facebook has entered into a third-party fact-checking deal for Australia with Agence France-Presse to “review stories, check their facts, and rate their accuracy”.

“The independent fact-checkers we work with are certified through the nonpartisan International Fact-Checking Network,” Ms Garlick said.

“Once a story is rated as false, we show it lower in the News Feed.

“In our past experience, once a story is rated as false, we’ve been able to reduce its future views by more than 80 per cent on average.”

There will also be a redoubled effort to remove “false news” and offensive content.

“We remove content that violates our community standards, which helps enforce the safety and security of the platform.”

This could also involve removing or reducing the visibility of articles which “do not directly violate our community standards but still undermine the authenticity of the platform — like clickbait or sensational material”.

A cyber attack on parliament raised concerns about the security of our own electoral process.

Platform users will also be given “more context” around the information they see on Facebook.

For instance, when someone comes across a story, they can tap on the “About this article” button to see more details on the story and its publisher.

The other major Facebook measure will be the removal of fake accounts from the platform.

“We block millions of fake accounts at registration every day,” said Facebook.

“And we continuously build and update our technical systems to make it easier to respond to reports of abuse, detect and remove spam, identify and eliminate fake accounts, and prevent accounts from being compromised.

“We’ve also made improvements to recognise these inauthentic accounts more easily by identifying patterns of activity, without assessing account content.

“Globally, we took action on more than 1.5 billion fake accounts between April and September 2018, either during the registration process or within minutes of the account being created.

“We found 99.6 per cent of these accounts using technology before anyone reported them to us.”

The company says it will also increase safety and security efforts to tackle all kinds of “inauthentic” behaviour and abuse on the platform. That includes acts such as phishing, harassment and threats of violence.

“We know that all of these tactics can intensify during elections, which is why we invest in a combination of expert resources and technology to find, disrupt and remove this type of behaviour,” Ms Garlick said.

“We now have more than 30,000 people working on safety and security across Facebook, three times as many as we had in 2017.

“We have also improved our machine learning capabilities around political content and inauthentic behaviour, which allows us to better find and remove violating behaviour.”

She said Facebook wanted to send a clear message ahead of the election.

“Through this work, we want to make it harder to interfere with elections on the platform, and easier for people to make their voices legitimately heard in the political process,” she said.

“We have dedicated global teams working around the clock on every upcoming election around the world, including the federal election in Australia.”