Facebook, Google and Twitter told to do more to fight fake news ahead of European elections

A first batch of monthly progress reports from tech giants and advertising companies on what they’re doing to help fight online disinformation have been published by the European Commission.

Platforms including Facebook, Google and Twitter signed up to a voluntary EU code of practice on the issue last year.

The first reports cover measures taken by platforms up to December 31, 2018.

The implementation reports are intended to detail progress towards the goal of putting the squeeze on disinformation — such as by proactively identifying and removing fake accounts — but the European Commission has today called for tech firms to intensify their efforts, warning that more needs to be done in the run up to the 2019 European Parliament elections, which take place in May.

The Commission announced a multi-pronged action plan on disinformation two months ago, urging greater co-ordination on the issue between EU Member States and pushing for efforts to raise awareness and encourage critical thinking among the region’s people.

But it also heaped pressure on tech companies, especially, warning it wanted to see rapid action and progress.

A month on, the Commission sounds less than impressed with tech giants’ ‘progress’ on the issue.

Although, as we reported at the time, the code suffered from a failure to nail down terms and requirements — suggesting not only that measuring progress would be tricky but that progress itself might prove an elusive and slippery animal.

The first response certainly looks to be a mixed bag. Which is perhaps expected given the overarching difficulty of attacking a complex and multi-faceted problem like disinformation quickly.

Though there’s also little doubt that opaque platforms used to getting their own way with data and content are going to be dragged kicking and screaming towards greater transparency. Hence it suits their purpose to be able to produce multi-page chronicles of ‘steps taken’, which allows them to project an aura of action — while continuing to indulge in their preferred foot-drag.

The Guardian reports especially critical comments made by the Commission vis-a-vis Facebook’s response, for example — with Julian King saying at today’s press conference that the company still hasn’t given independent researchers access to its data.

“We need to do something about that,” he added.

Here’s the Commission’s brief rundown of what’s been done by tech firms but with emphasis firmly placed on what’s yet to be done:

Facebook has taken or is taking measures towards the implementation of all of the commitments but now needs to provide greater clarity on how the social network will deploy its consumer empowerment tools and boost cooperation with fact-checkers and the research community across the whole EU.

Google has taken steps to implement all its commitments, in particular those designed to improve the scrutiny of ad placements, transparency of political advertisement and providing users with information, tools and support to empower them in their online experience. However some tools are only available in a small number of Member States. The Commission also calls on the online search engine to support research actions on a wider scale.

Twitter has prioritised actions against malicious actors, closing fake or suspicious accounts and automated systems/bots. Still, more information is needed on how this will restrict persistent purveyors of disinformation from promoting their tweets.

Mozilla is about to launch an upgraded version of its browser to block cross-site tracking by default but the browser maker should be more concrete on how this will limit the information revealed about users’ browsing activities, which could potentially be used for disinformation campaigns.

Commenting in a statement, Mariya Gabriel, commissioner for digital economy and society, said: “Today’s reports rightly focus on urgent actions, such as taking down fake accounts. It is a good start. Now I expect the signatories to intensify their monitoring and reporting and increase their cooperation with fact-checkers and research community. We need to ensure our citizens’ access to quality and objective information allowing them to make informed choices.”

Strip out the diplomatic fillip and the message boils down to: Must do better, fast.

All of which explains why Facebook got out ahead of the Commission’s publication of the reports by putting its fresh-in-post European politician turned head of global comms, Nick Clegg, on a podium in Brussels yesterday — in an attempt to control the PR message about what it’s doing (or rather not doing, as the EC sees it) to boot fake activity into touch.

Clegg (re)announced more controls around the placement of political ads, and said Facebook would set up new human-staffed operations centers — in Dublin and Singapore — to monitor how localised political news is distributed on its network.

Although the centers won’t launch until March. So, again, not something Facebook has done.

The staged press event with Clegg making his maiden public speech for his new employer may have backfired a bit because he managed to be incredibly boring. Although making a hot button political issue as tedious as possible is probably a key Facebook strategy.

Anything to drain public outrage to make the real policymakers go away.

(The Commission’s brandished stick remains regulation: if it doesn’t see enough voluntary progress from platforms via the Code, it says it could move towards regulating to tackle disinformation.)

Advertising groups are also signed up to the voluntary code. And the World Federation of Advertisers (WFA), European Association of Communication Agencies and Interactive Advertising Bureau Europe have also submitted reports today.

In its report, the WFA writes that the issue of disinformation has been incorporated into its Global Media Charter, which it says identifies “key issues within the digital advertising ecosystem”, as its members see it. It adds that the charter makes the following two obligation statements:

We [advertisers] understand that advertising can fuel and sustain sites which misuse and infringe upon Intellectual Property (IP) laws. Equally advertising revenue may be used to sustain sites responsible for ‘fake news’ content or ‘disinformation’. Advertisers commit to avoiding (and support their partners in the avoidance of) the funding of actors seeking to influence division or seeking to inflict reputational harm on business or society and politics at large through content that appears false and/or misleading.

While the Code of Practice doesn’t contain a great deal of quantifiable substance, some have read its tea-leaves as a sign that signatories are committing to bot detection and identification — by promising to “establish clear marking systems and rules for bots to ensure their activities cannot be confused with human interactions”.

But while Twitter has previously suggested it’s working on a system for badging bots on its platform (i.e. to help distinguish them from human users) nothing of the kind has yet seen the light of day as an actual Twitter feature. (The company is busy experimenting with other kinds of stuff.) So it looks like it also needs to provide more info on that front.

We reached out to the tech companies for comment on the Commission’s response to their implementation reports.

Google emailed us the following statement, attributed to Lie Junius, its director of public policy:

Supporting elections in Europe and around the world is hugely important to us. We’ll continue to work in partnership with the EU through its Code of Practice on Disinformation, including by publishing regular reports about our work to prevent abuse, as well as with governments, law enforcement, others in our industry and the NGO community to strengthen protections around elections, protect users, and help combat disinformation.

A Twitter spokesperson also told us:

Disinformation is a societal problem and therefore requires a societal response. We continue to work closely with the European Commission to play our part in tackling it. We’ve formed a global partnership with UNESCO on media literacy, updated our fake accounts policy, and invested in better tools to proactively detect malicious activity. We’ve also provided users with more granular choices when reporting platform manipulation, including flagging a potentially fake account.

At the time of writing Facebook had not responded to a request for comment.