Billions of images are uploaded to the internet every day, but many people aren’t fully aware of how their pictures can be used or abused by others. Alexandros Antoniou, Lecturer in Media Law at the University of Essex, explains the risks of uploading your photos to apps or websites, and the rather ambiguous legal situation.

The latest photo app craze can make you look like a movie star. Zao uses artificial intelligence to replace the faces of characters in film or TV clips with images of anyone whose photo you upload to the app.

The effect is startlingly realistic and shows just how far this sort of “deepfake” technology has come. But it also highlights how great the risks have become of making your photos available online where anyone can use or abuse them – and the limitations of the law in dealing with this issue.

One of the key problems is the legal right that companies have to use your photos once you upload them to their apps or websites. Several media reports state that Zao’s terms and conditions initially gave it “free, irrevocable, permanent, transferable, and relicenseable” rights. A backlash against this has now pushed the company that makes the app, Momo Inc, to change its terms. It says it won’t use headshots or videos uploaded by users except to improve the app and won’t store images if users delete them from their accounts.

But many other photo applications, such as the age or gender-changing FaceApp, retain similar rights to effectively do whatever they want with the uploaded content. With “a perpetual, irrevocable, non-exclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license”, FaceApp essentially gains all of the original owner’s rights to the photo (except that it does not have an exclusive licence).

These terms are not dissimilar to those of even more popular and mainstream apps such as Instagram, which opens its photo-feed to advertisers. Instagram can let other companies exploit users’ photos without any compensation and further pass this right on to a third party without additional permission.

And, of course, you don’t even have to give permission for someone to use your photos for them to do so anyway. Once they’re online, your images can be circulated without your knowledge. Studies have demonstrated that people’s understanding of online privacy settings remains largely limited. If privacy settings aren’t sufficient, anyone could access your photos.

This can include journalists, who can republish images from social media without necessarily violating privacy laws if the images have already entered the public domain. Photos are still protected by copyright law, but images taken from videos can sometimes be published under “fair dealing” provisions. It is also not uncommon for journalists to use screenshots of the web page on which an image is published, which the publication would not automatically have to remove if a user deleted the original photo.

The development of AI makes the potential consequences of giving up control of your photos even greater. The deepfake technology used by Zao and other apps can create manipulated photos and videos that are very hard to tell from the real thing.

This has already led to deepfake pornography, which involves superimposing someone’s face onto explicit images using an AI-based synthesis technique. This type of online abuse can expose victims to financial or emotional blackmail and ultimately cause them serious psychological and emotional harm.

Legal recourse

If you do find your image has been used in ways you never wanted, how can the law help? If it’s simply a case of the image being used within the terms and conditions you agreed to (and no other laws have been broken), you probably don’t have much recourse. Under the EU’s “right to be forgotten”, you could ask for your photos to be deleted from a company’s servers so they can’t make further use of them, although this isn’t an absolute right.

But the advent of AI apps creates another issue. While a company can delete your images from its servers, it may be impossible to remove the related data from AI software that has processed and learned from the pictures. This data may be effectively inaccessible, but it cannot be truly “forgotten”.

There are other ways in which the law seems out of step with the evolution of AI-altered images, particularly when they’re used for abusive and offensive purposes. In the UK, the recent law targeting “revenge pornography” only applies to private sexual images, which might not include non-private photographs transposed onto a sexual image of another person.

There might be other options for those who fall victim to the creation of deepfake pornography. The fact that people have been convicted of posting other people’s images on pornographic websites shows that the law of harassment can also be relied upon under certain circumstances. It might also be possible to make claims for misuse of private information, defamation or breach of copyright.

If someone tricks you into granting access to your social media profile and then takes photos from it to generate fake pornographic images, you could also engage data protection legislation. But it’s questionable whether civil legal actions such as defamation or privacy claims suitably recognise the harm that can be done with deepfakes in a way that a specific criminal charge would – and as yet no such charge exists.

The advent of social media has fundamentally challenged our expectations over how we control our photographs. The current patchwork of legal provisions needs to be reviewed to consider the emerging ways images are created and shared without a subject’s consent, so that these issues are addressed more widely and with a lasting impact. Along with a new regulatory framework for online safety, users need to be educated to appreciate the risks of online activity and to navigate online spaces responsibly.

This article is republished from The Conversation under a Creative Commons license. Read the original article. This article represents the views of the author, and not the position of the LSE Media Policy Project, nor of the London School of Economics and Political Science.

Discussion about the media in Argentina tends to blur together the explosive impact of the traditional media sector’s global economic crisis, the particularities of the national media culture, and the evaluation of the changing policies and rules of play laid down by the last two governments, led by Cristina Fernández de Kirchner (2007-2011 and 2011-2015) and Mauricio Macri (2015-2019). Argentina’s media policies have tended to be seen as an example for the Latin American region, both when Fernández de Kirchner boosted reforms and when Macri undid them by opening up the possibility of large players providing the whole set of converged services (pay TV, mobile, fixed connectivity and telephony, open TV and radio), which is not yet allowed in other countries such as Mexico or Brazil.

The global crisis in traditional media, as a result of the emergence of digital platforms in content distribution, is usually present in the debate. However, less is said about the local ingredients that combine the structure of the media sector – which is very dependent on the rules of the game set by the State – with public communication policies.

Macri’s government actively intervened in the area of communication policies, issuing numerous decrees that eliminated the restrictions on concentration set out in the audiovisual law, also known as the “Media Law”, passed by Congress in 2009. Macri abandoned the discourse of the democratization of the media put forward by Kirchnerism and replaced it with the need to attract investment. His administration amended legislation by means of executive orders, and its policies resulted in high media concentration in the hands of major players, the closure of media outlets and increasing job insecurity. The consequences are summarized by the Media Ownership Monitor (a project coordinated by Reporters Without Borders and Tiempo Argentino, a cooperative journal):

Grupo Clarín merged with Telecom to create one of the three largest business groups in Argentina and the most powerful in the history of local communications;

Over 3500 media workers were laid off or accepted voluntary retirement in Buenos Aires alone, according to the Buenos Aires Press Trade Union (SIPREBA);

State-owned media lost over half of the audience they had in 2015 and have laid off a record number of employees.

Over time, Macri’s communications policies included the promotion of convergence as a guiding concept of greater competition. However, official support for Grupo Clarín’s expansive needs, as this main media group decided to grow by acquiring one of the biggest telecommunications companies in the country, contradicted the rhetoric of fostering competition. In fact, competition was seriously damaged by the Cablevisión-Telecom merger (the largest in Latin American communications history), as the protests of its competitor Telefónica (Movistar) showed. Macri also failed to design a truly “convergent” regulatory framework and an independent regulator for the sector.

The main changes decreed during the years of Macri’s presidency were: the creation of a new communications authority (ENaCom), which depends on the President; a 15-year extension of broadcasting licenses; the authorization to transfer licenses without prior state agreement; the exclusion of cable TV service providers from compliance with audiovisual regulations; and the official move to merge the main cable operator of the largest multimedia group (Cablevisión, Grupo Clarín) with one of the two biggest telecommunications companies (Telecom). These measures derive mainly from the Decree of Necessity and Urgency (DNU) 267/15 and Decree 1340/16.

Hence Macri’s government signalled a change in direction from the public policies that his predecessor, Fernández de Kirchner, had developed during her two presidencies. While during Fernández de Kirchner’s time in power policies followed established norms and appealed to the protection of the public interest (even when her administrations did not actually achieve their goals), during Macri’s administration regulatory bodies were put at the service of the interests of the largest industrial players in the sector, without any prior comprehensive planning of the direction to be adopted.

The official endorsement of the merger between Telecom and Cablevisión, led by the shareholders of Grupo Clarín, was made possible by reports highly convenient to Grupo Clarín produced by ENaCom and the National Commission for the Defense of Competition, a body that is also politically and administratively dependent on the President. The merger is the largest concentration in the history of communications in Latin America and consolidates the construction of a large national group as a result of four years of sectoral policy. For a President who wanted to enter the OECD, this is not a good example of public policy. Macri’s favoritism towards Clarín does not foster competition and protects an actor with monopolistic characteristics. When Grupo Clarín and Fintech sealed the merger, the combined services of Cablevisión and Telecom accounted for 42% of fixed telephony nationally, 34% of mobile telephony, 56% of fixed broadband Internet connections, 35% of mobile connectivity and 40% of pay TV.

There were other important developments in the sector, such as Telefónica’s sale of the Telefé network (the most important open TV network) to Viacom in 2016, and the sale of Supercanal, the cable operator that leads the middle market segment. The bleak picture is compounded by the crisis and under-funding of state media (mainly the information segment of Channel 7, the thematic channels Encuentro, PakaPaka and DeporTV, and Télam, the state news agency) and the closure and ownership changes of numerous private media outlets.

Paradoxically, after almost four years of an agenda classified as pro-market and with a good deal of new regulation in communications designed to promote competition, the media market is going through a crisis that, although impacted by external and internal factors, has been exacerbated by government action and its economic program.

It is important to note that Argentina’s media policies are often used as a model for other Latin American sector reforms, so the lessons from this case – especially given the permission for the sector’s bigger players to provide converged services – might have an impact on the rest of the region.

This article represents the views of the authors, and not the position of the LSE Media Policy Project, nor of the London School of Economics and Political Science.

Policy makers in the UK and elsewhere are grappling with how to address the spread of ‘harmful’ content online. Bertie Vidgen, a Research Associate at the Alan Turing Institute whose research focuses on detecting, analysing, and countering online hate speech in both news and social media, looks at an industry effort to tackle problematic content and practices online and assesses whether it will be sufficiently effective.

Earlier this summer, some of the world’s biggest tech, product and branding companies (including Google, Facebook, Unilever and Procter & Gamble) launched The Global Alliance for Responsible Media. This self-described ‘unprecedented’ alliance aims to tackle ‘dangerous, hateful, disruptive and fake content online’ which it says, if left unchecked, ‘risks threatening our global community’. The Alliance appears to be driven by a mix of corporate social responsibility and self-preservation, as it explicitly aims to identify a ‘concrete set of actions, processes and protocols for protecting brands’. This is entirely understandable given the negative publicity received by companies who are perceived to enable online harms to proliferate. The Home Affairs select committee’s investigation into abuse, hate and extremism online suggested YouTube, Twitter and Facebook had ‘consciously’ failed to adequately address online harm, and the Cambridge Analytica scandal appears to have lowered public trust in all social media platforms. As Mark Zuckerberg has said, these companies are keen to take steps to strengthen their relationship with the public.

There is much to be celebrated about this Alliance – apart from anything else, it reflects a huge shift in attitudes towards hate speech, online harm and regulation of the Internet. It is also well-timed, coming when many governments are trying to improve regulation of the Internet. In the UK, the Department for Digital, Culture, Media and Sport is reviewing responses to the consultation on its ‘online harms’ White Paper, which covers a myriad of behaviours, including revenge porn, hate speech, misinformation, terrorism and the sale of illegal goods. The Alliance could devote some serious resources to addressing online harm and, more importantly, enable more collaboration between companies. At present, efforts to pool technical resources and tools have been very limited, with only a few initiatives such as the Global Internet Forum to Counter Terrorism. The Alliance could address this issue. Going further, it could help with the development of consistent cross-industry policy responses. This would allow us not only to tackle harmful content when it appears on one platform but also to address how it, and its purveyors, move across the Internet.

So far, so good. Indeed, on paper this Alliance represents what a lot of activists working to counter online harms have long wanted: a broad coalition that works together and spends serious money to protect individuals. What this vision means in practice (and whether it will be realised) is, at this point, up for debate as little detail has been provided. It’s unclear exactly what harms the Alliance is going to tackle, how it will do it, who will be responsible for implementation, how it will evaluate performance and who will implement sanctions (if any) for lack of action. The Alliance also faces some really difficult social and ethical challenges, which urgently need to be reflected on and addressed. No amount of money or technical sophistication will overcome these alone:

Online harms are contextual. What is harmful in one place might not be considered harmful in another. All of the partner companies operate across multiple countries and will struggle to agree on a unified global response – just think about the hugely divergent responses to the Mohammed cartoon scandal in 2014. Most plausibly, the companies will only be able to agree a ‘minimum’ set of standards and values, such as agreeing to enforce the law in whichever jurisdictions they operate. Ultimately, the breadth of the partners may prove its undoing, as the Alliance struggles to agree on a position. This also links to the second issue:

Identifying, categorising and detecting harmful content involves making lots of decisions, many of which can be contentious. For instance, at the extremes, it is easy to distinguish between terrorist and non-terrorist content. Where it becomes difficult is in the middle grey area and, sadly, this is where most online content lies. The Alliance will need to make some difficult decisions, which may be challenged by groups in society (most likely, free speech advocates) and may have political ramifications. It is unclear whether all of the partners will be willing to make these decisions, especially given that one of their primary goals is to protect their brands.

Currently, the Alliance claims it wants to ‘improve the safety of online environments’. There is nothing wrong with framing harm in terms of ‘safety’, and such language can make difficult decisions more amenable to otherwise reluctant non-expert decision makers. Nonetheless, it shifts the focus towards individual actions rather than the structures which allow, enable and even encourage those actions to take place. This is a crucial issue given recent concerns that some platforms thrive on highly polarising, abusive and vitriolic content as they can motivate higher user engagement. The Alliance will need to decide whether it wants to address these issues or just focus on moderating specific bits of content.

The most encouraging part of the new Alliance is the nascent recognition that harmful online content poses a societal problem. It should not be dealt with by each platform independently – because its harmful effects are borne by all of us, we need a joined-up, integrated approach. And, notwithstanding the concerns raised here, any steps to challenge, constrain and remove harmful online content should be welcomed. But this Alliance will only effect real change if it tackles these difficult social issues head-on. We’ll have to see whether it will.

This article represents the views of the author, and not the position of the LSE Media Policy Project, nor of the London School of Economics and Political Science.

Flat Earthers believe in a conspiracy that experts suppress the truth of a flat earth. In the Philippines, this fringe community sneaked into mainstream public conversation last April, when a veteran journalist endorsed the Flat Earth Philippines’ Twitter page and cited ancient scripture as basis for the claim that the earth is flat.

Is there more to this fringe community that challenges established scientific facts? Does their growing network have anything to do with the rise of disinformation in the Philippines?

Our research team followed Filipino flat earth accounts and groups across Twitter, Facebook, and YouTube to observe their information sharing and conversation patterns. Here’s what we found.

Religious foundations, technological reinforcements

Filipino Flat Earthers have similar characteristics to fundamentalist Christian flat earthers in the United States, where flat earth conventions are reported to involve people praying over one another and swapping Bible verses. In the Philippines, the official description of one group referenced Romans 1:18, which describes a vengeful Old Testament God who will reveal himself against “those suppressing the truth”. Other posters explicitly identified themselves as “Christian Flat Earthers” and often ended their posts with “Glory to God”.

This subculture marries belief in anti-government conspiracy theories – such as distrust of NASA scientists in the US or of the mainstream media outlets in the Philippines – with self-reinforced biblical literalism, using Bible verses as proof of a flat Earth.

Sustaining these beliefs are YouTube clips that draw people further into conspiratorial thinking. An experiment with a new YouTube account showed that viewing a single video shortly led to all twelve of the recommended videos on our homepage featuring more flat earth content. Both the recommendation algorithm and the auto-play function of YouTube create the perfect binge-watch conditions that reinforce conspiratorial beliefs.

Community formation

Beyond exchanging conspiracy videos, Filipino Flat Earth Facebook groups build communities among their members. Exchanging stories of perceived victimhood is not only therapeutic; it also consolidates members around an ‘us-versus-them’ group identity.

Members support each other when they are hurt or humiliated for believing in conspiracies, by offering consolation or defending their peers through coordinated and simultaneous posting and commenting, mobilizing themselves as “keyboard warriors”.

Seeding political messages

The Filipino Flat Earth groups have become a petri dish of political fake news and campaign propaganda for the 2019 elections. We observed that active posters of Filipino Flat Earth Facebook groups seeded content favorable to President Duterte and his allies.

For example, one user gratefully shared a video of Duterte from the official Presidential Communications Operations Office page promoting tourism in the country. Another shared a false news report praising Duterte’s anti-corruption drive, which claimed that Duterte had kicked the World Bank and IMF out of the country.

Admins also behave in suspicious ways similar to the disinformation architects we have previously studied. They consistently and systematically shared posts critical of politicians running against President Duterte’s party. In one instance, an admin quickly attempted to silence a member who pointed out flaws in Duterte’s candidates.

The Facebook profiles of several admins and active posters of Filipino flat earth groups were also suspicious: nicknames rather than full names, scant personal information, invisible friends’ lists, and use of generic profile pictures like animals, inspirational quotes, or pictures of sexy women with large sunglasses. The Facebook pages these admins follow included pro-government fan pages and Facebook groups of ‘influencers’ infamous for spreading fake news favorable to President Duterte.

Beware of social media manipulation

We don’t think it’s a mere coincidence that the flat earth conspiracy hijacked attention in the middle of a heated election season. Given their distrust of the scientific and media establishment, flat earthers are vulnerable to being organized to support personalities who challenge the political establishment.

While we don’t discount that most of the members of the Facebook group are real, the behaviors of the Filipino flat earth group admins are suspicious in their anonymity and organized seeding of political messages. In this fractious political environment, online bad actors seek to weaponize organic communities to serve the nefarious ambitions of their political clients.

In each focus group we invited children, first, to share what they would like to know about their online data and privacy and, second, to list the things that others (parents, educators, regulators and companies) should do differently to make privacy better.

Children want to know how their data flows online, how long it is kept and how it is used. They also want clearer Terms and Conditions, more free content (e.g., fewer in-app purchases), apps that collect only the data that is necessary, no sharing of children’s data with other companies, prevention of data leaks, and the ability to permanently delete online information. These are the priorities as children see them, and the main demands that they have for regulators and industry.

As we also heard from them, children do not think that others (adults) listen to their opinions, suggestions or complaints, and they are keen for their voices to be heard more. They experience the digital environment as quite dismissive of their needs and personal preferences. And they expect the internet to be fair, and for parents, educators, regulators and companies to act responsibly and in children’s interests.

While children can be practical and understand that the commercial model behind the apps they use requires their participation (e.g., that sometimes they have to watch an advert to unlock some content), they also become critical and frustrated when things do not work as they should and nothing is done to fix the problems they encounter (e.g., when they report an incident online and do not hear back).

We earlier addressed the role of teachers and schools, recognising that the task of improving children’s understanding of the data ecology and their rights within this largely falls to them. That’s why we built our online toolkit – do check it out at www.myprivacy.uk

But as we have argued before, education alone cannot resolve all the challenges of a datafied world. Children cannot learn what is impossibly complex. Teachers cannot teach what they themselves don’t understand. So the digital environment must be made more child-rights-respecting, and more navigable by everyone.

Most important, the present take-it-or-leave-it (or, “give us your data or miss out”) digital environment militates against user responsibility. So for industry, probably in tandem with policy makers, the time is right to develop a better-designed digital environment that enhances understanding and offers people – young and old – more meaningful choices.

This article represents the views of the authors, and not the position of the LSE Media Policy Project, nor of the London School of Economics and Political Science.

Journalists, politicians, academics, and governments all agree that the problem of news manipulation needs to be addressed, even if no one seems to be able to agree on what to call it. The terms ‘fake news’, ‘misinformation’, ‘disinformation’, and ‘propaganda’ are all used interchangeably. But none of this should distract from the fact that we are dealing with a serious problem that’s getting bigger. We know from scientific research that fake news spreads farther, faster and deeper on social media than any other type of news, even (real) stories about natural disasters, terror attacks and the Kardashians. And that’s just online; fake news can have real-world consequences as well. In 2018, more than twenty people were killed in India after rumours went viral on WhatsApp alleging the presence of child abductors in several villages across the country. Many readers will also remember the PizzaGate affair from 2016, when a man entered a Washington, DC restaurant and fired a rifle, after he was duped by a false story about Hillary Clinton running a child trafficking ring there.

Sustainable solutions to the fake news problem aren’t easy to find, although not for lack of trying. Proposed solutions range from legislating news media, to tweaking algorithms and moderating content, to “softer” approaches such as fact-checking, debunking, and media literacy education. All of these approaches have their pros and cons. Oftentimes, lawmakers are reluctant to introduce new laws to halt the flow of misinformation, as this quickly runs into issues with freedom of the press, and may provide an excuse for authoritarian states to crack down on dissenting opinions. Deleting, demonetising or disincentivising content that is deemed problematic, as is being done by social media companies like Facebook, YouTube and Twitter, can backfire, as algorithms aren’t 100% accurate in deciding what counts as problematic content, and mishandled grey-zone cases can lead to lots of problems. Next up are fact-checking organisations like Snopes, which debunk viral fake news stories. Although useful, the problem here is the continued influence of misinformation: once people have been exposed to a falsehood, they often continue to believe in it even after a correction has been issued. Lastly, media literacy initiatives are being set up all over the world to teach children not to fall for misleading stories. The sad thing is that those who aren’t currently in education (meaning: the vast majority of the population) aren’t benefitting from such efforts.

Being behavioural scientists, we started to think about what else could be done. Enter inoculation theory, a classic theory from social psychology that posits that it’s possible to confer psychological resistance against persuasion by pre-emptively exposing people to a weakened version of a deceptive argument, much like a real vaccine confers resistance against a pathogen after being exposed to a severely weakened dose of the “virus”. After an initial paper which showed that it’s possible to ‘inoculate’ people against a specific piece of misinformation about climate change, we decided to up the ante a little bit. Rather than ‘inoculating’ people against specific falsehoods, we theorised that it should also be possible to help people spot common strategies used in the production of most fake news. The big advantage of this, at least in theory, is that people develop something like a Spidey Sense for manipulation, and can spot fake or misleading content by teasing out manipulation strategies.

Long story short, we built a video game. Together with partners at DROG, a Dutch anti-misinformation platform, we created Bad News, a multiple award-nominated free-to-play online browser game in which players start out as anonymous Twitter users and grow to become fake news tycoons by making use of various common manipulation strategies to gain followers whilst preserving online credibility. In total, there are six badges to be earned in the game, each representing one such strategy: impersonating people online, using emotional language, polarising audiences, spreading conspiracy theories, discrediting opponents, and trolling.

So why play the ‘bad guy’? Think of it this way: the first time you go to a magic show, you’re likely to be duped by the trick. But once the magician explains the trick to you, you won’t be fooled by it again. However, just sitting back and letting other people tell you what the facts are isn’t very fun. Bad News inoculates people against deception by letting people do the trick themselves in a simulated environment. After all, experience is a powerful teacher.

So did it work? Short answer: yes. Since its launch in early 2018, the game has been played more than half a million times. And, importantly, about 15,000 people responded to our in-game survey, the results of which we published recently in the journal Palgrave Communications. We wanted to test whether players become better at spotting deception techniques in Twitter (news) posts that we showed them before and after playing the game. To do so, we designed a number of headlines that were fictitious but based on real instances of fake news. Why? Well, one major issue in the literature on fake news is memory confounds (people may know a headline is fake or real simply because they remember it). The results of our study showed that people significantly downgraded the reliability of fake (but not real!) news items after playing, and these effects were robust across different demographics like age, gender, education level and political affiliation.
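To make the design concrete, here is a minimal sketch of the kind of pre/post comparison described above – not the authors’ actual analysis code, and with simulated ratings standing in for the real survey data:

```python
# Hypothetical sketch of a pre/post reliability comparison; the sample
# size, rating scale and effect size are illustrative assumptions only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500  # illustrative, far smaller than the real ~15,000 respondents

# Simulated 1-7 reliability ratings of fake headlines before/after play
pre = rng.integers(1, 8, size=n).astype(float)
post = np.clip(pre - rng.normal(0.5, 1.0, size=n), 1, 7)  # assumed drop

# Paired comparison: did players downgrade the reliability of fake news?
t, p = stats.ttest_rel(pre, post)
print(f"mean pre={pre.mean():.2f}, post={post.mean():.2f}, t={t:.2f}, p={p:.3g}")
```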

There were some limitations, of course. For one, our sample was self-selected and we did not have a randomised control group. However, we’re currently running trials that are confirming our prior findings, even over time. Together with the UK Foreign Office we have also translated the game into 14 new languages (including German, Czech, Polish, Greek, Esperanto, Swedish and Serbian), which will allow us to do large-scale cross-cultural comparisons. We’re also working with WhatsApp to combat rumours spreading on direct messaging apps by designing a new version of the game. Ultimately, although we think that a cure for the post-truth era will require a multi-layered defence system, one thing is clear: the science of prebunking has only just begun.

LSE’s Emma Goodman discusses the risks of anti-vaccination content on social media, and how the tech companies are addressing it. This article is a response to a workshop organised earlier this month by the Royal Society for Public Health (RSPH) on ‘Vaccines, information and social media’.

The problem

It’s not the only health issue to be subject to misinformation online, but the prevalence of anti-vaccination content on social media is causing great concern among public health officials, politicians and more. The UK government’s White Paper on online harms mentions vaccination misinformation under “Threats to our way of life,” specifying that “the spread of inaccurate anti-vaccination messaging online poses a risk to public health.”

The public health consequences of not vaccinating are evident. Many countries that had been close to eliminating measles have seen a significant increase in cases, mainly among growing unvaccinated populations. According to the World Health Organisation (WHO), vaccination currently prevents 2-3 million deaths a year, and a further 1.5m could be avoided if global coverage of vaccinations improved. But despite efforts from public health bodies to increase vaccination coverage of preventable diseases such as measles, rates have been falling.

The reasons are complex and not entirely clear. The WHO has ranked ‘vaccine hesitancy’ as one of the biggest threats to global health, and although this isn’t always the reason people don’t vaccinate themselves or their children – often it’s down to convenience or other pragmatic considerations rather than ideologies – there does seem to be a growing problem with trust in vaccinations. Worldwide, the Wellcome Global Monitor 2018 found that 79% of people agree that vaccines are safe, while this drops to 59% in Western Europe, and 40% in Eastern Europe.

The UK-focused 2019 RSPH report ‘Moving the Needle’ found that 41% of parents surveyed had been exposed to negative messages about vaccination on social media, rising to 50% of parents of under-5s (as a parent of young children myself, I am honestly surprised these figures aren’t higher). The risk, the report argues, is that repetition of information is often mistaken for accuracy, citing a 2015 study which showed that even when participants were armed with prior knowledge, they could succumb to the effects of ‘illusory truth’.

Even if just a small percentage of the population is opposed to vaccinations, they are very passionate, and social media has enabled them to have a loud voice. In the ‘attention economy’, social media tends to favour the emotive and sensational over factual content, the negative over the positive, the outrageous over the mundane. Recommendation engines are expert at sending users down paths that expose them to more of the same views, views that are likely to become increasingly extreme.

Social media platforms can also suffer from the kind of harmful data void described by Microsoft researchers in 2018, whereby the lack of a steady stream of quality, authoritative information on a topic leaves spaces open to manipulation by the promotion of misinformation.

So what are social media companies doing about anti-vaccination material on their platforms?

Tech companies are under pressure from governments, civil society and the public, yet there is no obvious path forward for them. They can’t just ignore this content, but removing it would limit their users’ freedom of expression and put the companies themselves into an ‘arbiter of truth’ role, which neither they nor most others believe they should take on. The main players, therefore, are all trying slightly different approaches:

Facebook

As it does for any incidence of misinformation, Facebook works with third-party fact-checkers who analyse flagged stories; if a story is found to be misleading, the platform reduces the attention it receives by lowering its ranking in Facebook’s news feed and search functions. Distribution usually falls by around 80% after a story is flagged as misleading, the company says.

When page or group admins spread vaccine hoaxes that have been publicly debunked by authoritative organisations such as WHO, the company goes a step further by not just reducing the distribution of the post, but also of the page or group that posted the information. They will not be included in recommendations or in predictions when a user is typing in Search.

Ads with vaccine misinformation are rejected, and you can no longer target people based on options like “vaccine controversies.”
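As a rough illustration of how this kind of fact-check-based downranking might work (a hypothetical sketch only – Facebook’s actual ranking system is not public, and all names below are invented):

```python
# Illustrative downranking of fact-checker-flagged content; the penalty
# multiplier loosely mirrors the ~80% distribution drop reported above.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    base_score: float         # relevance score from the ranking model
    flagged_misleading: bool  # verdict from third-party fact-checkers

FLAGGED_PENALTY = 0.2  # retain ~20% of reach, i.e. an ~80% reduction

def feed_score(post: Post) -> float:
    """Apply the fact-check penalty on top of the base ranking score."""
    return post.base_score * (FLAGGED_PENALTY if post.flagged_misleading else 1.0)

posts = [Post("a", 0.9, False), Post("b", 0.9, True)]
for post in sorted(posts, key=feed_score, reverse=True):
    print(post.post_id, round(feed_score(post), 3))  # a 0.9, b 0.18
```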

Instagram

In March, Facebook said that vaccine misinformation would not be shown or recommended on Instagram’s Explore, hashtag and search pages. In May, Instagram said that it would hide hashtags that include a ‘high percentage’ of inaccurate information relating to vaccines (that which is at odds with facts determined by organisations such as the WHO).

However, as shown in this screenshot, although the only hashtag to come up prominently in a recent search for ‘vaccines’ is #vaccines, which has a mix of pro- and anti-vaccination content, several of the top accounts that immediately appear have a pretty clear anti-vaccination agenda.

Twitter

Twitter announced in May 2019 that it had launched a new search tool that directs users to credible public health resources when they search for certain vaccine-related keywords, such as the Centers for Disease Control and Prevention, which is run by officials at the US Department of Health and Human Services. In the UK, searching for ‘vaccinations’ brings up a tweet from the NHS as the first result.

Pinterest

If you search for ‘vaccines’ on Pinterest you get no results – the company has just decided to shut off these discussions entirely.

It is also impossible to ‘pin’ pages from certain URLs that are known to carry problematic content such as health misinformation, Pinterest told the Guardian.

YouTube

YouTube announced in February that it was ‘demonetising’ anti-vaccination content – in other words, removing the ability to advertise alongside it. According to Buzzfeed, the move followed complaints from several large advertisers who did not want their content appearing alongside videos promoting anti-vaccine views. It is a similar approach to the one the company takes for hateful or violent content, and one which is described in a Guardian editorial as a ‘halfway house between complete freedom and extinction.’

It also provides information panels on videos detected as having an anti-vaccine message, linking to a Wikipedia page on vaccine hesitancy. The company’s announcement in January that it was working to reduce recommendations of ‘borderline’ content could also apply to vaccine-related misinformation.

Will such approaches be effective?

An acknowledgement that free speech is not the same as free reach and that platforms have responsibility for the recommendations that they make is significant (especially given that, for example, 70% of viewing on YouTube is through recommendations). Providing easy access to reliable, evidence-based information is also a welcome step forward.

However, there are limitations to the benefits that these ad hoc moves can have.

Lack of information isn’t the problem. There is a huge amount of evidence for the benefits of immunization, but presenting this evidence to those who feel otherwise isn’t necessarily going to change their minds. A social media redirect to an authoritative source is unlikely to be sufficient to have an impact on those with deeply entrenched viewpoints.

It is also important to remember that social media are not the only way that anti-vaccine misinformation can spread: spurious, sensationalist claims related to vaccines also appear in traditional media.

As argued by UN special rapporteur on freedom of expression David Kaye in a letter to Facebook CEO Mark Zuckerberg in May, the “ad-hoc development” of policies addressing issues such as vaccine hesitation “may be susceptible to criticisms of bias and arbitrariness.” Kaye advises that “aligning these measures with human rights standards, however, can place them on a more principled footing.”

It was suggested at the RSPH workshop that more effective campaigning against vaccine misinformation could focus on the dangers of the disease in question, rather than the safety of vaccines, given that it is when confronted with an outbreak that people often seem to agree to vaccinate. Involving the target populations in building campaigns was also thought to be very helpful.

Improving media and digital literacy so that the public is more easily able to identify reliable sources of information is always beneficial, but requires sustained spending and a significant, coordinated effort.

Since 2016, concerns about ‘fake news’ have reached new heights. But what is the actual impact of online misinformation on audiences’ political knowledge and attitudes? The LSE’s Rodolfo Leyva recently completed two experimental studies: one looking at the effects of fake news stories on voter support for US presidential election candidates, and one testing the effects of conservative newspapers in the UK on anti-immigrant attitudes and political behavioral intentions. Based on his work, he explains here the relative risks of ‘real’ and ‘fake’ news.

Before reacting to the title, please note the following qualifiers. By “real news”, I’m primarily referring to the political journalism of the Western and mainstream conservative press, though centrist and ostensibly liberal outlets are also implicated when they use the manipulative reporting techniques discussed below. By harm, I’m only referring to the media’s corrosive influence on audiences’ political knowledge, attitudes, and participation.

That said, here’s what we know from the extensive theoretical and empirical scholarship on media priming, framing, and cultivation effects.

Due to commercial imperatives, time pressures, and editorial slant, news organizations’ political reporting tends to cover and focus on certain considerations, connotations, attributes, emotional factors, and values, while neglecting others. Their reporting also tends to be embedded with subtle or blatant textual and/or visual cues and frames such as negative wording, innuendos, symbols, salacious imagery, and sensationalist headlines. Taken together, these editorial choices and language:

Provide a framework to more easily conceptualize and digest complex topics

Help give audiences a sense of the most urgent social problems or viable candidates

Enable one to quickly draw inferences from and evaluate the information presented.

Accordingly, exposure to such content, irrespective of its accuracy, automatically primes and heightens the accessibility of emotions and schemas that are consistent with these cues or frames – i.e., mental representations of beliefs, attitudes, practices, knowledge, events, etc. that are stored and networked in memory.

In readers with weak political positions and/or little to no prior knowledge of the subject matter, such exposure and the consequent triggering can generate new corresponding schemas. In those with stronger political positions and knowledge, by contrast, it tends simply to update their pre-existing schemas, and usually only if the information presented aligns with them. In either case, when activated, these associations can remain accessible and orient subsequent related judgments and behaviours, such as candidate evaluations and selective news consumption. Moreover, once a given schema exceeds an activation threshold, it’s then likelier to suppress the activation of rival schemas and considerations, and to induce automatic judgments and behaviours.

This is not to suggest that our political identities are determined by what we see in the news, nor to exaggerate the media’s persuasion power. Indeed, the extent to which even legitimate journalism influences our attitudes and practices over and above other primary agents of political socialization (e.g., family, schools, friends), is probably minimal. Nevertheless, frequent exposure to prevalent, recurring, and conceptually congruent news-media representations increases the probability that the salient, semantic, thematic, and affective aspects of these representations can incrementally shape a person’s mnemonic structures and concomitant conceptions of reality, value systems, and behaviours.

This now brings us to the mainstream right-wing press. Their reporting regularly includes loaded and emotive wording; stereotypical imagery or textual descriptions of ethnic minorities; misrepresentations of leftist figures and policies; disproportionate coverage of crimes committed by immigrants or other disenfranchised groups through selections of highly sensationalist cases; and/or citations of dubious studies, sources, or statistics. To be fair, outlets vary in the degree to which they employ these framing techniques, and centrist and left-leaning press also use them, although much less so.

Nascent research is shedding light on how these reporting practices can affect political attitudes and behaviours. For example, a correlational study from 2012 showed that people who regularly watched Fox News had greater unfavourable views of Mexican immigrants and support for restrictive immigration policies than viewers of the centrist outlet CNN. In a related vein, a Spanish experiment also from 2012 found that exposure to a newspaper article that associated increased immigration with rising crime, induced stronger anti-immigrant attitudes. Similarly, my experiments suggest that regular exposure to stereotypical representations of migrants in articles from leading British conservative newspapers (e.g., The Daily Mail, The Daily Express) cultivates anti-immigrant schemas, which, in turn, mediate the likelihood of voting for politicians running on anti-immigration platforms. These and other studies consistently indicate that repeated attention to journalistic content containing evocative, derogatory, alarmist and/or otherwise misleading representations of minority groups primes and strengthens corresponding negative opinions and policy preferences.

Now, to address the issue of ‘fake news’, I must first note that most of the hoopla surrounding it refers to digital fake news (DFN). DFN can, therefore, be more specifically defined as non-satirical, deliberately fabricated news stories that are designed to appear credible and are disseminated through the Internet in order to generate advertising revenue and/or influence people’s politics. DFN stories are usually full-fledged articles from dedicated host websites that are distributed through Facebook and Twitter, and their content often contains eye-catching headlines, provocative imagery, defamatory accusations, and demonstrably false claims about a political candidate, party, or policy. So to put it differently, DFN is honed clickbait that is intentionally framed to manipulate people’s socio-political thoughts and practices by eliciting emotional reactions. But, despite this clear purpose, there’s no evidence that DFN leads to any change in political attitudes or behaviour.

The handful of studies on DFN do, however, show that:

people are likelier to believe DFN about candidates from parties opposite to their own

selective-exposure and belief susceptibility to DFN is predominantly a pathology of the far-right.

Correspondingly, an experiment I ran recently yielded similar findings. In it, participants aged 25-49 were exposed to either anti-Clinton or anti-Trump DFN, and were then tested on their candidate evaluations and preferences against a control group. Neither group changed their attitude to the candidates based on the DFN that they saw. However, significant results did show that believability in the anti-Clinton DFN materials was strongest amongst the most conservative participants, and that high believability negatively moderated evaluations of Hillary Clinton. The research on DFN so far thus seems to indicate that when DFN is encountered and read, people – especially those of a far-right persuasion – will tend to rely on their triggered pre-existing political schemas and gut feelings to evaluate its content such that they will be more likely to:

Reject and be unaffected by DFN that doesn’t correspond to their existing beliefs,

Have their views be marginally reinforced by DFN that is consistent with their existing attitude.

So why might this be the case? Before answering, please note that the following explanations are tentative and need to be verified with additional studies. (This is not a convenient cop-out or a cheap ploy to get research funding; it’s just that science is necessarily slow and cautious.) Now, as I alluded to earlier, by around age 25, most people have relatively crystallized political identities and leanings that they’ve cultivated since childhood. Hence, for most developed adults, momentary attention to even emotive and sensationalist political news usually simply triggers and reinforces existing schemas, if the content is consistent with their attitudes. Conversely, if news features content that is at odds with their existing beliefs, then its messages will likely be ignored. This doesn’t mean that people don’t read or can’t be influenced by counter-attitudinal news messages, but it’s unlikely, since people primarily pay attention to media that reaffirm what they already believe, and after adolescence, basic political schemas typically don’t really change.

In other words, people – particularly those who vote regularly – aren’t blank slates when it comes to politics; they come equipped with pre-existing beliefs and values that can shield them against the persuasion effects of exposure to counter-attitudinal DFN. Further, these schemas have been significantly, although not solely, shaped by years of chronic attention to prevalent, recurring, and conceptually congruent news representations. And so, if these representations entail those disseminated by the mainstream conservative media in particular, then the ‘harm’ has already been done. All that exposure to congenial DFN does at this point is bolster the misinformation placed and cultivated in one’s mind by years of consuming ‘real news’.

Importantly, I’m not arguing that we shouldn’t be concerned about ‘fake news’, but that it’s not the cause of our current political polarization, Brexit, or Trump’s election. For that we should be pointing fingers at, amongst other factors, mainstream political journalism. As such, we should probably also push for regulators to hold all news media accountable to existing and long-accepted professional journalism ethics, standards, and practices.

This post represents the views of the author, and not the position of the LSE Media Policy Project, nor of the London School of Economics and Political Science.

France and other countries (including Germany and the United Kingdom) are currently investing efforts in attempting to regulate the moderation of harmful content on social media platforms. While such initiatives are often supported by public authorities, they are the source of heated debates regarding their potential impact on freedom of expression. In an interview with French public policy think tank Institut Montaigne, Charlie Beckett, Professor of Media and Communications at the London School of Economics (LSE) and Lead Commissioner for the LSE Truth, Trust & Technology Commission (T3), advocates for the creation of an agency that would have the independence and expertise required to monitor platforms’ moderation of content.

LSE’s Damian Tambini assesses whether there is sufficient coordination between the various proposals currently under consideration in the UK technology policy environment.

It has been an extraordinarily busy year in UK tech policy. The Furman Review reported on digital competition, recommending changes to competition law and a new regulator to deal with data dominance, competition and consumer welfare. The Online Harms White Paper outlined a comprehensive new regulatory framework – and proposed a new regulator – to deal with everything from online disinformation to cyberbullying to incitement to violence, as I discuss here. And a new Digital Services Tax has been proposed to claw back some of the surplus from tech firms. These proposals followed a number of parliamentary inquiries on fake news, internet regulation, hate crime, and our own LSE Commission on Truth, Trust and Technology.

With so many policy rabbits scurrying in different directions it is time to ask: what is the relationship between competition policy and other interventions, and is there sufficient coordination? In particular, is the competition approach likely to conflict with other aspects? As we argued in our T3 report, the deeper policy issues at stake are going to take a long time to resolve and it is crucial that the UK adopts a coordinated approach if powerful platform companies are not to game the system.

On the basis of the evidence currently available there seems indeed to be a lack of coordination and a danger that different policies may conflict. With legislation expected in 2020, it is time to ask what would be the best policy sequence, and how the overlaps can be dealt with.

Tech mergers: a new approach

Take one example: the Furman review sets out a number of sensible reforms to merger rules which should tip the balance in favour of more referrals and a precautionary approach to big tech mergers. If these reforms are implemented, it would no longer be possible for Facebook to purchase companies like Instagram or WhatsApp – and integrate their services and data with its own social network and Messenger – or for tech platforms to pursue the ‘strangle at birth’ strategy of purchasing potential competitors. Regulators will be more alive to the dangers of data concentration and the potential for indirect network effects to create problems that previous methodologies were not good at spotting. So in the competition area, we can expect some progress on the basis of these sensible recommendations.

But competition, data portability, interoperability and consumer switching will not deal with some of the deeper harms at stake. And the attempt to do so through a ‘duty of care’ – which another part of government is recommending – may well conflict with the ‘pro-competition’ approach of Furman. In particular, introducing duties of care and other regulatory obligations can raise barriers to market entry by obliging platforms to invest more in moderation.

Online harms: how much does Facebook spend on moderation, and why it matters

Let’s look at the example of the duty of care proposed by the UK government’s Online Harms White Paper: the attempt to oblige Facebook and other platforms to deal better with the negative externalities associated with digital platforms. There has been a debate about whether Facebook, for instance, has been dealing effectively with illegal content and other harms, and even about whether the company enforces its own terms of service and community guidelines. The proposal is to ensure that platforms are held to account for this by a new regulator.

As a result of this controversy about Facebook moderation, there is a fair amount of information in the public domain on what the company currently does, and from this we can infer costs. In March 2019, Facebook claimed to have 15,000-30,000 content moderators worldwide. It was reported that a moderator based in the US will take home around $30,000 a year, so a conservative estimate of the cost to Facebook would be $40,000 per annum per moderator. Many moderators are located in developing countries such as India, where a moderator could cost as little as $5,000.

Assuming a US cost per moderator of $40,000: $40,000 × 30,000 moderators = $1.2bn

Assuming an India cost per moderator of $5,000: $5,000 × 30,000 moderators = $150m

Therefore, the estimated total global cost of moderation for Facebook is somewhere between $150m and $1.2bn – a significant expenditure. However, not only are these rough estimates based on limited data, they are also a snapshot of a fast-changing scene.
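For readers who want the arithmetic in runnable form, here is a minimal sketch of the back-of-the-envelope estimate above; the moderator count and per-moderator costs are simply the assumptions stated in this post, not actual figures:

```python
# Back-of-the-envelope bounds on Facebook's annual moderation spend,
# using this post's assumptions; the real figures are not public.
MODERATORS = 30_000  # upper end of the claimed 15,000-30,000 moderators

ANNUAL_COST_PER_MODERATOR = {
    "US": 40_000,    # conservative estimate per US-based moderator
    "India": 5_000,  # low-end estimate per India-based moderator
}

for location, cost in ANNUAL_COST_PER_MODERATOR.items():
    total = cost * MODERATORS
    print(f"All moderators at {location} rates: ${total:,} per year")
# All moderators at US rates: $1,200,000,000 per year
# All moderators at India rates: $150,000,000 per year
```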

There are consistent reports that the social network is scaling up operations following several public scandals including those involving incitement and hate speech, and Facebook made a number of announcements that they had increased the numbers of moderators in response, for example, to the German NetzDG law (the company now reports having 1200 moderators in Germany). As a result of this law, which obliges platforms to take down material that is considered to breach the law on hate speech, Germany now has the most developed liability framework. In July 2019, after 18 months of operation under the new law, Facebook was fined €2m by Germany’s Federal Office of Justice for not observing the correct procedures and selectively reporting on complaints.

There have been many discussions of the ‘human cost’ of moderation, and this is already translating into increased costs (longer rest periods, in-work support) and legal fees and compensation due to the distressing nature of the work. The numerous stories reporting that Facebook moderators are receiving more money, or that more of them are being hired in some countries, show the enormous scale of the operation. And an independent report on content moderation from May 2019 shows that the number of pieces of content involved runs into the millions.

Facebook is also proposing to introduce a number of ‘appeals procedures’, including a ‘supreme court’ for content moderation. This will dramatically increase the cost per staff member, as legal and other expertise will be required.

So, moderation costs are rising extremely rapidly and arguably already constitute a significant barrier to entry for any social network trying to compete with a giant like Facebook. With the imposition of new regulatory obligations and standards, these significant barriers to entry will only grow.

Hipster Anti-trust and the Power Problem

The other point to note is that Furman decisively rejects so-called ‘hipster’ or ‘neo-Brandeisian’ anti-trust. This new approach to anti-trust, increasingly influential in the US, for example with Democrats like Elizabeth Warren, calls for the break-up of tech platforms on the basis of an analysis not only of their entrenched market dominance, but also of the implications of that dominance for political power. Furman says that such a new approach should not be taken for various reasons, including the claim that consumer welfare may be better served by large players.

However there is an important distinction to be made here. It may be the case, as former US Government official Carl Shapiro argued in his influential article, that anti-trust is not designed to deal with political concentrations of power. This is not the same thing as saying that the power problem does not exist.

Platform power and the role of technology in our society pose completely novel questions about privacy, data, new media and how they can be used to shape human behaviour and democracy. Furman may be right that competition law is not the way to deal with platforms or with the fundamental rights issues tied up with dominance and data, but those issues still need to be dealt with. There is clearly not enough coordination between these different policy initiatives, and they are pulling in different directions. Regulation of online harms will raise barriers to entry and will likely undo the efforts of Furman to increase competition. Government should be thinking about a coordinated approach to the long-term negotiation with platforms.

This post represents the views of the author, and not the position of the LSE Media Policy Project, nor of the London School of Economics and Political Science.