Tackling online harm – a regulator’s perspective

Speech by Sharon White to the Royal Television Society, 18 September 2018

Introduction

Good afternoon. My thanks to Kirsty for hosting this session, and to the RTS for organising such an important and thought-provoking day.

We’ve heard some fascinating insights today into the question of online regulation. I’d like to develop that theme by discussing how lessons from Ofcom’s experience of regulating broadcasting might be relevant to tackling online harm, as the Government considers possible legislation.

The Guardian sits above us. Many of you will know their motto: “Comment is free, but facts are sacred”.

That slogan dates from 1921, and captures the critical importance of accuracy in news. But it also speaks to the internet of today, where millions of people can give voice to their views.

The flourishing of the internet has been, perhaps, the greatest ever exercise in the democratisation of information. But it means that facts can be hard to discern from fiction.

So great has been the internet’s impact on media that it is easy to forget how recently things changed. Just 15 years ago:

There was no Facebook, Twitter or YouTube.

Netflix was still a DVD mail order company.

And barely half of us had the internet at home.

But on the brighter side, there was no need to carry a selfie stick.

Today the web has encompassed almost every aspect of our lives. An explosion of online services, often free, has forever changed how people communicate and run their lives.

That dazzling growth has been fuelled by a free and open internet, unencumbered by most of the rules and regulations that apply offline.

For many people, that freedom from interference is the internet’s foundation stone.

In 1996, activists declared that cyberspace would be independent from nation states, regulations and the physical world of ‘flesh and steel’. Since then, no traditional industry has been able to match the internet’s pace of growth and innovation.

But we see growing evidence that, for all its undoubted benefits, that growth has come at a price.

The scale of online harm

Today, joint research from Ofcom and the Information Commissioner’s Office shows that four out of every five adult internet users have concerns about going online.

Some of those concerns relate to areas like hacking or privacy. But the most common, raised by two thirds of people, relate to content – particularly when aimed at children.

Some 12 million people using the internet have personally experienced online content or conduct that they found harmful.

All of us care about this – whether as parents, programme-makers, policymakers or regulators. France and Germany have passed legislation. Here too, as the Government prepares its white paper, internet safety is a matter of urgent debate across the major parties. In July, the DCMS select committee completed its interim report on disinformation and fake news. Among other things, it looked at the principles that should apply to future regulation.

As the communications regulator, we hope to contribute to that discussion: through our duties set by Parliament to encourage media literacy; our independent research into internet use; and our experience of protecting TV audiences.

So today we have published a paper outlining our experience of tackling harmful content in broadcasting, while protecting freedom of expression. As policymakers develop their plans, we look at how some of our experience there might be relevant to online harm.

Content could be harmful because it is illegal, dangerous, misleading or inappropriate for its audience. And it can be delivered in many forms – from TV-like programmes, to videos, images and text. Often these are served up at the same time, on a single screen. And different rules apply, depending on the mode of delivery.

The standards lottery

As the broadcasting regulator, we are very conscious of the growing disparity between the safeguards that everyone in this room is required to observe when making traditional TV programmes, and the much more limited ones that apply elsewhere.

To illustrate the point, consider the viewing of a typical child in the UK.

She spends around 90 minutes a day watching broadcast TV – and more than that on her phone or the internet. But the protection afforded to her, by a complex set of regulations, varies depending on the service that she happens to be watching.

Let’s say she’s watching Absolutely Fabulous with her parents. Like everything else on TV, that programme must abide by a range of detailed rules – covering areas such as crime, sex, drugs, language, violence and self-harm.

Or she might watch the same episode of Ab Fab on a catch-up player or Netflix – where it is still regulated, but to a much more limited set of standards under general European law. For example, there are rules on violence, but nothing on swearing. Patsy is off the leash.

And if our typical child picks up her phone to watch a clip of the same show on Facebook or YouTube, there is no regulation at all beyond the general law to protect her from harmful content.

So the broadcasting and online worlds are competing under different conditions, even as the online world takes up an ever greater share of our time. This has profound consequences for viewers – especially for children, who may well not distinguish between the two.

Without even knowing it, viewers are watching the same content, governed by different regulation in different places, or by none at all.

This is a standards lottery.

If protection matters, and we all believe it does, this cannot be our message to viewers – ‘choose your screen, and take your chances’.

Now there are welcome signs that the technology giants are increasingly alive to their responsibilities. Facebook and YouTube are hiring around 30,000 content moderators this year.

But trust in them is already weakening. Our research shows that people see social platforms as the single biggest source of online harm – and most people want the rules to be tighter.

The role of regulators is evolving too. New European laws will give national regulators some oversight of video-sharing platforms, requiring companies such as YouTube to address child harm, terrorism and hate speech. But most online content will remain unregulated, including words and images on social media, and videos that aren’t on sharing platforms.

The UK Government is already considering how to level that playing field. And the DCMS select committee has suggested that broadcasting standards, as defined by Parliament and implemented by Ofcom, should provide the basis for setting standards online.

So what are the lessons from broadcast regulation?

The challenges of regulating online

Our experience and research suggest that the answer is not simply to transplant traditional broadcast regulation, unamended, into the online world. Clearly, the internet is fundamentally different from television and radio in its nature, audience and scale.

The sheer volume of text, audio and video that is generated or shared online far outstrips the output of traditional media. That means, for example, that it could be impractical to review platforms’ decisions about content case-by-case.

Another big difference is that, partly because of its volume, most online content is moderated after it is published. There are no producers or compliance teams checking it beforehand. So sanctioning platforms for every undesirable post that gets online might not be practical or effective.

Looking more widely, evidence shows that people see the internet quite differently to television. On TV, viewers value impartiality in news, and want to see that guaranteed. But when they go online, they are content to pick from a wealth of different views – often one-sided and opinionated.

On an individual level, the internet is an unrivalled tool for people to express their views. If regulation is too blunt, it could undermine freedom of expression.

Can these hurdles be overcome? Based on our experience, we believe they can.

Of course the internet is different. But just as in broadcasting, its audience has fundamental rights, values and expectations.

In seeking to protect audiences, our experience of media regulation suggests that four broad lessons could be relevant.

Lessons from media regulation

First, the current broadcasting regulation started with a clear set of aims. It works well because the industry is held to high standards, by a clearly articulated set of rules that evolves with public opinion. These rules are rooted in Parliament’s aims. But they can still develop to reflect the audience’s changing behaviour, expectations and attitudes.

It was interesting to hear the Information Commissioner, Elizabeth Denham, at the House of Lords select committee last week. We share her view that flexible regulation, based on broad principles set by Parliament, has worked well – both in broadcasting standards, and in data protection.

Just as TV regulation has had to evolve in the age of digital and on-demand channels, internet regulation can recognise the pace of change online. For example, it could grant space for companies to innovate and find effective ways to protect their users.

Second, far from undermining freedom of expression, effective regulation can promote it. Parliament has shown the way, by requiring regulation to balance strong audience protections with the broadcasters’ right to transmit ideas – and people’s right to receive a variety of views.

Third, how to deal with the volume of unmoderated online content? One approach that has worked well in our arena is to regulate companies’ complaints processes, as we do with telecoms firms. Companies are penalised not for the harm itself, but for their failure to address it quickly and effectively.

As in Germany, this could mean requiring tech giants to be much more transparent about how they tackle online harm. The sun could shine brighter – even on Silicon Valley.

Likewise, people expect regulators to be equally transparent about the reasons and evidence for their decisions. And to impose meaningful sanctions to deter poor behaviour.

Finally, independence of regulation matters. In broadcasting, independence has proved fundamental to the regulator working in the interests of audiences, free from commercial or political influence. It helps ensure credibility in the system, and builds public trust. Crucially, Ofcom is accountable to Parliament.

As a regulator, we are required to keep audiences safe and protected – irrespective of the screen they watch, or the device they hold.

And lessons from broadcasting regulation could help inform the debate about future regulation.

But on the question of exactly how regulation might be applied, and by whom, we are entirely agnostic. Those are rightly matters for the Government and Parliament to decide.

Conclusion

So I hope our paper today might prove useful to policymakers – as they work to curtail the internet’s harmful aspects, while preserving its powerful benefits to society, culture, trade and freedom of expression.

For our part, we will keep working closely with Government and our partner bodies – the ICO, the Competition and Markets Authority and the Advertising Standards Authority in the UK; and with overseas regulators to share experiences.

Finally, we hope to exchange experiences with you, the lifeblood of our television industry – which, for all the pressures posed by online competition, remains the finest in the world.