The Online Harms White Paper


This month the UK Department of Digital, Culture, Media and Sport, in collaboration with the Home Office, published the Online Harms White Paper (the “White Paper”), setting out a new regulatory framework to address a wide range of online ‘harms’ and make companies actively responsible for protecting their users online.

The main mechanism of the White Paper is the introduction of a statutory duty of care owed to service users, overseen by a new independent regulator. The regulator's role would be to instruct internet platforms on good practice and to enforce compliance, backed by penalties for breaches; at this early stage, however, there are still questions as to which companies are caught and what they will have to do differently.

The government is currently consulting on the issues raised in the White Paper and the consultation closes at 23:59 on 1 July 2019.

What harms does the paper identify?

At its heart, the intention of the White Paper is to address online harm, and to do this it sets out a non-exhaustive list of harms which are in scope for the new regulatory framework.[1] Many are existing and clearly defined crimes, such as child sexual exploitation and abuse, modern slavery and terrorist activity. These are already illegal, and already punishable.

However, other harms appearing on the list are less clear-cut: these are activities which are known to be harmful but are not necessarily illegal, for example advocacy of self-harm, trolling and disinformation. The extent to which service providers will be expected to prohibit such activities is therefore unclear, and potentially raises arguments in respect of curbing freedom of speech – one person’s disinformation may be another person’s political argument. It is therefore not yet clear how the proposed regulator will specify where the boundary lines are on these harms.

At present, certain categories of harm are expressly excluded from the framework's scope (as they are dealt with elsewhere).[2] These are:

harm to organisations (e.g. companies) as opposed to individuals

harm arising from a breach of data protection law

harm arising from a breach of cyber security or hacking

harm suffered on the dark web as opposed to the open web

The list of harms is ultimately open-ended, meaning that the proposed regulation is intended to cover any activity which harms individuals, especially children, or otherwise “threatens our way of life in the UK”. This would therefore include future harms arising out of technological development but, again, it is not clear exactly what material service providers would have to address. It is possible that future codes of conduct will clarify the position.

Which platforms would be in scope?

The new regulations are intended to apply to “companies that allow users to share or discover user-generated content or interact with each other online”. This is a wide-ranging description that is clearly stated to include social media platforms, file hosting sites, public discussion forums, messaging services, and search engines.

Though the regulator will apply a risk-based approach in enforcement activities, all platforms providing any of these services will be caught, regardless of size, and should expect to meet new obligations. The White Paper purports to be solely focused on user-generated content and not, for example, on material produced by journalists, advertisers or technology manufacturers[3], though what is not yet clear is how this might apply to AI-generated advertising material.

At present the paper states that “private channels” are not included in the requirements to “scan or monitor content for tightly defined categories of illegal content”; it is, however, not yet clear what the definition of any such private channels will be. The paper itself acknowledges the difficulty of making this distinction, citing platforms which offer the potential to be both public and private, such as WhatsApp.

The defence and security services have also raised concerns about smaller private web platforms, particularly those operated on a closed-user basis by hard-right groups and groups with the potential for extremist radicalisation.

This exemption does not, however, mean that private channels escape all regulatory requirements. The paper ultimately indicates that private channels will still fall within the framework in certain circumstances, or may alternatively be subject to a separate set of requirements. The government is presently consulting on the definition of private communications and on the extent to which the regulatory framework should apply to them.[4] It remains to be seen whether the definition of private channels will be prescribed by a minister or by the new regulator.

What would regulated providers have to do?

Under the framework, the new regulator will expect service providers caught by the regulations to do what is “reasonably practicable” to combat any harmful activity or content on their platform.[5] The regulator will set codes of practice which providers must follow; providers will also have to be able to demonstrate how they are following the codes and fulfilling their statutory duty of care.

The extent to which providers will have to act will depend on factors such as the nature of the harm, the risk of it occurring on their platform, and the resources and technology available to them. Platforms with larger audiences, substantial resources, or audiences that include children will therefore attract more stringent requirements. Providers will seemingly not only have to watch for existing harms but will also be expected to take precautions against emerging future harms, in an attempt to ‘future-proof’ the system.[6]

In addition, service providers will be expected to maintain an easy-to-access complaints function for any harmful content, together with a suitable response and appeals process.[7]

There will not be a general obligation for providers to monitor all communication occurring on their platforms, but it is possible there will be a specific obligation to monitor for particular high-risk harms, such as those endangering national security or child welfare.[8]

Beyond this, however, the White Paper merely indicates that companies “must fulfil their new legal duties”, with the regulator setting out how to do so in new codes of practice.[9]

What are the possible penalties?

The core penalty in the White Paper is “substantial fines”.[10] The exact scale of these fines is not set out numerically, but the scope for substantial figures is there, possibly borrowing the GDPR approach of tying fines to a company’s global turnover.[11] According to the White Paper, the fines must be proportionate to the potential or actual damage caused, as well as the size and revenue of the service provider involved. Fines will therefore be set according to annual turnover, the volume of “illegal material” on the relevant platform, the volume of views of the illegal material, and also the time taken to respond to the regulator. The paper references “proven failures” and “clearly defined circumstances” for fines, which suggests that clearly illegal content will attract this financial penalty, but it remains unclear whether and how less clear-cut non-compliance may attract similar fines.

The White Paper also suggests that stricter non-monetary sanctions will be possible. The government is consulting on extra powers that would enable the regulator to “disrupt the business activities of a non-compliant company” as well as measures to impose liability on individual members of senior management, and measures to block non-compliant services.

These stricter measures would be reserved for more serious breaches of the framework, for example a failure to prevent terrorist use of the platform, or repeated failure to rectify breaches.

Conclusion

While the proposals put forward are at the early stages of creating a new regulatory framework, and would be unlikely to take effect for some years, they are an indicator of the heightened focus on issues surrounding internet safety. Interestingly, it was the Home Secretary rather than the Digital Minister who announced the proposals, but the Opposition broadly supports them, so if they reach Parliament it looks likely that there will be cross-party consensus for implementing these proposals or something similar.

As regulation looks to catch up to modern online realities, providers of online platforms are likely to face heightened scrutiny in a new regulatory landscape which will require all service providers involved in supporting the sharing of user content to actively ensure the safety of their users.
