The UK Online Safety Bill: who is in scope?

Published on 28th Sep 2023

What are the critical steps providers must take to ensure compliance in this evolving regulatory landscape?

The Online Safety Bill, the UK's long-awaited landmark legislation designed to protect children and adults online, is finally due to become law in autumn 2023. The legislation will usher in a sea change in the way online spaces are regulated and will place wide-ranging responsibilities on online platforms for protecting users from illegal and harmful content. Ofcom is tasked with enforcing the bill and will have extensive powers to investigate providers and sanction them for breaches.

As a first step when preparing for the introduction of the new regime, providers of online services will need to work out whether they fall within its scope and, if so, what content on their platforms will be subject to regulation.

Which categories of service are within scope?

The bill is targeted primarily at online user-to-user services and search services.

User-to-user services are internet services that allow content generated, uploaded or shared by users on the service to be encountered by other users of the service. This includes social media platforms, messaging apps, online forums and message boards, dating apps and so on.

Even if the sharing of user-generated content is only a portion of a service's overall functionality, the service will be caught by the legislation (unless it benefits from an exemption).

Search services are services that are, or which include, a search engine. This brings the major search engines within scope, along with other services that search large numbers of websites and databases. It also covers speech-based digital assistants. However, websites that have internal search functionality, but do not present results from the wider internet, will not be covered.

The bill also applies to internet services which publish or display pornographic content.

Which services are exempt?

Several categories of service are expressly exempt. Email, SMS (short message service), MMS (multimedia messaging service) and one-to-one live aural communications services are not covered by the bill (although video calling services and instant messaging apps are); nor are certain services provided by public bodies.

Internal business services are also exempt. This exemption is intended to cover tools that are available only to a closed group of people within a business, such as business intranets, customer relationship management systems and database management software.

For the exemption to apply, the business which uses the service must be the provider of the service. The provider of a user-to-user service is defined as the entity that has control over who can use the service. If a business licenses a software platform with user-to-user functionality from a third-party software-as-a-service provider and controls access to the platform, then the business (rather than the third-party software-as-a-service provider) will be the "provider" for the purposes of the bill. The business will be able to rely on the exemption, as long as the other applicable conditions are satisfied.

"Limited functionality services" are also exempt. This covers services that permit only limited user interaction in relation to content published by the provider, such as posting comments or reviews or applying a "like" button or an emoji. This is intended to put services outside the scope of the bill where the only user interaction consists of "below the line" content or user reviews of directly provided goods and services. This means newspaper websites are not caught that host comments underneath articles nor are company websites that host customer reviews of the company's goods or services.

The bill gives the secretary of state the power to make regulations that change its scope, including the power to bring one-to-one live aural communications, comments and reviews on provider content within scope if they are considered to pose harm to UK users. Providers must, therefore, be alive to the possibility that, even if their services are not in scope on "day one", they may be brought within scope in the future. This is a reminder that compliance is not a one-off exercise but a continuous process.

What is meant by 'links with the UK'?

Services caught by the bill that are not subject to an exemption will be within scope as long as they have "links with the UK". This is a disjunctive test (illustrated in the sketch after this list): a service will have links with the UK if any of the following apply:

  • it has a significant number of UK users;
  • UK users form a target market for the service; or
  • the service can be accessed in the UK, and there are reasonable grounds to believe there is a material risk of significant harm to UK individuals presented by the content on the service.
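
Because satisfying any single limb is enough to bring a service within scope, the structure of the test can be shown in a minimal sketch. This is for illustration only, not legal advice, and all names here are hypothetical:

```python
# Illustrative sketch only: the "links with the UK" test is disjunctive,
# so meeting any one limb brings a service in scope. Hypothetical names.

def has_uk_links(significant_uk_users: bool,
                 uk_is_target_market: bool,
                 accessible_in_uk: bool,
                 material_risk_to_uk_individuals: bool) -> bool:
    """Return True if the service has "links with the UK"."""
    return (significant_uk_users
            or uk_is_target_market
            or (accessible_in_uk and material_risk_to_uk_individuals))


# Example: a service with no UK user base or UK marketing can still be
# in scope if it is accessible in the UK and poses a material risk.
print(has_uk_links(False, False, True, True))  # True
```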

The bill, therefore, has extraterritorial effect and will apply to services operated from outside the UK. Given the transnational nature of the internet, this cross-border application is necessary to create an effective regulatory regime. However, the concept of "links with the UK" is drawn very widely. A service may have little connection with the UK and be used by very few UK individuals, yet still fall within the scope of the bill if content on the service is deemed to pose a material risk of significant harm to UK individuals (even if those individuals are few in number).

The bill does not specify what number of UK users would constitute a "significant number". The large social media platforms will be caught. However, a smaller platform whose UK users make up a relatively small proportion of the worldwide total may find it more challenging to determine whether it is in scope, particularly if there are few factors indicating that the UK is a target market for the service.

What content is within scope?

Providers that are within scope will have to consider which types of content are regulated by the bill. The draft legislation creates a regulatory regime based on "duties of care", which oblige providers to take steps to protect users from certain types of content. The duties focus on the systems and processes used to operate services and present content to users, rather than on the moderation of specific pieces of content. However, the ability to identify content which is of a type covered by the bill will still be an essential part of compliance.

The bill defines "regulated user-generated content" as all content generated by users except content of the type that would appear on an exempt service (for example, emails, SMS, one-to-one live aural communications, and comments and reviews on provider content). News publisher content, meaning any content published by a recognised news publisher (or reproductions of or links to such content), also falls outside the definition of "regulated user-generated content", although certain providers will nevertheless need to be able to identify such content in order to comply with some of their duties.

Apart from this, effectively all user-generated content, in almost any form (including text, images, videos and music), is potentially in scope. This includes content created by bots.

Paid-for advertising content falls outside scope, as it is expected to be regulated through future legislation being explored as part of the government's Online Advertising Programme. The exception to this is fraudulent advertising. Larger providers will be subject to a duty to protect users from this limited category of advertising content under the Online Safety Bill.

What do the duties of care look like?

In-scope providers will be subject to various duties of care in respect of regulated content. The scope of a provider's duties will depend on that provider's categorisation. Certain services will be designated Category 1, 2A or 2B services, which will be subject to more onerous obligations (particularly Category 1 services).

The categories will be defined by "threshold conditions" which will be set out in secondary legislation. The boundaries of the categories are therefore not yet known, although they will relate to the number of UK users, the functionalities of the service and the likely risk of harm to users. It can safely be assumed that the largest social media platforms will be designated Category 1 services. However, there may be some potential for dispute or uncertainty at the margins when it comes to categorisation.

Regulated providers will need to carry out risk assessments covering the risks of illegal content appearing on their services. They will then need to take steps to ensure that users do not encounter certain specified types of illegal content and that, when such content does appear on a platform, it is quickly identified and removed.

Regulated providers will also need to determine whether children are likely to access their service. If they are, providers will be subject to additional duties: to carry out a children's risk assessment, to take steps to prevent child users from encountering the most dangerous types of harmful content, and to mitigate the risks posed by other types of harmful content.

There are further duties in respect of content reporting and complaints procedures, as well as obligations relating to providers' terms of service, which apply to users of all ages. Larger providers will also be subject to various "user empowerment duties" to enable adult users to better control the content they are exposed to.

To offset the risk that compliance could undermine online freedom of expression, providers must have "particular regard" to freedom of speech and other fundamental rights when complying with their duties. Larger providers will also have express duties relating to journalistic and news-provider content and will be required to implement various safeguards to regulate the removal of such content. Despite this, concerns have been raised that these "balancing measures" will be less impactful than the more robust safety duties and that the bill may encourage over-zealous removal of content, with a chilling effect on freedom of expression.

What are Ofcom's enforcement powers?

Ofcom will have wide-ranging powers to seek information from regulated providers to ensure they are complying with their obligations. If Ofcom determines that a provider has breached an enforceable obligation, it can compel the provider to take steps to remedy the breach and is also empowered to impose fines of up to £18 million or 10 per cent of the provider's worldwide annual revenue (whichever is higher). Ofcom can also require platforms to use accredited technology proactively to identify and remove content associated with terrorism or child sexual abuse.
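
As a purely arithmetical illustration of the fine cap (a sketch with a hypothetical function name, not a statement of Ofcom's methodology), the maximum penalty is the higher of the two figures, meaning the percentage-based limb only becomes relevant for providers with worldwide annual revenue above £180 million:

```python
# Illustrative arithmetic only (hypothetical function name): the cap is
# the greater of GBP 18 million and 10% of worldwide annual revenue, so
# the revenue-based limb bites once revenue exceeds GBP 180 million.

def max_fine_gbp(worldwide_annual_revenue_gbp: float) -> float:
    return max(18_000_000.0, 0.10 * worldwide_annual_revenue_gbp)


print(max_fine_gbp(50_000_000))     # 18000000.0  (fixed floor applies)
print(max_fine_gbp(1_000_000_000))  # 100000000.0 (10% of revenue applies)
```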

Senior managers of regulated providers will also be liable for criminal prosecution, both in respect of failures to comply with Ofcom's investigation and enforcement procedures and, in certain circumstances, in relation to providers' substantive failures to comply with the children's online safety duties.

Essential steps

Given these severe enforcement powers, it will be essential for providers of internet services to take early and comprehensive steps to establish whether and to what extent their services are in scope, understand their duties under the bill and make appropriate changes to their systems and processes to ensure that they do not fall foul of the new law.

This article was first published in DataGuidance.

* This article is current as of the date of its publication and does not necessarily reflect the present state of the law or relevant regulation.
