
UK Online Safety Bill, A Very Unofficial Explainer – 4th April 2022

Online Safety Bill 2022 – Explanatory Notes

What does the Online Safety Bill do (in a nutshell)?

The Bill grants the UK Government authority to give detailed instructions to some online services about how they should run their operations for UK users.

These instructions will cover a broad range of issues related to content and behaviour under the general theme of ‘keeping people safe’.

Some online services will also be told how to treat content from journalists and politicians under a secondary theme of support for freedom of expression.

These instructions are not set out in detail in the Bill but will come later in the form of thousands of pages of guidance that will be drafted and approved under a range of different procedures which are described in the Bill.

The body that will be tasked with issuing and enforcing these instructions is the Office of Communications, Ofcom, which has been regulating the broadcasting and telecoms sectors in the UK since 2003.

The Bill also updates the laws that criminalise certain kinds of communications by people resident in the UK by introducing several specific new offences.

Why does the UK government think it needs these new powers?

The UK Government has given reasons of both performance and principle for deciding it needs to be able to tell services what to do.

In terms of performance, they do not think many online services have done a good job at keeping people in the UK safe to date.  

There have been many instances where people have complained about decisions made by services about harmful content and the Government believes that the new framework will lead to better decisions in future.

There is a secondary strand of criticism about platforms removing content that some people feel should be allowed and the Government has expressed its concerns about this.

The Government has also said it disagrees with the principle that important decisions that affect people in the UK are made by unelected executives of technology companies.  

So even if the companies improved their performance there would still be a rationale for the law in terms of shifting power from industry to Government.

Is this really a world first as the UK government claims?

Yes and no, but mostly yes. 

A number of countries have already passed laws giving themselves powers to direct online services, eg the Network Enforcement Act in Germany and the Protection from Online Falsehoods and Manipulation Act in Singapore.

What is different about the UK law is its breadth and the sophistication of the mechanism it puts in place for regulating online services.

Existing laws tend to be reactive and focus on specific content types rather than seeking to subject services to proactive detailed supervision by a designated online regulator.

NB: online video-sharing services like YouTube are already regulated in the EU under the Audiovisual Media Services Directive, but this is a much lighter regime that, for the UK only, will be replaced by the new requirements in the Online Safety Bill.

The EU is working on its own package of new measures called the Digital Services Act that is expected to include comprehensive controls for online services that could end up looking and feeling quite similar to the UK regime.

What other regulatory models are comparable with this one?

This legislation combines ingredients from a number of other regulatory regimes.

There is a dose of broadcasting regulation in the structure it creates as it places the broadcasting regulator, Ofcom, in the oversight role.

But the way in which online services will be regulated may feel more like the models used in financial services and data protection regulation than traditional broadcasting rules.

There is a strong emphasis on being able to require services to furnish the regulator with information about how they handle content, echoing the investigatory and audit powers used by data protection authorities and financial services regulators.

Ofcom will be able to instruct online services to implement particular preventive measures just as banks can be required to have specific safeguards in place.

Where Ofcom believes something is not working, it will issue notices requiring regulated entities to make changes to their services to come into compliance, which is similar to the way data protection law is enforced.

There is also a big dash of health and safety regulation as the law is built around the idea that services have a duty of care to their users just as employers have a duty of care to their employees.

During the process of development of the Bill there has been a shift from the idea of a single generalised duty of care to placing multiple ‘duties of care’ on services covering specific areas that will each be backed up by detailed guidance. 

Will tech company executives face criminal prosecution under this law?

There are some offences that could lead to a criminal prosecution of individual employees of online services.

But if you are hoping for Mark Zuckerberg to be thrown in jail next time Facebook takes your content down then you are likely to be disappointed.

The criminal offences for services relate to the provision of information to the regulator.

If a company refuses to provide information or lies to the regulator then Ofcom may seek to prosecute employees responsible for this non-compliance or deception.

It is of course possible that a major service will fall foul of these information offences and end up being prosecuted, but this is an extremely unlikely outcome.

The major companies all have large teams of lawyers whose job is to ensure that they comply with relevant laws and we can expect them to handle requests from Ofcom very carefully so that they do not expose their colleagues to a personal risk of prosecution.

The services more likely to end up facing a threat of prosecution are those which do not accept Ofcom’s oversight in the first place and refuse to cooperate at all, or which deliberately provide incorrect information in the hope that this will make the regulator go away.

So if criminal prosecutions are unlikely, then will the Bill at least mean we see services being whacked with big fines?

Yes, it does seem likely that the Bill will lead to services being fined, and it allows for these fines to be meaningful (up to 10% of a company’s global annual revenue or £18 million, whichever is greater).
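
The cap is simple arithmetic, so here is a minimal sketch of how it works. This is illustrative only: exactly how revenue will be measured is one of the details still to be settled, and the figures below are invented.

```python
def maximum_fine(global_annual_revenue_gbp: float) -> float:
    """Illustrative cap on a penalty under the Bill: the greater of
    GBP 18 million or 10% of global annual revenue."""
    return max(0.10 * global_annual_revenue_gbp, 18_000_000)

# A hypothetical service with GBP 5bn of annual revenue could face up to GBP 500m.
print(maximum_fine(5_000_000_000))  # 500000000.0

# A much smaller service is still exposed to the GBP 18m ceiling.
print(maximum_fine(2_000_000))      # 18000000
```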

These fines could kick in when a company fails to meet any of its obligations as set out in the Bill, for example if a pornographic service did not carry out age verification, or if a user-to-user service did not produce risk assessments.

Ofcom would issue a notice to the service telling it why it is considered to be failing to comply with the law, and there would then be some back and forth between Ofcom and the service before a final decision is issued.

Ofcom may consider that a service should pay a fine for its original non-compliance and/or if it fails to implement any required changes that Ofcom has included in a notice it has issued.

What situations are likely to lead to online services getting fined?

Based on experience with similar regimes like data protection law we can expect there to be three sets of circumstances where services end up exposing themselves to risks of fines.

The first is where a service makes a mistake and so ends up non-compliant by accident rather than deliberate choice.

An example of this in the context of this Bill might be where an engineer makes a software update that breaks a content scanning tool that the regulator has told the service to have in place.

If an error like this came to light then the regulator might just order that it be fixed or it may feel that a fine is justified in order to encourage the service to be more careful in future.

The second is where there is a difference of interpretation of the legal requirements between the service and the regulator. 

Businesses will often interpret regulations in the way that is least onerous for them and their attempts at compliance may not meet the expectations of regulators.

We have seen this in the back and forth over so-called ‘cookie banners’ over the years where data protection authorities have taken enforcement action against websites that they feel are doing less than the relevant law requires.

In many cases, we might expect these disputes to be resolved without getting to the point where companies are fined as they work through any disagreements over the details of their compliance efforts with the regulator. 

But in other cases, where a company insists on one interpretation of what compliance should look like and the regulator insists on another, we may only settle who is right through a formal process of a penalty notice being issued and challenged by the service.

The third is where a service deliberately sets out not to comply with notices from the regulator because it fundamentally disagrees with them. 

This could be because it believes the regulator is acting outside of its legal powers or because it sees the regulator’s instructions as so harmful to its interests that it is prepared to risk the consequences of defiance.

In these cases, we can expect the regulator to sanction the service and then defend its position in front of any appeal by the service to a tribunal or court. 

What can the regulator do if a service point blank refuses to engage, ignoring any notices and then not paying any fines?

The Bill gives the regulator a ‘kill switch’ as the ultimate sanction, allowing it to order UK internet access providers to block people in the UK from using a particular service.

There are other measures the regulator can impose short of a full block which are described as ‘service restriction orders’ and might involve restricting access to payment services or prohibiting UK entities from advertising on a platform.

Which online services will be asked to comply with the UK government’s instructions?

The Bill is aimed at two types of general purpose service – search and user-to-user – and includes lots of text defining how the regulator (and potentially the courts) should categorise services.

The definition of search engines excludes search functionality where this just relates to a single website or database so having a search box on your own site will not bring you into scope.

But it includes any service that allows you to search ‘some’ websites or databases and so is intended to capture specialist search engines as well as the big global ones.

The definition for a user-to-user service is novel and an attempt to capture a broader range of current and future services than might be the case if they limited scope to some definition of ‘social media’.

It is clear that social media services are a key concern and all the current household name services will fall within the definition of ‘user-to-user’ services in the Bill but we will need to wait for later guidance to understand exactly how widely the net will be drawn.

There are also some specific provisions for online pornography services.

As this is a UK law, does this mean it only applies to services based out of the UK?

No. The explicit intent of the legislation is to require qualifying services to comply with UK law and be supervised by the UK regulator wherever they are based in the world.

The rationale for this is obviously that many of the major online services used by people in the UK are based in other countries.

A foreign service will be considered in scope if it has a ‘significant number’ of UK users and/or if UK users are a ‘target market’ for the service. 

This challenge of pulling foreign services into scope has been addressed in EU and UK data protection law so there are some precedents for considering questions of when a service is targeted at a country. 

But there will need to be further guidance on the specific definitions Ofcom will use for considering a service as having enough users or indicators of targeting to be in scope.
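
Until that guidance arrives, the shape of the test can only be sketched roughly. The snippet below is purely illustrative: the user threshold and the notion of ‘targeting’ are placeholders for definitions that do not yet exist.

```python
def has_links_with_the_uk(uk_user_count: int,
                          uk_is_target_market: bool,
                          significant_user_threshold: int) -> bool:
    """Illustrative 'links with the UK' test: a foreign service is in scope
    if it has a significant number of UK users and/or UK users are a target
    market. Both inputs depend on guidance that has not been written yet."""
    return uk_user_count >= significant_user_threshold or uk_is_target_market
```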

We can expect these criteria to be challenged by some foreign services who do not want to be regulated by Ofcom and so these definitions will be tested and refined in court.

How many services is this likely to include?

The Government’s estimate in their Impact Assessment is that 25,100 organisations will end up being regulated under the terms of the Bill but the actual number will only be known once it has come into force and people have tested the definitions against different services.

It is already clear from this estimate that we are not talking about just the big players but a wide range of smaller services that people in the UK use to discover information and share content with other people.

Will all services be regulated in the same way?

No. The Government has divided services up into four categories as it wants to apply different regulatory obligations to each of these.

The four categories are: 1) Large User-to-User, 2) Search Engines, 3) Small User-to-User, and 4) Pornographic.

The term ‘regulated service’ is used in the Bill to include all four of these as they all become regulated in one form or another as a result of the legislation.

You will also see the term ‘Part 3 service’ used in the Bill to cover just the first three categories, as the main duties of care for these entities are set out in a section of the Bill headed ‘Part 3’, and this provides a shorthand way to refer to them collectively.

When talking about these three types of ‘Part 3 service’, a numbering convention is used: Category 1 (large user-to-user), Category 2A (search engines) and Category 2B (small user-to-user).

The Bill sets out a process for Ofcom to develop and issue the criteria determining who will get placed into each of these categories and this will involve looking at UK user numbers but may also include other factors.
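
For readers who find the labels hard to keep straight, here is a toy sketch of the Part 3 categorisation. The user-number threshold is entirely invented; the real criteria will only emerge from the Ofcom process described above and may include other factors.

```python
from enum import Enum

class Category(Enum):
    CATEGORY_1 = "large user-to-user"
    CATEGORY_2A = "search engine"
    CATEGORY_2B = "small user-to-user"

def categorise(is_search_engine: bool, uk_users: int,
               category_1_threshold: int = 10_000_000) -> Category:
    """Toy Part 3 categorisation. The threshold is a placeholder, not a
    figure from the Bill; Ofcom will set the real criteria later."""
    if is_search_engine:
        return Category.CATEGORY_2A
    if uk_users >= category_1_threshold:
        return Category.CATEGORY_1
    return Category.CATEGORY_2B
```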

Why have they made this so complicated?

There are important differences in how the Government wants each type of service to operate, reflecting both the services’ inherent capabilities and the Government’s political intent.

The large user-to-user platforms, labelled ‘Category 1’, are seen as having particular societal importance and so get extra responsibilities at various points in the Bill.

These additional duties mean larger services will have to act against a broader range of harmful content while also paying special attention to protecting journalistic and political content.

The Government clearly considers these extra measures to be both necessary and proportionate in the context of larger services when they might not be for smaller ones.

Most of the new duties of care are not seen as relevant for pornographic services – the Government’s key concern is that they put in place age verification processes to prevent access by minors, something which they committed to in previous legislation.

How will this all get paid for?  Won’t it be a burden on the taxpayer?

The taxpayer need not worry! The UK expects the industry to pay in full for its own regulation. This is consistent with how Ofcom generally operates, levying fees from the businesses it regulates.

The Impact Assessment tells us that they expect online services in aggregate to pay around £30 million per year in fees which will allow Ofcom to hire lots of people to carry out all of its new functions.

To put this into context, Ofcom spends around £25 million per year regulating broadcast television and radio combined.

How much will each online service end up having to pay?

The fee levels are not set explicitly in the Bill, and they will ultimately depend on how much it actually costs Ofcom to do the work, but there are some pointers to how they will be set.

Ofcom is expected to scale its fees based on the ‘qualifying worldwide revenue’ of an online service suggesting large companies will be asked to pay more but it is also given discretion to use ‘any other factors that Ofcom consider appropriate’.

Ofcom will also receive guidance from the Government on how it should calculate the fees and will need to take this into account as it develops its own ‘statement of principles’.

Ofcom produces a set of tariff tables which are worth a look if you want to get a sense of the fees it gets from the other parts of the communications sector it regulates.

If the overall total is £30 million it seems likely that fees for the largest platforms will run into the millions while for smaller platforms they will be a few thousand, or even several hundred, pounds per year.
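
Putting those pointers together, here is a back-of-envelope sketch of how revenue-scaled fees might be allocated. The allocation rule and all of the figures are my own assumptions: the Bill only says fees will be based on ‘qualifying worldwide revenue’ plus any other factors Ofcom considers appropriate.

```python
def allocate_fees(total_budget_gbp: float,
                  revenues_gbp: dict[str, float]) -> dict[str, float]:
    """Illustrative allocation: split Ofcom's annual budget across regulated
    services in proportion to qualifying worldwide revenue. The real scheme
    will follow Ofcom's statement of principles and Government guidance."""
    total_revenue = sum(revenues_gbp.values())
    return {name: total_budget_gbp * revenue / total_revenue
            for name, revenue in revenues_gbp.items()}

# Hypothetical figures only: one very large platform and two smaller services.
fees = allocate_fees(30_000_000, {
    "big_platform": 80_000_000_000,   # works out at roughly GBP 29.6m
    "mid_platform": 1_000_000_000,    # roughly GBP 370k
    "small_forum": 5_000_000,         # roughly GBP 1,850
})
```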

What other costs will online services face as a result of the Bill?

The Government’s impact assessment estimates that the regulated services will have to spend around £300 million per year on compliance.

Most of this extra cost is assumed to come from services having to hire more content moderators which can be an expensive business.

Some of the estimates do seem on the low side as I wrote in a previous post when the draft bill was published last year.

Should we worry about this given that many tech companies are very rich?

These estimated costs will not make much of a dent in the profits of the big household-name tech platforms, but we may want to keep an eye on two scenarios.

The first is that of services which are more community focused than commercial and so may have very little income while still providing valuable user-to-user services.

The obligations to carry out risk assessments and put in place user moderation systems will take a proportionately larger slice of the resources available to modestly funded organisations than those with significant commercial revenue.

The second is that of cumulative costs as more countries adopt similar legislation.

While this is not necessarily a problem for the UK government, it is potentially a significant issue for the internet writ large.

The internet is ‘default global’: new services are connected to the whole world unless they take active steps to limit their reach.

But the costs and risks of compliance with multiple regulatory regimes may mean that services prefer to hold back from some markets in future where they consider that having users there would be more trouble than it is worth.

The strength of this effect will depend on where regulators set thresholds for particular forms of compliance and fee levels, but there will be an impact when services start receiving communications setting out fees and obligations as a condition of being present in particular markets.

How is the Government’s stated interest in freedom of expression reflected in the Bill?

There is a general requirement for all regulated user-to-user services to have regard to users’ rights to ‘freedom of expression within the law’ and there are some specific additional requirements that are for the larger services only.

Larger services will have to carry out impact assessments considering how any major changes they make to their services would affect freedom of expression.

They will also have additional “duties of care” requiring them to give special treatment to ‘content of democratic importance’ and ‘journalistic content’.

There are some significant questions about what the Government intends here that will need to be explored during scrutiny of the Bill.

Is it clear who will be given this special protection?

There are some definitions in the Bill but the guidance will need to flesh these out so that there are workable tests for when someone is a journalist and/or taking part in a political debate.

We can see the original motivation as being one of wanting to protect MPs and mainstream media outlets from having their content removed by platforms.

There is likely to be a broad consensus that mainstream voices are covered but as we move towards the fringes of both politics and journalism then there will be a need to test specific claims and decide who is in and out.

So, will platforms just have to leave up all political and journalistic content?

No. There are instances where content is both clearly harmful and related to someone who considers themselves to be a politician or journalist.

In these cases there will be a tension between the different duties imposed on a platform and they will have to make a decision that may lead to complaints of non-compliance whichever way they go.

Decisions about the social media accounts of former US President Trump are a high profile example of this conflict as people variously see his posts as actively stirring up violent conflict or as protected political communication (or both!).

There may be efforts to square this so that, for example, content is left up while its distribution is reduced but that may still leave critics on either side frustrated with some wanting full removal and others for it to be unrestricted.

These are decisions that large platforms have wrestled with daily for many years and it will be interesting to see how this process of weighing up competing interests can be codified in guidance from the UK regulator.

Is all this consistent with the UK’s human rights obligations?

Well, it must be because it says this on the front of the Bill!

Secretary Nadine Dorries has made the following statement under section 19(1)(a) of the Human Rights Act 1998:
In my view the provisions of the Online Safety Bill are compatible with the Convention rights.

Online Safety Bill 2022

This is actually a standard formula used on all UK legislation where the relevant Minister says it complies with the European Convention on Human Rights.

We are asked to rely on the fact that all UK Government representatives and entities like the regulator Ofcom are required by law to act in accordance with the Convention as the primary guarantor of our rights.

As long as the Human Rights Act 1998 is in force we could take the Government and/or regulator to court if we thought anything done under this law did breach our rights.

This mechanism has proved effective in other cases, so we should not panic yet that this Bill can override human rights, but any weakening of the Government’s commitments to the Act and Convention should ring alarm bells given the sweeping powers being taken here.

But what about ‘legal but harmful’ content, and identity verification, and the new offences, and a hundred other questions I still have?

Do not fear, I will get to those. I wanted to get this post out now as a primer about what the UK Government intends to build but, yes, there is lots of interesting detail still to work through and many more posts to come.

8 Comments

  1. Russ

    You paint a picture of age verification being required only for pornographic services. My reading of Chapter 4 clause 31(2) is that age verification will be required for all in-scope websites, whether ‘Category 1’ or otherwise.

    • I think the intent of 31(2) is to offer a service a way to prove it does not need to comply with the special responsibilities to protect children. If a service can show that it is using age verification to block under 18 access then it does not have to do child risk assessments etc. So, it is not requiring all services to implement age verification but rather giving you an opt-out if you choose to do this. For most Category 1 services, eg FB, IG, TikTok etc, they are happy to have 13-17 year olds on so will not be able to use this opt-out. They may choose to implement age verification but are not strictly required to do so and this would not give them any particular advantage as all the child-specific requirements still fall on them as long as they allow access to U18s.

  2. Russ

    Thank you. That makes sense (I think!), but I have to say, as the Bill is currently drafted, I do not see the ‘opt out’ you describe. Maybe I’m missing something structural in the text.

    • It is always fun (?) to try and navigate a Bill as you have to jump around to follow the logic of linked clauses!

      A key element here is Clause 33 which explains that extra duties of care will apply to services that are ‘likely to be accessed by children’. The test for this is whether a) it is possible for children to access the service, and b) children are doing so in significant numbers.

      Age verification, if used to block under 18s, is offered as the way to demonstrate it is not possible for children to access the service which would relieve the provider of the extra duties of care as the first part of the test fails.

      If they cannot demonstrate that it is impossible for children to access their service then the second test of child user numbers comes into play and if an as-yet-to-be-determined threshold is met then the extra duties will apply.
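
      To spell out that two-part logic, here is a purely illustrative sketch; ‘significant numbers’ is a placeholder for a threshold that has not yet been determined.

      ```python
      def likely_to_be_accessed_by_children(children_can_access: bool,
                                            child_user_count: int,
                                            significant_threshold: int) -> bool:
          """Illustrative Clause 33 test: the extra child safety duties only
          apply if children can access the service AND are using it in
          significant numbers. Age verification that blocks under-18s fails
          the first limb, so the duties fall away."""
          if not children_can_access:  # e.g. age verification excludes under-18s
              return False
          return child_user_count >= significant_threshold
      ```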

  3. Russ

    I’m even more confused now. Are you saying a website can dispense with age verification if it has done its child risk assessments and declares “Not many children visit this place”?

    • Apologies for any confusion.

      As I understand it age verification is not mandatory for general purpose services. If they choose to implement AV for EXCLUSIONARY purposes, ie they verify people are under 18 and then shut them out of the service, then this is acceptable proof that they are not providing services to children and so don’t need to do child risk assessments etc. This is a narrow use case for age verification to prove that your service is NOT child accessible.

      If they cannot prove that they have a way to block under 18 access using AV then the second test of whether children are actually using the service kicks in. This will trigger the extra obligations unless they can prove there is no actual child usage. Again the service could use AV here as part of its efforts to show who is using the service but this is a choice not a requirement.

      There are places where the child safety duties of care would strongly point services towards using age assurance to make sure children cannot access some content, notably 11(2) and 11(3), but this is implicit rather than explicitly stated as a mandatory requirement.

  4. Russ

    Thanks again. So, in effect, and at the risk of being too simplistic, in-scope non-porn sites will be either:
    – age verified, but no requirement to carry out child risk assessments; or
    – no age verification, but child risk assessments required.

    And please, no apologies, I don’t suppose I’m the only one struggling to get to grips with this convoluted Bill. To exemplify this, I should add that I’ve been picking the brains of Dr Heather Burns on this age verification aspect, and she reaches a conclusion (“No age verification = noncompliance”) significantly different, one might say diametrically opposite, to yours:
    https://webdevlaw.uk/2022/03/18/a-quick-take-on-three-pretty-terrifying-changes-to-the-online-safety-bill/
