
Online Safety Bill – some futures – 25th July 2023


The Online Safety Bill has now completed its detailed scrutiny in the House of Lords and we have a new version that we can expect to be very close to its final form.

Most of the changes during the Lords process come from the Government adding in extra provisions that they had promised to various people during the Bill’s passage through the Commons and Lords.

We can now start to reflect on what the world of internet services might look like in the UK by around 2025 as many of the new rules come into force.

I will replicate the summary from the end of this post here so that readers short of time can cut to the chase – please read on if you want these points unpacked.

[SUMMARY] What are the best and worst case scenarios for how this will all play out?

In the best case, the world will look like this –

  • Categorisation rules will exclude services like Wikipedia and just cover ones that most people would agree need regulatory oversight.
  • OFCOM will work with services on harm reduction plans that do not compromise core features like encryption.
  • 18+ users will have their ages verified using existing data and will just experience an occasional confirmation screen.
  • Under-18s will continue to be welcome on a wide range of services and will have easy methods to demonstrate their ages.
  • Services will tailor themselves for under-18s in ways that improve safety without diminishing their core usefulness.
  • 18+ users will be able to see and share content on a wide range of services as they do today, including edgy speech where platform-appropriate.

In the worst case, the world will look like this –

  • Services like Wikipedia will restrict or remove their products from the UK as they are unable to agree to OFCOM’s regulatory requirements.
  • Major encrypted messaging platforms will choose to leave the UK rather than agree to deploy technologies that OFCOM has mandated.
  • All UK internet users will have to go through time-consuming and burdensome age verification processes to access a wide range of online services.
  • Under-18s will find that their choice of online services is extremely limited as many major providers decide it is too costly or risky to offer products to them.
  • Services for under-18s will be so restricted that users no longer find them useful and seek to bypass controls to access 18+ versions instead.
  • 18+ users will find that platforms which previously permitted more edgy speech now limit it in the UK for fear of the regulator.

[CHOICE] Will there be fewer internet services available to people in the UK? 

Yes, it is likely that some existing services will choose to withdraw from the UK market while other new ones will choose not to launch in the UK.

This is because the new regime acts like a licensing regime for certain classes of services where they will have to meet a set of obligations, and pay the regulator a fee, if they wish to operate in the UK.

It seems probable that some services faced with the decision of whether or not to sign up to these requirements will decide they are unable or unwilling to do so.

I am sure the architects of the Bill, and OFCOM as its enforcer, do not want or expect to see major mainstream services leaving the UK market, but the logic of the Bill has always been that services must comply or leave.

[SCOPE] What do you mean by ‘certain classes of services’?

This new regime mainly covers two types of service – search (ie Google, Bing etc) and ‘user-to-user’ which is deliberately defined broadly to include social media, messaging, online environments where people meet up etc.

It does not cover traditional media, general websites, blogs etc where these have a publisher responsible for most of the content rather than being mainly about user-to-user interaction.

The new regime also expressly excludes email and SMS from supervision while bringing more modern messaging services into scope.

There are some specific provisions related to providers of pornography that I do not plan to dwell on in this post though this subject was much dwelt on in the various Lords debates.

[BLOCKING] What happens if a service decides it does not accept these new conditions for operating in the UK?

There are likely to be different kinds of responses, broadly grouped into ‘won’t play’ and ‘can’t play’.

The ‘won’t play’ group are those who disagree with the idea that they should be regulated by OFCOM at all.

These services may simply ignore any communications and refuse to engage.

OFCOM will have a series of sanctions it can deploy including the threat of civil and criminal penalties for non-compliance with its orders, and the power to tell other companies to place restrictions on non-compliant services. 

A sensible defensive move for services that do not want to be regulated by OFCOM would be for them unilaterally to limit access to people they believe are in the UK.

They can do this by checking each visitor’s IP address; blocking anyone who appears to be connecting from a UK address may be sufficient for OFCOM not to pursue them legally.
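The mechanics of this kind of unilateral geo-blocking are not complicated. A minimal sketch, assuming the service keeps a list of UK address ranges (the CIDR blocks below are placeholders for illustration, not real UK allocations; in practice services use a commercial GeoIP database):

```python
import ipaddress

# Hypothetical CIDR blocks standing in for a real, regularly updated
# GeoIP dataset of UK address ranges: illustrative only, not genuine
# UK allocations.
UK_RANGES = [
    ipaddress.ip_network("25.0.0.0/8"),
    ipaddress.ip_network("51.140.0.0/14"),
]

def is_uk_address(ip: str) -> bool:
    """Return True if the visitor's IP falls within a listed UK range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in UK_RANGES)

def handle_visitor(ip: str) -> str:
    # A service choosing to geo-block simply refuses UK visitors up front.
    if is_uk_address(ip):
        return "403 Forbidden: this service is not available in the UK"
    return "200 OK"
```

The obvious weakness is that IP geolocation is approximate and trivially bypassed with a VPN, which is why such blocking may only ever be ‘sufficient’ in the sense of demonstrating good faith to the regulator.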

Where a service does not respond to OFCOM and continues to accept users from the UK then it could be prosecuted but any judgements may have little impact if the service and all of its staff are outside of the UK jurisdiction.

OFCOM may then move on to trying to disrupt the service by asking app stores to delist it for UK users, instructing UK payment services not to work with it, and ordering UK telecoms companies to block access at the network level.

An aggressive approach of prosecution and blocking is the intended strategy of the Bill towards services that are deemed to be causing significant harm to people in the UK, especially children, and which refuse to submit to OFCOM’s oversight and make improvements.

OFCOM should be able to rely on market forces to deter larger providers of general purpose services like social media from being uncooperative as they will have a commercial imperative to be able to access the UK market.

Investors and staff at reputable companies will also not want to lay themselves open to prosecution in the UK even if any penalties could not be enforced until they travel to a country where the UK authorities can reach them. 

Providers that completely ignore OFCOM are more likely to be smaller services based in wilder jurisdictions that do not see OFCOM sanctions as especially impactful on their businesses or personal lives.

If a large number of services defy OFCOM, and we might expect this especially to happen with pornography sites, there will be some interesting technical and legal questions about how to enforce blocking at scale.

[COMPLIANCE] What about the ‘can’t play’ group you referred to, how are they different and what can they do?

There are likely to be some services who do not object in principle to being regulated by OFCOM but are worried about whether they can meet all the requirements being placed on them.

In these cases, OFCOM is likely to encourage them to enter into a dialogue rather than simply leaving the UK market.

The Bill has a lot of provisions that are based on the idea of reasonable steps being taken which will vary according to the specific nature of each service.

There are also various mechanisms for services to challenge instructions given to them by OFCOM if they believe these to be unreasonable.

We might see this process of dialogue with service providers as a key area for determining the success or failure of the Bill.

Success comes if OFCOM is able to persuade reasonable service providers to adopt sensible measures that will be of benefit to their UK users – we will still get the service and it should be safer.

If we instead see reasonable service providers exiting the UK market because they believe OFCOM is asking them to do the impossible then the overall effect of the Bill may be seen as net negative by UK users.

[WIKIPEDIA etc] Does this ‘can’t play’ group include services like Wikipedia who I heard have a problem with the Bill?

The definitions for ‘user-to-user’ services in the Bill do mean that providers like Wikipedia and OpenStreetMap are in scope and there are no explicit exemptions for them.

The Bill kicks off a process of producing detailed rules for assessing services and assigning particular sets of regulatory responsibilities to them.

It is possible, but by no means certain, that this process will lead to Wikipedia and similar services being designated as not needing to do anything new to comply with the law.

This could be done through a set of rules that allows OFCOM to exempt services deemed to be low risk and/or having a defined set of characteristics.

If they do not do this then there could be real challenges for the continued operation of some valuable services in the UK given what we know about the requirements in the Bill and the operating principles of services like Wikipedia.

For example, it would be entirely inconsistent with Wikipedia’s privacy principles to start collecting additional data about the age of their users and yet this is what will be expected from regulated services more generally.

The smart solution is for a full exemption to be granted but there is plenty of scope here for things to go wrong as the UK government ties itself in knots trying to make special rules that end up not solving the problem.

I would predict that we will get through this one as the public interest in not over-regulating is so strong and blindingly obvious but there may be more pain and uncertainty along the way.

[ENCRYPTION]  There is a lot of talk about encrypted services leaving the UK, is this likely?

This is a possibility but it all depends on how OFCOM behaves.

One of the powers the Bill gives to OFCOM is the ability to order services to deploy specific technologies to detect terrorist and child sexual exploitation and abuse content.

This power can be used to order a service to deploy common image scanning tools that are already used by many platforms to identify and prevent the upload of the worst kinds of abusive material.

This kind of order would not cause a provider to leave the UK where they are willing and technically able to introduce a particular tool after being warned to do so by OFCOM.

But there may be cases where a provider believes that the technology it is being ordered to deploy would break essential functionality of its service and so would prefer to leave the UK rather than accept compliance with the order as a condition of remaining.

This scenario of a provider finding an order unacceptable is not confined to services offering end-to-end encrypted communications but these have become the most-used example.

There are organisations actively lobbying for more scanning of content on end-to-end encrypted services and proposing various solutions they believe are reasonable.

Some service providers are equally vocal in explaining why they believe the proposed solutions would create risks to the security and privacy of their users.

It seems likely that OFCOM will come under pressure to use its new powers to order encrypted services to do more to identify seriously harmful content.

What is not yet clear is whether OFCOM will go so far as to order services to deploy the kinds of technologies that some are advocating for and providers have said would be unacceptable.

If OFCOM does issue this kind of order then we should expect to see some encrypted services leave the UK market, potentially including very popular ones like WhatsApp and iMessage.

All of this will be played out in public as OFCOM will have to issue warning notices before proceeding to a final order and providers will be able to make their concerns public.

Providers will also be able to challenge OFCOM orders using legal mechanisms so the process from OFCOM wanting to make an order to a service provider actually leaving the UK could take many months.

There is a path to avoiding the loss of encrypted services if OFCOM works with service providers on ways to reduce the distribution of illegal content but does not order the use of some technical solutions.

At this stage we have to recognise this as a big unknown with the wider global context also a significant factor in how it will play out.

If the EU, US and UK were all to insist on messaging services deploying the same set of technologies then major service providers would have no option but to do this.

In a scenario where only the UK and/or EU are making demands while the US takes a different view then we may see US-based providers leaving the relevant markets.

(NB US-based providers like Meta, Apple etc cannot realistically leave their home market so whatever the US government demands provides a baseline.)

[UNDER 18s]  Will there be fewer services available for under 18s?

This effect is hard to quantify, but it seems likely that there will be a reduction in choice for under-18s as some services choose to be 18+ in the UK market.

The requirements placed on a service that is being offered to under-18s are significantly more onerous than for those only being offered to adults.

This is deliberate as a core driver for this Bill has been a wish to protect children online and this is reflected in a set of extra duties for services being accessed by UK-based children.

When services evaluate their new compliance obligations they will want to weigh up the costs of serving UK under-18s against the benefits they derive from having younger users on their platform.

If the UK under-18 user base is in any case small, but over the low threshold that the Bill sets for child safety duties, then the rational choice for a service may be to declare itself 18+ for UK users.

They would then need to demonstrate to OFCOM that they are taking steps to prevent UK under-18s from registering with or accessing their service.

It seems unlikely that major social networks like Instagram and TikTok would take such a step as they cater extensively to teenagers who provide a lot of the buzz around their services.

But we might imagine services like Twitter and Reddit deciding that they can live with a model of 18+ content for 18+ users and that this is less costly and risky than putting in place special provisions for UK under-18s.

[AGE ASSURANCE] So children will have to prove their ages, but adults will be exempt from all this stuff won’t they?

No!  All services will need to know if they are being used by UK children or adults and the only way to do this will be to do some kind of age check on all UK users.

Even where a service has decided it does not want UK under 18s at all, they will still need to prove to the regulator that they are only being accessed by UK users aged 18+.

In some cases they may be able to demonstrate they do not have child users through other evidence, for example an online community for retired people that is clearly only useful and interesting to this demographic.

But most services are more general purpose and so will need to collect data from their users that provides some assurance they are in fact 18+.

We will get a clearer understanding of what OFCOM will accept as evidence for a service being 18+ in some guidance that will be issued later.

We can reflect today on several scenarios that could play out for how this will be done for general purpose social media services.

The simplest solution would be for proof of age to be provided by the major services where most people in the UK already have accounts and store significant personal data. 

This is not an infrastructure that exists today, but you could imagine, on signing up to a new service, being asked whether it could ping one of your existing services to confirm you are 18+.

From a user perspective this could be quite unobtrusive and importantly would not involve providing any new personal data.
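As a sketch of how such a check could stay unobtrusive, the existing provider might hand the new service only a signed yes/no answer, never the underlying personal data. Everything below is hypothetical: the names, the shared-secret signing and the token format are stand-ins for a real scheme, which would more likely use standard protocols such as OpenID Connect with asymmetric signatures:

```python
import hashlib
import hmac

# Hypothetical shared secret between the attesting provider and the new
# service; a real deployment would use asymmetric signatures (e.g. signed
# JWTs issued via OpenID Connect) rather than a shared key.
SHARED_SECRET = b"demo-secret"

def attest_over_18(user_id: str, age_on_record: int) -> dict:
    """The existing provider answers only 'over 18: yes/no', never the age."""
    answer = f"{user_id}:{'yes' if age_on_record >= 18 else 'no'}"
    sig = hmac.new(SHARED_SECRET, answer.encode(), hashlib.sha256).hexdigest()
    return {"answer": answer, "signature": sig}

def verify_attestation(token: dict) -> bool:
    """The new service checks the signature and reads only the yes/no flag."""
    expected = hmac.new(SHARED_SECRET, token["answer"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["signature"]):
        return False  # tampered with, or not from a trusted provider
    return token["answer"].endswith(":yes")
```

The key property is that the new service learns a single bit (‘over 18: yes/no’) backed by a provider it trusts, rather than collecting a date of birth or identity document itself.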

We can expect some age assurance technology providers to advocate for their own solutions to be included in a limited set of approved options in OFCOM guidance and they have a direct commercial interest in such a status.

If the only approved options are ones that most UK internet users are not already using then the challenge will be to get everyone to sign up to them – both individuals and providers.

We can expect this space to be a major area of commercial contention as OFCOM works through the options.

I think it likely that comprehensive coverage and ease-of-use will win out and that large existing providers will offer age assurance as a service, either independently or in partnership with specialist age assurance services.

Absent such adoption by large service providers, we may see a repeat of previous efforts which failed to take off because they were too clunky, expensive and/or problematic for user privacy.

[TERMS OF SERVICE] Will providers need to update their terms of service?

Yes. You can expect most of the services you use to ask you to agree to some updated terms of service if you live in the UK.

There are a lot of detailed new requirements on service providers in the Bill that they will have to spell out in their terms or risk complaints and legal action.

These include making sure the rules on what content is allowed and prohibited are very clear, having appropriate references to privacy and freedom of expression, explaining appeal and redress procedures, and offering special treatment to politicians and journalists.

Many services have terms that already include these kinds of things but they will need to revisit them with new guidance from OFCOM in hand to ensure they are compliant.

It seems likely that the overall impact will be for services to need longer documents than they have today and that there will be special addenda to global terms for UK users only.

[SPEECH] How will the adult user experience be different?

Not much. At least for most users of the major social media and search services assuming that the proof of 18+ process has been made easy.

This is because most of the large providers already prohibit a much wider range of speech than the Bill will require them to and they already have a lot of the required measures in place.

They are likely to want to ask users to sign up to new terms and may need to show information to them at various points to comply with the regulation.

A hoped-for outcome of the Bill is that platforms will improve the processes and responses for when people do need to report harmful content.

And for some special types of users, notably journalists and politicians, there may be some new ways of handling their content that were not there before, or at least not consistently.

Where things get more spicy is with services who have terms that permit more edgy content reflecting their culture and values.

These services will have to implement more new features than the services that are less permissive as the Bill requires them to provide users with controls for some kinds of content if they permit these under their terms.

They may also find themselves pulled into more debates about whether content is merely offensive or also illegal which could, over time, lead to them taking a more restrictive approach.

[UNDER 18s] Will the experience for under 18s change more radically than that for adults?

Yes. If the Bill has one overriding goal, it is to reduce the exposure of under-18s in the UK to content and behaviour online that will harm them.

The rules for how you need to treat children are quite detailed in the Bill and will be followed by pages of guidance on what services are expected to do to demonstrate their compliance.

This will include measures to understand the age of child users that are likely to include profiling methods like image and behavioural analysis.

And once a user has been identified as a child within a particular age bracket then services will be expected to tailor the content and features available to them to a very significant extent.

Exposing children to the wrong kinds of content, as defined in the Bill, will be a cardinal sin in the eyes of OFCOM that will trigger regulatory interventions.

A robust compliance plan is likely to lead to material changes to what under-18s see on most online services from the status quo today – this is an intended and desired outcome of the Bill.

[SUMMARY] What are the best and worst case scenarios for how this will all play out?

In the best case, the world will look like this –

  • Categorisation rules will exclude services like Wikipedia and just cover ones that most people would agree need regulatory oversight.
  • OFCOM will work with services on harm reduction plans that do not compromise core features like encryption.
  • 18+ users will have their ages verified using existing data and will just experience an occasional confirmation screen.
  • Under-18s will continue to be welcome on a wide range of services and will have easy methods to demonstrate their ages.
  • Services will tailor themselves for under-18s in ways that improve safety without diminishing their core usefulness.
  • 18+ users will be able to see and share content on a wide range of services as they do today, including edgy speech where platform-appropriate.

In the worst case, the world will look like this –

  • Services like Wikipedia will restrict or remove their products from the UK as they are unable to agree to OFCOM’s regulatory requirements.
  • Major encrypted messaging platforms will choose to leave the UK rather than agree to deploy technologies that OFCOM has mandated.
  • All UK internet users will have to go through time-consuming and burdensome age verification processes to access a wide range of online services.
  • Under-18s will find that their choice of online services is extremely limited as many major providers decide it is too costly or risky to offer products to them.
  • Services for under-18s will be so restricted that users no longer find them useful and seek to bypass controls to access 18+ versions instead.
  • 18+ users will find that platforms which previously permitted more edgy speech now limit it in the UK for fear of the regulator.

And now for a summer break. The Bill returns to the Lords for its final stage on Wednesday September 6th.

11 Comments

  1. Mark

    Thank you for the part explaining how this is effectively the equivalent of licensing sites to operate in the UK – I don’t think people have really realised this. I live in wishful hope that this bill isn’t implemented; it’s so obvious it’s going to harm the way we use the Internet and not even have the desired outcome. Plus some people like me prefer forums to social media, but now the UK-based ones I go on could be shut down because of this.

    • James

      Only ‘categorised’ services will be licensed – mainly the biggest ones. Govt says no more than 40. No licensing anywhere else.

      • I used the word “licensing” to describe a world in which companies pay a fee, have to follow detailed guidance, and will be actively supervised by a regulator. This will definitely be the experience for the largest user-to-user and search providers (category 1 and 2A in the Bill). What is less clear is how the 25,000-odd other in-scope user-to-user services (category 2B in the Bill) will be treated. There might be little change for many of them but it is equally possible that significant numbers will have to pay fees and be supervised closely, albeit with fewer duties than the category 1 services. This is all up for debate in the secondary legislation.

  2. Alex

    I believe you are underestimating the worst case scenario.

    I foresee the creation of a UK splinternet (or Internet Brexit/the great UK firewall). Consider estimates of 100,000 user-to-user services, most of which are overseas and none of which has previously been regulated by OFCOM. The combination of government-sanctified age-assurance snake oil services (a mandated monopoly) and the onerous compliance burden of prior-restraint content filtering forces a significant portion to cease business in the UK.

    UK citizens will be motivated to bypass the new restrictions, either because of their encounters with unprecedented levels of internet censorship, regulation and filtering, or because there is a major data breach at an identity service provider (or a MITM attack, impersonation attack, or other flaw). This leads to a UK internet experience that is progressively worse, which itself leads to a progressively “darker” internet as users move to VPNs or otherwise anonymise their traffic.

    I also note there’s no mention of weaponised takedowns, as we’ve seen with the DMCA, or of how the UK will need to constantly regulate new services which pop up to replace the censored ones.

    OFCOM has been given an impossible task: asking the tech community to “nerd harder” to create multiple logically impossible features, from age verification that does not significantly increase the risk and cost of all services, to encryption-breaking that somehow only verifies that an ever-changing list of prohibited content is never transmitted.

    I look forward to this law being used as a cautionary tale in how to turn your country into an internet backwater.

      • Mark

      It’s for this very reason I hope this bill won’t be enacted. I have been told the UK has a history of saying it will make tech laws, or making tech laws and then just not implementing them after it becomes so clear they’re not going to work, and saying “oh well, we tried”. That’s what I was told and I hope that’s what will happen with this.

  3. Vainius

    Thank you – great commentary!

    Are there other examples (in or outside UK) where the regulator is given very broad powers by the law, but the hope/expectation is that they will be “reasonable” in how they will use it?

    It feels like a flawed model. I would very much prefer the laws to set clear limits to government power, rather than rely on “goodwill” of some agencies.

    • Lorna Woods

      OFCOM already has some very broad powers in relation to direct content regulation in the broadcasting regime. Section 319 Communications Act requires OFCOM to produce a code which is binding on broadcasters as regards acceptable content. The provision includes a requirement to maintain generally acceptable standards as regards content that is harmful or offensive (see here: https://www.legislation.gov.uk/ukpga/2003/21/section/319). Here is the Code: https://www.ofcom.org.uk/tv-radio-and-on-demand/broadcast-codes/broadcast-code. Ofcom’s approach seems to be quite painstaking in terms of trying to be clear as to what’s acceptable or not, and allows broadcasters some flexibility within the rules as to how they approach issues. In terms of enforcement, it seems to try to engage with broadcasters first, rather than going straight in with heavy fines (and it has clear procedures around how it will impose financial penalties). Its approach to its duties under s 319 has been challenged before the courts, for example in relation to its approach to harmful covid-related content, but the courts have found Ofcom’s processes to be lawful and compliant with the requirements of Article 10 ECHR (freedom of expression).
      You might say other regulators have wide margins of appreciation too – the Competition and Markets Authority for example.
      What is crucial is that they are and remain independent from political and industry direction. As regards OFCOM and the Online Safety Bill, this has been the subject of some debate in the Lords (and also in the Commons).

    • The EU is currently bringing in a similar model under the Digital Services Act that will have a range of regulators including the European Commission itself carrying out the supervisory role.

      I am personally comfortable with regulators able to make detailed rules as long as 1) they are truly independent, 2) there are all the right legal checks and balances in place so they can be challenged, and 3) this includes being required to respect human rights to privacy and freedom of expression. I think these are true for OFCOM today but we need to make sure UK government does not row back on judicial review and human rights commitments.

  4. The EU funded http://www.euCONSENT.eu to develop and then pilot an interoperable online age assurance solution which would deliver the outcome you otherwise expect “major services” (by which we assume you mean Google, Meta, Apple etc.) to deliver.

    This innovative network allowed a user to prove their age to social media platform A, and then re-use that same check with platforms B, C, D etc. It was based on the existing architecture for the European digital identity, eIDAS 1.0. So, for example, you might sign up to Facebook today, and be referred to “AGECHECKERINC” to do an age check using facial age analysis. Tomorrow, when you want to open a TikTok account, even if they use one of AGECHECKERINC’s competitors to do their age checks, that competitor will be alerted that you’ve already done an age check (if you allowed AGECHECKERINC to leave a cookie on your device) and instead of asking you to do a fresh age check, it will simply ask AGECHECKERINC to confirm you are old enough to use TikTok. You don’t have to share any personal data with TikTok or its age verification provider – they just get a “yes” or “no” answer from AGECHECKERINC.

    For children under 13, if a social media platform does allow them to open an account, they will need parental consent under GDPR rules. The same network allows a child to re-use the connection established to their parent or legal guardian the first time they needed consent, in much the same way as the age process described above.

    This approach does not give major services a monopoly on securing age-assured audiences for social media platforms, advertisers and online retailers, which would otherwise be a huge increase in their market powers.

    As it only trades “Pass” or “Fail” messages, it is also privacy-preserving. Users have a wide range of choice about which method of age assurance they prefer, and can also select a provider they trust, or which has been selected for them after due diligence by the first social media platform they wish to join.

    We expect all this means the more optimistic scenario you describe can easily be the reality.
