
Online Safety Bill – on Age Verification – 8th April 2022

Online Safety Bill 2022, Clause 11(3)

It is common for people to propose that online services should be required to verify their users’ ages in the interests of improving safety for children.

Other people argue that online services should collect the minimum possible amount of information and they have concerns about the implications of introducing systems to verify ages and/or identities.

This post looks at how age verification is handled in the published version of the UK Online Safety Bill 2022.

Will the Online Safety Bill force online services to verify the ages of their users?

The Bill does not say “thou shalt implement age verification” but (and this is a very big but)… it will be very hard for regulated services to comply with all the duties the Bill places on them without knowing the age of their users.

The Bill requires services to carry out child risk assessments and put in place special protections for children and it says they may do this “for example, by using age verification, or another means of age assurance”.

NB in one place only the word ‘assurance’ is used and in another only ‘verification’ but the most common formula uses both words – we will look at the difference between them in a moment.

The phrase “for example” does a lot of heavy lifting here, side-stepping an explicit mandate for platforms to use specific tools.

Age assurance techniques are referred to as the obvious way to fulfil each duty but the wording leaves open the possibility that services might find other ways to meet their obligations to keep children safe.

So, we might describe the current wording as a “very muscular steer” for regulated services to introduce age assurance, or face being penalised for failing to meet a range of duties, while just falling short of saying they absolutely must do this.

NB we can anticipate there will be amendments proposed during the Parliamentary debate on the Bill to remove the words ‘for example’ which would have the effect of making age assurance mandatory, and this will no doubt spark a lively discussion.

The Bill refers to both age verification and age assurance – what is the difference between them?

There are lots of different ways an online service might try to understand how old a user is and the term ‘age assurance’ covers all of these methods.

Some services use self-declaration where people are asked to enter their date of birth and this data is then used to enable or disable age-specific features.

This has the advantage of simplicity, and is a step up from having no information at all about user ages, but the level of assurance can be low as it depends on people’s behaviour and whether or not they choose to provide accurate information.

‘Age verification’ is a type of age assurance at the other end of the confidence spectrum where the assurance is provided by using a 3rd party source for proof of an individual’s age rather than just trusting the user’s own claimed age.

Common forms of 3rd party proof of age are credit cards, government issued documents, and other official sources of data such as educational institution records.

There is also a market for intermediaries who carry out the primary verification of an individual’s records and then offer to act as the 3rd party age verifier for multiple online services (usually for a fee).

In between the softer and harder ends of the assurance spectrum we find methods like ‘age estimation’ where personal data related to a user is analysed to make an educated guess about their age.

These estimates may not seek to assign an exact age to people but rather place them into age ranges, for example that they are likely to be under 13, 13-17, 18+ etc.

Age estimation may be used as an alternative to requiring people to self-declare their age or as a complement to this as it helps a service to identify those people who appear to have falsified their date of birth.
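
To make the spectrum concrete, here is a minimal sketch of how a service might turn a self-declared date of birth into the kind of age bands described above. The band labels and thresholds are illustrative assumptions for this post, not anything the Bill specifies.

```python
from datetime import date
from typing import Optional

# Illustrative age bands of the kind discussed above; the Bill does not
# prescribe these labels or thresholds.
def age_band(age_years: int) -> str:
    if age_years < 13:
        return "under-13"
    if age_years < 18:
        return "13-17"
    return "18+"

def age_from_self_declared_dob(dob: date, today: Optional[date] = None) -> int:
    """Whole years of age from a self-declared date of birth.

    Self-declaration offers low assurance: the answer is only as
    reliable as the date the user chose to type in.
    """
    today = today or date.today()
    years = today.year - dob.year
    # Knock a year off if the birthday has not yet happened this year.
    if (today.month, today.day) < (dob.month, dob.day):
        years -= 1
    return years

# A user declaring 1 June 2008 would, in April 2022, land in the 13-17
# band and see whatever age-gated features the service ties to that band.
print(age_band(age_from_self_declared_dob(date(2008, 6, 1), date(2022, 4, 8))))  # 13-17
```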

Why is age assurance/verification seen as a useful tool for online safety?

There are three main uses for age verification in the context of online services.

The first is to prevent under-age users from accessing services that a society has determined are not appropriate for people to use until they have reached a certain age.

The second is to prevent specific groups of younger users from accessing some types of inappropriate content within a service where this content is acceptable for other users of that platform (adults and/or older children).

The third is to be able to offer special features to children that will help them to use a service safely such as enhanced user support or policies in simple child-friendly language.

Are there examples in the Bill of using age assurance to exclude users?

The key requirement the Bill places on pornographic services fits squarely within the first type of use as they are steered towards using age verification to exclude under 18s from any access to their services (Clause 68(2)).

The language for pornographic services still includes the ‘for example’ phrase but notably leaves out other forms of age assurance so the intent is for these services to use formal verification rather than any ‘weaker’ ways to inform themselves about user ages.

Search and user-to-user services are not required to exclude under 18s but there is an interesting twist to the legislation that might encourage some services to do this as a way (legitimately) to avoid some regulatory burdens. 

The Bill says that if a service can demonstrate that it is not accessible by children then it is, quite logically, not required to carry out child risk assessments or meet other child-specific obligations. 

The Bill explains (Clause 31(2)) that the ‘only’ way that a service provider can conclude it is not accessible by children is for it to have in place ways to exclude under 18s and again refers to age verification as the example of such an exclusionary mechanism.

It is unlikely that many of the mainstream user-to-user services and search engines will want to exclude all under 18s but services that are only interested in having 18+ users may consider introducing age verification to simplify their regulatory compliance.

These 18+ user-to-user and search services would in effect adopt similar gatekeeping processes to pornographic services and would still have a range of general duties under the Bill but avoid the child-specific ones.

And what about the other uses for age assurance – content restrictions and special features?

Age-related content restrictions and special measures to protect children are at the heart of the new duties that user-to-user services and search engines will have to meet if they are accessible to children (Clauses 11 and 26).

The obligations are described in general terms on the face of the Bill but the detailed requirements will be service-specific as they depend on children’s risk assessments that each service must carry out looking into how their own specific service might pose a risk to under 18s.

The creation of these risk assessments will itself push services to use some form of age assurance as they are required to take into account “the user base, including the number of users who are children in different age groups”.
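
As a rough illustration of the breakdown that requirement implies, a service could tally its users (however it has assured their ages) into bands. The ages and band labels below are invented for the example.

```python
from collections import Counter

# Ages as self-declared, estimated or verified - whichever assurance
# method the service has chosen; these values are made up for the example.
user_ages = [9, 12, 14, 16, 17, 19, 25, 34, 41]

def age_band(age: int) -> str:
    # Illustrative bands; the Bill does not prescribe them.
    if age < 13:
        return "under-13"
    if age < 18:
        return "13-17"
    return "18+"

# "the number of users who are children in different age groups"
band_counts = Counter(age_band(a) for a in user_ages)
print(band_counts)  # Counter({'18+': 4, '13-17': 3, 'under-13': 2})
```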

If age assurance can make the internet safer for children, why would we not just welcome it and move on?

The concerns around age assurance largely relate to privacy and identity rather than proof of age per se, and these concerns vary according to the type of age assurance systems that are being put in place.

The greatest concern comes with systems where every user of a service has to provide sensitive personal information for age verification purposes.

This exposes people to the risk that this sensitive personal information may be mishandled or abused by the service provider or otherwise be used against their interests.

An extreme example of this would be if each user had to provide a copy of an official document like a passport whenever they wish to sign up to any of the thousands of services that will be regulated under this Bill.

This would create a huge privacy risk and would also be likely to have a significant impact on the market for online services as people would be hesitant to sign up to new services unless they appeared highly trustworthy.

There are different privacy concerns around age estimation techniques that work on the ‘back end’ as this can be a very intrusive form of data processing that users feel is being done against rather than for their personal interests.

While we may as a society decide that this data processing is necessary for safety reasons it should be handled with care if we are to be consistent with our general approach to data protection rights.

A response to these privacy concerns has been the growth of a market for intermediaries who offer to reduce the circulation and processing of sensitive information by acting as a verification agent for multiple services (these can be commercial entities, not-for-profits or government agencies).

The use of intermediaries does not alleviate all the privacy concerns as people still need to provide sensitive personal information to them for the initial age verification.

Intermediaries may also create new privacy risks if they are able to build up an aggregate picture of people’s use of different services by holding records of where they have been asked to provide an age-verified token.
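
A sketch of the data-minimising ‘age-verified token’ idea makes the trade-off concrete: the relying service learns only a single yes/no claim, while the intermediary necessarily sees who it verified and, depending on the design, which services ask it for tokens. The field names and the HMAC-based signing below are assumptions for illustration, not a scheme described in the Bill.

```python
import hashlib
import hmac
import json

# Hypothetical secret held by the age-verification intermediary; a real
# deployment would more likely publish a verification key (asymmetric crypto).
PROVIDER_KEY = b"demo-only-secret"

def issue_age_token(over_18: bool) -> dict:
    """Intermediary issues a minimal age assertion.

    Note what is absent: no name, no date of birth, no document details -
    only the single claim the relying service needs.
    """
    claim = {"over_18": over_18}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": signature}

def relying_service_accepts(token: dict) -> bool:
    """Service checks the signature and reads the one claim it cares about."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["signature"]) and token["claim"]["over_18"]

token = issue_age_token(over_18=True)
print(relying_service_accepts(token))  # True - and the service learns nothing else
```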

The debate about age verification is closely linked to the debate about anonymity/identity on the internet as it necessarily involves the collection of identity data in the verification process and I will return to this in another post.

Who will decide if any age verification provider is doing its job properly?

Ofcom looks likely to become the de facto indirect regulator of age verification services in the UK as it will produce guidance for pornographic services on how they can meet their age-gating obligations.

Ofcom will produce this guidance in consultation with the UK’s data protection authority and relevant experts, setting the standards that pornographic services, and any third party age verification services they use, will have to meet.

So what do you expect the world to look like in 2023 when this has come into force?

The 25,100 online services that the Government has estimated are covered by the regulation will need to have good information about their users’ ages when the Bill comes into force.

Pornography services seem likely to move to using credit card based age verification for UK users en masse, either directly asking users for their card details or getting some kind of ‘adult verified’ token from cross-platform verification services (which they may themselves own).

Some pornography services have expressed concerns about there being a ‘level playing field’ for access to UK users so we might expect them to push Ofcom hard on blocking any competitors who do not start to verify age.

Many user-to-user and search services will already be collecting age by asking people to enter a date-of-birth and these services will need to discuss with Ofcom whether their current model is sufficient or if they need to add additional age estimation or verification tools.

This process of dialogue between the regulator and services using different techniques will feed into the development of UK age assurance standards that will be reflected in guidance and ‘case law’ as Ofcom rules on what specific services are doing.

Some user-to-user and search services may be attracted by the idea of becoming 18+ only in the UK as this will mean they do not have to configure their services to restrict content for some users on an ongoing basis.

We might imagine services like Twitter and Reddit at least exploring the 18+ option as well as many smaller services that have a strong free speech bias.

Where user-to-user and search services implement age verification, either as a choice or because Ofcom insists they do, and they want to allow continued access for under 18s they will not be able to use credit card verification as this only works well for 18+.

These services will look at the pros and cons of asking for additional ID directly when users sign up versus using third party age verification services with key factors being the cost of each method and how much friction it introduces into the sign-up flow.

We should expect to experience significant variation in what we are asked to do as services test different options for capturing user age and see which methods work best for them (and are permitted by the regulator).

Over time, this might settle into a standard pattern where most services accept age-verified tokens from a small number of ‘approved’ providers if this is a cost-effective and reliable way for them to meet the regulatory standard.

Overall we should anticipate more friction in terms of being asked to provide data as part of sign-up flows and/or being asked to sign up to third party verification services to get age-verified tokens.

And we may see some services opting out of the UK market altogether as they see compliance with the Bill generally and the child safety provisions specifically as too complex and risky for their businesses.

Could all this be handled at network level rather than by each individual online service?

It is a little-remarked-upon feature of the internet in the UK that the mobile networks generally restrict access to 18+ services unless and until a subscriber has asked for full access and passed a credit card check.

It is also common for publicly accessible WiFi services to have filters in place to block access to 18+ services (as best they can).

Home broadband connections in the UK typically have filtering available at network and/or router level which is configurable by the account owner.

As online services look to verify ages for the purposes of the Online Safety Bill, some of these network level controls may prove useful for individual services needing proof that someone is 18+.

For example, if you have passed a credit card check to open 18+ access on your phone then this might offer a method also to prove you are 18+ to the services you access on that device.

Will all this work?

Well, that really depends on the criteria we set for success.

If we expect the Bill to make sure no child user ever accesses services and content that are unsuitable for their age group then all our experience tells us this is not going to happen.

But we certainly can expect some kind of deterrent effect that will keep a significant number of children away from inappropriate content.

The key question that the regulator will have to wrestle with is where they want to strike the balance between friction and privacy risks for all UK internet users and the strength of that deterrent effect for child users.

For maximum deterrence, you could require services to demand several different proofs of identity and back this up with highly intrusive scanning technology to weed out cheats as well as banning VPNs and other circumvention technologies.

Going this far in the name of safety would likely create a backlash as people feel these are excessive restrictions in a free society and would not pass muster in a country that is also committed to freedom of expression.

It seems unlikely, though, that the status quo will be deemed sufficient (or why legislate at all?), so we can expect the regulator to ratchet up the friction to a point that it feels is aligned with public and political expectations.

We can hope to tease out some sense of where this point might be during the debate in Parliament but the structure of the legislation is such that we will only know where this will land once it is in force and being enforced.

3 Comments

  1. This is a very good summary of the issues.

    As the trade association for Age Verification providers, may we add a couple of points?

    1 – our members adopt a “double-blind” approach, so the AV provider does not disclose the identity of the user to the age-restricted website being accessed, and no record is kept of which sites each user is checked by.

    2 – regulation is essential, and we would expect the ICO to take the lead on ensuring privacy-by-design and data minimisation are rigorously applied by all AV providers. We require this in our own code-of-conduct, but that is really only an articulation of the legal provisions in UK GDPR.

    3 – Age estimation techniques can be highly privacy-preserving. Firstly, some operate on the user’s own device, with no data leaving it in the first place. Secondly, the data required to perform, for example, estimation based on a facial image cannot be reverse engineered into the actual image of the user. The algorithms work by mapping the actual image and turning it into numbers representing patterns they identify on the face; those numbers cannot be used either to create an image or for facial recognition to spot the same person in another image.

    4 – you can use age estimation techniques to give a very high level of assurance, if you set the age at which you test sufficiently far above the age-restriction. So for adult content, for example, you might use AI to test for 23, and be extremely confident that anyone who passes that test is definitely not in fact under 18. There could be a 0.1% chance the AI gets this wrong but that is an enormously high success rate compared to any real-world form of age check (audits of supermarkets find 85% effectiveness, for example). Obviously, those estimated under 23 will then need to select an alternative method for proving their age, but the majority of adults can just rely on estimation, and we expect this to satisfy regulators.

    Thanks for focusing on this issue!

    • J

      Please. I replied to your email a month or so back explaining to you just how what you just said was BS and how to better protect kids and human rights online without it and all I got back from you was silence.

  2. Russ

    An excellent article.

    Good to see such an endorsement of age estimation techniques, but the aspect that concerns me about the whole age verification spectrum is what, after a user has been age-verified (let’s assume by a reputable/accredited provider), the user is supposed to show/display to regulated sites. What exactly will be the form of the ‘licence’? And how will it be mutually recognised? The draft Bill seems to be completely silent on this important matter.
