Online Safety Bill, an opt-in model for identity verification – 19th April 2022

Online Safety Bill 2022, Clause 57 – identity verification

We have heard calls from various quarters for people to have to verify their identities before being able to post content on social media.

In this post I will look at how this debate is reflected in the published text of the UK Online Safety Bill.

First, to set some context, who has been calling for identity verification and what harms are they trying to prevent?

UK politicians like Diane Abbott MP who have experienced extensive and vile online abuse have argued that ending anonymity is necessary to catch those responsible for this harm.

The main organisations representing the sport of football in the UK have written to Twitter and Facebook calling for an ‘improved verification process’ for all users as a way to combat racist abuse against players and officials.

The US social psychologist Jonathan Haidt made a similar call for identity verification in a recent article, with the aim of improving the public political sphere.

A common thread is the notion that online anonymity is problematic, but there are some important differences in what they are trying to achieve that we can usefully explore as we consider the UK proposals.

So, will the Online Safety Bill meet these demands and require people to verify their identities when using online services?

No. Not as such, but some regulated services will be required to have identity-related features and the regulator will be setting some standards for identity verification.

To unpack this, the UK Government is not proposing that all people who use regulated services have to verify their identity as a condition of signing up, ie there will be no mandate for universal identity verification.

But they are introducing a new legal duty just for larger user-to-user services to offer all of their adult users a way to verify their accounts on an opt-in basis (clause 57).

NB the Bill imposes some duties on all ‘user-to-user’ (social media, video sharing, messaging etc) services with additional duties just for larger platforms (the exact criteria for sorting the smaller Category 2B sheep from the larger Category 1 goats are still to be determined).

The Bill does not set out in detail what it means to ‘verify’ an account but we can expect the regulator, Ofcom, to issue guidance later which will go into what constitutes an acceptable form of verification (clause 58).

The primary use of this verification is for the purposes of ‘user empowerment’ as these large platforms will also be required to offer their users ways to control interactions with and/or see content from unverified accounts (clause 14(7)).
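
As a concrete illustration, here is a minimal sketch of what those clause 14(7) controls might look like inside a platform. The two-flag design and all the names are my own assumptions, not anything specified in the Bill.

```python
from dataclasses import dataclass

# All names here are assumptions -- the Bill sets duties, not designs.

@dataclass
class Account:
    handle: str
    verified: bool  # has completed the platform's opt-in verification

@dataclass
class EmpowermentSettings:
    # The two user controls suggested by clause 14(7):
    block_interactions_from_unverified: bool = False
    hide_content_from_unverified: bool = False

def can_interact(sender: Account, settings: EmpowermentSettings) -> bool:
    """May this account reply to, message or tag the user?"""
    return sender.verified or not settings.block_interactions_from_unverified

def visible_in_feed(author: Account, settings: EmpowermentSettings) -> bool:
    """Should this account's posts appear in the user's feed?"""
    return author.verified or not settings.hide_content_from_unverified
```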

In short, the Bill orders large platforms to create an opt-in system for adults only that is less about using identity verification to catch the ‘bad guys’ and more about giving people the tools to prove they are ‘good guys’ and filter content so they only see stuff from other ‘good guys’.

Does this mean everyone in the UK will get the right to have a ‘blue tick’ on services like Twitter and Instagram?

Many social media platforms already offer a way for some people to verify their accounts and flag these verified accounts with a symbol – a common way to talk about this status is ‘blue tick’ as both Twitter and Instagram use this flag.

These verification processes were created in response to the problem of people creating fake accounts of notable people and organisations for deceptive purposes.

The people being impersonated naturally complained to the platforms, and ordinary users were also upset when they found that they were not connecting with the real thing but had been deceived.

It is possible that platforms will choose to use their existing blue tick systems for regular users as a way of complying with the UK legislation but they may conclude that this is not the right tool and run parallel systems.

They would continue to operate a blue tick system everywhere for notable people only, including for UK users who meet the criteria, while offering a different verification method to regular people just in the UK.

The regular user system might use a simpler method of verification than the blue tick processes, which tend to require significant documentation to prove both identity and notability (and may also have criteria for activity levels and good behaviour).

It might involve a different visible flag, showing that the UK user has been through a UK-specific process rather than blue tick verification.

Or it may not involve a visible flag at all if a platform believes it can meet its obligations by offering users the ability to toggle on and off the features related to non-verified users and doing all the work on the back end.

Can we come back to mandatory universal identity verification and dig into this some more?

We first need to recognise two different aspects to verifying identity on social media services – the private (does the platform know who you are) and the public (do other users know who you are).

Looking at platforms today we see a mix of models in play –

  1. Unverified Chosen Identity – you can choose the name under which you present yourself and the platform does not seek to verify your identity, eg a typical Twitter user.
  2. Unverified Real Identity – the platform requires you to use your real identity as a condition of access to the service but does not seek to verify this (unless it suspects you are lying), eg a typical Facebook user.
  3. Verified Chosen Identity – the platform allows you to present yourself as you wish but does require some information linking you to a real world identity, eg a small scale advertiser on many platforms.
  4. Verified Real Identity – the platform is careful to verify that you are who you present yourself as and requests information about your bona fides, eg a large corporate entity, political figure or celebrity on many platforms.
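
To make the two axes explicit, here is a minimal sketch of that taxonomy in code; the enum and helper names are mine, purely for illustration.

```python
from enum import Enum

class IdentityModel(Enum):
    UNVERIFIED_CHOSEN = 1  # eg a typical Twitter user
    UNVERIFIED_REAL = 2    # eg a typical Facebook user
    VERIFIED_CHOSEN = 3    # eg a small scale advertiser
    VERIFIED_REAL = 4      # eg a corporate entity, political figure or celebrity

def platform_knows_identity(model: IdentityModel) -> bool:
    """The private axis: has the platform linked the account to a real identity?"""
    return model in (IdentityModel.VERIFIED_CHOSEN, IdentityModel.VERIFIED_REAL)

def presented_as_real_identity(model: IdentityModel) -> bool:
    """The public axis: do other users see a real identity?"""
    return model in (IdentityModel.UNVERIFIED_REAL, IdentityModel.VERIFIED_REAL)
```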

Supporters of mandatory identity verification see this as having beneficial effects through increased Punishment and Deterrence – these are related but not identical.

When it comes to speech that has already been defined as criminal in the UK there is a frustration when people seem to be able to say these things with impunity online.

In many cases it is already quite possible for law enforcement agencies to find out the real identity of the person behind a social media account – where this does not happen, it may be more a matter of resources, as tracing can take several steps, than of technical impossibility.

But it seems reasonable to suppose that it would make law enforcement’s job easier for some cases of criminal speech and so more people might get punished if platforms collected more verified identity information.

If it is widely publicised that more people are being identified and punished then the assumption of supporters of a change is that this will act as a deterrent to other people who might otherwise have posted criminal speech.

These effects on criminal speech are assumed to be realisable whether or not individuals have to present themselves to other users under their real identities, as data collected privately by the platforms will still help prosecutors.

In this world, I might create an account called ‘CrazyBadBoy’ and not disclose my real identity to other users of the service, but the fact that the platform knows I am actually Richard Allan and has other identifiers for me would be expected to act as enough of a deterrent.

Those wanting to change the behaviour of speakers through law enforcement are then drawn to models 3 and 4 where identifiers are being collected and they may be less concerned about whether people use real names or pseudonyms.

There is also an argument for model 2 having a moderating effect on speakers if you assume that people are actually using their real identities, even if not verified, and that they will be more careful because of this.

Facebook has long argued for this moderating effect in support of its policy to require users to present themselves under their real names when challenged by those who want to be able to use pseudonyms on the platform.

It sounds like quite a few people have called for this verification to be mandatory so why is it not in the Bill?

There are political and principled reasons (sometimes but not always the same thing) why, weighing costs against benefits, mandatory universal identity verification on user-to-user services is not the best option.

What are the politics in play here?

On the political front, the current UK government has quite a sizeable libertarian faction who are more broadly hostile to mandatory identity requirements, for example opposing the introduction of compulsory identity cards in the UK.

While many members of the UK Parliament would support mandatory universal identity verification, as evidenced by MPs like Diane Abbott who have called for this, there would also be resistance on the basis that it is ‘just not British’ to ask people to produce identity documents unless and until they are suspected of having done something illegal.

If the Government were to pursue the mandatory universal path then this would mean tens of millions of people in the UK receiving messages from several popular platforms ordering them to verify their identities ‘by order of your Government’ or lose access to the service.

However sympathetic they are initially to the idea of improving online safety and preventing harassment of others, for many people this would be an interference too far and they would become hostile to the new regime.

As is common in politics, the current governing party in the UK may well have tested this proposition with voters and seen that it is a political step too far.

It will be interesting to see whether other parties, or indeed backbenchers in the governing party, try to amend the Bill in favour of universal mandatory verification as they assess there to be support from their voters for this.

The UK’s international reputation could also be a factor as universal identity verification for online access has, at least to date, often been associated with countries that the UK would wish to criticise and set itself apart from on human rights grounds.

And the principled reasons not to do this?

There are both privacy and freedom of expression grounds for opposing universal identity verification. 

The privacy arguments are quite obvious and material: a mandate would suddenly require millions of people in the UK to upload proof of their identity to all of the large social media services they use.

With age verification, there are some mitigating measures that could be put in place to minimise the sharing of sensitive identity information but these do not apply when the whole purpose of the exercise is for platforms to hold a record of someone’s actual identity.

The measures now in the Bill allow people to choose whether they wish to verify their identity using whatever (Ofcom and ICO-approved) process a platform has put in place so importantly this is being set up as something they will be freely consenting to do.

Had the Government chosen the mandatory route then this would have opened up a lot more questions about compatibility with data protection legislation and whether individuals can be forced to disclose specific information as a condition of access to a service.

We are seeing a lot of pushback against online services for collecting excessive personal data as part of their business activities and it is not a stretch to see the inconsistency with passing laws, however well-motivated, requiring them to collect more data.

The freedom of expression arguments are more indirect but potentially represent even more of a material change than the privacy concerns given the amount of data already held by online services.

An express purpose of the identity verification process would be to create a climate in which more people are deterred from uttering harmful speech, ie success is defined as an increase in self-censorship as people hold back for fear of being identified and punished.

This may seem like a reasonable goal as long as we are talking about the worst kinds of illegal speech where there is a broad societal consensus that nobody should be saying these things.

But, with some exceptions, there is typically not a bright line between what is actually illegal and other forms of ‘bad’ speech (I will post later on illegal and harmful speech definitions).

It seems quite possible that the deterrent effect of identity verification would extend far beyond the stated target of preventing illegal speech and mean that people hold back from expressing a wide range of legitimate opinions for fear of the potential consequences.

There may be some support for such a chilling effect from those who champion a ‘healthier discourse’ in the online environment but it would be hard for a UK Government to sustain a policy if it were shown to have such a stifling effect on legal speech.

Can you make a prediction about what is likely to happen if the Bill passes as currently worded and how we will see things change?

This is always the fun bit of these posts on the Online Safety Bill where I have to get out my crystal ball.

As with much else in this legislation, we will need to see the later guidance to understand the effects in detail but here are some examples of where we may see challenges and disputes.

First, we will need to see what Ofcom defines as acceptable forms of verification and the extent to which the guidance mandates specific technical solutions vs describing general approaches.

For example, services like Facebook or LinkedIn where people generally have a high proportion of connections who know them in real life may want to use social verification to meet their new regulatory obligations.

The language in the Bill suggests that social verification should be considered sufficient as it says the ‘verification process may be of any kind’ and goes out of its way to point out that it ‘need not require documentation’.

So an easy route for platforms and users might be for someone to ask a number of contacts to confirm their identity; their account could then be flagged as ‘verified’ and their posts treated accordingly.

But we will need to see the guidance to understand the detailed criteria around how many connections are needed of what types to achieve the right level of assurance for social verification.
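
Pending that guidance, a social verification check could be as simple as the following sketch; the threshold and the requirement that attestors themselves be verified are placeholders for whatever Ofcom decides.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    handle: str
    verified: bool

# Placeholder values -- the real criteria would come from Ofcom guidance.
MIN_ATTESTATIONS = 5
ATTESTORS_MUST_BE_VERIFIED = True

def socially_verified(attestors: list[Contact]) -> bool:
    """Contact-based verification of the kind clause 58 appears to permit:
    enough people who know the user vouch for their identity."""
    if ATTESTORS_MUST_BE_VERIFIED:
        attestors = [a for a in attestors if a.verified]
    return len(attestors) >= MIN_ATTESTATIONS
```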

The platforms have been working on their verification mechanisms for years for use on subsets of their user accounts and there is a world in which the regulator largely leaves the details to their expertise.

This will still mean platforms having to build larger scale systems than they have at present so that millions of UK users can use them, but this would be less of a burden if they have discretion to use more lightweight and cost-effective methods.

Second, we need to understand what the requirement to let users restrict what they see to verified accounts only means in practice.

This will be novel for many platforms and could require significant re-engineering of their services to use verified status as a primary factor in decisions about content restrictions and distribution.

We will need to see whether there is actual demand from users for this ‘verified only’ view of the world, as opposed to it being something that legislators think they should want.

If it turns out to be a heavy lift from an engineering point of view and is used only by a very small minority of people then it risks becoming a Cinderella feature that may be maintained for regulatory compliance purposes but without imagination or real commitment.

Third, we need to consider the international dimension of how these measures will apply.

This is a more general consideration for the Bill as platforms have to choose whether to implement regulator-mandated features in only the country where this is legally required or to apply them more broadly.

This in turn will depend on how reasonable a platform sees the new requirements – if they largely align with what they do already then it may be easiest for platforms to have a single global standard. 

But if the UK requirements are misaligned with the platform’s global approach then we may see a UK-specific implementation which will open up another set of questions.

For example, if a platform believes that ‘blue tick’ verification of notable accounts is working well for users globally and would be less useful if diluted by the inclusion of verified regular users, then this will steer them to build a two-tier system.

They will then need to consider whether the new regular user verification should be offered just to people in the UK or globally and this will depend on how onerous it is and feedback on whether it is actually useful (and used).

Similarly, when building systems to restrict content from unverified accounts platforms will need to decide on the treatment of content from outside the UK where different verification systems may be in use.

If a platform decides not to offer regular user verification in other countries then this could have a significant differential impact on how people in those countries engage with UK users.

Where a UK user switches to a ‘verified only’ view of the world, this will include content from verified regular UK users but would mean only content from notable non-UK accounts is shown, as regular non-UK users would have no way to opt in to verification.
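
To illustrate that UK-only scenario, here is a minimal sketch of the visibility rule such a ‘verified only’ view would effectively implement; the tier names and country check are my own assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    UNVERIFIED = 0
    UK_OPT_IN = 1  # the new clause 57 route, offered only in the UK here
    BLUE_TICK = 2  # notability-based verification, offered globally

@dataclass
class Author:
    country: str
    tier: Tier

def visible_in_verified_only_view(author: Author) -> bool:
    """What a UK user's 'verified only' feed would show if the opt-in
    route exists only in the UK."""
    if author.tier is Tier.BLUE_TICK:
        return True                    # notable accounts from anywhere
    if author.tier is Tier.UK_OPT_IN:
        return author.country == "UK"  # regular verified users, UK only
    return False                       # regular non-UK users cannot opt in
```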

And if a number of countries adopt different standards for verification and rules for how platforms need to treat verified users then we could quickly end up with a mess of conflicting obligations and significant variability in content distribution.

These cross-border questions are always challenging for global platforms and are often under-estimated when national laws are being debated.

So how would you summarise all this?

The UK Government has, I think reasonably, concluded that universal identity verification is not a necessary and proportionate measure to impose on social media platforms.  

The Government does though see benefits in making a distinction between verified and unverified accounts and wants to encourage platforms to recognise this distinction on an opt-in basis.

Two new duties will be imposed on large platforms that will steer them to make more use of identity verification – 1) that verification should be offered to all UK adult users, and 2) that content from verified and unverified accounts should be treated differently.

This creates the conditions for an experiment that may help us understand the effects of increased identity verification, assuming people in the UK do actually use these new features.

After a period of experimentation we may see the Government happy with the opt-in model, or we may see renewed pressure to move to a mandatory model for large services and/or for the opt-in model to be extended to other services.
