There is a lot of critical commentary about the fact that the UK Online Safety Bill includes regulation of content that is ‘legal but harmful’.
A common refrain is ‘well if the content is really harmful, it should be made illegal, but if it is not harmful enough to be made illegal, it should be left out of the regulation.’
In this post, I will look at why the UK Government wants their new law to cover this class of content and explore how this might work in practice.
The first thing that people often ask me is “well, what is this content that is harmful but legal?”
As we look at all the provisions in the Online Safety Bill it is helpful to shift from generalised concepts and look at real world examples of content that ‘user-to-user’ services have to deal with.
A good example for ‘legal but harmful’ content is the so-called ‘Tide pod challenge’ where memes circulated on the internet that caused harm to teenagers by encouraging them to consume toxic laundry detergents.
This particular phenomenon was most high profile in the US but variations of this genre of meme pop up in all countries including the UK episodically.
Make it Illegal?
Some people may feel that the correct response is to try and make these forms of speech illegal once a link to harm has been established, but there are practical and principled reasons to reject this approach.
From a practical point of view, there would be a need to find legislative language that is specific enough to define the intended offence precisely while not being so specific that the law is redundant as soon as the meme evolves.
You can imagine a number of potential variants if you were to try and make Tide pod challenge speech illegal –
- a specific law banning content promoting the ingestion of laundry detergent, or
- a broader one prohibiting the encouragement of consumption of all toxic substances, or
- an even more generalised restriction aimed at capturing all forms of harmful pranks such as choking or jumping not just those involving eating things.
There would be challenges with all of these approaches in avoiding definitions that prohibit speech that, however distasteful, should be legal in a free and democratic society (viz Jackass and a fair chunk of YouTube content).
But even if the practical issues of coming up with a human rights compatible text could be solved and a law crafted that renders these forms of speech illegal, there are other questions about whether this would be the right approach.
We would then have to ask as a matter of principle whether we believe that prosecuting young people for sharing this kind of content is what we want as a society given the potential consequences for them.
As well as any immediate punishment, a criminal record would have a range of effects on a person’s ability to work and travel and these may feel like very excessive consequences for an impulsive act of teenage daftness.
Once we have worked through these implications, I think a majority of UK politicians (including myself) would arrive at the conclusion that we should not make this kind of content illegal, even if it has been shown to be harmful.
So we are going to have to live with a range of speech that sits in this ‘manifestly harmful but not appropriate for criminalisation’ category over the long term and take a position on how it should be treated.
There are I think three coherent positions we might take that I can illustrate with some literary quotes to help characterise them – Duffers, Caesar and Ui.
“BETTER DROWNED THAN DUFFERS IF NOT DUFFERS WON’T DROWN” – Reply from Daddy, Swallows and Amazons
At the start of the book ‘Swallows and Amazons’ by Arthur Ransome, a group of young children ask if they can go sailing on their own.
Their mother suggests they write to their father, who is away for work, asking him for his views and he sends a short telegram colourfully giving his assent and suggesting he sees exposure to some danger as good for his kids.
Applying this ‘better drowned than duffers’ approach to legal but harmful content would lead us to argue that nobody should seek to remove it, neither the government nor the platforms.
People holding this position might respond to concerns about the content being harmful by pointing to education as the best way to mitigate this rather than seeing censorship as the solution.
This is the most ‘pro free speech’ position and would mean not just removing any restrictive provisions from the Online Safety Bill but also encouraging platforms not to act against any type of legal content.
The Bill already imposes a duty on regulated services to ‘have regard to the importance of protecting the rights of users and interested parties to freedom of expression within the law’ (Clause 29(2)).
Supporters of this position might want to see this duty given pre-eminence such that it would provide people with grounds for complaint to the regulator if their legal Tide pod challenge type content was removed by a platform.
Render Unto Caesar
“Render unto Caesar the things that are Caesar’s, and unto God the things that are God’s” – Matthew 22:21, The Bible
This phrase from The Bible has come to be used as a nice way of describing how powers can co-exist in different realms, in this case where someone owes duties to their religion and to a secular state.
When it comes to internet platforms there are also two sets of ‘laws’ in operation, those of the state within which a user lives and those established by the platform for all of its global users.
There is a coherent position in which we wish to see a harm dealt with but that we wish to maintain a clear separation between these different regimes and to avoid situations where there is confusion over whose rules are being applied.
In this world, we would not be seeking to protect legal but harmful speech, as in the previous position, but would rather look to a platform’s own mechanisms to restrict it as necessary without involving the regulator.
Those platform mechanisms can be very powerful as they often come under pressure from users, advertisers, the media and activist groups to act when something is causing harm, even when it is not illegal.
This is what we saw with the Tide pod challenge with both YouTube and Facebook removing this content in 2018 even though there was no legal requirement to do so and no regulator had powers to make them act.
In this position, we might leave some of the requirements in the Bill for platforms to carry out risk assessments for legal but harmful content, but we would not grant the regulator power to direct their handling of this content.
“Who is for me? And let me incidentally add: Whoever is not for me is against me, And let him face the consequences. Now you’re free to vote.” – Speech by Arturo Ui in The Resistible Rise of Arturo Ui
In Bertolt Brecht’s comic parable of the rise of the Nazis, ‘The Resistible Rise of Arturo Ui’, the eponymous lead character is a gangster taking over US towns.
He prides himself on offering people ‘choices’ while showing them the harm that will come if they do not follow his (oh so very reasonable) preferred path.
[NB In using this to introduce the last of my three positions I hope it will be taken in the intended spirit of a vivid illustration of concerns rather than as a literal accusation that anyone intends to run a protection racket!]
In this position, there is a strong desire for platforms to deal with legal but harmful content and the regulator is given tools with the express intention of enabling them to put pressure on platforms to do more.
This may not be as immediate and direct as the ‘big stick’ powers that the regulator has in respect of illegal content, but the regulator is certainly expecting some action and can impose sanctions if unsatisfied.
In this scenario, platforms will find it difficult to demonstrate that they met their duties of care if they do not place effective restrictions on significant amounts of legal but harmful content when this appears in their risk assessments and/or in directions from the UK Government.
Which Way Will We Go?
I do not see any strong political appetite in the UK for the ‘better drowned than duffers’ position though some more libertarian MPs may attempt to argue for it.
The majority of UK politicians do want to see something done when content is circulating that seems clearly linked to some form of real world harm, and most will end up recognising why we can’t always ‘just make it illegal’.
The text of the Online Safety Bill as drafted opens up a range of positions between the ‘Caesar’ and ‘Ui’ models depending on how it is implemented and especially on the directions that are given by politicians.
We can imagine a future Tide pod challenge situation arising where some kind of dangerous practice is being encouraged online and there are cases of clearly linked harm to some people in the UK.
This could be largely left to the platforms where they would be expected to pick this up in their risk assessment exercises and to demonstrate to the regulator how they intend to mitigate the problem.
The regulator could feel there was little need to intervene as platforms are anyway incentivised to do the right thing under their own terms and conditions and in response to pressure from their users.
At the other extreme, we might see a more political and directive response where the Secretary of State feels they have to designate the content as being of concern and instruct the regulator to order platforms to respond.
This may be little more than political theatre if the platforms are anyway dealing with the content in ways that align with the instructions coming from the Government and regulators.
But there may be scenarios in which platforms feel compelled to impose more or different restrictions from those that they believe are appropriate because they fear the consequences.
We have to recognise that there is a material risk of ‘back door censorship’ where Government is able to exercise control over speech in the UK without effective checks and balances as it claims these are platform decisions.
And for platforms there is a risk of being caught between Scylla and Charybdis as they are criticised for ‘their’ decisions to restrict legal content but are being advised that they risk regulatory sanctions if they do not do this.
In case you are wondering, my own position is to prefer the ‘render unto Caesar’ model – I am not neutral on restricting some legal but harmful content but think this should largely be a matter for platforms while Government focuses on actual illegal content.
I hope this was a helpful walk through some of the issues raised by the idea of regulating ‘legal but harmful’ content, and I will come back later to dive into the specifics of how the Online Safety Bill intends to do this.