
Cutting Off Your S.230 Nose To Spite Your Conservative Face – 28th May 2020

Last updated on June 1, 2020

Fig 1 – Executive Order threatening social media

There is much excitement today about the US President threatening to move against social media companies after Twitter attached a factcheck to two of his tweets.

This includes a review of a piece of US law called ‘Section 230’ with the implied threat that this review may lead to this legal provision being weakened or scrapped altogether.

There are lots of people who live, eat and breathe ‘Section 230’, but here is a quick explainer for readers who are new to the subject.

Section 230 and Friends

One of the special things about internet services is that they make it easy for people to share their own content.

This content has not been commissioned or edited or checked for truth or legality by the platforms and it is not owned by them.  

Policymakers early on recognised that if platforms could be held legally liable for this user-generated content as soon as it was shared then this could hold the sector back.

So they created laws – s.230 of the Communications Decency Act in the US, and the eCommerce Directive in the EU – to make sure that platforms were not immediately liable for content that people shared through their services.

This exemption does not mean platforms can never be held accountable for illegal content but they should have few problems if they behave reasonably.

The US s.230 protections are the more robust of the two, while in the EU platforms effectively have a defence of ignorance, ie unless and until they become aware that a particular item of content is illegal they are not liable for it.

This is sometimes talked about as being a great benefit to the platforms, and it is certainly true that these laws make it much less risky to run a user-generated content service, but the real beneficiaries are all of us who use these services.

We can sign up with services and start sharing our content without a lot of upfront barriers and delay as platforms review what we are saying.

In technical terms, we mostly live in a world of ‘post-moderation,’ where platforms will look at our content only if it is reported to them, rather than ‘pre-moderation’, where everything has to be approved before it goes live.

Time for a Change?

If these laws did not exist it would not necessarily mean the end of the world for user-generated content services but there would be a significant impact.

If platforms are liable for content from the moment it is posted then we can expect there to be many more legal actions where people sue platforms for harms they believe have been caused by bad content.

For example, under the EU regime, if someone today tells a platform that content is libellous then the platform is protected from further action if it removes this content quickly.

In a world without protections, the platform might be required to pay damages for all the time the content was available even if they had no knowledge that the libel was happening.

Under the US s.230 system, it is relatively easy (and cheap) for platforms to get cases dismissed by showing the content is from a third party whether or not they have been notified it is illegal.

US law is so free speech friendly that platforms might be able to avoid liability even without s.230, but cases would be likely to become more complex and therefore costly and there could be unpredictable outcomes.

The costs of fighting cases and any negative court judgements could become very significant especially in countries where there is a litigious culture and a tradition of class actions, like the US.

You may not have much sympathy if the only problem would be platforms having to spend more on lawyers and settling claims.

You may even see this as a good thing if you believe that harms are being left unpunished and platforms are too slow to act on harmful content.

There is a community of policy makers who follow this logic and argue for the law to be changed as a way to get more ‘good censorship’ by incentivising platforms to remove more bad content.

There has also been a vocal campaign from some in the classic media industry for social media platforms to be given full publisher liability because they see the impact as beneficial to them.

The interest for classic media organisations is in having a bigger slice of a smaller social media pie, ie they want more attention to be paid to their output and for it to face less competition from ‘amateur’ content.

The fact that media organisations already have the tools in place to do their own pre-moderation means they are likely to be seen as lower risk content producers by platforms.

If a platform has to prioritise where to apply its content review resources to minimise its legal liability then the rational approach is to give a ‘free pass’ to content from publishers like the BBC and New York Times.

So mainstream media organisation content should still be able to get out onto platforms quickly with little moderation.

Where content is more ‘edgy’ or just unknown because it comes from more fringe organisations or from individuals then platforms will need to apply more controls to keep the legal risk manageable.

This may mean more restrictions on who can use a service – if a platform thinks a certain type of user could lead to significant legal risk then they may reject them altogether.

Or it may mean delays and frustration for a user as they have to wait for a platform review before their content can go live.

All of this is likely to lead to less user-generated content overall being produced with restrictions falling most heavily on those who are not ‘mainstream’.

From a classic media organisation’s point of view, a reduction in user-generated content volumes and delays in its publication would be a win as their content would gain more relative prominence.

There is a world in which larger platforms have deep enough pockets to manage this increased legal risk, and figure out how to reduce it sufficiently through aggressive policing of users and pre-moderation of content.

The ‘winners’ in this world will be classic media organisations and those who are sharing low risk content while the losers would be anyone who the platforms assess to be higher risk.

This effect of slowing down the adoption of new services is exactly what the drafters of these laws wanted to avoid, and the concern still holds true.

It is also a world in which smaller platforms may find that a single negative judgement is so expensive that it puts them out of business.

This is not simply to play the old ‘what about the start-ups’ card: systems evolve, and you can imagine a market for legal insurance developing to help start-ups manage this new risk.

But this again would have an impact on speech as insurers would be likely to require start-ups to follow more restrictive policies as a condition of cover.

The Politics

The situation I have described is one in which classic media organisations and those who want more restrictions on speech have incentives to push for removing platform liability protections.

Absent from this list of beneficiaries are (in US terms) ‘conservative’ speakers who feel they are being over-censored and who despise the ‘mainstream media’.

So, if this measure would not actually be in their interests, then why is it being raised?

A reasonable conclusion to draw is that the US President and his allies do not in fact want the outcome that would occur if s.230 were removed but rather want to use this as a stick to beat platforms.

The message is simply ‘we will make your businesses more difficult and expensive if you do not give us what we want’.

And what they want is for platforms never to interfere with ‘their’ content.  

This creates an ‘interesting’ political dynamic.

The left sees the threat of s.230 reform as a way to get platforms to remove more content, while the right sees the same threat as a way to get them to remove less.

Like a frustrated Goldilocks, platforms move between ‘too hot’ and ‘too cold’ and are never able to land on ‘just right’.

Political factions on both sides claim to want ‘neutrality’ and ‘fairness’ but each sees these values as synonymous with the platforms moving into their corner.

And there is no obvious institution that could rule on where platforms should be and have all factions respect this judgement.

Content Standards

Complaints of ‘unfair’ restrictions on ‘conservative’ content are another major factor in the mix, alongside the labelling of the US President’s tweets.

I have written before about how platform content standards can have an effect that is experienced by some as partisan even if this is not their intent.

We often see these complaints of unfair treatment explode into the political domain and the media without acknowledging the remedies that already exist.

Most of the major platforms have review and appeal mechanisms that mean decisions can be checked for correctness.

These reviews may not overturn a decision, leaving the user dissatisfied, but from a due process point of view it is important that they exist and function well.

We need to remind ourselves that the relationship between a user and a platform is set out in a contract, under which the user agrees to conditions that include compliance with the platform’s Content Standards.

Where someone thinks that the platform is reneging on this contract by unfairly penalising them, there are well-established legal remedies available.

Legal actions can of course be costly but we do see people regularly challenging platforms in many jurisdictions around the world either individually or in class actions.

Where a platform has behaved unfairly, we can expect courts to order corrective action, and this does happen, leading both to remedies for individual users and to platform-wide ones such as changes to contractual terms.

This is the normal way to test contractual relationships and business processes and there is no reason to think it is broken here even if people who lose cases naturally feel otherwise.

‘Must Carry’ Obligations

Some people disagree with the principle that platforms should have their own content standards at all and instead argue that they should permit all ‘legal’ content.

One way to do this would be a legal requirement that platforms ‘must carry’ content according to some criteria that would override any platform rules.

The key problem if such a model were to be implemented is not with the speaker but with the audience. 

People have a significant degree of control over who they follow on social media but the networks by design facilitate onward sharing to broader audiences.

If someone were given the ‘right’ to share (legal) pornography or hate speech then it is highly likely that this content would reach other users who feel they have a right not to be exposed to that content.

There is a risk that laws would punish platforms if they do not allow someone to share harmful (but legal) speech while simultaneously penalising them for exposing people to that same speech.

Over time, it might be possible to develop systems that recognise content types so accurately that both rights could co-exist but this would be a very different world of closed circles not the open platforms we see today.

Given the systems we currently have, the balance of public and policy-maker support seems to be in favour of protecting the audience even if that means placing some limitations on speakers through Content Standards.

As well as these practical challenges to implementing an ‘any legal speech’ approach today, it would be a major shift in general legal principles to say that a service can no longer define its own contractual terms.

Some people argue that large platforms are ‘essential public services’ and so should no longer have the discretion to define their own terms.

There is a wider debate about essential services that I will not attempt to cover in this post but the big question for setting Content Standards that this approach creates is ‘if not platforms, then who?’

Conclusion

So we seem to be in a position where those who claim to support the rights of more ‘outsider’ voices are calling for a legal reform that would most likely lead to more not fewer restrictions on those voices.  

We can construct conspiracy theories about why this might be the case – “they want more crackdowns as their supporters will get more fired up if restricted more” – but I hate conspiracy theories. 

I am more of an Occam’s Razor person and the simplest explanation here is that the US President sees the threat of repealing s.230 as a big stick to wave at platforms and is hoping the threat will do the job so he does not have to follow through.

If the threat does not work and they decide to get more serious about reforming s.230 then they may use this to create some kind of ‘must carry’ provision to penalise platforms for removing content from ‘their’ side.

Given this has become so partisan, policy makers on each side would pile in trying to shape rules that protect content from ‘their’ side and restrict content from the ‘other’ side.

Whatever we think of the status quo (and there are a lot of areas where platforms could make improvements, which I write about), it feels a whole lot worse to me to have political factions crafting rules to favour themselves.

The only benefit to this is transparency: it will be nakedly and overtly partisan.

But it means any hope of neutrality will go out of the window and we will see governments moving platforms to the right or left as they win or lose power. SAD!
