German Chancellor Angela Merkel has attracted a lot of (mostly favourable) attention for criticising the recent removal of Donald Trump’s social media accounts.
As it is well known that she has strong political disagreements with the outgoing US President, her comments have been interpreted by some as a powerful argument for free speech.
Merkel coming to the defence of Trump might appear to be an example of someone following the maxim that “I disapprove of what you say, but I will defend to the death your right to say it.”
Now, I am an admirer of Chancellor Merkel for her political skills and record steering Germany through some very difficult times, but the record of her Government has certainly not been one of defending speech they disapprove of at home.
Donald’s German Alter Ego
Let us imagine for a moment that the Drumpf family had not emigrated to the US and that their rambunctious offspring, Donald, had tried to promote his brand of populist politics in Germany instead.
[NB Yes, I know the Drumpfs changed their name back in the 1600s, so were firmly ‘Trump’ when they emigrated, but please allow me some poetic licence in adopting this name for my imaginary German politician.]
Each day, Drumpf would stir the pot by posting on social media a heady brew of attacks on journalists, news media organisations, politicians, companies, segments of society (generally not White), political leaders in other countries, and so on.
Many of these claims would be easily proved false, others would be simply offensive, and a fair few could be interpreted as inciting violence.
And each day, his opponents would complain to the platforms about these attacks and… in many cases, the platforms would quickly take them down (or at least make them invisible to anyone who lives in Germany).
In other words, Angela Merkel might not have to worry too much about German Donald Drumpf having a Twitter account because there would be significant controls over the content it could push into people’s feeds.
Now, this may seem odd to those of you who have diligently filed reports with various platforms about US Donald Trump’s posts and been disappointed to get replies saying that the posts don’t go against their standards and so will not be removed.
In some cases, platforms have attached labels to President Trump’s content but they have rarely removed it altogether.
So why would this be different in Germany when platforms generally claim to have global standards?
Well, this is largely thanks to a law brought in by Angela Merkel’s own government with the intent of forcing platforms to remove more content.
This law was created in the run-up to the last German Federal Election in 2017 with the support of politicians in the CDU-SPD coalition, who saw it as helpful to counter the growth of far right populist individuals and organisations.
This law is called the Network Enforcement Act (also known as the Netzwerkdurchsetzungsgesetz, or ‘NetzDG’ to its friends) and is often talked about as an anti-hate speech law, but it is much more than that: it is an anti-Drumpf law.
That’s quite a bold claim to make so let’s tease this out some more.
Notice and Takedown On Steroids
In the EU, unlike in the US, the law does not give platforms full protection against liability for content that they carry.
Platforms are instead granted a limited protection that only lasts until they are made aware of the existence of some kind of illegal content – ‘ignorance is bliss’ in this model.
Once someone has told a platform that something is illegal they should, in theory, act quickly to remove it (if they agree it is in fact against the law) to avoid facing any penalties themselves.
In practice, it can be quite hard to pursue cases against platforms even where they have been put on notice but failed to act against allegedly illegal content.
In some cases, especially for defamation, someone is motivated enough to drag a platform into a court case and ensure they feel some pain if they did not stop the harm by removing the offending content promptly.
But in many other cases, for example with hate speech, the harm is more general and, without a specific victim, it is not clear who would take this to court and seek sanctions against the platform.
This was the concern that faced German politicians in 2015-16 during what came to be known as the ‘migrant crisis’ – they felt that platforms were not acting when notified about illegal content in Germany, and that this was fuelling social disorder and the rise of the populist far right.
We might describe the law they came up with as ‘notice and takedown on steroids’, as it sticks to the existing model where platforms only become liable when notified about illegal content, but dramatically raises the stakes for them if they fail to act on those notices.
The legal innovation it uses to do this is that significant penalties can be imposed on platforms if they don’t meet specific requirements for how they process notices of illegal content, including provisions on timeliness of action and reporting on their work.
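The mechanism described above can be sketched as a simple decision model. This is purely illustrative – the class and function names are my own invention, and the ‘likely illegal’ flag stands in for the legal advice a platform would actually obtain – but the deadlines reflect Section 3 of the NetzDG itself: roughly 24 hours to act on content that is ‘manifestly unlawful’, and up to seven days for other illegal content.

```python
from dataclasses import dataclass
from datetime import timedelta

# Illustrative model of the NetzDG notice-handling incentive, not real
# platform code. Deadlines follow Section 3 of the Act: 24 hours for
# manifestly unlawful content, 7 days for other illegal content.

@dataclass
class Notice:
    likely_illegal: bool       # legal review: probably breaches the Criminal Code
    manifestly_unlawful: bool  # obvious on its face, no deep analysis needed

def takedown_deadline(notice: Notice):
    """Return how quickly the platform must act to avoid NetzDG sanctions,
    or None if there is no removal duty under the Act."""
    if not notice.likely_illegal:
        return None  # no NetzDG duty; the platform may still apply its own rules
    if notice.manifestly_unlawful:
        return timedelta(hours=24)
    return timedelta(days=7)
```

The point the sketch makes is that the platform’s decision turns on a quick legality assessment against the clock, not on its own community standards – which is exactly why the definitions in the Criminal Code, discussed next, matter so much.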
What it does not do is change the underlying definitions of what constitutes illegal speech in Germany which remain as they ever were in the Criminal Code.
So, if you were thinking that the NetzDG is a law which makes online hate speech illegal, that would be inaccurate; it is rather a law that puts pressure on platforms to remove content that other provisions in German law already define as illegal (including but not limited to hate speech).
When we look at what those provisions are in German law that the platforms are now super motivated to enforce against, this is where the fun stuff starts for Donald Drumpf.
A Quick Tour of Illegal Speech
The Network Enforcement Act covers a range of different offences in the German Criminal Code, but let’s look at the ones that might be most relevant if a German were to emulate Donald Trump in their online speech.
There is of course a hate speech provision, which is positioned as a measure to prevent social conflict – ‘a disturbance of the public peace’.
Section 130 Incitement of masses
(1) Whoever, in a manner which is suitable for causing a disturbance of the public peace,
1. incites hatred against a national, racial, religious group or a group defined by their ethnic origin, against sections of the population or individuals on account of their belonging to one of the aforementioned groups or sections of the population, or calls for violent or arbitrary measures against them or
2. violates the human dignity of others by insulting, maliciously maligning or defaming one of the aforementioned groups, sections of the population or individuals on account of their belonging to one of the aforementioned groups or sections of the population
incurs a penalty of imprisonment for a term of between three months and five years.
So, if statements denigrate or threaten groups of people on the basis of their nationality, ethnicity, race or religion then these are going to have to go.
And this law can be used by any group in society that is subject to hatred, not just your typical protected categories, so harsh language against political movements like BLM or ‘antifa’ may also cross the line here.
Next up, we have Insult, which may kick in when our irascible Donald Drumpf has a go at individuals rather than groups of people.
Section 185 Insult
The penalty for insult is imprisonment for a term not exceeding one year or a fine and, if the insult is committed by means of an assault, imprisonment for a term not exceeding two years or a fine.
If a victim of Drumpf’s abuse wanted this taken down, they would go to the platform claiming that the insults were criminal under Section 185 and there is a good chance that this would lead to the removal of the content.
The plain text of the law does not help us much in defining exactly when an insult crosses the line to become criminal so platforms have to seek advice from lawyers who will look at case law and take a view on whether something is ‘likely illegal’ or ‘likely legal’.
Where the content is ‘likely illegal’, which is a fairly low bar from advice notes I have read over the years, then the platform knows that a failure to act will expose it to the threat of serious sanctions under NetzDG.
So, it seems likely that anyone who goes around calling people they don’t like stupid and ugly, or worse, is not going to pass muster in Germany.
Next up, we have ‘malicious gossip’ which is intended to be a strong disincentive against making negative claims about people, unless you are really confident of your ground.
Section 186 Malicious gossip (üble Nachrede)
Whoever asserts or disseminates a fact about another person which is suitable for degrading that person or negatively affecting public opinion about that person, unless this fact can be proved to be true, incurs a penalty of imprisonment for a term not exceeding one year or a fine and, if the offence was committed publicly or by disseminating material (section 11 (3)), a penalty of imprisonment for a term not exceeding two years or a fine.
Here, the law gives a basis for complaint to anyone who feels that someone is talking badly about them, for example claiming that you are ‘corrupt’ or ‘incompetent’ in the performance of your job.
You may have noticed the nice twist in this provision: the burden is on the speaker to prove that what they have said is true.
So, if Drumpf posts something about you with the intent of damaging your standing, and you report this to the platform as malicious gossip, then the platform would have to ask Drumpf to back up the claim.
If Drumpf cannot or will not defend his claim about you with facts to support it then the platform is most likely to take that content down.
Finally, we can look at the ‘classic’ defamation provision which is available in German law as a criminal offence.
Section 187 Defamation
Whoever, despite knowing better, asserts or disseminates an untrue fact about another person which is suitable for degrading that person or negatively affecting public opinion about that person or endangering said person’s creditworthiness incurs a penalty of imprisonment for a term not exceeding two years or a fine, and, if the act was committed publicly, in a meeting or by disseminating material (section 11 (3)), a penalty of imprisonment for a term not exceeding five years or a fine.
If we imagine that Drumpf takes a dislike to the people running German elections, and falsely claims they have rigged machines or makes up stories about their ownership when these are clearly contradicted by the publicly available facts, then he is going to be exposed under this law.
Of course, we have seen that US Trump’s team are also being sued for defamation by Dominion Voting Systems, and the right to sue for defamation is rather more universal than a particularly German thing.
But when you combine defamation law with NetzDG you do get something more powerful than when relying on defamation law alone.
If a German equivalent to Dominion Voting Systems filed NetzDG reports to platforms claiming defamation then there would be a strong incentive for the platforms to remove this content without delay.
In the US, platforms have broad exemptions from liability even where the content proves to be defamatory, so they have discretion over whether they want to remove it.
In a typical EU notice and takedown regime a platform is potentially liable once notified, but this risk only becomes material if the case comes to court and the defamation and the platform’s part in it are proven.
Under NetzDG, the risk of sanction is much more immediate and direct for a platform as the regulator can fine the platform for their handling of defamation claims, whether or not these later turn out to be well-founded.
This makes it unlikely that a platform would do anything other than swiftly remove claims that appear defamatory on their face, which would include many of the claims made by US Trump in relation to the election.
So, when we add all this together, we get to a place where Drumpf may not have much content left.
The platforms, operating under the terms of NetzDG, would have removed attacks on groups in society, insults where the named individuals complained, malicious claims about people that he could not back up with facts, and anything that appeared defamatory.
Drumpf might be able to maintain his account, and his feed might still be there, but it would be an anodyne beast, not at all like that of his US counterpart.
At this point, you may be horrified by the extent to which German law would fillet the content produced by someone like President Trump, or delighted by what I have described and thinking ‘where can I get me one of them NetzDGs?’
Whatever your position on these specific restrictions, you may still consider that Angela Merkel is standing up for an important principle – that governments, not private companies, should decide the limits on speech.
And this is what I will explore in the last section of this post – the relationship between platform speech standards and those set by governments.
Platforms will be successful when they meet the expectations of their user communities and this goal will be reflected in the content standards they implement.
In some countries, there is a genuine choice – between local standards that politicians have debated and set down in law, and global standards that platforms have created and linked to their terms of service.
Germany is just such a country that has extensively debated and agreed its own definitions of what speech is permitted and prohibited, and you could imagine a German platform feeling comfortable about adopting the ‘German legal standard’ rather than developing its own rules.
Applying this standard would still be likely to support a safe and orderly environment on the platform, as, if anything, it may be more restrictive than the standards a commercial entity would cook up for itself.
[NB We should note that translating legal standards into operable rules would be a significant challenge as case law is unlikely to cover all the scenarios faced by a platform and can be unclear or contradictory.]
A similar choice will be on offer in other countries that have well-established speech laws but levels of comfort about deferring to these local legal standards may vary quite widely depending on the regime making the laws.
It may be tempting to try and roll this principle over to the US, and argue that platforms should apply the ‘American legal standard’ which is assumed to be a rule favouring nearly unlimited free speech as defined in the First Amendment.
But this would be to apply a false equivalence – while other countries have created a rivalrous code in their laws restricting speech, the US has expressly forbidden its legislators from creating any such code.
In countries such as Germany, the debate between government and platforms centres on ‘my code or yours’, while in the US it is between platform codes and a (deliberate) void – there is no equivalent ‘my code’ in US law.
Those who drafted the First Amendment feared what Congress might do if it had the power to regulate speech and so deprived legislators of this capability precisely so that the US would not end up with laws like the ones adopted by Germany and many other countries.
What the First Amendment does not seek to do is prohibit others, who do not have the unique powers of Congress, from imposing their own restrictions as long as these do not have the force of law.
If we look at one of the other areas covered by the First Amendment, freedom of worship, we see that there is an intent to ensure that people can follow any religion not just a set of orthodox religions defined by Congress.
This has created the space for new faiths like Mormonism to grow in the US without the fear that more established churches like Episcopalians and Catholics might lobby Congress to have them outlawed.
But what the First Amendment does not do is insist that established churches adopt Mormon beliefs or give a right to Mormon preachers to speak in their places of worship.
As we consider speech on the internet, the goal of the First Amendment is not realised by having all platforms permit all speech, but by ensuring that platforms that take an unorthodox approach to speech are not shut down.
Success is that there should be a space for platforms like Gab and Parler, if Americans want these to exist, rather than that Facebook and Twitter are prohibited from creating and enforcing their own rules.
Some US Rules
Given that the US is fiercely opposed to legislators setting speech rules, and prefers for this power to rest with individuals and organisations, it is worth looking beyond the platforms to see how other public spaces are managed.
In my last post, I drew an analogy between football clubs and online platforms as both being private managers of public spaces, and in this vein it is interesting to look at the rules governing US sports venues.
We see that Major League Baseball, the National Football League, the National Basketball Association and Major League Soccer all have rules governing acceptable speech in their venues – there are no ‘anything goes’ First Amendment rights when the public is in these spaces.
The Major League Soccer code includes language that is very similar to the restrictions we find in platform standards :-
The following conduct is prohibited in the Stadium and all parking lots, facilities and areas controlled by the Club or MLS:
Displaying signs, symbols, images, using language or making gestures that are threatening, abusive, or discriminatory, including on the basis of race, ethnicity, national origin, religion, gender, gender identity, ability, and/or sexual orientation
They have their own version of a political advertising ban :-
Displaying signs, symbols or images for commercial purposes or for electioneering, campaigning or advocating for or against any candidate, political party, legislative issue, or government action
And the most serious sanction for breaching the code is ejection from match venues potentially on a permanent basis :-
Any fan who violates any of these provisions may be subject to penalty, including, but not limited to, ejection without refund, loss of ticket privileges for future games and revocation of season tickets. Any fan who (1) conducts themselves in an extremely disruptive or dangerous manner (“Level 1 Offenses”), or (2) commits multiple violations during a 12-month period, may be banned from attending future MLS matches for up to a year or longer. Level 1 Offenses include, but are not limited to, violent behavior, threatening, abusive or discriminatory conduct or language, trespassing or throwing objects onto the field, use or possession of unauthorized pyrotechnics or other dangerous Prohibited Items, or illegal conduct.
Perhaps even more appositely for politicians, we can look at the standards that the Library of Congress has created for people who want to comment on their website.
Again, we find restrictions on hate speech and insults, with the penalty being that you may be ‘ejected’, i.e. blocked from making further comments, and we can find similar provisions in the terms of countless US websites.
The picture this paints is of an orthodox position where private owners of publicly-accessible spaces in the US will routinely prohibit hate speech and insults using language that is closer to that in German law than it is to anything in US law.
The penalty for breaching these speech rules is not criminal but one of exclusion from the spaces controlled by an organisation.
The fact that the First Amendment exists means that, while this may be an orthodoxy, it is not mandated, and that Congress is not able to prohibit others from following a more heterodox approach where they permit speech that the mainstream does not allow.
And this context provides a different lens through which to consider the question of how major platforms are treating Donald Trump and others on the populist right.
We should not see platforms as violating an assumed ‘unlimited speech’ orthodoxy but rather as being really quite mainstream in terms of US norms.
The question then becomes one of whether some or all platforms should be required to accept the unorthodox position of permitting speech that would not be allowed in a public sports stadium or on other websites, and, if so, what is the rationale for forcing them to depart from the norm?
The fact that platforms may be big does not stand up on its own as a reason for imposing special requirements on First Amendment grounds – some religions and broadcasters are giants in their respective fields but remain self-governing.
And while there may be a viable theory for substituting the government’s judgement for that of the platforms in other countries, to do so in the US would itself seem to go against the intent of the First Amendment.
Whatever the outcome of this rather theoretical debate, on a more practical note I wish any of my readers who are in the US, whoever they voted for, a smooth transition of power this week and a successful new administration.