Last updated on July 4, 2022
We have seen a lot of discussion over the last week about decisions taken by internet platforms in respect of high-profile content shared by the US President.
Much of the interest focuses on the personalities of the leaders of the companies as these individuals have shown themselves to be closely involved in decisions about content.
Some people are unhappy with the specific decisions that have been made, criticising either Twitter for adding various kinds of flags to content, or Facebook for not applying similar treatments on their platform.
Others have concerns about the principle that decision-making should be in the hands of the individual CEOs of platform companies.
There has been plenty of ink spilt in the debate about the rights and wrongs of these specific decisions and I will not add to that here.
I have instead been thinking about how platforms should structure their processes for making decisions more generally.
These are the five steps that I think platforms need to take.
[NB Platforms are not starting from scratch and a lot of this may already be in place or familiar to them. I wanted to develop a model step-by-step for a general audience but am not trying to teach people I know who work hard on this stuff every day to suck eggs!].
STEP 1 – EMBRACE YOUR REGULATOR-NESS
Some are born regulators, some achieve regulator-ness, and some have regulator-ness thrust upon them.
The reality is that tech companies are not born to be regulators, nor do they set out with the express goal of achieving this exalted state.
It is rather more common for platforms to experience being a regulator as something that has been thrust upon them.
This may be especially true for platforms growing up in the techno-libertarian culture of Silicon Valley where regulation is seen as the enemy of innovation.
When companies are in their start-up phase they can focus on building technology and there may be little pressure on them to act as regulators of the use of that technology.
But if they are successful and start to have significant social and economic impact then questions of how they are regulated will become correspondingly important.
While the current debate is focused on user-generated content regulation, platforms also have important roles to play as regulators of privacy, advertising, commerce, and elections.
My first draft title for this step asked platforms to ‘accept’ their status as regulators, but I revised this to ‘embrace’ as mere acceptance (especially if it is grudging) will not mean this is done well.
Success will come from platforms being as excited about getting the regulation of their services right as they are about building products in the first place.
[OK, that may be too much to ask as technology companies primarily exist to build technology but it’s what they call a ‘stretch goal’ in the business and makes a point].
STEP 2 – SEPARATE LEGISLATION FROM ENFORCEMENT
There is not enough politics in the rule-making process, and too much politics in the enforcement process.
This is perhaps the most important of all the steps I am describing and is something that is standard practice in government.
Governments create a set of rules and then empower an independent regulator to enforce those rules.
A regulator may have some secondary powers to interpret the rules but is not able to make significant changes or go outside the boundaries that have been set for it.
The government does not get involved in individual decisions by the regulator but may update the legislation if it is consistently unhappy with decisions under the rules as currently set.
This separation is really important for good governance as it forces governments to adopt a more rigorous process than when everything sits within a single entity.
The government has to write down the instructions it gives to the regulator and is fully accountable for the rules that the regulator is enforcing.
The regulator has to demonstrate that it is effectively enforcing the rules it has been given and is fully accountable for its performance in doing so.
Most importantly, this allows for there to be quite different cultures in each of these two arenas, one intensely political and one studiously neutral.
STEP 3 – DEVELOP A POLITICAL LEGISLATIVE PROCESS
When platforms create rules, we can think of this as a ‘private legal code’.
This is different from government legislation and indeed from ‘private law’, and there is no intention here to create equivalence of status with platform rules.
But there is a lot to learn from the discipline of creating legislation that is applicable to the exercise of creating platform rules.
It is often not helpful to talk of platforms as ‘countries’ and their leaders as ‘presidents’ as this overstates their importance, but there is value here in a comparison with government law-making.
The model for creating legislation that I think best fits is that of a Parliamentary system as this reflects how companies are organised.
The CEO of a company is like a Prime Minister in being ultimately responsible for everything that happens within their organisation.
They usually have a team of senior executives who run separate departments of the company, like Ministers in a Cabinet.
And it is the agenda of the CEO and executive team that will shape how the company makes its rules, just as a PM and Cabinet will define a government’s legislative agenda.
There are critics outside companies who do not want CEOs to have power over rule-making, and there may even be CEOs who would like to divest themselves of this power, but this is not realistic.
Prime Ministers can be more or less engaged in specific policy areas and use delegation extensively but they are accountable for everything their government does.
We do not expect or want a Prime Minister to reply ‘nothing to do with me’ when there are serious issues in any part of their government.
Legislation is Political
We all recognise that drawing up legislation is an intensely political process and that legislation codifies the political intent of its creators.
The private legal codes drawn up by platforms similarly reflect the political intent of their leadership.
This is unavoidable and so should be recognised rather than wished away.
Political intent in this context is broader than a simple consideration of favouring one party over another; rather, it incorporates a broad range of attitudes and values.
An important benefit of recognising that there is political intent in the rule-making process is that this intent can then be captured and made part of the instructions being given to the regulator.
There are various mechanisms for doing this in a Parliamentary process.
During the passage of legislation we will usually hear multiple statements by Ministers clarifying what they want a law to do.
In the UK, it is common for the government to produce explanatory notes to accompany legal text.
In the EU process, there are formal written ‘recitals’ in legislative instruments that explain how the legal clauses should be interpreted.
Whichever method is preferred, by the time we get to implementing a regulation, there is usually a significant body of material in the public domain that explains to the regulator and the public what the intent of the law is and how the government expects it to be interpreted.
An equivalent process would help everyone better understand what platforms are trying to do with their rules rather than leaving people to apply their own interpretations.
We can look at an example of how this might work for hate speech rules (which can have significant partisan political effects so interpretation is a big deal here).
HEADLINE POLICY: We prohibit negative stereotypes of people based on their protected characteristics.
EXPLANATORY NOTE A: Our intent is for hate speech policies to protect historically disadvantaged groups while not being overly restrictive. This rule is meant to prohibit statements like ‘women are sluts’ or ‘Jews are thieves’ but not to prohibit statements like ‘men are rapists’ or ‘whites are racists’.
EXPLANATORY NOTE B: Our intent is for hate speech policies to create a less hostile environment and so we want to prohibit all kinds of attacks citing protected characteristics. This rule is meant to prohibit statements like ‘women are sluts’ or ‘Jews are thieves’ as well as statements like ‘men are rapists’ or ‘whites are racists’.
There are arguments for and against either the A or B model set out here.
The key thing is to spell this out so it can be applied by the regulator, understood by the public, and challenged with the company leadership where people disagree with it.
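To make the A/B distinction concrete, here is a minimal sketch of how a platform might encode a rule together with its explanatory note in a machine-readable form, so the in-house regulator enforces the intent and not just the headline text. All of the names here (`PolicyRule`, `ExplanatoryNote`, and the field layout) are hypothetical illustrations, not any platform's actual schema.

```python
from dataclasses import dataclass


@dataclass
class ExplanatoryNote:
    """Hypothetical record pairing a rule's intent with worked examples."""
    intent: str
    prohibited_examples: list
    allowed_examples: list


@dataclass
class PolicyRule:
    """Hypothetical machine-readable form of a platform rule."""
    headline: str
    note: ExplanatoryNote


HEADLINE = ("We prohibit negative stereotypes of people "
            "based on their protected characteristics.")

# Model A: protect historically disadvantaged groups only.
rule_a = PolicyRule(
    headline=HEADLINE,
    note=ExplanatoryNote(
        intent=("Protect historically disadvantaged groups "
                "while not being overly restrictive."),
        prohibited_examples=["women are sluts", "Jews are thieves"],
        allowed_examples=["men are rapists", "whites are racists"],
    ),
)

# Model B: prohibit all attacks citing protected characteristics.
rule_b = PolicyRule(
    headline=HEADLINE,
    note=ExplanatoryNote(
        intent=("Create a less hostile environment by prohibiting "
                "all attacks citing protected characteristics."),
        prohibited_examples=["women are sluts", "Jews are thieves",
                             "men are rapists", "whites are racists"],
        allowed_examples=[],
    ),
)

# Same headline text, different enforcement outcomes once intent is encoded.
assert rule_a.headline == rule_b.headline
assert "men are rapists" in rule_a.note.allowed_examples
assert "men are rapists" in rule_b.note.prohibited_examples
```

The point of the sketch is that the headline policy alone is ambiguous; only the explanatory note determines which of two very different enforcement regimes applies.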
Platforms may want to argue that rules are not political because they are based on criteria like harm that they believe to be objective.
But these decisions are never truly apolitical any more than debates around public legislation restricting speech can be apolitical.
Structurally, it makes sense for the process of drawing up legislation to be in a policy function within a company and headed up by someone with political experience in the company leadership, the ‘Minister for Content’.
The legislative phase is the time for other considerations such as the business impact of particular policy options to be brought in, just as a range of interested parties will pitch in to a government legislative process.
There may also be operational considerations with different options being presented and these are valid issues to explore at this stage.
This may include seeking information from the regulator asking them how they would go about enforcing on different versions of the rules and what resources they would need.
This is categorically not the regulator deciding what the rules should be but rather acting as expert adviser to the legislators who do have that power.
Having had their say as the rules are being drawn up, other interested parties may have views on how they are enforced but should not get involved in decision-making by the regulator.
There are several tried and tested methods for developing legislation that platforms can learn from as they develop their own processes.
Adversarial models are a good way to test legislation effectively when the real authority sits with particular individuals in an organisation.
In this process, the executive who ‘owns’ the content rules would draft and propose changes (after research and consultation), and then others would try to knock these down.
There are other more consensus-based models but these may not be as effective in identifying unintended consequences before a rule is put into operation.
More or Less Detail
It seems obvious that it would be helpful for users to have more detailed texts and explanatory notes of platform rules, so why hasn’t this happened?
There are strong incentives under US law for platforms to keep commitments as vague as possible in order to reduce their legal risk.
The line of attack that concerns platforms was set out in the US President’s recent Executive Order which talked about going after platforms for “Unfair or Deceptive Acts or Practices”.
In crude terms, the safest position in the US is to say ‘we may take your content down whenever we feel like it’ as it is hard for anyone whose content is later removed to claim the platform had not met its public commitments.
If platforms publish legislative texts that describe in detail their rules and the intent behind them, this would create more opportunities for people to claim gaps between public commitments and actual outcomes.
The legal framework in the EU creates some risk in the opposite direction, ie platforms may be penalised for not giving specific enough commitments under data protection and consumer law.
If the key test in the US is ‘did you live up to your promises’, EU law may also ask ‘did you make enough promises for your relationship with your users to be considered fair’.
This is a case of ‘choose your poison’ for platforms.
US legal advice is likely to continue to steer platforms to make only broad brush public commitments, and the effect of this recent Executive Order will only be to heighten concerns.
But legal pressures elsewhere, especially in the EU, are likely to require platforms to provide more detail so they will need to do this either voluntarily or in response to adverse judgements.
And while broad brush policies mean that company executives have significant discretion, this is a double-edged sword that can do real damage to their reputation with any party that dislikes how they exercise that discretion.
On balance, I would argue that the only path that will work for platforms over the long-term is one where they publish detailed rules and explanatory notes.
The way to keep the legal risk under control is to minimise the gaps between these specific public commitments and their actions by building a great in-house regulator.
STEP 4 – BUILD A NEUTRAL REGULATOR FUNCTION
Once the rules have been made through a political process as described above, they should be handed over to others to enforce.
There is a lot of literature on the case for having independent regulators enforce laws and how they should work, for example in the OECD Paper ‘Being an Independent Regulator’.
An important benefit for platforms is that this model allows for the development of two quite distinct cultures.
Once the rules have been argued over in a highly political environment, they can then be handed over to a neutral regulatory function for enforcement.
The crafting of the rules in areas like speech is necessarily and unavoidably political, but the application by a regulator of rules that someone else has created can (and indeed, should) be apolitical and technical.
People and Politics
We sometimes see attention paid to the political views and activities of people who work at platforms when people are unhappy with their decisions.
This is something that government has long had to deal with and platforms can learn from the protocols they have developed around political neutrality.
For example, the UK Government has published a collection of policies for civil servants that explain how officials are expected to behave.
People who work in regulatory functions may be asked to limit their partisan political activity with restrictions increasing for more senior staff.
This does not mean that people cannot have political views but there is a professional requirement to avoid both actual bias and any impression of bias.
As platforms separate out their legislation and enforcement functions, staff can be placed on the side of the fence that best suits them.
People who want to be involved in the political debate around shaping policies should sit on the legislative side.
It may be fine for these people to be active in politics as nobody is claiming their work is apolitical.
Platforms would need to consider questions of political balance across any community of overtly partisan staff working on legislation, but they would not have to silence this group or have them pretend they are not political.
People who want to sit on the regulatory side, on the other hand, would have to sign up to a code that includes refraining from public political activity.
This does not mean that people cannot ever cross over between the two worlds and we see this in government where people move from politics into roles in regulators.
But people usually have to set aside their political careers while they are in regulatory roles, and where they have had a high profile in party politics they may never be accepted as a sufficiently neutral regulator.
The attitudes platforms should look for in people working as regulators are those that we find in a neutral civil service or judiciary rather than in party politics.
The ideal head of a regulator is both a strong personality and a deferential one which can be a hard combination to find.
They need to be strong in defending their decisions, including when they have upset people in government, but deferential to the fact that they have a specific mandate that comes from government.
These do not have to be huge structures and smaller companies may only have a handful of people in each function, but it is important that they are different people who are suited to working within the relevant culture.
The regulatory function is likely to be less glamorous than the legislative function just as in governments Ministers usually have a higher profile than heads of regulators.
We also see that independent government regulators may sometimes sit under the Ministry which created their legislative mandate.
Their independence comes from their legal mandate and the prevailing culture rather than their technical position in the machinery of government.
Platforms might separate the two functions out, for example placing rule-making within a policy function while their enforcement sits within a legal compliance function.
Structural separation like this might increase trust in the regulator function, but the most important consideration is that the entire company understands and is committed to their independence.
STEP 5 – ESTABLISH A DELIBERATIVE PROCESS FOR CHANGES
Where governments have established independent regulators, there is no doubt that political authority ultimately rests with the government who can change the regulator’s mandate or scrap it altogether.
Similarly where platforms create a regulatory function, their authority can be amended or taken away by the CEO and executive team.
This is a reality that should not be wished away, but rather we can learn from how these relationships work for governments and public regulators.
It is common for governments to express their frustration about decisions made by their own regulators and for them to try to exert influence privately or publicly.
But there is, at least in well-run administrations, an understanding that there are limits to how regulators can interpret rules and that there is a need for politicians to take a problem back when those limits are reached.
Governments may decide to live with something being not ideal if it is low priority, or they may decide to update legislation to change the instructions to the regulator with varying degrees of urgency.
In many Parliamentary systems, a government with a working majority could push through legislative changes very quickly, but they will usually want to take some time to allow for deliberation and scrutiny.
This idea of taking time to change something that is not working can be counter-cultural for people working in technology where the preference is to fix bugs in code as quickly as possible.
As a result, we see companies scrambling to change rules on the fly when faced with an unwelcome outcome from enforcing existing policies.
Unfortunately, the maxim that ‘hard cases make bad law’ often comes into play and so changes made in haste to address one problem can create other, bigger problems.
I can illustrate this with a situation from my work at Facebook that received some attention a few years ago.
Legislate in Haste, Repent at Leisure
Facebook had strict policies on the amount of ‘gore’ that could be shown in photos and videos shared on the site.
These policies led to the platform removing some photos of victims of the Boston Marathon bombing in April 2013.
This upset some people in the US who thought this was ‘censoring’ important reportage of a terrorist attack, and their objections led to the policy being changed to allow more graphic imagery.
The change in policy was then applied broadly and previously banned videos of beheadings by Mexican cartels were allowed to remain on the site as compliant with the new, more permissive, rules.
These appalling videos caused widespread concern in the UK that had not been anticipated when the policy was changed.
The policy was eventually revised again in October 2013 to remove the beheading videos, but only after significant criticism that continues to resonate to this day.
Platforms are often under heavy immediate pressure over content decisions and may not be able to sustain a long wait before changing their rules.
But they may get better eventual outcomes and more public respect if they show they have a consistent deliberative process in place and explain how they will use this.
What the public often sees today is a platform saying :-
‘this is all fine and we do not intend to change anything’, followed by…
‘ok, you win we will make some changes’, and then a while later…
‘oops, we didn’t expect this to happen when we made those changes…’
Wash, Rinse, Repeat.
If platforms have pre-determined processes in place then they can set expectations from the outset, saying for example :-
‘we intended this rule to work this way, see our reasoning here [text from legislative process]’, or
‘we did not intend the rule to work this way and it has gone into our urgent review process which will take [x] days to produce a result [link to description of how urgent process works]’, or
‘this rule is not working exactly how we intended, but it is not causing serious problems so it will go into the regular review process and may change at some point in the future [link to regular policy update process]’.
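The three pre-determined responses above can be thought of as a small lookup table agreed in advance of any crisis: each review status maps to a templated public statement. This is a hypothetical sketch of that idea; the status names, templates, and the `public_statement` helper are all invented for illustration.

```python
from enum import Enum


class RuleStatus(Enum):
    """Hypothetical review statuses a platform might assign to a rule
    after an unwelcome enforcement outcome."""
    WORKING_AS_INTENDED = "working_as_intended"
    URGENT_REVIEW = "urgent_review"
    REGULAR_REVIEW = "regular_review"


# Templated public responses keyed by status, written in advance
# rather than drafted in the heat of the moment.
RESPONSES = {
    RuleStatus.WORKING_AS_INTENDED:
        "We intended this rule to work this way; "
        "see our reasoning in the legislative record.",
    RuleStatus.URGENT_REVIEW:
        "We did not intend the rule to work this way; "
        "the urgent review process will report within {days} days.",
    RuleStatus.REGULAR_REVIEW:
        "This rule is not working exactly as intended but is not causing "
        "serious problems; it will go into the regular review cycle.",
}


def public_statement(status: RuleStatus, days: int = 7) -> str:
    """Return the pre-agreed statement for a given review status.

    Templates without a {days} placeholder are returned unchanged.
    """
    return RESPONSES[status].format(days=days)


print(public_statement(RuleStatus.URGENT_REVIEW, days=14))
```

The design choice the sketch illustrates is that expectations are set by the process, not by the individual case: the only decision made under pressure is which status applies, while the wording and timelines were settled during the deliberative phase.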
This approach would feel more orderly for everyone concerned than scrambling to either make an exceptional decision or change important rules in the heat of the moment.
[NB this may again be a ‘stretch goal’ given both the fix-it-now culture of tech companies and the relentless pressure of modern public scrutiny. Governments have similar challenges with 24/7 scrutiny and are arguably also increasingly prone to making bad knee-jerk legislation].
This post describes how I think platforms can best structure their own internal regulatory mechanisms, and this raises the question of where external entities might fit in.
I have sketched out how the relationships might work in this drawing and describe them below.
I have included a layer for Facebook’s External Oversight Board as this is also being much discussed at the moment.
It is not directly applicable to other platforms but they may engage with other self-regulatory bodies in a similar way.
External Oversight Board
As I understand it, the primary role of the External Oversight Board that Facebook has just established is to oversee their in-house regulator function, ie to make sure their decisions are actually consistent with the rules [A].
They may also advise the company on areas of policy that they believe are flawed and this would feed into the company’s internal legislative process [B].
In a model of separated functions, they would be talking to the regulator about individual decisions but to the company leadership about policy changes.
Some people are asking questions about whether the legislative function could be outsourced entirely to a body like the External Oversight Board over the long-term.
This would be equivalent to a Prime Minister saying they are no longer responsible for key elements of government policy, and we have to consider whether this would be either realistic or a net positive from an accountability point of view.
The other major external players with a role in this are governments as both legislators and regulators.
Governments will normally want to speak to the legislative function within companies, ie they will be concerned about the rules that the platform has set [C].
They may try to influence the platform’s legislative process by persuasive methods like summoning the CEO or other company executives.
Or they may create public law with the express intent of overriding the platform’s internal legislation as the German government did with NetzDG.
As well as this legislator-to-legislator engagement between senior people in governments and companies, we are starting to see interest in regulator-to-regulator engagement [D].
The UK has described a model for regulator-to-regulator engagement in its Online Harms White Paper which may create a mandate for a government regulator, Ofcom, to work with regulatory teams within platforms.
These new models are still in a developmental phase but it seems likely that they will work best when both governments and platforms have clarity about when something is legislative and when regulatory.
This may move us on from some of the mismatched conversations we see today.
When politicians and company execs discuss specific cases, this can be illustrative of a problem, but it often comes at the expense of getting on to debate general principles.
And there may be little value in regulators pressing their counterparts in companies to take actions that run counter to the platform’s rules – better to escalate this quickly to the legislators on both sides.
Summary :- platforms should aim to become ‘good’ regulators. Key to this is separating out rule-making and enforcement functions within companies. There is a lot of learning that platforms can take from governments about how to do this well.