
What Comes Before How – 16th Nov 2020


The main opposition party in the UK, the Labour Party, is calling for emergency legislation to ‘stamp out dangerous anti-vax content’.

I have no doubt that this is a good faith effort to deal with what is likely to be a real societal problem as we start to roll out new vaccines against Covid-19.

But there is a real risk of effort being directed towards the wrong part of the problem – focusing on the legislation rather than on trying to define what we mean by ‘dangerous anti-vax content’.

I have been guilty myself, in an earlier post on this subject, of being overly concerned with enforcement mechanisms rather than with the substance of what speech would actually be prohibited.

Enforcement mechanisms certainly are important but we risk missing the target if we build these without first having thought long and hard about what we intend to enforce against.

Politicians can make the same mistake that is made in information technology when we think we can summon up a ‘new IT system’ to fix a problem.

One of the first things you learn on an IT course is the importance of defining requirements before you build anything.

You can of course keep evolving your software as you go, especially with the tools we have today which allow for much more flexibility than previous generations of technology.

But you will still screw up if you buy and build stuff without having first invested time and effort in defining the problem you are trying to solve for, and in designing and testing a range of potential solutions.

Building an Anti-Anti-Vax System

We need to start with a problem statement that explains why intervention may be justified, and sets out the goals we might achieve through this intervention.

I am going to use illustrative figures here so please treat these in that spirit.

[NB I have seen various numbers on attitudes to vaccines floating around from surveys but am not aware that there is a commonly agreed set of data for this – for a real world target govt would need to pick their source].

We first establish a number for the Baseline percentage of people who would take up a Covid-19 vaccine independently of any information they consume online – in this model I will set this ‘B’ number at 75%.

We then need to estimate the extent to which the consumption of information online might act as a force to Increase and Decrease this number – I will set ‘I’ at +5% and ‘D’ at -10%.

Using these numbers, the effect of consuming online information is a net fall in the number of people taking up the vaccine from 75% to 70%, as the impact of anti-vax information is assumed to be greater than that of information being pushed out in support of getting vaccinated.

To repeat the health warning, these are made-up numbers but useful to illustrate how we might try and understand the problem before designing solutions to address it.

The concern that underlies the Labour Party proposal is that the effect of online information is net negative and that it could move even further in this direction, so that D might grow to -15% or -20% as disinformation takes off.

This fear that anti-vax activity online might contribute to vaccination rates of 60 to 65% rather than 75% plus provides the justification for government intervention.
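To make the arithmetic concrete, here is a minimal sketch of that model in Python. The variable names and all the figures are my own illustrative choices, carrying over the made-up numbers above rather than any agreed dataset.

```python
# Illustrative sketch of the B/I/D model described above.
# All figures are made up for illustration, as stressed in the post.

def net_take_up(baseline, increase, decrease):
    """Net vaccine take-up (%) once online information effects are applied."""
    return baseline + increase + decrease

# Baseline scenario: B = 75%, I = +5%, D = -10%
print(net_take_up(75, 5, -10))        # 70 – a net fall of five points

# The feared scenario: disinformation takes off and D grows
for d in (-15, -20):
    print(net_take_up(75, 5, d))      # 65 and 60
```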

We should remember that this is not a one-way street and part of the solution should be to see how online channels can be used to increase vaccine take-up.

The ideal scenario from a government point of view would be for online activities to be entirely in this positive terrain so that, for example, they might move from their baseline to 85 or 90% take-up of the vaccine with clever public information campaigns.

But in this post I want to look at ways to understand and reduce the effects of content on making people less likely to take up the vaccine.

We might expect the positive uses largely to look after themselves, and it is the negative ones where we need to do the hard thinking and act differently.

Limiting the Negative

There are three elements to designing a system aimed at reducing the extent to which online activity acts as a force to decrease the take-up of a vaccine.

  1. WHAT are the types of content that we think may undermine the effort to vaccinate people?
  2. WHICH treatments are most appropriate for these various content types?
  3. HOW do we ensure that platforms apply the treatments we have said are necessary?

This post has been prompted by a fear that we, in the political world, may become entirely focused on the last of these stages and not invest the time that is needed in the first two stages.

This partly stems from the old saw that “if the only tool you have is a hammer, you will start treating all your problems like a nail”, but also reflects a tendency to underestimate the complexity of these questions.

At its most simplistic, the response to Q1 may be ‘anti-vax content is obvious, isn’t it’, and to Q2 that ‘we just need to get it off the platforms’.

Well, no, and not necessarily.

You may be thinking that this is a tech company person trying (once again) to over-complicate what are simple questions so that platforms have an excuse not to do anything.

I wish that were the case as it would make my life easier as a legislator, but I continue to see a failure to recognise genuine complexity as a real barrier to progress even now when I have some distance from working for a platform.

Instruction Sets

Content review systems, whether carried out by people or by software, need sets of instructions that reviewers are asked to follow.

It would be possible to operate a ‘man on the Clapham omnibus’ model where reviewers are told to remove anything that ‘looks off’ to them – humans can do this naturally, and we are increasingly using AI techniques to teach machines to act like us (biases and all).

For anti-vax, the instruction to reviewers in this kind of model could be just to remove anything that they think ‘might discourage someone from taking up the vaccine’.

This is simple but would be likely to lead to very inconsistent outcomes as individual reviewers see the same content differently.

It would also likely result in a lot of content being removed as it instructs reviewers to act on their suspicions.

My experience from many years of content decisions is that we have an innate tendency to want to remove borderline bad content, and that having precise rules to follow acts as a way to keep these instincts in check.

In other words, we will in general be more ‘conservative’ and less ‘free speechy’ if allowed to follow our personal instincts.

This may be seen as a good thing by those who want to see more content removed and whose definition of what should be taken down is ‘stuff that I find offensive’.

More typically platforms want to give their reviewers – of both the flesh and tin kinds – detailed instruction sets that describe classes of problematic content in terms of particular text and images, and set out specific protocols for handling each of these.

We can sketch out some examples of how this might work in the anti-vax area:

IF CONTENT IS <masquerading as a news story> AND <has been factchecked> THEN <apply false label> AND <reduce distribution>.

IF CONTENT IS <advocating a medically dangerous practice> AND <is not satirical> THEN <remove content> AND <issue warning to user>.

IF CONTENT IS <the Bill Gates microchip conspiracy theory> AND <is not otherwise violent> THEN <leave content as is>.

These are themselves over-simplified examples and you may disagree with some of the suggested outcomes but they illustrate the detailed nature of the rules that might be needed in this area.
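As a thought experiment, rules of this kind could be written down as structured data that a review system, whether human or automated, works through. The sketch below is purely illustrative: the content labels and treatment names are hypothetical, not any platform’s real classifiers or enforcement actions.

```python
# A minimal, hypothetical encoding of instruction-set rules like those above.
# Labels and treatments are invented for illustration only.

RULES = [
    # (conditions that must all hold, treatments to apply)
    ({"masquerading_as_news", "factchecked_false"},
     ["apply_false_label", "reduce_distribution"]),
    ({"advocates_dangerous_practice", "not_satire"},
     ["remove_content", "warn_user"]),
    ({"microchip_conspiracy", "not_otherwise_violent"},
     []),  # leave content as is
]

def treatments_for(content_labels):
    """Return the treatments of the first rule whose conditions all match."""
    for conditions, treatments in RULES:
        if conditions <= content_labels:   # subset test: every condition present
            return treatments
    return []  # no rule matched – no treatment under this sketch

# Example: a fact-checked story dressed up as news
print(treatments_for({"masquerading_as_news", "factchecked_false"}))
# ['apply_false_label', 'reduce_distribution']
```

Real instruction sets would of course be far more detailed, but even a toy structure like this forces the WHAT and WHICH questions to be answered explicitly before any enforcement machinery is built.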

If there is one thing I want to convey in this post it is that we should spend more time on this kind of deep dive into WHAT the types of anti-vax content are and WHICH treatments should be applied to them than we do on HOW this will all eventually be enforced.

Importantly, this work can start now as classifying content, and describing potential treatments, is not necessarily dependent on there being new legislation in place.

And the process of fleshing out these definitions would be very helpful in the legislative process as it would mean we have a better idea of what we are asking any new enforcement agency to do as we construct it.

Fig 1 – Facebook (top) and Twitter (bottom) treatment of election results

Platforms have been developing some interesting protocols for how to treat forms of harmful misinformation for some time and, in the case of the recent (ongoing!) US election they tried to do this in advance of a known impending problem – that a party might make false claims about the result.

It is very much in the public interest to do similar work ahead of the time when anti-vax content will be critically affecting people’s decisions about taking up the various Covid-19 vaccines.

We should not see this as a competition between platform self-regulation and legislative compulsion.

If progress can be made in understanding harmful content in more granular detail then this will be useful under any and all types of enforcement regime.

Having a clearly articulated set of rules is the best way for us to understand how platforms see acceptable and unacceptable speech on their services, and the same goes for governments.

Government rules are made public through legislation and case law, while platforms usually publish at least their high level rules and may elaborate on these when forced to respond to particular decisions.

In theory, government speech rules should be more transparent, but in practice it can often be as hard to understand precisely what speech is legal and illegal in a country as it is to understand the detail of platform rules.

Next Steps

It is not very satisfying just to describe a problem, however intellectually stimulating or politically convenient that can be.

Once you have explained why you think a particular course of action is inadequate in some respect, you should spend at least as much time on developing what you think is a better path forward.

In that spirit, I plan to attempt my own answers to the questions I have posed about WHAT the types of problematic anti-vax content are, and WHICH treatments should be applied to these.

I want to look at the UK in particular as, while there is content in the global misinfo-sphere that is important everywhere, these controversial debates do have a strong local flavour and responses will need to reflect this.

I am hoping there are some good models out there already from the worlds of academia and fact-checking and would welcome pointers to useful sources.

If you are also interested in working on this or can direct me to other places where this work is already being done then do please get in touch via email to ricallan [AT] regulate.tech.
