
Journalists v Moderators – 22nd July 2020


The title of this post is not a set-up for a bad movie, or an inter-departmental sporting contest, but rather reflects some of my recent learning from talking with journalists.

This has helped me to think about the differences between what classic media and social media platforms do in practical terms.

The short summary of this is that classic media employs journalists to edit content IN to their platforms, while social media companies employ moderators to edit content OUT of their platforms.

I will explore the difference between ‘IN and OUT editing’ in this post as a contribution to the ongoing debate about whether social media platforms should be treated as ‘publishers’.

Hey, I’m On The Telly

I was asked to appear on a UK TV news program recently to talk about advertisers boycotting social media platforms.

The interviewer challenged me with the fact that the person who committed the awful terrorist attack in New Zealand had been on Facebook, while she said that someone like that would never be allowed on her news program.

This got me reflecting on how news journalists actively and deliberately decide which voices should be heard on their platforms, in contrast to social media platforms that are (generally) by default open to all comers.

My experience of what it took to appear on this news show was instructive as it involved conversations with several of the program staff.

They wanted to walk through what I was going to say and would presumably have politely found a way to say they did not need me if they had not felt I would add to ‘their’ story.

They warned me that the item might be bumped off the program if other stories broke, i.e. if more important voices needed to be heard that night.

They arranged the logistics for the appearance and had people to choreograph every step of what would be a 4-minute section of the program.

This is how journalism works, and it can produce a very high quality product for helping people understand a topical issue.

For print journalism, there is not the need to bring live voices in as there is for broadcast, but the same disciplines of deciding what to cover, who to talk to, and how to frame an issue are the stock in trade of any editorial team.

We can best understand this form of editing voices IN by considering what a classic media publication would look like if the editorial team took no action.

Absent editorial decisions, a newspaper or broadcast show would be empty – if somebody from the outlet has not actively sourced content, then there is none!

For social media platforms, by contrast, if no editorial decisions are taken then the platform will simply continue to fill up with whatever content users choose to share.

An ‘unedited’ social media platform will not be devoid of voices but rather a cacophony of the voices of anyone who chooses to show up.

In a community where everyone is well-behaved, little or no editing may be unproblematic, as the content will stay within acceptable limits.

Unfortunately, we more typically find that some people behave in ways that are problematic for other users and/or cause harm to society and so need to be edited OUT of the platform.

Going back to the question in the TV interview about the terrorist in New Zealand, the response from platforms was to look at how they could get better at editing OUT white supremacist voices.

Persona Non Grata

Another conversation I had with a journalist shed light on a further important difference between journalism and moderation.

I asked an experienced producer of a show if they maintained a ‘banned list’ of people who would never be allowed to appear.

I was thinking they must work in a similar way to social media platforms, which do create such lists of ‘personae non gratae’, ironically often adding people following hostile stories in the classic media.

The producer’s response was no, they did not maintain a banned list as such, but rather took editorial decisions for each story on which voices to include and how they might challenge those voices.

While a moderator is only considering whether or not to ban a particular voice, a journalist is deciding whether to include that voice and then robustly go after it in the piece.

The journalist can do this because they set the terms for the story they are creating, and the media outlet has full control as the publisher here.

When people use social media platforms, they get to tell stories on their own terms, and they, not the platforms, are the publishers in the journalistic sense of deciding on the messages they want to convey to their audience.

So we can identify some essential features of journalistic content – the creator decides which voices to feature and how they want to frame those voices, including using their own voice to challenge people as appropriate.

For social media platforms, there is no pre-selection of the voices (at least for most open platforms), the framing is chosen by the voices themselves (within the technical possibilities of the platform), and the platform does not use its own voice to challenge the speaker (though other users may do).

We see this difference play out when decisions by both types of media become contentious.

Social media platforms find themselves under pressure over the decisions they have taken about which voices they will or will not edit OUT, most notably in the recent rows over how to treat the voice of President Trump.

Classic media run into controversy over the voices they have chosen to bring IN, as we saw recently with the Tom Cotton opinion piece in the NYT, or with the BBC’s decision to have a far-right politician on a political show in 2009.

Where Are We Headed?

There is significant pressure, especially in the US in the run-up to their Presidential election, for platforms to be more active editors of voices from what I would loosely call the ‘populist right’.

It is hard to find the exact term, but the manifesto of the Stop Hate for Profit campaign captures the cluster of views people have in mind when it calls on platforms to root out and remove these voices:

Find and remove public and private groups focused on white supremacy, militia, antisemitism, violent conspiracies, Holocaust denialism, vaccine misinformation, and climate denialism. 

Stop Hate for Profit Recommendations

The major platforms have always banned out-and-out terrorists and overtly racist and neo-Nazi voices.

Social media platforms have also already moved their lines over the years so they often now exclude a wider range of far right individuals and organisations such as Britain First in the UK and Infowars in the US.

But, the call to action we see above from the Stop Hate for Profit campaign, as well as from other activists, is looking for platforms to restrict a broader range of voices.

If platforms agree – whether willingly or under pressure – that they need to intervene more then they may choose to do more journalistic editing, or be more expansive in their moderation, or apply some mix of both of these.

The first lesson from journalism would be for platforms to consider whether to do more to manage which voices can come IN, rather than letting everyone IN and then deciding who to push OUT.

It seems unlikely that today’s large, open platforms would choose to become ‘invite only’, though this is a model that new platforms might adopt if they want to build smaller, more homogeneous communities.

Some of the measures in the regulatory debate are aimed precisely at requiring platforms to do more gatekeeping of who gets to be IN and I have discussed these in a previous post.

Assuming that open access is broadly going to continue, and that there will be voices that platforms wish to control, then platforms can choose to act as moderators or journalists for particular types of content.

Looking at recent actions by Twitter, we can see examples of both of these approaches.

Fig 1 – Section of Twitter’s factcheck page for postal voting: content sourced by Twitter to challenge the Trump tweet.

When the US President posted inaccurate content about postal voting, Twitter applied a journalistic treatment.

The platform sourced a range of content on the subject of postal voting and attached this to the original tweet in order to change its framing.

Unlike classic media, the platform did not proactively invite the US President to give his views on postal voting, and the use of a discreet label makes the story less wholly owned by Twitter, but this is more journalism than moderation.

In other cases, such as the action announced today in respect of promotion of the so-called QAnon conspiracy, Twitter is following a more usual moderation strategy that is aimed at suppressing rather than reframing these voices.

As well as closing down ‘bad’ accounts, platforms can make it impossible for any of their users to share links to off-platform content when they want to dial suppression up to 11.

Platform Journalism v Moderation

Platforms have largely favoured moderation over journalism to date.

A practical consideration is the belief that it is possible to do moderation at the scale required for a large platform, but that it would not be possible to apply journalistic treatments at scale.

Given the disparity of effort between pushing a ‘DELETE’ button and sourcing and attaching journalistic content to a social media story, it seems likely that moderation will continue to be the predominant tool for platforms.

But the balance feels like it is shifting at present, with more critics questioning the scaled moderation efforts of platforms, and more demand for platforms to promote journalistic content in causes like battling Covid-19.

As well as the technical challenges of working with content at scale – where innovation may create new opportunities – legal and regulatory questions are also critical for platforms.

The legal risks of a moderation decision are quite well known.

If a platform suppresses a voice and the speaker objects, then the platform will rely on the fact that it is a private entity enforcing a contract that the other party signed up to, one which includes compliance with content standards.

This is not bullet-proof and there have been cases outside the US where platforms have been ordered to restore voices that they had removed (I know of at least two from Israel and Germany).

Platforms may increasingly have to demonstrate to courts that they have behaved ‘reasonably’, by having clear policies, providing users with accurate information, and so on, but the principle that they can refuse to offer a service seems quite robust.

If a platform decides not to suppress a voice, then there can be a range of associated risks depending on the harm that the voice is alleged to be creating.

In some cases, eg copyrighted material, there is a well-established and known risk of financial penalties that applies in the US and many other countries.

This creates a very strong incentive for platforms to make sure they suppress voices they have good reason to believe are infringing copyright.

In other cases, the association between failure to suppress content and the risks to a platform is less clear.

Developing mechanisms to penalise platforms for not suppressing voices engaged in illegal speech (or, in the case of the UK, as-yet-undefined ‘harmful’ speech) is a focus of much current regulatory activity, eg NetzDG in Germany, FOSTA-SESTA in the US, the proposed Online Harms Bill in the UK, the Avia Bill in France, etc.

As well as these targeted measures, there are debates in both the EU and US about platform exemptions from liability for content that will have a significant impact on how platforms assess risk in this area.

When it comes to social media platforms getting into journalism rather than relying on more moderation, this is both an older and a newer field from a regulatory perspective.

It is older in the sense that if this means platforms are actively commissioning and distributing content of their own volition, then here they are behaving like classic media publishers and much of the same regulation will apply.

It is newer in that some of the techniques being used, eg when a platform creates a link to content from a third party factchecker, are unlikely to have been explicitly catered for in existing regulation.

When platforms give special access to factcheckers that they have selected this is clearly more like journalism than moderation, but it does not necessarily make sense to treat it in exactly the same way as classic media content.

This connection is being made by those calling for platform protections to be reduced precisely in response to the factchecking of the US President’s comments on postal voting (the s.230 debate in the US).

This is a legally and politically contested space, but the effect of some potential reforms would be to make journalistic interventions riskier for platforms and drive them back towards up-or-down moderation.

Open Questions

The most pressing question for platforms right now, with a particular focus on Facebook, is whether they agree in principle that they should intervene more against voices that cluster around the ‘populist right’.

There has previously been pressure to act on other kinds of harmful voices with the focus moving between different platforms, and we can expect there will be more shifts of emphasis depending on current events.

A key strategic question for platforms is when and how to use journalistic interventions vs ‘stay up or take down’ moderation.

We see platforms using both methods today but this can feel more reactive than the product of a holistic assessment of what the ideal (and sustainable) solution is over the long-term.

There is a risk (a perennial one for Silicon Valley tech) that models will evolve that reflect the particularities of the US situation but are a poor fit elsewhere.

Regulation may play a significant part in shaping platform strategies, with most of the action at the moment increasing incentives for platforms to do more moderation rather than journalism.

Summary: journalists decide which voices should be allowed IN to a space, while moderators decide which voices should be pushed OUT. Social media platforms are using both kinds of intervention, though not necessarily describing them in these terms. This post explores the differences between the approaches and some of the relevant regulatory questions.
