The role of platforms in moderating content is a frequent topic of my posts, but it is important also to consider the ways in which people who use online services have a hand in moderating content.
Some of this moderation activity is particular to each individual – we post content and we may later edit or remove it based on feedback we receive from others or our own second thoughts.
Some is more structured: people who administer spaces within social networks, such as pages or groups, may moderate content according to community rules that they apply in addition to any rules governing the platform.
Some platforms, such as Reddit, place a strong emphasis on community rather than platform moderation, and this has often worked well.
These forms of self and community moderation tend to attract less interest than the moderation systems operated by the platforms themselves, even though they may account for high volumes of actions on content.
To make this real, I want to share some text from a thoughtful friend who wants to see good online discussion on issues relevant to the city of Bath where he lives.
Note from William Heath, reflecting on moderating a local Facebook group:
What might a moderation policy for online local political discourse look like?
The UK has a massive political agenda to discuss. The country faces the immediate twin shocks of pandemic and Brexit, with an ominous climate crisis always looming. There is unlikely to be another general election for four years. We’re going to need effective local political engagement as communities face the urgent challenge of building back better.
Social media – particularly online communities – should provide just the tool we need for that. But the reality of local political discourse online – on Twitter or in Facebook groups – is all too often a disappointing spectacle of entrenched positions, name-calling, and ad hominem attacks. Councillors and MPs – particularly women and minorities – report a constant underlying level of harassment, plus an unmistakable level of overt threat.
Where is this headed? One option is to accept that this is just the nature of things and that we have to live with a high level of anger and abuse. But local communities in Britain would be better served by more effective discussion and engagement.
What we need are fora where people engage courteously, consider evidence and different points of view, and evolve their opinions. And a political environment where women and minorities can feel confident about engaging on an equal basis.
So what principles might help moderate local community discourse in a productive and politically neutral way? This is an attempt to frame the issues and offer a moderation policy for discussion.
Communities need robust democratic debate. Partisan people might not enjoy their side being criticised, but communities need effective political discourse.
Legitimate opposition to specific policies which are open to discussion (eg support for business; planning policy; climate measures) keeps politicians on their toes and hones their ideas. That necessarily includes criticism from people who will never change their minds, and who will use any issue to bash away at their political opponents. Yes, this can be tedious and predictable, but there’s no law against it, and it has a role.
Moderators might want to step in at the point where language becomes offensive. And there is a distinction to be made in a community discussion between people whose views are entrenched and people whose role is deliberate trolling: sowing division and provoking in order to draw a reaction.
We also see behaviours that seek to stifle rather than stimulate debate from people at the more extreme ends of legitimate politics. They hold their beliefs so strongly that they do not even want to hear from those who hold a different opinion, and will instead try to shut them down.
That’s where there’s a judgement call for a moderator: they need to protect their space from deliberate troublemaking, or speech which has the effect of silencing people who are marginalised or simply of a different view.
This becomes a pretty straightforward decision when a moderator is faced with reckless misinformation, deliberate disinformation, and gaslighting. Eliminating these is key for healthy political discourse. It may seem like a hopeless task given the volume of wrong information, but there are plenty of fact-checking tools and services available and it should be possible to establish a discipline in a closed community where lies and personal attacks are unacceptable and stick out like weeds in a well-tended garden.
We might identify these four levels, with corresponding responses from a moderator:
| Level | Behaviour | Moderator response |
|---|---|---|
| 1 | Legitimate local political debate (maybe robust, sometimes disagreeable) | Encourage respectful, substantive, robust dissent, fact-checking, and use of evidence. Discourage tedious repetitiveness. Celebrate the political debate in your community. |
| 2 | Unacceptable behaviour: offensive language, dehumanisation, personal attacks | Make continued participation in your forum dependent on a change in behaviour towards #1 above. There may be a legitimate difference of valid views behind the superficial unacceptable behaviour. People who behave badly may yet be valuable if they moderate their behaviour. |
| 3 | Persistent trolling, hateful extremism, advocacy of views incompatible with human rights and democratic values (eg racial supremacy). Stretching the "Overton window" with chilling effect | Get them out of the discussion. Debate won't ever help them; they won't ever help the debate. |
| 4 | Unlawful behaviour: harassment, doxxing, advocating violence | Report to police. All constabularies will have specialists in hate crimes and counterterrorism. |
A possible moderation policy for those in charge of spaces for local political discourse might look like this:
1. Welcome to our online forum. You're welcome to contribute views, present facts, discuss and disagree.
2. You're especially welcome if any criticism you make is accompanied by a suggestion for how things could be better, and if you're ready occasionally to change your mind or to accept you may be mistaken.
3. Please be respectful and use moderate language even if you feel strongly on an issue or take a dislike to another participant. Please show the same courtesy you would if meeting other people face to face in your favourite local cafe or restaurant.
4. Please do not share any disinformation or anything actionable (libellous or unlawful). Please do basic fact-checking before sharing rumour or possible misinformation. Where you make factual assertions, try to quote sources where possible.
5. This community will not tolerate personal attacks, or criticism based on gender, race, age, ethnicity, or disability. Any such material will attract a single final warning, with second-time offenders banned.
6. Trolling (deliberately provocative behaviour with no purpose other than to cause a reaction) and harassment will lead to a ban.
7. Any undermining of democratic processes (such as intimidation of elected representatives) or of people's human rights (such as, but not limited to, advocacy of racial superiority or of lawbreaking) means an immediate ban without warning.
8. Anything unlawful, including doxxing, incitement to violence, or hate speech, will lead to a ban and a report to the police.
9. The moderator may seek advice from community participants before taking any of these measures, but in all these matters the moderator's decision is final.
This is one example of how someone is approaching this challenge as a contribution to thinking about how people, rather than platforms or governments, can regulate the online spaces they use.
I will explore further in another post how these different models of self, community, platform and government moderation might each come into play in different scenarios.