[Note for new readers: this is a follow-on post to one where I talk about partisanship in the context of misinformation. I hope it also works on a standalone basis, but I would encourage you to read both.]
An especially fraught (and therefore interesting!) subject as we consider internet regulation is how online services affect the political process itself.
We may feel that governments and/or platforms are imposing rules that can swing the political balance in one direction or another, and this is a big deal.
Our instincts tell us there must be partisan effects, yet there is still little solid information about exactly when and where they occur.
This uncertainty is unsurprising given how recently new technologies like social media have come into widespread use.
A wise political scientist friend once told me that we are still trying to understand the effects of television on the political process, a reminder that real understanding takes serious research and time.
There is a particular challenge in getting to an agreed truth in this area as many of the people most interested in these effects will bring their own partisan biases to the party.
We are seeing important independent research coming out of many institutions around the world and I will be looking at this during my Fellowship at the Reuters Institute for the Study of Journalism.
But platforms and governments are also making decisions right now that have to be based on what we know today.
In this post, I will develop a concept of assessing policies for their ‘Likely Partisan Effects’ as a contribution to that decision-making framework.
This is not intended to be a substitute for the detailed research that I hope will help us to understand ‘Actual’ partisan political effects over time.
Research findings may confirm some of my ‘Likely’ effects while disproving others and I will gladly replace my conjecture with real data as it becomes available.
As always, I like to break big problems down into more manageable pieces and for this one I have sketched out five places where we can usefully look for partisan effects.
Note that this model reflects the elements that I consider to be important for social media rather than for other types of online service, where other models may be more useful.
A quick note on definitions before we proceed.
My use of ‘partisan’ here is not limited to contests between formal political parties but includes any area where different factions are putting forward competing views for how society should be organized.
The most familiar examples are those where the factions are formally declared to be political organizations, such as parties, and they are overtly competing to win power in elections so they can implement their ideas.
But there are many other types of factions that are active in promoting their views in more or less formal ways, eg non-governmental organisations, local citizens’ groups, and sub-groups within parties.
Please also read the word ‘political’ into ‘partisan’ as I am not concerned here with other partisan disputes such as over which sports team is best, or between different interpretations of a religious doctrine.
For each section, I will summarise where I see the scope for partisan effects and will follow this with some reasoning and examples.
TECHNOLOGY – LIKELY PARTISAN EFFECTS
The widespread take-up of social media in a society will affect the partisan balance vs the status quo ante.
The technology reduces existing barriers to political action by less well-resourced factions.
It may also assist factions excluded by political establishments.
It will not have a consistent partisan effect in terms of the traditional left-right balance.
The specific local effect will depend on the political stance of those who were previously excluded vis-à-vis those in the establishment in each country.
TECHNOLOGY – DESCRIPTION
The question here is whether there is something intrinsic to social media technology that means it will necessarily favour certain factions over others when it comes into widespread usage.
Social media has three superpowers: we can use it to 1) share information with other people, 2) consume information that has been shared with us, and 3) build communities of interest.
These things are all possible in a society where there is no social media but the new technology makes them cheaper and easier.
Sharing of information and building communities are core competences of political factions so we can expect there to be a significant impact on the business of politics whenever these are made cheaper and easier.
But will this necessarily be a partisan effect?
Even in nominally open democracies, there are barriers to some factions participating in political debates – typically resource constraints or legal restrictions that limit politics to a defined ‘establishment’.
The low cost of using social media as compared with traditional media means that its arrival is likely to have an effect on the balance between the resource-constrained and resource-rich.
The history of the 20th century provides us with many examples of the resource-constrained getting information out in spite of the challenges, eg the publishers of Samizdat in former Soviet states.
This reminds us not to overstate the importance of removing technical costs as compared with activist beliefs and personal courage, but the transition from printing equipment to publication via smartphone is not trivial either.
Where access is limited by regulations that only allow certain factions to engage in political activity, then access is not automatically enabled by the simple availability of a cheap new medium.
But in practice, the challenges around enforceability that I described in an earlier post, and gaps in the law, may mean that existing barriers to participation are either legally defective or easily bypassed on social media.
The Arab Spring is perhaps the standout example where we saw factions that had relatively few resources and who were not authorised by their governments use the newly available technology to advance their agendas.
The importance of social media in those events is itself a matter of some debate and I will write later on ‘Hosni Mubarak, My Part in His Downfall’ (NB this IS sarcasm, per the late, great Spike Milligan).
But even absent a full analysis, it seems reasonable to say that the widespread take-up of social media in countries like Egypt and Tunisia had a partisan effect which was (at least initially) in favour of less well resourced and/or formerly excluded factions.
USERS (AS CONSUMERS OF CONTENT) – LIKELY PARTISAN EFFECTS
Actions taken by users in selecting who to follow on social media platforms have very significant partisan effects.
Where social media feeds are dominated by user-selected content then user actions are correspondingly the most important consideration for partisan effects on the platform.
Where users consume more content selected by a) platforms, or b) advertisers, this reduces the significance of user actions in favour of platform or content producer actions respectively.
USERS (AS CONSUMERS OF CONTENT) – DESCRIPTION
I have had many conversations where I felt there were misunderstandings about why we see the content we do on social media.
I want to address those misunderstandings but if this is all familiar to you or you dislike over-extended metaphors then you may want to skip ahead.
Language brings associations with it and the choice of the word ‘Newsfeed’ to describe how social media information is presented may itself contribute to the confusion by inviting comparisons with ‘Newspapers’.
In reality, most of the content we see in a social media Newsfeed is not news (in the traditional news media sense), and its composition has absolutely nothing to do with the way that editors put together a newspaper.
The two baskets of food in this image help illustrate the difference (and, yes, the fact that I am writing this during the Covid-19 lockdown is responsible for food analogies being top of mind).
On the left, we have a hamper of goodies from a posh British shop called Fortnum and Mason.
When you order this hamper, the staff at the shop select what goes into it from a limited set of products that have been commissioned by buyers in that shop and it is shipped in this identical form to all purchasers.
On the right, we have the kind of shopping basket that we take around a supermarket ourselves.
You select what goes into this basket from a wide range of products that the supermarket has placed on the shelves, and each person is likely to take away a different set of goods by the time they reach the checkout.
As you have no doubt already figured out, I have put forward the hamper as analogous to newspaper websites and the supermarket basket as analogous to social media feeds.
This difference is critical to understanding the actions that are creating partisan political effects.
When it comes to the hamper, the key decisions about what is in it are made by the shop providing it and I can simply take it or leave it.
This can be a great model where my tastes align with those of the shop and it may lead to me discovering new foods when the shop throws in something I would never have selected for myself.
[NB fans of journalism might take this opportunity to point out the superior quality of the goods in the hamper and I would agree that the ‘hamper’ model of traditional news media often delivers great quality and consistency].
Returning to our core theme, we would recognise that any ‘bias’ in the selection of goods in the hamper is down to the shop and not down to the consumer’s actions.
Similarly, if I visit the websites of UK newspapers ‘The Guardian’ or ‘The Daily Mail’, I will get a left or right wing view of the world because of editorial decisions made by those publications.
Most people understand these biases and choose which newspaper website to visit on the basis of their own political leanings.
In many cases people will consistently choose only one of these two news suppliers and would be horrified if the wrong ‘hamper’ landed on their doorstep.
For the supermarket basket, the responsibilities sit quite differently.
The supermarket certainly has a role to play in terms of the suppliers it buys from, how it arranges goods on the shelves, and its decisions to promote some products with special offers.
But the key determinants of any ‘bias’ in my shopping basket are the individual choices that I have made when selecting what to buy.
Likewise, I may end up with a ‘Guardian’ or ‘Daily Mail’ feel to my social media feed but this is based on content I have selected, creating the ‘editorial line’ myself rather than delegating this to any third party.
This interpretation of the user’s role in bias on social media may not be popular with those who see a much stronger role for the invisible hand of platforms.
And I certainly do not want to be dismissive of the role that platforms also play in filling the ‘basket’ of content we see in our social media feeds.
As well as content pulled from sources that we have selected, it is common to find content that has been pushed into the feed either because the platform recommends it or because a content producer has paid to promote it.
We can do a quick exercise to weigh up the relative importance of user actions vs platform actions vs content producer actions by looking at our feeds on different services.
Scan the next 20 (or 50 if you have time) pieces of content in your feed and count the percentage that were from your connections (user action) vs recommendations or sponsored content (platform or producer action).
Where the proportion of user selected content is very high, then we might reasonably say that any partisan bias in the feed is primarily the result of choices you have made as a user.
Where the proportion of recommended and/or promoted content is very high, then platform and advertiser actions may be more significant factors in partisan bias than your user actions.
These relative proportions will vary from user to user and platform to platform and will change over time but they are a key metric for any discussion about ‘correcting’ partisan bias on social media.
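The feed audit described above can be sketched as a small script. The source labels and sample numbers here are entirely hypothetical, purely to illustrate the tally; real audits would need each item classified by hand.

```python
from collections import Counter

def audit_feed(items):
    """Tally a sample of feed items by the action that placed them there.

    Each item is labelled with its source: 'connection' (user action),
    'recommended' (platform action), or 'sponsored' (producer action).
    Returns the percentage share of each source in the sample.
    """
    counts = Counter(items)
    total = len(items)
    return {source: round(100 * n / total, 1) for source, n in counts.items()}

# A hypothetical 20-item sample of one user's feed.
sample = ["connection"] * 14 + ["recommended"] * 4 + ["sponsored"] * 2
shares = audit_feed(sample)
print(shares)  # {'connection': 70.0, 'recommended': 20.0, 'sponsored': 10.0}
```

On a sample like this one, with user-selected content dominating, the framework above would attribute most of any partisan bias in the feed to user actions.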
USERS (AS PRODUCERS OF CONTENT) – LIKELY PARTISAN EFFECTS
It seems likely that an imbalance in the quantity and quality of content produced by different factions will have some partisan political effect.
There is a range of claims about publisher actions causing partisan effects but more research is needed to understand the nature and significance of them.
Regulation is a significant external factor for political content production especially during election campaigns.
USERS (AS PRODUCERS OF CONTENT) – DESCRIPTION
There is a wide range of people producing political content for distribution through social networks from individuals with local personal observations to massive party or government-backed campaigns.
If content production were evenly distributed between different factions then we might assume that all the players would cancel each other out so that no one party gains a particular advantage.
But if there are significant variations in quantity and/or quality of content being produced by different factions then this may translate into a partisan effect.
The impact of this activity is the subject of a great deal of research by political scientists and I am not going to pretend to have all the answers here.
Research is especially important in teasing out questions of correlation and causation because of the tendency to try and ‘rationalise’ political outcomes.
Any of you who have been active in election campaigns will recognise the need to find a post-hoc justification for why you won (or, more typically, lost) a particular contest.
This is not meant as criticism of politicians, whom I respect enormously for putting themselves through the mill at election times (NB this IS NOT sarcasm), but it does mean a lot of the ‘received wisdom’ about campaigns is based on emotion, including real grief at a loss, rather than data.
The most discussed examples where victories have been credited to content producers on social media are the UK Brexit Referendum and the US Presidential Election in 2016.
The assertion is that these results would have gone the other way if it had not been for the quantity and quality of partisan content created by the campaigns and/or state-sponsored foreign factions.
It is not possible fully to prove or disprove these assertions after the event, but we now see increased monitoring and research before and during important election campaigns that will help with future analyses.
Where platforms now provide libraries of political advertising, like these from Google and Facebook, this will help with analysis of the subset of political content production that factions pay to promote.
There are also efforts to evaluate non-advertising content being produced by some factions, eg studies of the far right, but this can be challenging given the sheer scale of content in circulation.
The other key piece of the jigsaw is to improve our understanding of the effects of people consuming political content and we are now seeing useful research like this recent investigation into the effects of ‘fake news’.
Regulation already plays a significant role as most countries apply some form of specific legal requirements to political communications.
There is a lot of interest in how these regulatory requirements need to be updated for social media including new transparency requirements.
We can expect to see significant changes in how political content producers can use social media and in reporting about their activities as legislative changes roll out.
PLATFORMS (BEHAVIOUR) – LIKELY PARTISAN EFFECTS
‘Content agnostic’ behavioural controls may have unintended indirect partisan effects.
The nature of these effects will depend on how different factions want to use platforms which may in turn reflect the specific local political environment.
Increasing levels of control may impact less well-resourced and extra-establishment factions, rolling back some of the benefits these groups felt from the initial spread of the technology.
PLATFORMS (BEHAVIOUR) – DESCRIPTION
There are a number of areas where platforms seek to influence the behaviour of people who distribute content across their services.
These behavioural tools are typically framed as being ‘content agnostic’ and not intended to create specific partisan effects.
We can look at examples of these behavioural features – identity management, engagement signals, and anti-spam tools – to see how they might cause indirect partisan effects.
There are many ways for platforms to collect and present identifying information about their users.
At one end of the spectrum, platforms might ask for a single unverified identifier like an email address or phone number at sign-up, and allow people to represent themselves under whichever identity they choose.
By contrast, a platform could ask for multiple identifiers, including official documents, and insist that people represent themselves only using their real verified identity.
As identity requirements ramp up, they are likely to act as a greater barrier to factions whose views attract hostility, and to those who find it harder to produce any required documentation.
Most users of social media will not be able to read all of the content produced by their connections each day, so platforms provide mechanisms to order that content for them.
These ordering systems assess each content item to see how likely it is to be of interest to a user so they can put it in the right place in the feed.
Returning to my food analogy, this is where the supermarket rearranges the shelves for each customer so that the products that person normally buys are in the most prominent locations.
A commonly used signal for something being more interesting is when the system sees that many other people are engaging with that content.
Where some factions are able to achieve greater levels of engagement for their content this is likely to boost their prominence on the platform vs that of other factions with lower engagement.
In the political context, there are concerns that the use of engagement signals in content ordering systems gives undue prominence to factions that produce more extreme content as this provokes more reactions from users.
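The mechanism described above can be reduced to a toy illustration. The posts and engagement counts below are invented, and real ranking systems combine many more signals than this single one; the point is only to show how sorting by engagement alone surfaces the most-reacted-to content.

```python
# Hypothetical posts from different factions with their engagement counts.
posts = [
    {"faction": "moderate", "engagements": 40},
    {"faction": "extreme",  "engagements": 310},
    {"faction": "moderate", "engagements": 95},
]

# Ordering by engagement alone pushes the most-reacted-to content to the top,
# which is how higher-engagement factions can gain prominence in the feed.
ranked = sorted(posts, key=lambda p: p["engagements"], reverse=True)
print([p["faction"] for p in ranked])  # ['extreme', 'moderate', 'moderate']
```

If more provocative content reliably attracts more reactions, a rule like this will place it first even though the rule itself never inspects the content.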
Platforms generally place a high priority on preventing unsolicited (and unwelcome) communications as these can create a very negative experience for users.
These anti-spam tools identify people who are communicating inappropriately by looking at signals like the volume and rate at which messages are being sent, and patterns of message rejections and complaints.
Political factions have a core goal of building their supporter bases and this can bring them into conflict with anti-spam rules as they reach out to people beyond their existing known contacts.
We can expect there to be variations in the way that different factions use communication tools, in the size of their known supporter lists, and in the reaction of recipients to unsolicited messages.
These variations mean that anti-spam tools may result in partisan political effects which could be very significant if, for example, they result in the suspension of service only to some parties during an election.
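A content-agnostic anti-spam rule of the kind described above might be sketched as follows. Every signal name and threshold here is invented for illustration; real systems use many more signals and far more sophisticated models.

```python
def looks_like_spam(messages_per_hour, stranger_share, complaint_rate,
                    max_rate=50, max_stranger_share=0.8, max_complaints=0.05):
    """Toy content-agnostic spam heuristic using the behavioural signals
    described in the text: sending volume, share of messages sent to
    non-contacts, and recipient complaints. All thresholds are invented."""
    return (messages_per_hour > max_rate
            or stranger_share > max_stranger_share
            or complaint_rate > max_complaints)

# A small campaign reaching out mostly to strangers can trip the same rule
# as a spammer, even at a modest sending rate and with few complaints.
print(looks_like_spam(messages_per_hour=20, stranger_share=0.9,
                      complaint_rate=0.01))  # True
```

The rule never looks at what the messages say, yet a faction with a small existing supporter list will cross the stranger-share threshold far sooner than an established party messaging its members, which is exactly the kind of indirect partisan effect at issue.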
Good Guys and Bad Guys
I selected these three examples as they are all areas where platforms are being asked to make changes in response to political misinformation.
If the ‘enemy’ is someone using anonymous accounts to push outrageous content to all and sundry, then it makes sense to increase identity requirements, tweak feed algorithms to stop rewarding sensationalism, and get tough on unsolicited communications.
Many social media users will find these measures very attractive and uncontroversial if they protect us from what we see as junk.
But they will also impact on legitimate factions who are reluctant to reveal their identity because they are under threat, who are trying to shock people into paying attention to important stories, and who need to reach out to new people as they have few existing contacts.
Universal application of behavioural controls is both a strength – they seem fair in principle – and a weakness – the outcomes can in practice create sympathetic ‘victims’ where the rules feel insensitive and oppressive.
As a result, we sometimes see platforms being called upon to apply more controls to defeat the ‘bad guys’, and then criticised when those same controls impact on the ‘good guys’.
One way for platforms to address both criticisms is to create criteria for exceptions to the general rule and this usually means discriminating between ‘good’ and ‘bad’ content which leads us on to the final section.
PLATFORMS (ON CONTENT) – LIKELY PARTISAN EFFECTS
There is a deliberate and explicit intention to have partisan political effects in some elements of platform content policies.
Other elements of content policy are likely to have an indirect effect where they suppress speech that is part of political discourse.
The scale of these effects will depend on the breadth of factions who associate with the banned entities and/or use the prohibited forms of speech.
PLATFORMS (ON CONTENT) – DESCRIPTION
When we look at platform content policies we see our first examples of explicit, intentional partisan effects.
Platforms commonly prohibit content production by political factions that actively promote violence, including but not limited to terrorist groups.
They may also disqualify factions following specific political ideologies, typically neo-Nazis, from using their services.
These restrictions are not necessarily aligned with lists of prohibited organisations made by governments around the world and so will include both legal and illegal factions.
Where bans relate to marginal factions with little broader support in society then any partisan effects of these bans are likely also to be marginal.
But where banned factions have links with more mainstream organisations then there can be a wider collateral partisan effect which the examples below help to illustrate.
The left side of Figure 3 is a photo from the Durham Miners’ Gala, a major event in the calendar of the UK Labour Party.
A group is marching past the party leadership holding yellow flags adorned with an image of Abdullah Öcalan, who is both a convicted terrorist and an icon to Kurdish nationalists.
Platform policies prohibiting content supportive of terrorism suggest this content should be removed as it appears to be celebrating a well known terrorist (whose faction is banned in many countries including the UK).
In this case, removal will impact a wider community of people who support the mainstream Labour party as well as the smaller faction of supporters of Öcalan who are the intended target.
There are similar associations between supporters of Kurdish nationalism and mainstream political parties in other European countries.
Turning to the right side of the image, we see a former leader of the UK Independence Party (UKIP) shaking hands with a far right activist, Tommy Robinson, after appointing him as an adviser.
Facebook banned Tommy Robinson from their platform because they judged him to be promoting hatred in violation of their policies.
As well as UKIP, there are many factions both in the UK and further afield who are sympathetic to this individual, so the ban may lead to some of their content being removed and have wider partisan effects.
The extent to which a ‘persona non grata’ is integrated into other political factions becomes a key determinant of the partisan effects of their being banned.
Other elements of content regulation that are likely to have partisan effects are those directed at hate speech.
The partisan impact can be very direct so that, for example, a faction called “Let Women Drive” might be permitted while one called “Stop Women Driving” is not as the latter group are calling for the exclusion of a protected group.
Partisan effects are to an extent inevitable when the aim of rules – whether through law or platform policies – is to protect defined categories of people from specific kinds of attacks.
These rules will necessarily have a disproportionate effect on factions that are prone to making the prohibited types of attack.
The effects may be felt differently across the political spectrum as factions that fall foul of hate speech rules are often (but not always) associated with the far right.
This can lead to complaints of bias when there is not similar enforcement against radical groups on the far left of the political spectrum.
It is worth considering for a moment why platforms would prohibit ‘Bash the Gays’ but not ‘Bash the Rich’.
There are two quite different rationales that commonly underpin restrictions on political speech – whether in law or platform policies – that I would characterise as ‘anti-oppression’ and ‘anti-sedition’.
Supporters of anti-oppression measures see certain forms of speech as exacerbating the oppression that is experienced by some groups in society.
It is the fact that the protected groups are deemed to be suffering oppression that creates the compelling case for restricting speech in order to protect them.
It is unlikely that ‘the rich’ would ever fall into this category of being an oppressed minority by any reasonable interpretation.
Anti-sedition rules on the other hand are intended to protect the social order from any threat and this might include protecting the rich and powerful.
In each case, the test is whether restrictions on speech are ‘necessary in a democratic society’ but the focus is on different societal needs.
There is often more sympathy for anti-oppression rules and more skepticism about anti-sedition rules from people on the left of the political spectrum, while these feelings are reversed for people on the right.
This divide means that the attitude of platforms towards anti-oppression and anti-sedition measures will variously be interpreted in partisan terms.
There may be a temptation to think that political balance can be achieved by trading measures off against each other but a more principled approach would be to continue considering each of these rationales on their own merits.
There are examples of governments developing regulation with the purpose of creating their own partisan effects in each of the areas I describe above and I will look at some of these in my next post.
To close this post, I want to leave you with an example of how we might apply a model of Likely Partisan Effects to a particular decision.
This does not lead to a single right answer but may allow us to have a more informed and open dialogue about the real trade-offs with different options.
Example 1 – Assessing the Likely Partisan Effects of restricting foreign political advertising
One response to concerns about inappropriate interference in elections is to ban the purchase of political adverts targeted at a country by people who are not resident in that country.
Such a ban could be implemented through platform behavioural controls or through government legislation or both.
If implemented comprehensively and effectively, such a ban would:
— 1. Reduce participation by factions working directly for foreign governments from abroad.
— 2. Reduce participation by international non-governmental organisations where they do not have a local branch.
— 3. Reduce direct participation by overseas branches of domestic political factions.
— 4. Add friction to the purchase of political adverts for domestic political factions who will be asked for documents to prove they are not foreign.
The Likely Partisan Effects of each of these impacts will vary from country to country and may result in the measure being more controversial than at first glance or in calls for specific exceptions.
Impact 1 is typically the primary goal of such a ban while the other effects are incidental.
Impacts 1 and 2 together will be seen as non-partisan where there is a broad consensus that the foreign governments and NGOs have no legitimate role in an election.
But there are also scenarios where some domestic factions seek support from external NGOs and governments as a counter-weight to what they see as oppressive domestic controls over elections.
This variable political landscape may lead to calls for differential treatment of ‘good’ and ‘bad’ foreign interference which would add another dimension to the partisan effects.
Impacts 3 and 4 are felt by legitimate domestic factions and their partisan effects will depend on any differences in how these factions operate in each country.
Where patterns of partisan allegiance in diaspora communities differ from domestic party strengths, as is sometimes the case, then limiting the activities of overseas branches may have more impact on specific factions.
And enhanced identity checks to prove local residence may disproportionately affect factions that are otherwise legitimate but uncomfortable or unable to comply with those checks.
There is nothing in this process that leads to a predetermined outcome, and in the example above the ‘right’ option after doing this analysis may still be a comprehensive ban with no exceptions.
But by recognising that there are likely to be partisan effects and surfacing these we will have a more honest and informed debate whenever we are considering measures that impact the political realm.
Summary: I look at where partisan bias might happen with the use of social media. I use examples to describe the mechanisms that may cause partisan effects. I propose that we conduct analyses of the Likely Partisan Effects of interventions by both platforms and governments.