How Political Bots Worsen Polarization

Do you always know who you are dealing with? Probably not. Do you always recognize when you are influenced? Unlikely. I found it hard to pick up on human signals without succumbing to my own predisposed biases. In other words, maintaining “an open mindset” is easier said than done. A recent study found this to be particularly true when dealing with political bots.

tl;dr

Political bots are social media algorithms that impersonate political actors and interact with other users, aiming to influence public opinion. This research investigates the ability to differentiate bots with partisan personas from humans on Twitter. This online experiment (N = 656) explores how various characteristics of the participants and of the stimulus profiles bias recognition accuracy. The analysis reveals asymmetrical partisan-motivated reasoning, in that conservative profiles appear to be more confusing and Republican participants perform less well in the recognition task. Moreover, Republican users are more likely to confuse conservative bots with humans, whereas Democratic users are more likely to confuse conservative human users with bots. The research discusses implications for how partisan identities affect motivated reasoning and how political bots exacerbate political polarization.

Make sure to read the full paper titled Asymmetrical Perceptions of Partisan Political Bots by Harry Yaojun Yan, Kai-Cheng Yang, Filippo Menczer, James Shanahan at https://journals.sagepub.com/doi/full/10.1177/1461444820942744 

Illustration by C. R. Sasikumar

The modern democratic process is technological information warfare. Voters need to be enticed to engage with information about a candidate, and election campaigns need to ensure accurate information is presented to build and expand an audience, or a voter base. Assurances for the integrity of information do not exist, and campaigns are incentivized to undercut the opponent’s narrative while amplifying their own candidate’s message. Advertisements are a potent weapon in any election campaign. Ad spending on social media for the 2020 U.S. presidential election between Donald Trump and Joe Biden is already at a high, with a projected total bill of $10.8 billion driven by both campaigns. Grassroots campaigns are another potent weapon to decentralize a campaign, mobilize local leaders, and reach a particular (untapped) electorate. While the impact of the coronavirus on grassroots efficacy is yet to be determined, these campaigns are critical to solicit game-changing votes.

Which brings me to the central theme of this post and the paper: bots. When ad dollars or a human movement are out of reach, bots are the cheap, fast, and impactful alternative. Bots are algorithms that produce programmed results, either automatically or with human direction, with the objective of mimicking and creating the impression of human behavior. We have all seen or interacted with bots on social media after reaching out to customer service. We have all heard or received messages from bots trying to set us up with “the chance of a lifetime”. But do we always know when we’re interacting with bots? Are there ways of telling an algorithm apart from a human?

Researchers from Indiana University Bloomington took on these important questions in their paper titled Asymmetrical Perceptions of Partisan Political Bots. It explains the psychological factors that shape our perception and decision-making when interacting with partisan political bots on social media. Political bots are used to impersonate political actors and interact with other users in an effort to influence public opinion. Bots have been known to facilitate the spread of spam and fake news. They have been used and abused to amplify conspiracy theories. And with use, they improve. In conjunction with enhanced algorithms and coding, this poses three problems:

(1) Social media users become vulnerable to misreading a bot’s actions as human.
(2) A partisan message, campaign success, or sensitive information can be scaled up through networking effects and coordinated automation. A frightening example would be the use of bots to declare an election won while voters are still casting their ballots. And
(3) political bots are inherently partisan. A highly polarized media landscape offers fertile ground for political bots to exploit biases and entrench political misconceptions. That means direct engagement isn’t even necessary; mere exposure to a partisan political bot can lay the groundwork for later manipulation or influence of opinion.

Examples of low-ambiguity liberal (left), low-ambiguity conservative (right) profiles used as stimuli. Identifiable information is blurred.

The research focuses on whether certain individuals or groups are more easily influenced by partisan political bots than others. This recognition task depends on how skillfully individuals or groups can detect a partisan narrative, recognize their own partisan bias, and either navigate through motivated reasoning or drown in it. Motivated reasoning manifests as in-group favoritism and out-group hostility; for example, conservatives favor Republicans and view Democrats with hostility. Contemporary detection methods include (1) network-based approaches, i.e., bots are presumed to be interconnected, so detecting one exposes its connections to other bots; (2) crowdsourcing, i.e., engaging experts in the manual detection of bots; and (3) feature-based approaches, i.e., a supervised machine-learning classifier is trained on characteristics of known political accounts and flags accounts whose features match inauthentic patterns. These methods can be combined to increase detection rates. At this interesting point in history, it is an arms race between writing code for better bots and building systems to better identify novel algorithms at scale. This arms race, however, is severely detrimental to democratic processes, as these bots are potent enough to deter participants at the opposing end of the political spectrum, or at least to undermine their confidence.
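To make the feature-based approach concrete, here is a minimal sketch of a supervised bot classifier. The features (follower ratio, posting frequency, account age) and the synthetic training data are hypothetical illustrations, not the features used by any real detection system; production detectors such as Botometer rely on far richer feature sets.

```python
# Minimal sketch of a feature-based bot classifier.
# Features and labeled data are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Hypothetical features per account:
# [followers/following ratio, posts per day, account age in days]
humans = np.column_stack([
    rng.normal(1.0, 0.5, n),    # roughly balanced follower ratio
    rng.normal(3.0, 2.0, n),    # moderate posting frequency
    rng.normal(1500, 600, n),   # mostly older accounts
])
bots = np.column_stack([
    rng.normal(0.2, 0.2, n),    # follow many, followed by few
    rng.normal(40.0, 15.0, n),  # very high posting frequency
    rng.normal(200, 150, n),    # mostly newer accounts
])

X = np.vstack([humans, bots])
y = np.array([0] * n + [1] * n)  # 0 = human, 1 = bot

# Train on labeled accounts, evaluate on a held-out set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Because the synthetic feature distributions are cleanly separated, the classifier performs well here; real accounts overlap far more, which is why production systems combine many feature families and why, as the paper notes, biased training labels can propagate into the models.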

Examples of high-ambiguity liberal (left) and high-ambiguity conservative (right) profiles used as stimuli. Identifiable information is blurred.

The researchers found that knowingly interacting with partisan political bots only magnifies polarization, eroding trust in the opposing party’s intentions. However, a regular user will presumably struggle to discern a political bot from a politically motivated (real) user, which leaves the potential voter base vulnerable to automated manipulation. To counter this manipulation, the researchers focused on identifying the factors that shape human perception: the ambiguity between real users and political bots, and the recognition of a bot’s coded partisanship. Active users of social media were more likely to establish a mutual following with a political bot. These users tended to be more conservative, and the political bots they chose to interact with were likely conservative too. Time played a role insofar as active users who took more time to investigate the political bot, which they saw only as a regular-looking, partisan social media account, were less likely to accurately discern real users from political bots. The results demonstrated (1) a higher chance for conservative users to be deceived by conservative political bots and (2) a higher chance for liberal users to misidentify conservative (real) users as political bots. The researchers conclude that

users have a moderate capability to identify political bots, but such a capability is also limited due to cognitive biases. In particular, bots with explicit political personas activate the partisan bias of users. ML algorithms to detect social bots provide one of the main countermeasures to malicious manipulation of social media. While adopting third-party bot detection methods is still advisable, our findings also suggest possible bias in human-annotated data used for training these ML models. This also calls for careful consideration of algorithmic bias in future development of artificial intelligence tools for political bot detection.

I am intrigued by these findings. Humans tend to struggle to establish trust online. It is surprising and concerning that conservative bots may be perceived as conservative humans by Republicans, while conservative humans may be perceived as bots by Democrats. The potential to sow distrust and polarize public opinion is nearly limitless for a motivated interest group. While policymakers and technology companies are beginning to address these issues with targeted legislation, it will take a concerted multi-stakeholder approach to mitigate and reverse the polarization spread by political bots.
