How COVID-19 Will Impact The 2020 US Presidential Election

The way Americans choose their President is complicated. Foreign interference in previous elections and a polarized, partisan political arena at home have only made it more complicated to elect the leader of the free world. In the 2020 U.S. presidential election, the coronavirus pandemic completely upended how presidential campaigns rally supporters and how candidates run for public office at large. A Postal Service that is underfunded and stripped of its human capital faces an unprecedented volume of mail-in ballots. This leaves me with two questions: What is the impact of COVID on the race for the White House? And how are both campaigns using it (or not) to drive their political pitch and capture voters? An answer might be found in a recent data memo by researchers from the University of Leeds School of Politics and International Studies.


The impact of COVID on the upcoming November 2020 US election will be an important topic in the coming months. In order to contribute to these debates, this data memo, the final in our summer 2020 series on COVID, considers this question based on an analysis of social media discourse in two week-long periods in late May and early July. We find that only a very small proportion of tweets in election-related trends concern both the election and COVID. In the May period, there was much evidence of conspiracy-style and misinformative content, largely attacking the Democrats, the seriousness of COVID and postal-voting. Tweets also showed that the stances of the Presidential nominees towards the coronavirus have emerged as a major point of political differentiation. In the July period, tweets about COVID and the election were dominated by the influence of a new anti-Trump Political Action Committee’s viral videos, with the hashtags associated with these videos found in 2.5% of all tweets in election-related trends across the period. However, this criticism was not mirrored in the wider dataset of election-related or political tweets in election-related trends. Criticism of Trump was frequent across all time periods and samples, but discourse focused far more on Trump, especially in the July period, in which tweets about Trump outnumbered tweets about Biden 2 to 1. We conclude that these patterns suggest the issue of COVID in the US has become so highly politicised that it is largely only one side of the political spectrum engaging with how COVID will impact the US election. Thus, we must ask going forward not how COVID will impact the process and outcome of the election but rather how COVID will be used as a political and campaign issue in the coming election.

Make sure to read the full data memo titled COVID’s Impact on the US 2020 Election: Insights from Social Media Discourse in the Early Campaign Period by Gillian Bolsover at 

Chip Somodevilla (Getty Images) & Gerry Broome (AP Photo)

We had a good run in the first three months of 2020. Then the COVID pandemic spread across the globe. Most industrial nations had to shut down their economies, with rigorous lockdown and shelter-in-place policies paralyzing citizens and economies alike. COVID also impacted the democratic processes of many developed nations. For example, New Zealand postponed its national election to late 2020. Elections for the Hong Kong city legislature were postponed until 2021. At least 88 elections were impacted in one way or another by COVID. This data memo focuses on the 2020 U.S. presidential election, which will not only decide a highly polarized presidential contest between incumbent President Donald Trump and his Democratic challenger Joe Biden, but also elect the entire U.S. House of Representatives, a third of the U.S. Senate and decide numerous races for public office. It takes a snapshot from the campaign trail, comparing social media data that discusses the election and COVID in conjunction with either candidate, and it demonstrates not only how the pandemic impacts the 2020 U.S. presidential election but also how COVID is used as a political weapon to advance campaign objectives.

For most people across the globe, mid-March marked the beginning of indoor restrictions, facemasks, social distancing and the loss of what little certainty our socio-economic environments had to offer. It was a pivot point for political operatives, who in past elections were able to rely on a candidate’s charisma at in-person political rallies. COVID forced both campaigns to adhere to public health guidelines. Republican and Democratic campaign events were either cancelled or moved indoors and online. Supporters and candidates alike were confined to choppy Zoom calls on small screens. Studies have shown that national crises usually create an increase in favorability for an incumbent president; for example, the popularity of George W. Bush spiked following the terrorist attacks on September 11. They also tend to benefit Republican or conservative leaders in particular when paired with a patriotic narrative. This is known as the ‘rally-round-the-flag’ effect. However, its impact is subject to media coverage and the proliferation of the political narrative spun up by the incumbent’s campaign. And Trump, an experienced social media operative, stood to benefit from the COVID pandemic: the U.S. economy was in good shape in Q1 2020, the tactical assassination of Qasem Soleimani did not start another war in the Middle East, nor did the impeachment proceedings cause any apparent political damage to his reelection campaign. The administration’s track record could have been far worse. Despite these advantages, however, Trump’s often erratic behavior prompted social media platforms to apply content policies to public figures and politicians more restrictively. Twitter started labelling Trump’s tweets as misinformation or removing tweets that violated its content policies.


The researchers focused on a snapshot of both campaigns by analyzing Twitter data from May and July. To limit sampling bias, they drew sample data from all trending topics within the U.S. They found a small number of trends focused solely on the election, while a large number of trends concerned politics in general. COVID and the election together were found to be a significant topic of discussion. However, the researchers found little evidence of trends directly discussing connections between COVID and the election. A few example cases demonstrated hyper-partisan, authoritarian labelling meant to divide an in-group from an out-group, for example the unrestricted proliferation of baseless conspiracy claims against the Democratic party in conjunction with misinformation, as seen below:


Other examples appeared to offer evidence but misinterpreted it, thereby shifting responsibility away from the administration:


The majority of these hyper-partisan examples played out on the conservative end of the political spectrum. Democrats were found to capitalise on fewer opportunities to link the administration’s failure to mitigate COVID to the election, because Democrats treated the pandemic as a serious health issue demanding a scalable policy solution, not something to be trifled with. This imbalance between the two campaigns created further polarization: Democrats moving campaign activity online was used by Republicans as further ‘evidence’ for a conspiracy. Social media users reacted to this behavior by withdrawing support from both candidates. Anti-Biden content dropped, while anti-Trump content remained at a consistently negative level. Trump, however, generated twice as much traffic overall, which drove more voter attention to his campaign. Despite consistent criticism of the administration’s handling of the pandemic, the negative sentiment towards Trump did not create tangible support for Biden. And visibility can also be an indicator of success in an election; as the incumbent, Trump has the advantage over Biden. The researchers therefore conclude that the hyper-partisan narrative surrounding COVID and the election has been successful in using COVID as a political weapon to undermine the solutions-oriented approach driven by the Democratic party, further entrenching both political camps at extreme ends and making this election even more polarized. In summary, it can be argued that Republicans and Trump utilized COVID as a means for political gain, while Democrats focused more on crisis management, which conservatives spun into a polarised partisan attack detached from scientific realities.


Political Warfare Is A Threat To Democracy. And Free Speech Enables It

“I disapprove of what you say, but I will defend to the death your right to say it” is an interpretation of Voltaire’s principles by Evelyn Beatrice Hall. Freedom of expression is often cited as the last frontier before falling into authoritarian rule. But is free speech, our greatest strength, really our greatest weakness? Hostile authoritarian actors seem to exploit these individual liberties by engaging in layered political warfare to undermine trust in our democratic systems. These often clandestine operations pose an existential threat to our democracy.   


The digital age has permanently changed the way states conduct political warfare—necessitating a rebalancing of security priorities in democracies. The utilisation of cyberspace by state and non-state actors to subvert democratic elections, encourage the proliferation of violence and challenge the sovereignty and values of democratic states is having a highly destabilising effect. Successful political warfare campaigns also cause voters to question the results of democratic elections and whether special interests or foreign powers have been the decisive factor in a given outcome. This is highly damaging for the political legitimacy of democracies, which depend upon voters being able to trust in electoral processes and outcomes free from malign influence—perceived or otherwise. The values of individual freedom and political expression practised within democratic states challenge their ability to respond to political warfare. The continued failure of governments to understand this has undermined their ability to combat this emerging threat. The challenges that this new digitally enabled political warfare poses to democracies are set to rise with developments in machine learning and the emergence of digital tools such as ‘deep fakes’.

Make sure to read the full paper titled Political warfare in the digital age: cyber subversion, information operations and ‘deep fakes’ by Thomas Paterson and Lauren Hanley at

MC2 Joseph Millar | Credit: U.S. Navy

This paper’s central theme is the intersection of democratic integrity and political subversion operations. The authors describe an increase in cyber-enabled espionage and political warfare due to the global spread of the internet. They argue it has led to an imbalance between authoritarian and democratic state actors. Their argument rests on the notion that individual liberties such as freedom of expression put democratic states at a disadvantage compared to authoritarian states: authoritarian states are observed to more often choose political warfare and subversion operations, while democracies are confined to breaching cyber security and conducting cyber espionage. Cyber espionage is defined as

“the use of computer networks to gain illicit access to confidential information, typically that held by a government or other organization”

and is not a new concept. I disagree with the premise of illicit access, because cyberspace specifically enables the free flow of information beyond any local regulation. ‘Illicit’ is either redundant, since espionage does not necessarily require breaking laws, rules or customs, or it is duplicative with ‘confidential information’, which I interpret as synonymous with classified information, though one might argue about the difference. From a legal perspective, the information does not need to be obtained through illicit access.

With regard to the broader term, I found the definition of political warfare as

“diverse operations to influence, persuade, and coerce nation states, organizations, and individuals to operate in accord with one’s strategic interests without employing kinetic force” 

most appropriate. It demonstrates the depth of political warfare, which encompasses influence and subversion operations outside of physical activity. Subversion operations are defined as 

“a subcategory of political warfare that aims to undermine institutional as well as individual legitimacy and authority”

I disagree with this definition, for it fails to emphasize the difference between political warfare and subversion: both undermine legitimacy and authority. A subversion operation, however, aims specifically to erode and deconstruct a political mandate. It is the logical next step after political warfare has influenced a populace in order to achieve political power. The authors see the act of subversion culminating in a loss of trust in democratic principles. It leads to voter suppression, reduced voter participation and decreased, asymmetrical review of electoral laws, but more importantly it challenges the democratic values of a state’s citizens. It is an existential threat to a democracy. It favors authoritarian states detached from the checks and balances that are usually present in democratic systems. These actors are not limited by law, civic popularity or reputational capital. Ironically, this bestows a certain amount of freedom upon them to deploy political warfare operations. Democracies, on the other hand, uphold individual liberties such as freedom of expression, freedom of the press, freedom of assembly, equal treatment under law and due process. As demonstrated during the 2016 U.S. presidential election, a democracy generally struggles to distinguish political warfare initiated by a hostile foreign state from certain segments of the population pursuing their strategic objectives by leveraging these exact individual freedoms. For example, the Mueller Report

“stated that the Internet Research Agency (IRA), which had clear links to the Russian Government, used social media accounts and interest groups to sow discord in the US political system through what it termed ‘information warfare’ […] The IRA’s operation included the purchase of political advertisements on social media in the names of US persons and entities, as well as the staging of political rallies inside the United States.”

And it doesn’t stop in America. Russia is deploying influence operations in volatile regions on the African continent. China has a history of attempting to undermine democratic efforts in Africa. Both states aim to chip away at the power of former colonial powers such as France, or at least to suppress efforts to democratise regions in Africa. China is also deeply engaged in large-scale political warfare in Southeast Asia over regional dominance as well as territorial expansion, as observed in the South China Sea. New Zealand and Australia have recorded numerous incidents of attempted Chinese influence operations. Australia faced a real-world political crisis when Australian Labor Senator Sam Dastyari was found to be connected to political donor Huang Xiangmo, who has ties to the Chinese Communist Party, giving China a direct route to influence Australian policy decisions.

The paper concludes with an overview of future challenges posed by political warfare. With more and more computing power readily available, the development of new cyber tools and tactics for political warfare operations is only going to increase. Authoritarian states are likely to expand their disinformation playbooks by tapping into people’s fears fueled by conspiracy theories. Developments in machine learning and artificial intelligence will further improve inauthentic behavior online. For example, partisan political bots will become more human-like and harder to discern from real human users. Deep fake technology will tap into ever larger datasets drawn from the social graph of every human being, making it increasingly possible to impersonate individuals to gain access or achieve certain strategic objectives. Altogether, political warfare poses a greater challenge than cyber-enabled espionage, in particular for democracies. Democracies need to understand the asymmetrical relationship with authoritarian actors and dedicate resources to effective countermeasures to political warfare without undoing civil liberties in the process.

How Political Bots Worsen Polarization

Do you always know who you are dealing with? Probably not. Do you always recognize when you are being influenced? Unlikely. I find it hard to pick up on human signals without succumbing to my own predisposed biases. In other words, maintaining “an open mindset” is easier said than done. A recent study found this to be true in particular when dealing with political bots.


Political bots are social media algorithms that impersonate political actors and interact with other users, aiming to influence public opinion. This research investigates the ability to differentiate bots with partisan personas from humans on Twitter. This online experiment (N = 656) explores how various characteristics of the participants and of the stimulus profiles bias recognition accuracy. The analysis reveals asymmetrical partisan-motivated reasoning, in that conservative profiles appear to be more confusing and Republican participants perform less well in the recognition task. Moreover, Republican users are more likely to confuse conservative bots with humans, whereas Democratic users are more likely to confuse conservative human users with bots. The research discusses implications for how partisan identities affect motivated reasoning and how political bots exacerbate political polarization.

Make sure to read the full paper titled Asymmetrical Perceptions of Partisan Political Bots by Harry Yaojun Yan, Kai-Cheng Yang, Filippo Menczer, James Shanahan at 

Illustration by C. R. Sasikumar

The modern democratic process is technological information warfare. Voters need to be enticed to engage with information about a candidate, and election campaigns need to ensure accurate information is presented to build and expand an audience, or a voter base. Assurances for the integrity of information do not exist. And campaigns are incentivised to undercut the opponent’s narrative while amplifying their own candidate’s message. Advertisements are a potent weapon in any election campaign. Ad spending on social media for the 2020 U.S. presidential election between Donald Trump and Joe Biden is already at a high, with a projected total bill of $10.8 billion driven by both campaigns. Grassroots campaigns are another potent weapon to decentralize a campaign, mobilize local leaders and reach a particular (untapped) electorate. While the impact of the coronavirus on grassroots efficacy is yet to be determined, these campaigns are critical to soliciting game-changing votes.

Which brings me to the central theme of this post and the paper: bots. When ad dollars or a human movement are out of reach, bots are the cheap, fast and impactful alternative. Bots are algorithms that produce programmed results, either automatically or with human prompting, with the objective of mimicking and creating the impression of human behavior. We have all seen or interacted with bots on social media after reaching out to customer service. We have all heard or received messages from bots trying to set us up with “the chance of a lifetime”. But do we always know when we are interacting with bots? Are there ways of telling an algorithm apart from a human?

Researchers from Indiana University Bloomington took on these important questions in their paper titled Asymmetrical Perceptions of Partisan Political Bots. It explains the psychological factors that affect our perception and decision-making when interacting with partisan political bots on social media. Political bots are used to impersonate political actors and interact with other users in an effort to influence public opinion. Bots have been known to facilitate the spread of spam and fake news. They have been used and abused to amplify conspiracy theories. And usage leads to improvement. In conjunction with enhanced algorithms and coding, this poses three problems:

(1) Social media users become vulnerable to misreading a bot’s actions as human.
(2) A partisan message, campaign success or sensitive information can be scaled up through networking effects and coordinated automation. A frightening example would be the use of bots to declare an election won while voters are still casting their ballots. And
(3) political bots are by nature partisan. A highly polarized media landscape offers fertile ground for political bots to exploit biases and political misconceptions. That means direct vulnerability isn’t even necessary; mere exposure to a partisan political bot can lay the groundwork for later manipulation or influence of opinion.

Examples of low-ambiguity liberal (left) and low-ambiguity conservative (right) profiles used as stimuli. Identifiable information is blurred.

The research focuses on whether certain individuals or groups are more easily influenced by partisan political bots than others. This recognition task depends on how skillfully individuals or groups can detect a partisan narrative, recognize their own partisan bias and either navigate through motivated reasoning or drown in it. Motivated reasoning can be seen as in-group favoritism and out-group hostility, i.e. conservatives favor Republicans and disfavor Democrats. Contemporary detection methods include (1) network-based approaches, i.e. bots are presumed to be inter-connected, so detecting one exposes connections to other bots; (2) crowdsourcing, i.e. engaging experts in the manual detection of bots; and (3) feature-based approaches, i.e. a supervised machine-learning classifier is trained on the characteristics of known accounts and constantly matches new accounts against metrics of inauthenticity. These methods can be combined to increase detection rates. At this interesting point in history, it is an arms race between writing code for better bots and building systems to identify novel algorithms at scale. This arms race, however, is severely detrimental to democratic processes, as bots are potent enough to deter, or at least undermine the confidence of, participants at the opposing end of the political spectrum.
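The feature-based approach can be sketched in a few lines. The features, distributions and thresholds below are invented for illustration (real detectors such as Botometer draw on hundreds of account features); the point is only the shape of the pipeline: label known accounts, extract account-level features, train a supervised classifier, and score unseen accounts.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Synthetic account-level features (illustrative only):
# tweets per day, follower/following ratio, account age in days.
humans = np.column_stack([
    rng.normal(5, 2, n),       # humans post at a moderate rate
    rng.normal(1.0, 0.5, n),   # roughly balanced follower ratio
    rng.normal(1500, 500, n),  # older accounts
])
bots = np.column_stack([
    rng.normal(40, 10, n),     # bots post far more often
    rng.normal(0.2, 0.1, n),   # follow many, followed by few
    rng.normal(200, 100, n),   # recently created accounts
])

X = np.vstack([humans, bots])
y = np.array([0] * n + [1] * n)  # 0 = human, 1 = bot

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Supervised classifier matching accounts against inauthenticity metrics.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

On cleanly separated synthetic features like these, accuracy is trivially high; real bot detection is much harder precisely because sophisticated bots are engineered to make their feature distributions overlap with those of humans.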

Examples of high-ambiguity liberal (left) and high-ambiguity conservative (right) profiles used as stimuli. Identifiable information is blurred.

The researchers found that knowingly interacting with partisan political bots only magnifies polarization, eroding trust in the opposing party’s intentions. A regular user, however, will presumably struggle to discern a political bot from a politically motivated (real) user, leaving the potential voter base vulnerable to automated manipulation. To understand this manipulation, the researchers focused on identifying the factors that shape human perception when it comes to ambiguity between real users and political bots, as well as recognition of a bot’s coded partisanship. Active users of social media were more likely to establish a mutual following with a political bot. These users tended to be more conservative, and the political bots they chose to interact with were likely conservative too. Time played a role insofar as active users who took more time to investigate and understand a political bot, which they only saw as a regular-looking, partisan social media account, were less likely to accurately discern real user from political bot. In the results, this demonstrated (1) a higher chance for conservative users to be deceived by conservative political bots and (2) a higher chance for liberal users to misidentify conservative (real) users as political bots. The researchers conclude that

users have a moderate capability to identify political bots, but such a capability is also limited due to cognitive biases. In particular, bots with explicit political personas activate the partisan bias of users. ML algorithms to detect social bots provide one of the main countermeasures to malicious manipulation of social media. While adopting third-party bot detection methods is still advisable, our findings also suggest possible bias in human-annotated data used for training these ML models. This also calls for careful consideration of algorithmic bias in future development of artificial intelligence tools for political bot detection.
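To make the reported asymmetry concrete, here is a toy calculation. The tallies below are invented for illustration (they are not the paper's data); they only mimic the reported pattern, in which each side's recognition accuracy drops most on conservative profiles, but in opposite directions.

```python
# Hypothetical recognition tallies: (participant party, profile shown)
# -> (correct judgments, trials). All counts are invented for illustration.
tallies = {
    ("Republican", "conservative bot"):   (38, 100),  # bot often judged human
    ("Republican", "liberal bot"):        (70, 100),
    ("Democrat",   "conservative bot"):   (72, 100),
    ("Democrat",   "liberal bot"):        (74, 100),
    ("Republican", "conservative human"): (73, 100),
    ("Republican", "liberal human"):      (69, 100),
    ("Democrat",   "conservative human"): (41, 100),  # human often judged bot
    ("Democrat",   "liberal human"):      (75, 100),
}

accuracy = {key: correct / total for key, (correct, total) in tallies.items()}

# Republicans' gap: they miss conservative bots far more than liberal bots.
rep_bot_gap = (accuracy[("Republican", "liberal bot")]
               - accuracy[("Republican", "conservative bot")])

# Democrats' gap: they misjudge conservative humans far more than liberal humans.
dem_human_gap = (accuracy[("Democrat", "liberal human")]
                 - accuracy[("Democrat", "conservative human")])
```

Both gaps being large and positive is the asymmetry in miniature: the same class of profile (conservative) absorbs each side's errors, just in opposite directions.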

I am intrigued by these findings. Humans tend to struggle to establish trust online. It is surprising and concerning that conservative bots may be perceived as conservative humans by Republicans, while conservative humans may be perceived as bots by Democrats. The potential to sow distrust and polarize public opinion is nearly limitless for a motivated interest group. While policymakers and technology companies are beginning to address these issues with targeted legislation, it will take a concerted multi-stakeholder approach to mitigate and reverse the polarization spread by political bots.