On Propaganda: Russia vs United States

Political and diplomatic relations between the United States and Russia have been in decline for the past decade. Geopolitical tensions between the two nations increased steadily, fueling ever more political propaganda in their respective state media and shaping their government policy documents. These propaganda efforts produced a range of influence operations, from coordinated inauthentic behavior that constructs false narratives to the deliberate spread of disinformation meant to undermine the political integrity of the other side. A recent article by researchers at the University of Sheffield and Bard College examined 135 journalistic pieces from American and Russian state media to better understand how propaganda is portrayed in both countries. It is an important contribution to understanding emerging public crises, appropriate content policy responses and future diplomacy.

tl;dr

The period of growing tensions between the United States and Russia (2013–2019) saw mutual accusations of digital interference, disinformation, fake news, and propaganda, particularly following the Ukraine crisis and the 2016 US presidential election. This article asks how the United States and Russia represent each other’s and their own propaganda, its threat, and power over audiences. We examine these representations in US and Russian policy documents and online articles from public diplomacy media Radio Free Europe/Radio Liberty (RFE/RL) and RT. The way propaganda is framed, (de)legitimized, and securitized has important implications for public understanding of crises, policy responses, and future diplomacy. We demonstrate how propaganda threats have become a major part of the discourse about the US–Russia relationship in recent years, prioritizing state-centred responses and disempowering audiences.

Make sure to read the full article titled Competing propagandas: How the United States and Russia represent mutual propaganda activities by Dmitry Chernobrov and Emma L. Briant at https://journals.sagepub.com/doi/full/10.1177/0263395720966171

Credit: https://econ.st/39hEh05 & https://bit.ly/2XuWm5i

How does the United States influence its own citizens by the ways in which it represents the propaganda efforts of Russia at home? How is American propaganda portrayed in Russia? Contrary to popular belief, the United States actively conducts influence operations to disseminate propaganda both in foreign countries and at home. Under the U.S. Information and Educational Exchange Act of 1948, known as the Smith-Mundt Act, and its 2012 modernization amendments, the U.S. government is free to extend propaganda efforts to public broadcasters and radio stations, foreign and domestic. In Russia, the situation is quite different: state-owned media and the strategic use of broadcasting and information technologies are a central feature of the current government. Recent legislation aimed at pressuring the opposition and restricting freedom of speech and assembly is only a surface example of Russia’s soft power approach in foreign and domestic policy. President Putin defined soft power as “promoting one’s interests and policies through persuasion”, which has translated into Russian public diplomacy initiatives that use a combination of international broadcasting and web-based social networks to engage foreign publics.

Russia’s constitution declares Russia to be a democratic state with a republican form of government and state power divided between the legislative, executive and judicial branches. However, the policy changes introduced by Vladimir Putin have effectively turned Russia’s political system into a particular type of post-totalitarian authoritarianism. The United States, by comparison, is a federal constitutional republic with checks and balances between the executive, legislative and judicial branches. While the United States and Russia differ greatly in their political systems, their respective mechanisms for constructing and responding to propaganda afford valuable insights for communication researchers.

To start, it is important to understand propaganda and its variations in public diplomacy. The researchers elaborate on the intricacies of finding the appropriate terminology but suggest propaganda is best understood as

“a process by which an idea or an opinion is communicated to someone else for a specific persuasive purpose”

Public diplomacy goes a step further by extending a state’s policy objectives to an international audience through intercultural exchanges, advocacy and international broadcasting. The researchers examined 90 articles by Radio Free Europe/Radio Liberty and 45 articles by RT through a critical discourse analysis, comparing how each outlet handles moral representation, reporting on government actions, actors and victims, and then relating these to threats to national security and national identity. They found that both the United States and Russia view and portray propaganda as an external threat orchestrated by a foreign actor, using conflict-related, binary language with no room for compromise. Both countries’ propaganda language relies heavily on scientific and technological metaphors to create an impression of sophistication beyond the comprehension of the average citizen. Mere exposure to this type of propaganda is assumed to be enough to rally citizens; actual persuasion apparently was not an objective for either U.S. or Russian state media.

While the United States understands propaganda as a foreign threat against the national security of the West, Russian documents use ‘foreign’ to signal the externality of influence operations or misinformation without attaching a specific location to the propaganda. The United States often portrays itself as the ‘leader of the free world’ and the oldest free democracy, whereas Russia is depicted as a morally isolated, neo-Soviet autocracy. Russia, in turn, portrays the United States through its flawed political system, redirecting the Russian audience’s attention to the shortcomings of American democracy and its feuding political leaders. Both the United States and Russia construct propaganda in similar ways and from similar elements: (1) propaganda is presented as a national security threat to the state and its international reputation, and (2) domestic political problems are reframed as foreign-induced, which then justifies a strong, determined state response. This fear-driven approach of portraying propaganda as something that can only be mitigated by a strong government response tends to disenfranchise citizens, induces chilling effects that lead to censorship, and undermines civic engagement.

As both the United States and Russia will likely continue to lace state media with their propaganda, citizens can learn to be vigilant when interacting with content online, particularly when the overall messaging presents itself in a binary fashion. To counter disinformation, policymakers must better communicate policy solutions and focus on media literacy and education. Lastly, government officials can help reduce propaganda and polarization by reframing their political narratives through an infinite mindset grounded in choices and compassion.


Is Transparency Really Reducing The Impact Of Misinformation?

A recent study investigated YouTube’s efforts to provide more transparency about the ownership of certain YouTube channels. The study examined YouTube’s disclaimers displayed beneath videos that indicate the content was produced or funded by a state-controlled media outlet, and sought to shed light on whether these disclaimers are an effective means of reducing the impact of misinformation.

tl;dr

In order to test the efficacy of YouTube’s disclaimers, we ran two experiments presenting participants with one of four videos: A non-political control, an RT video without a disclaimer, an RT video with the real disclaimer, or the RT video with a custom implementation of the disclaimer superimposed onto the video frame. The first study, conducted in April 2020 (n = 580) used an RT video containing misinformation about Russian interference in the 2016 election. The second conducted in July 2020 (n = 1,275) used an RT video containing misinformation about Russian interference in the 2020 election. Our results show that misinformation in RT videos has some ability to influence the opinions and perceptions of viewers. Further, we find YouTube’s funding labels have the ability to mitigate the effects of misinformation, but only when they are noticed, and the information absorbed by the participants. The findings suggest that platforms should focus on providing increased transparency to users where misinformation is being spread. If users are informed, they can overcome the potential effects of misinformation. At the same time, our findings suggest platforms need to be intentional in how warning labels are implemented to avoid subtlety that may cause users to miss them.

Make sure to read the full article titled State media warning labels can counteract the effects of foreign misinformation by Jack Nassetta and Kimberly Gross at https://misinforeview.hks.harvard.edu/article/state-media-warning-labels-can-counteract-the-effects-of-foreign-misinformation/

Source: RT, 2020 US elections, Russia to blame for everything… again, Last accessed on Dec 31, 2020 at https://youtu.be/2qWANJ40V34?t=164

State-controlled media outlets are increasingly used for foreign interference in civic events. While independent media outlets on social media can be categorized and associated with a political ideology, a state-controlled media outlet generally appears independent and detached from any state-directed political agenda. Yet these outlets regularly create content aligned with the controlling state’s political objectives and its leaders. This deceives the public about their state affiliation and undermines civil liberties. The problem is magnified on social media platforms by their reach and potential for virality ahead of political elections. A prominent example is China’s foreign interference efforts targeting Hong Kong’s pro-democracy movement.

A growing number of social media platforms have launched integrity measures to increase content transparency and counter the risks of state-controlled media outlets spreading potential disinformation. In 2018 YouTube began rolling out an information panel feature to provide additional context on state-controlled and publicly funded media outlets. These information panels, or disclaimers, are in effect warning labels that make the viewer aware of the potential political influence of a government on the information shown in the video. They do not provide any additional context on the veracity of the content or whether it was fact-checked. On desktop, they appear alongside a hyperlink leading to the Wikipedia entry of the media outlet. As of this writing the feature applies to 27 governments, including the United States government.

Source: DW News, Massive explosion in Beirut, Last Accessed on Dec 31, 2020 at https://youtu.be/PLOwKTY81y4?t=7

The researchers focused on whether these warning labels mitigate the effects of misinformation in videos from the Russian state-controlled media outlet RT (formerly Russia Today) on viewers’ perceptions. RT evades deplatforming by complying with YouTube’s terms of service. This has turned the RT channel into an influential resource for the Russian government to undermine the American public’s trust in established American media outlets and in the United States government’s reporting on Russian interference in the 2016 U.S. presidential election. An RT video downplaying the Russian influence operation was shown to participants without a label, with YouTube’s label identifying RT’s affiliation with the Russian government, or with a warning label using the same language and Wikipedia hyperlink superimposed onto the video itself. This surfaced the following findings:

  1. Disinformation spread by RT does affect viewers’ perceptions, and does so effectively.
  2. Videos without a warning label were more successful in reducing trust in established mainstream media and the government.
  3. Videos without YouTube’s standard warning label but with the same label text superimposed onto the video itself were most effective in preserving the integrity of viewers’ perceptions.
Source: RT, $4,700 worth of ‘meddling’: Google questioned over ‘Russian interference’, Last accessed on Dec 31, 2020 at https://www.youtube.com/watch?v=wTCSbw3W4EI

The researchers further discovered that small changes in the coloring, design and placement of the warning label increase the likelihood that viewers notice it and absorb its information. Both conditions must be met: noticing a label without comprehending its message had no significant impact on viewers’ understanding of the political connection between creator and content.

I’m intrigued by these findings, for the road ahead offers a great opportunity to shape how we distribute and consume information on social media without falling prey to foreign influence operations. Though open questions remain:

  1. Are these warning labels equally effective on other social media platforms, e.g. Facebook, Instagram, Twitter, Reddit, TikTok, etc.? 
  2. Are these warning labels equally effective with other state-controlled media? This study focused on Russia, a large, globally acknowledged state actor. How does a warning label for content by the government of Venezuela or Australia impact the efficacy of misinformation? 
  3. This study seemed to be focused on the desktop version of YouTube. Are these findings transferable to the mobile version of YouTube?  
  4. What is the impact of peripheral content on viewers’ perception, e.g. YouTube’s recommendations showing videos in the sidebar that all claim RT is a hoax versus videos that all lend RT independent credibility?
  5. The YouTube channels of C-SPAN and NPR did not appear to display a warning label within their videos. Yet the United States is among the 27 countries currently listed in YouTube’s policy. What are the criteria to be considered a publisher, publicly funded or state-controlled? How are these criteria met or impacted by a government, e.g. passing certain broadcasting legislation or declaration?
  6. Lastly, the cultural and intellectual background of the target audience is particularly interesting. Here is an opportunity to research the impact of warning labels on participants of different political ideologies, economic circumstances and age groups, in contrast to their actual civic engagement ahead of, during and after an election.

Microtargeted Deepfakes in Politics

The 2019 Worldwide Threat Assessment warned of deepfakes deployed to manipulate public opinion. And while the 2020 U.S. presidential election did not see an onslaught of deepfakes undermining voter confidence, experts agree that the threat remains tangible. A recent study by researchers at the University of Amsterdam investigated the impact of political deepfakes meant to discredit a politician that were microtargeted to a specific segment of the electorate.

tl;dr

Deepfakes are perceived as a powerful form of disinformation. Although many studies have focused on detecting deepfakes, few have measured their effects on political attitudes, and none have studied microtargeting techniques as an amplifier. We argue that microtargeting techniques can amplify the effects of deepfakes, by enabling malicious political actors to tailor deepfakes to susceptibilities of the receiver. In this study, we have constructed a political deepfake (video and audio), and study its effects on political attitudes in an online experiment. We find that attitudes toward the depicted politician are significantly lower after seeing the deepfake, but the attitudes toward the politician’s party remain similar to the control condition. When we zoom in on the microtargeted group, we see that both the attitudes toward the politician and the attitudes toward his party score significantly lower than the control condition, suggesting that microtargeting techniques can indeed amplify the effects of a deepfake, but for a much smaller subgroup than expected.

Make sure to read the full paper titled Do (Microtargeted) Deepfakes Have Real Effects on Political Attitudes? by Tom Dobber, Nadia Metoui, Damian Trilling, Natali Helberger, and Claes de Vreese at https://doi.org/10.1177/1940161220944364

Credits: UC Berkeley/Stephen McNally

Deepfakes are a subcategory of modern information warfare. The technology leverages machine learning to generate audio-visual content that imitates original content but differs in both intent and message. Its highly deceptive appearance renders it a potent weapon to influence public opinion, undermine strategic policies or disrupt civic engagement. An infamous example depicts former President Obama seemingly calling President Trump expletives. Online microtargeting is a form of social media marketing that disseminates advertisements tailored to the specific interests of an identifiable, curated audience. In the political context, microtargeting spreads a campaign message to a specific audience, identified and grouped by characteristics, to convince that audience to vote for or against a candidate. There are a number of civic risks associated with deploying deepfakes:

  • Deepfake content is hard to tell apart from original and authentic content. While deepfake videos may signal some nefarious intent to a cautious audience, the potential impact of deepfake radio or deepfake text on voter behavior hasn’t been researched as of this writing
  • Political actors may leverage deepfakes to discredit opponents, undermine news reporting or equip trailing third-party candidates with sufficient influence to erode voter confidence  
  • Used in a political campaign deepfakes may be strategically deployed to incite a political scandal or to reframe current affairs and regain control of an election narrative

The study created a deepfake video depicting an interview with a prominent center-right politician from a large Christian democratic party. The manipulated part of the otherwise original and authentic content shows the politician seemingly making a joke about the crucifixion of Jesus Christ:

“But, as Christ would say: don’t crucify me for it.”

This content was shown to a randomly selected group of Christian voters who had identified as religious or conservative, or had voted for this politician in past elections. The researchers found that a deepfake spread without microtargeting affected attitudes toward the politician but not necessarily toward his political party. However, deepfakes tailored to a specific audience using political microtargeting techniques amplified the discrediting message, thereby affecting attitudes toward both the politician and the party. Interestingly, staunch supporters of the politician might be shielded from a lasting attitude change due to their own motivated reasoning (bias) derived from the politician’s ideology. For this group, the researchers argue, a certain degree of discomfort or deviation from previous political ideology conveyed in a deepfake may reach a tipping point that aligns with the results of this study, though the study’s limitations may also leave room for unforeseen outcomes.

A roadmap to counter microtargeted deepfakes should include legislators passing regulations to limit political campaign spending online, which would force campaigns to focus their limited financial resources and weed out corporate interests. Second, new regulations should focus on the protection of personally identifiable data. A microtargeting dataset includes location data, personal preferences, website interactions and more. While this data is valuable in a commercial context, it should be excluded from civic engagements such as elections. Academics will have an opportunity to study algorithmic bias and improve on the existing machine learning approach of training generative adversarial networks on pre-conditioned datasets; a minimal sketch of that adversarial training loop follows below. Moreover, future research has an opportunity to further investigate the impact of manipulated media on voter education, confidence and behavior within and outside of political elections.
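For readers unfamiliar with the technique, here is a minimal sketch of the adversarial training loop that generative adversarial networks rely on, assuming PyTorch and a toy two-dimensional data distribution. Real deepfake pipelines (face-swap autoencoders, voice synthesis) are far more elaborate; the network sizes, learning rates and toy data below are illustrative assumptions, not the setup used in the study.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

# Generator maps random noise to synthetic samples; discriminator scores realness.
generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in for the "pre-conditioned" training data (e.g. frames of a target's face).
    return torch.randn(n, data_dim) * 0.5 + 2.0

for step in range(1000):
    # 1) Train the discriminator to separate real samples from generated ones.
    real = real_batch()
    fake = generator(torch.randn(real.size(0), latent_dim)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(real.size(0), 1)) + \
             loss_fn(discriminator(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to produce samples the discriminator accepts as real.
    fake = generator(torch.randn(64, latent_dim))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The two networks improve against each other; a deepfake is, in essence, the generator's output once the discriminator can no longer reliably tell real from fake.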

Here’s one of my favorite deepfake videos: President Trump explaining money laundering to his son-in-law Jared Kushner in a deepfake(d) scene of “Breaking Bad”.

A History Of Disinformation And Political Warfare

After political powerhouse Hillary Clinton lost in spectacular fashion to underdog Donald J. Trump in the 2016 U.S. presidential election, the world was flabbergasted to learn of foreign election interference orchestrated by the Russian Internet Research Agency. Its mission: to covertly divide the electorate and skew votes away from Clinton and towards Trump. In order to understand the present, one must know the past. This is the premise of ‘Active Measures – The Secret History of Disinformation and Political Warfare’ by Johns Hopkins Professor of Strategic Studies Thomas Rid.

I bought this book to study the methodology, strategy and tactics of disinformation and political warfare. To my surprise, the book spends only 11 pages on disinformation. The remaining 424 pages introduce historic examples of influence operations, with the bulk of them dedicated to episodes of the Cold War. Rid offers insights into the American approach to defending against a communist narrative in a politically divided Germany. He details Soviet influence operations that time and again smeared American democracy and capitalism. The detail devoted to the German Ministry of State Security, known as the “Stasi”, is as fascinating as it is overwhelming.

While the book didn’t meet my personal expectations, I learned how retracing historic events allows world events to be attributed to specific nations. It is written for a mass audience and filled with thrilling stories. What is the role of journalistic publications in political warfare? Did Germany politically regress under American and Soviet active measures? Was the constructive vote of no confidence against German chancellor Willy Brandt a product of active measures? Who really spread the claim that the AIDS virus was a failed American experiment? On the downside, the book doesn’t offer many new details about the specifics of disinformation operations. Most contemporary espionage accounts have already been recorded, and defectors have told their stories, which makes some chapters feel bloated and redundant. Nevertheless, I believe that to understand current affairs we must connect the dots through the lens of political history. Rid presents the foundations for future research into influence operations.

Political Warfare Is A Threat To Democracy. And Free Speech Enables It

“I disapprove of what you say, but I will defend to the death your right to say it” is an interpretation of Voltaire’s principles by Evelyn Beatrice Hall. Freedom of expression is often cited as the last frontier before falling into authoritarian rule. But is free speech, our greatest strength, really our greatest weakness? Hostile authoritarian actors seem to exploit these individual liberties by engaging in layered political warfare to undermine trust in our democratic systems. These often clandestine operations pose an existential threat to our democracy.   

tl;dr

The digital age has permanently changed the way states conduct political warfare—necessitating a rebalancing of security priorities in democracies. The utilisation of cyberspace by state and non-state actors to subvert democratic elections, encourage the proliferation of violence and challenge the sovereignty and values of democratic states is having a highly destabilising effect. Successful political warfare campaigns also cause voters to question the results of democratic elections and whether special interests or foreign powers have been the decisive factor in a given outcome. This is highly damaging for the political legitimacy of democracies, which depend upon voters being able to trust in electoral processes and outcomes free from malign influence—perceived or otherwise. The values of individual freedom and political expression practised within democratic states challenges their ability to respond to political warfare. The continued failure of governments to understand this has undermined their ability to combat this emerging threat. The challenges that this new digitally enabled political warfare poses to democracies is set to rise with developments in machine learning and the emergence of digital tools such as ‘deep fakes’.

Make sure to read the full paper titled Political warfare in the digital age: cyber subversion, information operations and ‘deep fakes’ by Thomas Paterson and Lauren Hanley at https://www.tandfonline.com/doi/abs/10.1080/10357718.2020.1734772

MC2 Joseph Millar | Credit: U.S. Navy

This paper’s central theme sits at the intersection of democratic integrity and political subversion operations. The authors describe an increase in cyber-enabled espionage and political warfare due to the global spread of the internet. They argue it has led to an imbalance between authoritarian and democratic state actors. Their argument rests on the notion that individual liberties such as freedom of expression put democratic states at a disadvantage compared to authoritarian states. Authoritarian states are therefore observed to favor political warfare and subversion operations, whereas democracies tend to confine themselves to breaching cyber security and conducting cyber espionage. Cyber espionage is defined as

“the use of computer networks to gain illicit access to confidential information, typically that held by a government or other organization”

and is not a new concept. I disagree with the premise of illicit access because cyberspace specifically enables the free flow of information beyond any local regulation. ‘Illicit’ is either redundant, since espionage does not necessarily require breaking laws, rules or customs, or it duplicates ‘confidential information’, which I read as synonymous with classified information (though one might argue the difference). From a legal perspective, the information does not need to be obtained through illicit access.

With regard to the broader term, I found the following definition of political warfare,

“diverse operations to influence, persuade, and coerce nation states, organizations, and individuals to operate in accord with one’s strategic interests without employing kinetic force” 

most appropriate. It demonstrates the depth of political warfare, which encompasses influence and subversion operations outside of physical activity. Subversion operations are defined as 

“a subcategory of political warfare that aims to undermine institutional as well as individual legitimacy and authority”

I disagree with this definition, for it fails to emphasize the difference between political warfare and subversion – both undermine legitimacy and authority. However, a subversion operation is specifically aimed at eroding and deconstructing a political mandate. It is the logical next step after political warfare has influenced a populace, in order to achieve political power. The authors see the act of subversion culminating in a loss of trust in democratic principles. It leads to voter suppression, reduced voter participation and a diminished, asymmetrical review of electoral laws, but more importantly it challenges the democratic values of a state’s citizens. It is an existential threat to a democracy. It favors authoritarian states detached from the checks and balances usually present in democratic systems. These actors are not limited by law, civic popularity or reputational capital. Ironically, this bestows a certain amount of freedom upon them to deploy political warfare operations. Democracies, on the other hand, uphold individual liberties such as freedom of expression, freedom of the press, freedom of assembly, equal treatment under law and due process. As demonstrated during the 2016 U.S. presidential election, a democracy generally struggles to distinguish political warfare initiated by a hostile foreign state from segments of its own population pursuing their strategic objectives through these exact individual freedoms. An example from the Mueller Report

“stated that the Internet Research Agency (IRA), which had clear links to the Russian Government, used social media accounts and interest groups to sow discord in the US political system through what it termed ‘information warfare’ […] The IRA’s operation included the purchase of political advertisements on social media in the names of US persons and entities, as well as the staging of political rallies inside the United States.”

And it doesn’t stop in America. Russia is deploying influence operations in volatile regions on the African continent. China has a history of attempting to undermine democratic efforts in Africa. Both states aim to chip away power from former colonial powers such as France, or at least to suppress efforts to democratise regions in Africa. China is also deeply engaged in large-scale political warfare in the Southeast Asian region over regional dominance as well as territorial expansion, as observed in the South China Sea. New Zealand and Australia have recorded numerous incidents of attempted Chinese influence operations. Australia faced a real-world political crisis when Australian Labor Senator Sam Dastyari was found to be connected to political donor Huang Xiangmo, who has ties to the Chinese Communist Party, giving China a direct route to influence Australian policy decisions.

The paper concludes with an overview of future challenges posed by political warfare. With more and more computing power readily available, the development of new cyber tools and tactics for political warfare operations is only going to increase. Authoritarian states are likely to expand their disinformation playbooks by tapping into fears fueled by conspiracy theories. Developments in machine learning and artificial intelligence will further improve inauthentic behavior online. For example, partisan political bots will become more human-like and harder to discern from real human users. Deepfake technology will draw on ever larger datasets from individuals’ social graphs, making it increasingly possible to impersonate people to gain access or achieve strategic objectives. Altogether, political warfare poses a greater challenge than cyber-enabled espionage, particularly for democracies. Democracies need to understand this asymmetrical relationship with authoritarian actors and dedicate resources to effective countermeasures against political warfare without undoing civil liberties in the process.

How Political Bots Worsen Polarization

Do you always know who you are dealing with? Probably not. Do you always recognize when you are being influenced? Unlikely. I found it hard to pick up on human signals without succumbing to my own predisposed biases. In other words, maintaining “an open mindset” is easier said than done. A recent study found this to be true in particular for dealing with political bots.

tl;dr

Political bots are social media algorithms that impersonate political actors and interact with other users, aiming to influence public opinion. This research investigates the ability to differentiate bots with partisan personas from humans on Twitter. This online experiment (N = 656) explores how various characteristics of the participants and of the stimulus profiles bias recognition accuracy. The analysis reveals asymmetrical partisan-motivated reasoning, in that conservative profiles appear to be more confusing and Republican participants perform less well in the recognition task. Moreover, Republican users are more likely to confuse conservative bots with humans, whereas Democratic users are more likely to confuse conservative human users with bots. The research discusses implications for how partisan identities affect motivated reasoning and how political bots exacerbate political polarization.

Make sure to read the full paper titled Asymmetrical Perceptions of Partisan Political Bots by Harry Yaojun Yan, Kai-Cheng Yang, Filippo Menczer, James Shanahan at https://journals.sagepub.com/doi/full/10.1177/1461444820942744 

Illustration by C. R. Sasikumar

The modern democratic process is technological information warfare. Voters need to be enticed to engage with information about a candidate, and election campaigns need to ensure accurate information is presented to build and expand an audience, or voter base. There are no assurances for the integrity of that information. And campaigns are incentivised to undercut the opponent’s narrative while amplifying their own candidate’s message. Advertisements are a potent weapon in any election campaign. Ad spending on social media for the 2020 U.S. presidential election between Donald Trump and Joe Biden is already at a high, with a projected total bill of $10.8 billion driven by both campaigns. Grassroots campaigns are another potent weapon to decentralize a campaign, mobilize local leaders and reach a particular (untapped) electorate. While the impact of the coronavirus on grassroots efficacy is yet to be determined, these campaigns are critical to solicit game-changing votes.

Which brings me to the central theme of this post and the paper: bots. When ad dollars or a human movement are out of reach, bots are the cheap, fast and impactful alternative. Bots are algorithms that produce a programmed result, either automatically or triggered by humans, with the objective of copying and creating the impression of human behavior. We have all seen or interacted with bots on social media or after reaching out to customer service. We have all heard or received messages from bots trying to set us up with “the chance of a lifetime”. But do we always know when we’re interacting with bots? Are there ways of telling an algorithm apart from a human?

Researchers from Indiana University Bloomington took on these important questions in their paper titled Asymmetrical Perceptions of Partisan Political Bots. It explains the psychological factors that shape our perception and decision-making when interacting with partisan political bots on social media. Political bots impersonate political actors and interact with other users in an effort to influence public opinion. Bots have been known to facilitate the spread of spam mail and fake news. They have been used and abused to amplify conspiracy theories. Usage leads to improvement, and in conjunction with enhanced algorithms and coding this poses three problems:

(1) Social media users become vulnerable to misreading a bot’s actions as human. 
(2) A partisan message, campaign success or sensitive information can be scaled up through networking effects and coordinated automation. A frightening example would be the use of bots to declare election victory while voters are still casting their ballots. And
(3) political bots are inherently partisan. A highly polarized media landscape offers fertile ground for political bots to exploit biases and political misconceptions. That means becoming vulnerable isn’t even necessary; mere exposure to a partisan political bot can lay the groundwork for later manipulation or influence of opinion.

Examples of low-ambiguity liberal (left), low-ambiguity conservative (right) profiles used as stimuli. Identifiable information is blurred.

The research focuses on whether certain individuals or groups are more easily influenced by partisan political bots than others. This recognition task depends on how skillfully individuals or groups can detect a partisan narrative, recognize their own partisan bias and either navigate through motivated reasoning or drown in it. Motivated reasoning can be seen as in-group favoritism and out-group hostility, i.e. conservatives favor Republicans and disfavor Democrats. Contemporary detection methods include (1) network-based approaches, i.e. bots are presumed to be interconnected, so detecting one exposes connections to other bots; (2) crowdsourcing, i.e. engaging experts in the manual detection of bots; and (3) feature-based approaches, i.e. a supervised machine-learning classifier is trained on characteristics of political accounts and continuously matched against signals of inauthentic behavior (a minimal sketch of this approach follows below). These methods can be combined to increase detection rates. At this interesting point in history, it is an arms race between writing code for better bots and building systems to better identify novel algorithms at scale. This arms race, however, is severely detrimental to democratic processes, as these bots are potent enough to deter participation or at least undermine the confidence of participants at the opposing end of the political spectrum.
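For illustration, here is a minimal sketch of the feature-based approach, assuming scikit-learn and an entirely synthetic dataset. Production bot detectors rely on much richer feature sets and human-annotated labels; the feature names, the toy labeling rule and all numbers below are assumptions made for this sketch, not the classifier used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Hypothetical per-account features: followers/friends ratio, tweets per day,
# account age in days, and whether the profile still uses default settings.
X = np.column_stack([
    rng.lognormal(0.0, 1.0, n),   # followers/friends ratio
    rng.exponential(20.0, n),     # tweets per day
    rng.uniform(1, 3000, n),      # account age in days
    rng.integers(0, 2, n),        # default profile flag (0/1)
])

# Toy labeling rule standing in for human annotation (1 = bot, 0 = human).
y = ((X[:, 1] > 40) | (X[:, 3] == 1)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# Score a new, unseen account: estimated probability that it is a bot.
new_account = np.array([[0.1, 120.0, 30.0, 1]])
print("bot probability:", clf.predict_proba(new_account)[0, 1])
```

The paper's caution about annotator bias applies directly here: whatever biases shape the labeling rule (or, in practice, the human annotations) are inherited by the trained classifier.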

Examples of high-ambiguity liberal (left) and high-ambiguity conservative (right) profiles used as stimuli. Identifiable information is blurred.

The researchers found that knowingly interacting with partisan political bots only magnifies polarization, eroding trust in the opposing party’s intentions. However, a regular user will presumably struggle to discern a political bot from a politically motivated (real) user, which leaves the potential voter base vulnerable to automated manipulation. To understand this vulnerability, the researchers focused on the factors that shape human perception when a profile is ambiguous between real user and political bot, and on how well users recognize the coded partisanship of a bot. Active users of social media were more likely to establish a mutual following with a political bot. These users happened to be more conservative, and the political bots they chose to interact with were likely conservative too. Time played a role insofar as active users who took more time to investigate the political bot, which they only saw as a regular-looking, partisan social media account, were less likely to accurately discern real user from political bot. In the results, this demonstrated (1) a higher chance for conservative users to be deceived by conservative political bots and (2) a higher chance for liberal users to misidentify conservative (real) users as political bots. The researchers conclude that 

users have a moderate capability to identify political bots, but such a capability is also limited due to cognitive biases. In particular, bots with explicit political personas activate the partisan bias of users. ML algorithms to detect social bots provide one of the main countermeasures to malicious manipulation of social media. While adopting third-party bot detection methods is still advisable, our findings also suggest possible bias in human-annotated data used for training these ML models. This also calls for careful consideration of algorithmic bias in future development of artificial intelligence tools for political bot detection.

I have been intrigued by these findings. Humans tend to struggle to establish trust online. It’s surprising and concerning that conservative bots may be perceived as conservative humans by Republicans while conservative humans may be perceived as bots by Democrats. The potential to sow distrust and polarize public opinion is nearly limitless for a motivated interest group. While policymakers and technology companies are beginning to address these issues with targeted legislation, it will take a concerted multi-stakeholder approach to mitigate and reverse the polarization spread by political bots.

Ballistic Books: Disinformation

I am currently reading Active Measures. Thomas Rid authored the paper Cyberwar Will Not Take Place, which stirred up excellent controversy during my studies. I will write a review in due time. The Hacker and the State caught my attention for its unique position at the intersection of cybersecurity and geopolitics. Ben Buchanan became known to me through his contributions to the Lawfare Blog. And Information Wars offers an intriguing combination of current global affairs and psychological warfare. Stengel describes the battle with Russian disinformation while countering terrorist propaganda. Without having read the book, I wonder if Operation Glowing Symphony came across Stengel’s desk as Under Secretary of State under President Barack Obama.

Ballistic Books is a series presenting literature of interest. Each edition is dedicated to a specific topic. I find it challenging to discover and distinguish good from great literature. With this series, I aim to mitigate that challenge.

1. Active Measures: The Secret History of Disinformation and Political Warfare by Thomas Rid

Active measures is a term coined by the Soviet and Russian intelligence services for operations designed to influence foreign nations, collect intelligence and subvert public opinion in favor of Russian interests. The term received global attention after it was linked to successful influence operations during the 2016 U.S. presidential election. It is also the name of a widely circulated documentary, though it seems Thomas Rid wasn’t interviewed for it.

Thomas Rid is a professor of strategic studies at Johns Hopkins University. Born in Germany, Rid is best known for his contributions to political science at the intersection of technology and war studies. You can find Thomas Rid on Twitter at @RidT

2. The Hacker and the State: Cyber Attacks and the New Normal of Geopolitics by Ben Buchanan

Ben Buchanan is an assistant professor at Georgetown University’s School of Foreign Service, where his research and teaching focus on the intersection of cybersecurity and public affairs. You can find Ben Buchanan on Twitter at @BuchananBen

3. Information Wars: How We Lost the Global Battle Against Disinformation and What We Can Do About It by Richard Stengel

Richard Stengel is an American journalist and former government official who served as president and CEO of the National Constitution Center. You can find Richard Stengel on Twitter at @stengel

Why Are We Sharing Political Misinformation?

Democracy is built upon informed decisions made by the rule of the majority. As a society, we can’t make informed decisions if the majority is confused by fake news, i.e. false information distributed and labeled as real news. It has the potential to erode trust in democratic institutions, stir up social conflict and facilitate voter suppression. This paper by researchers from New York University and the University of Cambridge examines the psychological drivers of sharing political misinformation and proposes solutions to reduce the proliferation of misinformation online.

tl;dr

The spread of misinformation, including “fake news,” disinformation, and conspiracy theories, represents a serious threat to society, as it has the potential to alter beliefs, behavior, and policy. Research is beginning to disentangle how and why misinformation is spread and identify processes that contribute to this social problem. This paper reviews the social and political psychology that underlies the dissemination of misinformation and highlights strategies that might be effective in mitigating this problem. However, the spread of misinformation is also a rapidly growing and evolving problem; thus, scholars also need to identify and test novel solutions, and simultaneously work with policy makers to evaluate and deploy these solutions. Hence, this paper provides a roadmap for future research to identify where scholars should invest their energy in order to have the greatest overall impact.

Make sure to read the full paper titled Political psychology in the digital (mis)information age by Jay J. Van Bavel, Elizabeth Harris, Philip Pärnamets, Steve Rathje, Kimberly C. Doell, Joshua A. Tucker at https://psyarxiv.com/u5yts/ 

It’s no surprise that misinformation spreads significantly faster than the truth. The illusory truth effect captures part of this phenomenon: misinformation people have heard before is more likely to be believed. We have all heard a juicy office rumor long before learning whether it is even remotely true or made up altogether. Political misinformation takes dissemination to the next level. It is shared at far greater rates due to its polarizing nature, driven by partisan beliefs and personal values. Even simple measures seemingly beneficial to all of society face an onslaught of misinformation. For example, California Proposition 15, designed to close corporate tax loopholes, was opposed by conservative groups that resorted to spreading misinformation about the reach of the measure. They conflated corporations with individuals, framing it as a family affair to solicit an emotional response from the electorate. It’s a prime example of a dangerous cycle in which political positions drive misinformation, which in turn deepens political division and obstructs the truth needed to make informed decisions. Misinformation is shared more willingly, more quickly and despite contradicting facts when it aligns with one’s political identity and seeks to derogate the opposition. In the example above, misinformation about Proposition 15 was largely shared if it (a) contained information in line with partisan beliefs and (b) sought to undercut the opponents of the measure. As described in the paper, the more polarized a topic is (e.g. climate change, immigration, pandemic response, taxation of the rich, police brutality), the more likely misinformation will be shared by political in-groups against their political out-groups without further review of its factual truth. This predisposed ‘need for chaos’ is hard to mitigate because the feeling of being marginalized is a complex societal problem that no one administration can resolve. Further, political misinformation tends to be novel and to trigger more extreme emotions of fear and disgust. It also tends to conflate being better off with being better than another political out-group.

Potential solutions to limit the spread of political misinformation can already be observed across social media:

  1. Third-party fact-checking: a second review by a dedicated, independent fact-checker committed to neutrality in reporting information. Fact-checking does reduce belief in misinformation but is less effective for political misinformation. Ideological commitments and exposure to partisan information foster a different reality that, in rare extreme cases, can create scepticism of fact-checks and lead to increased sharing of political misinformation, the so-called backfire effect.
  2. Investing in media literacy to ‘pre-bunk’ false information before it gains traction, for example by offering tips or encouraging critical reflection on certain information, is likely to produce the best long-term results. Though it might be difficult to implement effectively for political information, as media literacy depends on the provider and bipartisan efforts are likely to be opposed by their respective extreme counterparts.
  3. Disincentivizing viral content by changing the monetization structure to a blend of views, ratings and civic benefit would be a potent deterrent to creating and sharing political misinformation (a hypothetical weighting is sketched below). However, this measure would likely conflict with the growth objectives of social media platforms in a shareholder-centric economy.
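To make the third idea concrete, here is a purely hypothetical sketch of how a monetization score could blend views, ratings and civic benefit. The weights, field names and the civic-benefit signal are assumptions for illustration only, not any platform's actual formula.

```python
from dataclasses import dataclass

@dataclass
class ContentMetrics:
    views: int            # raw view count
    avg_rating: float     # 0.0 - 5.0 community rating
    civic_benefit: float  # 0.0 - 1.0, e.g. derived from fact-check outcomes or expert review

def monetization_score(m: ContentMetrics,
                       w_views: float = 0.3,
                       w_rating: float = 0.3,
                       w_civic: float = 0.4) -> float:
    """Blend normalized views, ratings and civic benefit into a single payout weight."""
    views_norm = min(m.views / 1_000_000, 1.0)   # cap the influence of raw virality
    rating_norm = m.avg_rating / 5.0
    return w_views * views_norm + w_rating * rating_norm + w_civic * m.civic_benefit

# A viral but low-benefit video earns a lower payout weight than a
# moderately viewed, high-benefit one under this hypothetical weighting.
print(monetization_score(ContentMetrics(views=5_000_000, avg_rating=4.0, civic_benefit=0.1)))
print(monetization_score(ContentMetrics(views=200_000, avg_rating=4.5, civic_benefit=0.9)))
```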

This paper is an important contribution to the current landscape of behavioral psychology. Future research will need to focus on developing a more comprehensive theory of why we believe and share political misinformation, but also on how political psychology correlates with the incentives to create it. It will be interesting to learn how to work with the underlying psychology to alter the lifecycle of political information on different platforms, in different mediums and through new channels.