Why Are We Polarized?

Are we bound to follow tribal instincts when logic should lead us across the political aisle?

When I hear that the American political system isn’t broken but working exactly as designed, I can’t help but wonder how that can be true in times of all-encompassing social media, rapidly shrinking attention spans, and growing discrimination in economic opportunity. Ezra Klein’s book Why We’re Polarized claims exactly that, and more: this working political system polarizes us even as we polarize it. As confusing as that sounds at the start, Klein does a fantastic job of elaborating his thesis across ten chapters and 268 pages, with convincing research and easy-to-read prose.

Frankly, I found this general topic challenging to comprehend. Klein’s book struck me as neither a clear-cut psychological review of polarization nor a deep dive into America’s governance and democratic institutions. It comes across as a hybrid of history lessons, democratic theory, and political media management. Faced with such a mess, I tend to gravitate to first principles: what is polarization?

According to Klein, “the logic of polarization is to appeal to a more polarized public, (so) political institutions and actors behave in more polarized ways. As political institutions and actors become more polarized, they further polarize the public.”

Explaining polarization with polarization isn’t helpful. After searching for adequate definitions, I found myself trapped deciding between constitutional polarization, political polarization, and the iterative sense of polarization. Interpreting Klein’s logic, polarization may be a deviation from core political beliefs toward ideological extremes in an effort to reach a new audience. That, in turn, perpetuates ever more extreme behavior by political actors and institutions. As Klein argues:

“This sets off a feedback cycle: to appeal to a yet more polarized public, institutions must polarize further; when faced with yet more polarized institutions, the public polarizes further, and on and on.”

It’s not an ideal beginning to a complex story, but Klein makes the most of it. Across the first few chapters, he traces the history of the American political system, mainly how Democrats turned liberal and Republicans became conservative. When it comes to group identity, the book dives deeper into the psychology of us voters.

“We became more consistent in the party we vote for not because we came to like our party more– indeed, we’ve come to like the parties we vote for less–but because we came to dislike the opposing party more.”

To put it simply, Klein argues we have a stronger loyalty to our group than we have to our own ideology. Add, in some cases, a strong repulsion toward the other group’s belief system. Klein continues:

“The human mind is exquisitely tuned to group affiliation and group difference. It takes almost nothing for us to form a group identity, and once that happens, we naturally assume ourselves in competition with other groups. The deeper our commitment to our group becomes, the more determined we become to ensure our group wins.”

There is plenty of well-established scientific research to support this notion. While the psychology of the crowd is one factor in this complex analysis, Klein manages to clarify that our identity, more than our previous system of beliefs, where we live, or who we associate with, dictates our sense of loyalty. And no other entity threatens our identity as much as the media. American media, the press, and political journalism are by nature mouthpieces of certain political powers – and always have been. Following the hotly contested presidential election of 2000, the election of America’s first African-American president in 2008, and the widening economic gap between those who repair, clean, transport, deliver, and educate our communities and those who (merely) push paper, the American identity has never been called into question as much as it is today, especially in the policy proposals of aspiring presidential candidates. Klein does not shy away from criticizing the media’s contribution to the skewed, partisan landscape:

“If we (the media) decide to give more coverage to Hillary Clinton’s emails than to her policy proposals–which is what we did–then we make her emails more important to the public’s understanding of her character and the potential presidency than her policy proposals. In doing so, we shape not just the news but the election, and thus the country.”

Overall, though, Klein’s book feels like a warm conversation with someone who is genuinely interested in understanding how we got where we are. He offers a clear diagnosis of the current State of the Union without swaying too far into either political camp, but falls short in offering a pathway forward or even mere suggestions on how to bridge the gap between opposing (political) viewpoints, and therefore groups. Ezra Klein’s advice is “to pay attention to identity. What identity is that news article invoking? What identity is making you defensive? What does it feel like when you get pushed back into an identity? Can you notice when it happens?”

It is an engaging book that provides insight into the political discourse of America beyond New York or California. While it is well written and researched, it feels more like a conversation, a starting point, than a solution or a way forward.

How To Build Community To Influence Elections

Read this book to improve your civic engagement and create a more meaningful neighborhood.

Eitan Hersh is an associate professor of political science at Tufts University. In his latest book “Politics Is For Power – How to Move Beyond Political Hobbyism, Take Action, and Make Real Change” he decries the virtue signaling that is political hobbyism on social media and makes a case for grassroots politics. 

Political hobbyism can be identified as short-lived, current-affairs commentary on social media that results in no real-world change. It delivers a feeling of participation, and we have all done it to some extent. Yet, Hersh finds, the political left especially fails to recognize that real political change is driven by a few select local leaders who listen to the needs of a community. Consistent in-person community outreach builds a stronger community that is aligned rather than divided on overarching public policy programs.

“Political hobbyism is to public affairs what watching SportsCenter is to playing football.”

Source: College-Educated Voters Are Ruining American Politics by Eitan Hersh

Among the many well-told stories in this book, Hersh offers a prominent example in Starbucks founder Howard Schultz. For a brief moment in the 2020 U.S. presidential election, Schultz entertained an independent bid for the highest office in the country. Remember, Starbucks was built over decades of carefully choosing product ingredients, the ambiance of its stores, and hiring local leaders to represent its brand. It was a slow expansion from Seattle to the greater Pacific Northwest and, year by year, to more states across the United States. But when it came to his own campaign bid, Schultz seemingly forgot his patient business acumen and threw endless money at cable news and talk shows to make his case in less than eighteen months. Obviously, from the outside and in hindsight, this approach reeks of failure when it took years to build the Starbucks brand nationwide. Why would he seriously believe he could reach the same market plurality in the political domain in just eighteen months? Because politics was only a hobby to Howard Schultz.

“Politics Is For Power” is appropriate for community leaders, new and seasoned neighbors, social justice warriors and keyboard cowboys, and anybody really interested in improving civic engagement in their community. Personally, I loved the idea of using political donations not to buy political ads but to support local community organizers who engage in face-to-face conversations with the community and actually listen. Crafting impactful social and economic policies is an arduous process that can only succeed if all voices of society have been heard. Furthermore, Hersh crafts captivating storylines condensed into each chapter, which really brings home his point that taking action requires getting out the door, talking to your neighbors, and listening.

Lastly, if you’re still reading, I feel it’s necessary to call out editorial ingenuity when it is due: this book has 217 pages, 22 chapters, and encompasses 5 parts. Each page is formatted for the reader’s pleasure. Chapters are comprehensive yet not longer than a commute to work would be. And its parts really provide a structure around the argument that highlights the thoughtful content of the book. Kudos, Simon & Schuster!

When Did Truth Die?

Michiko Kakutani offers an eloquent compilation that explains the decay of veracity in the United States. But perhaps more importantly, it skillfully weaves together almost a century of painful lessons from history, literature, and politics.

The Death of Truth was highly scrutinized by media publishers, book critics, and the greater literary community at the time of its publication. Google the reviews. As the title suggests, The Death of Truth – Notes on Falsehood in the Age of Trump by Michiko Kakutani advocates for truth to be added to the list of casualties of the former Trump administration. Reading this book at the end of 2021, almost exactly one year since Joe Biden became the 46th President of the United States, and almost 3 ½ years after its initial release, I can’t help but view it as a compilation of essays that are really bite-sized opinion pieces. This makes for an immersive, moving reading experience, but it also renders the message of The Death of Truth the very polemic it appears to seek to quash. Admittedly, a provocative diagnosis of our current political landscape is hardly possible in the total absence of partisanship.

Kakutani brilliantly threads her analysis by starting with a historical review of culture wars and past regimes’ handling of truth. She gradually escalates her storyline to the twenty-first century with humanity’s dependency on social media, algorithmic subversion of political decision making, and foreign actors exploiting the American focus on self-pursuit at the expense of civic responsibilities. In her epilogue, Kakutani warns of the continued erosion of democratic institutions. We, the people, must protect the institutions that uphold democracy. At the same time, there won’t be any easy remedies or shortcuts to fix our polarized, cultural division. Times like these require deft civil disobedience by the many who publicly reject the cynicism and resignation pursued by the totalitarian few.

People who are likely to read this book are unlikely to learn something new, but I believe it’s still worth it for the extensive reading resources provided by Kakutani. Her remarkably colorful writing style and sobering outlook on the future state of veracity in the United States won’t disappoint either. NPR’s Michael Schaub nailed it when he wrote: “The Death of Truth is a slim volume that’s equally intriguing and frustrating, an uneven effort from a writer who is, nonetheless, always interesting to read.”

On Tyranny

A pocket guide for civil disobedience to save democracy.

Democracy requires action. Timothy Snyder’s “Twenty Lessons From The Twentieth Century” inspires action. In his short pocket guide, Snyder offers civic lessons ranging from taking responsibility for the face of the world to political awareness all the way to what it really means to be a patriot. His theme is ‘those who do not learn from history are doomed to repeat it’. It struck me as an ideal guide to hand out at demonstrations or town hall meetings. His ideas for civic measures are worth recounting, for they aim to protect the integrity of democracy. That being said, most of his lessons should be working knowledge for every citizen.

Twitter And Tear Gas

Zeynep Tufekci takes an insightful look at the intersection of protest movements and social media.

Ever since I’ve read Gustave Le Bon’s “The Crowd”, I’ve been fascinated with crowd psychology and social networks. In “Twitter And Tear Gas – The Power And Fragility Of Networked Protests” Zeynep Tufekci connects the elements of protest movements with 21st-century technology. In her work, she describes movements as

“attempts to intervene in the public sphere through collective, coordinated action. A social movement is both a type of (counter) public itself and a claim made to a public that a wrong should be righted or a change should be made.”

In times of far-reaching social media platforms, restricted online forums, and end-to-end encrypted private group chats, the means to organize a protest movement have drastically changed. 

“Modern networked movements can scale up quickly and take care of all sorts of logistical tasks without building any substantial organizational capacity before the first march or protest. (…) The Gezi Park moment, going from almost zero to a massive movement within days clearly demonstrates the power of digital tools. However, with this speed comes weakness, some of it unexpected. First, the new movements find it difficult to make tactical shifts because they lack both the culture and the infrastructure for making collective decisions. Often unable to change course after the initial, speedy expansion phase, they exhibit a ‘tactical freeze’. Second, although their ability (as well as their desire) to operate without defined leadership protects them from co-optation or “decapitation,” it also makes them unable to negotiate with adversaries or even inside the movement itself. Third, the ease with which current social movements form often fails to signal an organizing capacity powerful enough to threaten those in authority.”

While these movements often catch the general public by surprise, it really does come down to timing and commitment by a group of decentralized actors. These actors, who come from all walks of life, seek to connect with others as rapidly as possible by leveraging the unrestricted powers of social media. Social media creates ties with a variety of supporters. Tufekci points out that for

“people who seek political change, the networking that takes place among people with weak ties is especially important. People with strong ties already share similar views (…). Weaker ties may be far-flung and composed of people with varying political and social ties. Also, weak ties may create bridges to other clusters of people in a way strong ties do not.”

Protest movements predating social media often shared similarities with multi-day music festivals, overnight camps, or even military training exercises. They instill a sense of camaraderie which attracts a certain type of individual. Today’s protest movements differ in that they can erupt quickly, but fall apart as fast as they came to be. Still,

“many people are drawn to protest camps because of the alienation they feel in their ordinary lives as consumers. Exchanging products without money is like reverse commodity fetishism: for many, the point is not the product being exchanged but the relationship that is created.”

In addition, the speed at which modern movements operate serves as an invitation for individuals disconnected from broader society, or those who simply prefer a short-lived special operation to right a policy wrong over the long-term work of building and maintaining relationships powerful enough to organically drive policy change.

“Some online communities not only are distant from offline communities but also have little or no persistence or reputational impact. (…) Social scientists call this the “stranger-on-a-train” effect, describing the way people sometimes open up more to anonymous strangers than to the people they see around every day. (…) Such encounters can even be more authentic and liberating.”

Tufekci spends much time describing the evolution of social interactions in a networked space and the social inertia that must be overcome to pick up momentum, but she also offers some insights on the defensive considerations that make a protest movement work. First and foremost, a protest movement garners attention online, which in turn creates an influx of supporters. It will also attract opposition from private individuals, political opponents, and current political leaders. Those in power had previously relied upon, and in some countries still rely upon, censorship and suppression of information. Twitter and other social media platforms have disrupted this control over the narrative:

“To be effective, censorship in the digital era requires a reframing of the goals of censorship not as a total denial of access, which is difficult to achieve, but as a denial of attention, focus, and credibility. In the networked public sphere, the goal of the powerful often is not to convince people of the truth of a particular narrative or to block a particular piece of information from getting out, but to produce resignation, cynicism, and a sense of disempowerment among the people.”

I apologize for using a wealth of quotes from her book, but it’s best described there, in her own words. Protest movements are here to stay. Understanding how democratic nations evolve their policies, right political wrongs, and influence authoritarian nations through subtle policy, online protest, and real-world tear gas confrontation will help us make more informed decisions as we pick our political battles. Zeynep Tufekci put together a well-researched account that helps to make sense of the most important, controversial online protest movements, from the Occupy Gezi/Wall Street movements to the Egyptian Revolution to the Arab Spring to Black Lives Matter, MeToo, and the March For Our Lives. There are two noticeable drawbacks to this otherwise excellent book. First, the chapters appear uncoordinated within the book and are too long; the reader can’t take a breather without feeling they will lose the thread. Second, her examples are chronologically disconnected from the actual movements. While this helps to illustrate a certain point, I found it confusing. Twitter And Tear Gas has its own website. Check it out at https://www.twitterandteargas.org/ or reach out to the author on Twitter @zeynep

Falsehoods And The First Amendment

Our society is built around freedom of expression. How can regulators mitigate the negative consequences of misinformation without restricting speech? And should we strive for a legal solution rather than improving upon our social contract? In this paper, constitutional law professor, bestselling author, and former administrator of the Office of Information and Regulatory Affairs Cass Sunstein reviews the status of falsehoods under our current constitutional regime for regulating speech and offers his perspective on how to control libel and other false statements of fact.

tl;dr

What is the constitutional status of falsehoods? From the standpoint of the First Amendment, does truth or falsity matter? These questions have become especially pressing with the increasing power of social media, the frequent contestation of established facts, and the current focus on “fake news,” disseminated by both foreign and domestic agents in an effort to drive politics in the United States and elsewhere in particular directions. In 2012, the Supreme Court ruled for the first time that intentional falsehoods are protected by the First Amendment, at least when they do not cause serious harm. But in important ways, 2012 seems like a generation ago, and the Court has yet to give an adequate explanation for its conclusion. Such an explanation must begin with the risk of a “chilling effect,” by which an effort to punish or deter falsehoods might also in the process chill truth. But that is hardly the only reason to protect falsehoods, intentional or otherwise; there are several others. Even so, the various arguments suffer from abstraction and high-mindedness; they do not amount to decisive reasons to protect falsehoods. These propositions bear on old questions involving defamation and on new questions involving fake news, deepfakes, and doctored videos.

Make sure to read the full paper titled Falsehoods and the First Amendment by Cass R. Sunstein at https://jolt.law.harvard.edu/assets/articlePDFs/v33/33HarvJLTech387.pdf 

(Source: Los Angeles Times)

As a democracy, why should we bother to protect misinformation? We already prohibit various kinds of falsehoods, including perjury, false advertising, and fraud. Why not extend these regulations to online debates, deepfakes, etc.? Sunstein offers a basic truth of democratic systems: freedom of expression is a core tenet of self-government, enshrined in the First Amendment. People need to be free to say what they think, even if what they think is false. A society that punishes people for spreading falsehoods inevitably creates a chilling effect for those who (want to) speak the truth: the mere possibility of criminal prosecution for spreading misinformation can chill public discussion about that misinformation. Of course, the dividing line is a clear and present danger: misinformation that creates real-world harm. The dilemma for regulators lies in the difficult task of identifying that clear and present danger and real-world harm. It’s not a binary right-versus-wrong but rather a right-versus-another-right dilemma.

Sunstein points out a few factors that make it so difficult to strike an acceptable balance between restrictions and free speech. A prominent concern is collateral censorship, a.k.a. official fallibility: a government censors what it deems to be false but ends up restricting truth as well. Government officials may act in self-interest to preserve their status, which inevitably invites the risk of censorship of critical voices. Even if the government correctly identifies and isolates misinformation, who has the burden of proof? How thoroughly must it be demonstrated that a false statement of fact is in fact false and presents a clear danger causally linked to real-world harm? As mentioned earlier, any ban on speech may impose a chilling effect on people who aim to speak the truth but fear government retaliation. In some cases, misinformation may even help magnify the truth: it offers a clear contrast that allows people to make up their minds. Hearing falsehoods from others also increases the chances of learning what other people think, how they process and label misinformation, and where they ultimately stand. The free flow of information is another core tenet of democratic systems, so it is preferable to have all information in the open, letting people pick and choose what they believe. Lastly, a democracy may consider counterspeech the preferred method for dealing with misinformation. Studies have shown that media literacy, fact-checking labels, and accuracy cues help people better assess misinformation and its social value. Banning a falsehood, however, would drive the false information and its creators underground. Isn’t it better to find common ground, rather than to silence people?

With all this in mind, falsehoods should generally be permitted; enforcement against them should be the exception, handled with nuance on a case-by-case basis. Sunstein shares his ideas for protecting people from falsehoods without producing excessive chilling effects from the potential threat of costly lawsuits. First, monetary damages should be capped, and damage schedules should be limited to specific scenarios. Second, a general right to correct or retract misinformation should pre-empt proceedings seeking damages. And third, online environments may benefit from notice-and-takedown protocols similar to the existing copyright practice under the Digital Millennium Copyright Act (DMCA). Germany’s Network Enforcement Act (NetzDG) is a prominent example of notice-and-takedown regulation aimed at harmful, but not necessarily false, speech. I think a well-functioning society must work towards a social contract that facilitates intrinsic motivation and curiosity to seek and speak the truth. Lies should not get a platform, but they cannot be outlawed either. If the legal domain is asked to adjudicate misinformation, it should do so expeditiously, with few formal, procedural hurdles. The burden of proof has to be on the plaintiff, and the bar for false statements of fact must be calibrated against the reach of the defendant: influencers and public figures should have less freedom to spread misinformation because their reach is far more damaging than that of a John Doe. Lastly, shifting regulatory enforcement onto carriers or social media platforms is tantamount to holding the construction worker of a marketplace responsible – it fails to identify the bad actor, which is the person disseminating the misinformation. Perhaps enforcement against misinformation can be done through crowdsourced communities, accuracy cues at the point of submission, or supporting information on a given topic. Here are a few noteworthy decisions for further reading:

The Future Of Political Elections On Social Media

Should private companies decide which politicians people will hear about? How can tech policy make our democracy stronger? What is the role of social media and journalism in an increasingly polarized society? Katie Harbath, a former director for global elections at Facebook, discusses these questions in a lecture about politics, policy, and democracy. Her unparalleled experience as a political operative, combined with her decade-long experience working on political elections across the globe, makes her a leading intellectual voice in shaping the future of civic engagement online. In her lecture honoring the legacy of former Wisconsin State Senator Paul Offner, she shares historical context on the evolution of technology and presidential election campaigns. She also talks about the impact of the 2016 election and the post-truth reality online that came with the election of Donald Trump. In her concluding remarks, she offers some ideas for future regulation of technology to strengthen civic integrity as well as our democracy, and she answers questions during a Q&A.

tl;dr

As social media companies face growing scrutiny among lawmakers and the general public, the La Follette School of Public Affairs at University of Wisconsin–Madison welcomed Katie Harbath, a former global public policy director at Facebook for the past 10 years, for a livestreamed public presentation. Harbath’s presentation focused on her experiences and thoughts on the future of social media, especially how tech companies are addressing civic integrity issues such as free and hate speech, misinformation and political advertising.

Make sure to watch the full lecture titled Politics and Policy: Democracy in the Digital Age at https://lafollette.wisc.edu/outreach-public-service/events/politics-and-policy-democracy-in-the-digital-age (or below)

Timestamps

03:04 – Opening remarks by Susan Webb Yackee
05:19 – Introduction of the speaker by Amber Joshway
06:59 – Opening remarks by Katie Harbath
08:24 – Historical context of tech policy
14:39 – The promise of technology and the 2016 Facebook Election
17:31 – 2016 Philippine presidential election
18:55 – Post-truth politics and the era of Donald J. Trump
20:04 – Social media for social good
20:27 – 2020 US presidential elections 
22:52 – The Capitol attacks, deplatforming and irreversible change
23:49 – Legal aspects of tech policy
24:37 – Refresh Section 230 CDA and political advertising
26:03 – Code aspects of tech policy
28:00 – Developing new social norms
30:41 – More diversity, more inclusion, more openness to change
33:24 – Tech policy has no finishing line
34:48 – Technology as a force for social good and closing remarks

Q&A

(Click on the question to watch the answer)

1. In a digitally democratized world how can consumers exercise their influence over companies to ensure that online platforms are free of bias?

2. What should we expect from the congressional hearing on disinformation?

3. Is Facebook a platform or a publisher?

4. Is social media going to help us to break the power of money in politics?

5. How have political campaigns changed over time?

6. What is the relationship between social media and the ethics of journalism?

7. Will the Oversight Board truly impact Facebook’s content policy?

8. How is Facebook handling COVID-19 related misinformation?

9. What is Facebook’s approach to moderating content vs encryption/data privacy?

10. Does social media contribute to social fragmentation (polarization)? If so, how can social media be a solution for reducing polarization?

11. What type of regulation should we advocate for as digitally evolving voters?

12. What are Katie’s best and worst career memories? What’s next for Katie post-Facebook?

Last but not least: Katie mentioned a number of books (and a blog) as recommended reads, which I will list below:

Demystifying Foreign Election Interference

The Office of the Director of National Intelligence (ODNI) released a declassified report detailing efforts by foreign actors to influence and interfere in the 2020 U.S. presidential election. The key findings of the report: Russia sought to undermine confidence in our democratic processes to support then-President Donald J. Trump. Iran launched similar efforts, but to diminish Trump’s chances of getting reelected. And China stayed out of it altogether.

(Source: ODNI)

Make sure to read the full declassified report titled Intelligence Community Assessment of Foreign Threats to the 2020 U.S. Federal Elections released by the Office of the Director of National Intelligence at https://www.odni.gov/index.php/newsroom/reports-publications/reports-publications-2021/item/2192-intelligence-community-assessment-on-foreign-threats-to-the-2020-u-s-federal-elections

Background

On September 12, 2018, then-President Donald J. Trump issued Executive Order 13848 to address foreign interference in U.S. elections. In essence, it authorizes an interagency review to determine whether interference has occurred. In the event of foreign interference in a U.S. election, the directive orders the creation of an impact report to trigger sanctions against (1) foreign individuals and (2) nation-states. A comprehensive breakdown of the directive, including the process of imposing sanctions, can be found here. I will only focus on the findings of the interagency review laid out in the Intelligence Community Assessment (ICA) pursuant to EO 13848 (1)(a). The ICA is limited to intelligence reporting and other information available as of December 31, 2020.

Findings

Before his own election in 2016, during his presidency, and beyond the 2020 presidential election, the former president fed American voters unsubstantiated claims of foreign election interference that would disadvantage his reelection chances. In Trump’s mind, China sought to undermine his chances of being reelected, while he downplayed the role of Russia and Iran. The recently released ICA directly contradicts Trump’s claims. Here’s the summary per country:

Russia

  • Russia conducted influence operations targeting the integrity of the 2020 presidential elections authorized by Vladimir Putin
  • Russia supported then incumbent Donald J. Trump and aimed to undermine confidence in then candidate Joseph R. Biden
  • Russia attempted to exploit socio-political divisions through spreading polarized narratives without leveraging persistent cyber efforts against critical election infrastructure

The ICA finds a pattern of Russian intelligence officials pushing misinformation about President Biden through U.S. media organizations, officials, and prominent individuals. Such influence operations follow a basic money laundering structure: (1) creating and disseminating a false and misleading narrative, (2) concealing its source through layering across multiple media outlets involving independent (unaware) actors, and (3) integrating the damning narrative into the nation-state’s official communications after the fact. A recurring narrative was the false claim of corrupt ties between President Biden and Ukraine, which began spreading as early as 2014.

Russian attempts to sow discord among the American people relied on narratives that amplified misinformation about the election process and its systems, e.g. undermining the integrity of mail-in ballots or highlighting technical failures and isolated instances of misconduct. More broadly, topics such as pandemic-related lockdown measures, racial injustice, and conservative censorship were exploited to polarize the affected groups. While these efforts required Russia’s offensive cyber units to take action, the evidence for a persistent cyber influence operation was not conclusive. The ICA categorized Russian actions as general intelligence gathering to inform Russian foreign policy rather than as specifically targeting critical election infrastructure.

Iran

  • Iran conducted influence operations targeting the integrity of the 2020 presidential elections likely authorized by Supreme Leader Ali Khamenei
  • Unlike Russia, Iran did not support either candidate but aimed to undermine confidence in then incumbent Donald J. Trump
  • Iran did not interfere in the 2020 presidential elections, defined as activities targeting technical aspects of the election

The ICA finds Iran leveraged influence tactics similar to Russia’s, targeting the integrity of the election process, presumably in an effort to steer the public’s attention away from Iran and toward domestic issues such as pandemic-related lockdown measures, racial injustice, and conservative censorship. However, Iran relied more heavily on offensive cyber-enabled operations. These included aggressive spoofed emails disguised as messages from the Proud Boys group to intimidate liberal and left-leaning voters. Spear-phishing emails sent to former and current officials aimed to gain sensitive information and access to critical infrastructure. A high volume of inauthentic social media accounts, some dating back to 2012, was used to create divisive political narratives.

China

  • China did not conduct influence operations or efforts to interfere in the 2020 presidential elections

The ICA finds China did not actively interfere in the 2020 presidential elections. While the rationale in the assessment is largely based on political reasoning and foreign policy objectives, the report provides no data points for me to evaluate. It also offers no insight into the role of the Chinese technology platforms repeatedly targeted by the former President. A minority view by the National Intelligence Officer for Cyber holds that China did deploy some offensive cyber operations to counter anti-Chinese policies. Former Director of National Intelligence John Ratcliffe endorsed this minority view in a scathing memorandum concluding that the ICA fell short in its analysis with regard to China.

Recommendations

The ICA offers several insights into a long, strenuous election cycle. Its sober findings help to reformulate U.S. foreign policy and redefine domestic policy objectives. While the report cannot detail all available intelligence and other information, it offers guidance to shape future policies. For example:

  1. Cybersecurity – increased efforts to update critical election infrastructure have probably played a key role in the decrease in offensive cyber operations. Government and private actors must continue to focus on cybersecurity, practice cyber hygiene, and conduct digital audits to improve cyber education
  2. Media Literacy – increased efforts to educate the public about political processes. This includes private actors educating their users about potential abuse on their platforms. Continuing programs to depolarize ideologically charged groups through empathy and regulation is a cornerstone of a more perfect union

Additional and more detailed recommendations to improve the resilience of American elections and democratic processes can be found in the Joint Report of the Department of Justice and the Department of Homeland Security on Foreign Interference Targeting Election Infrastructure or Political Organization, Campaign, or Candidate Infrastructure Related to the 2020 U.S. Federal Elections

An Economic Approach To Analyze Politics On YouTube

YouTube’s recommendation algorithm is said to be a gateway that introduces viewers to extremist content and a stepping stone toward online radicalization. However, two other factors are equally important when analyzing political ideologies on YouTube: the novel psychological effects of audio-visual content and the ability to monetize. This paper contributes to the field of political communication by offering an economic framework to explain behavioral patterns of right-wing radicalization. It examines how YouTube is used by right-wing creators and audiences and offers a way forward for future research.

tl;dr

YouTube is the most used social network in the United States and the only major platform that is more popular among right-leaning users. We propose the “Supply and Demand” framework for analyzing politics on YouTube, with an eye toward understanding dynamics among right-wing video producers and consumers. We discuss a number of novel technological affordances of YouTube as a platform and as a collection of videos, and how each might drive supply of or demand for extreme content. We then provide large-scale longitudinal descriptive information about the supply of and demand for conservative political content on YouTube. We demonstrate that viewership of far-right videos peaked in 2017.


Make sure to read the full paper titled Right-Wing YouTube: A Supply and Demand Perspective by Kevin Munger and Joseph Phillips at https://journals.sagepub.com/doi/full/10.1177/1940161220964767

YouTube is unique in combining Google’s powerful content discovery algorithms, i.e. recommendations that keep attention on the platform, with a type of content that is arguably the most immersive and versatile: video. The resulting product is highly effective at distributing a narrative, which has led journalists and academics to categorize YouTube as an important tool for online radicalization. In particular, right-wing commentators use YouTube to spread political ideologies ranging from conservative views to far-right extremism. The researchers, moreover, make a firm argument that the ability to create and manage committed audiences around a political ideology, audiences who mutually create and reinforce extreme views, is not only contagious to less committed audiences but pure fuel for online radicalization.

Radio replaced the written word. Television replaced the spoken word. And online audio-visual content will replace the necessity to observe and understand. YouTube offers an unlimited library across all genres, topics, and public figures, ranging from user-generated content to six-figure Hollywood productions. Its 24/7 availability and immersive setup, incentivizing users to comment and create videos, allow YouTube to draw in audiences with much stronger psychological triggers than its mostly text-based competitors Facebook, Twitter, and Reddit. Moreover, YouTube transcends national borders. It enables political commentary from abroad, from American expats to foreigners to exiled politicians and expelled opposition figures. In particular, the controversial presidency of Donald Trump prompted political commentators in Europe and elsewhere to comment on (and influence) the political landscape, voters, and domestic policies of the United States. This is important to acknowledge because YouTube has more users in the United States than any other social network, including Facebook and Instagram.

Monetizing The Right

YouTube has proven valuable to “Alternative Influence Networks”: in essence, potent political commentators and small productions that collaborate in direct opposition to mass media, with regard to both reporting ethics and political ideology. Albeit relatively unknown to the general populace, they draw consistent, committed audiences and tend to base their content on conservative and right-wing political commentary. There is some evidence in psychological research that conservatives respond more to emotional content than liberals do.

As such, the supply side on YouTube is fueled by easy and efficient means to create political content. Production costs are usually limited to equipment. The time required to shoot a video on a social issue is roughly the length of the video itself; in comparison, drafting a text-based political commentary on the same issue can take several days. YouTube’s recommendation system, in conjunction with tailored targeting of certain audiences and social classes, enables right-wing commentators to reach like-minded individuals and build massive audiences. The monetization methods include:

  • Ad revenue from display, overlay, and video ads (not including product placement or sponsored videos)
  • Channel memberships
  • Merchandise
  • Highlighted messages in Super Chat & Super Stickers
  • Partial revenue of YouTube Premium service

While YouTube has expanded its policy enforcement against extremist content, conservative and right-wing creators have adapted to the fewer monetization methods on YouTube by increasingly relying on crowdfunded donations, product placement, or the sale of products through affiliate marketing or their own distribution networks. Perhaps the most convincing factor drawing right-wing commentators to YouTube, however, is the ability to build a large audience from scratch without the need for legitimacy or credentials.

The demand side on YouTube is more difficult to determine. Following active audience theory, users would have made a deliberate choice to click on right-wing content, to search for it, and to continue engaging with it over time. The researchers demonstrate that it isn’t that simple. Many social and economic factors drive middle-class Democrats to adopt more conservative and extreme views. For example, the economic decline of blue-collar employment and a broken educational system, in conjunction with increasing social isolation and a lack of future prospects, contribute to susceptibility to extremist content leading up to radicalization. The researchers rightfully argue that it is difficult to determine the particular drivers that made an individual seek out and watch right-wing content on YouTube. Those who do watch or listen to a right-wing political commentator tend to seek affirmation and validation of their fringe ideologies.

“the novel and disturbing fact of people consuming white nationalist video media was not caused by the supply of this media radicalizing an otherwise moderate audience, but merely reflects the novel ease of producing all forms of video media, the presence of audience demand for white nationalist media, and the decreased search costs due to the efficiency and accuracy of the political ecosystem in matching supply and demand.”

While I believe this paper deserves much more attention, and a reader should discover its research questions in the process of studying it, I find it helpful to provide the authors’ research questions here, together with my takeaways, to make it easier for readers to prioritize this study:

Research Question 1: What technological affordances make YouTube distinct from other social media platforms, and distinctly popular among the online right? 

Answer 1: YouTube is a media company; media on YouTube is videos; YouTube is powered by recommendations.

Research Question 2: How have the supply of and demand for right-wing videos on YouTube changed over time?

Answer 2.1: YouTube viewership of the extreme right has been in decline since mid-2017, well before YouTube changed its algorithm to demote far-right content in January 2019.

Answer 2.2: The bulk of the growth in terms of both video production and viewership over the past two years has come from the entry of mainstream conservatives into the YouTube marketplace.

This paper offers insights into the supply side of right-wing content and gives a rationale for why people tend to watch it. It contributes to understanding how right-wing content spreads across YouTube. An active comment section indicates higher engagement rates, which are unique to right-wing audiences. These interactions facilitate a communal experience between creator and audience, and increased policy enforcement effectively disrupted that experience. Nevertheless, the researchers found evidence that those who return to create or watch right-wing content are likely to engage intensely with it as well. Future research may investigate the actual power of YouTube’s recommendation algorithm. While this paper focused on right-wing content, the opposing end of the political spectrum, including the extreme left, is increasingly utilizing YouTube to proliferate its political commentary. Personally, I am curious to better understand the influence of foreign audiences on domestic issues and how YouTube is diluting the local populace with foreign activist voices.

Is Transparency Really Reducing The Impact Of Misinformation?

A recent study investigated YouTube’s efforts to provide more transparency about the ownership of certain YouTube channels. The study concerned YouTube’s disclaimers, displayed below a video, that indicate the content was produced or funded by a state-controlled media outlet. It sought to shed light on whether these disclaimers are an effective means to reduce the impact of misinformation.

tl;dr

In order to test the efficacy of YouTube’s disclaimers, we ran two experiments presenting participants with one of four videos: A non-political control, an RT video without a disclaimer, an RT video with the real disclaimer, or the RT video with a custom implementation of the disclaimer superimposed onto the video frame. The first study, conducted in April 2020 (n = 580) used an RT video containing misinformation about Russian interference in the 2016 election. The second conducted in July 2020 (n = 1,275) used an RT video containing misinformation about Russian interference in the 2020 election. Our results show that misinformation in RT videos has some ability to influence the opinions and perceptions of viewers. Further, we find YouTube’s funding labels have the ability to mitigate the effects of misinformation, but only when they are noticed, and the information absorbed by the participants. The findings suggest that platforms should focus on providing increased transparency to users where misinformation is being spread. If users are informed, they can overcome the potential effects of misinformation. At the same time, our findings suggest platforms need to be intentional in how warning labels are implemented to avoid subtlety that may cause users to miss them.

Make sure to read the full article titled State media warning labels can counteract the effects of foreign misinformation by Jack Nassetta and Kimberly Gross at https://misinforeview.hks.harvard.edu/article/state-media-warning-labels-can-counteract-the-effects-of-foreign-misinformation/

Source: RT, 2020 US elections, Russia to blame for everything… again, Last accessed on Dec 31, 2020 at https://youtu.be/2qWANJ40V34?t=164

State-controlled media outlets are increasingly used for foreign interference in civic events. While independent media outlets on social media can be categorized and associated with a political ideology, a state-controlled media outlet generally appears independent of, or detached from, a state-controlled political agenda. Yet such outlets regularly create content aligned with the controlling state’s political objectives and its leaders. This deceives the public about their state affiliation and undermines civil liberties. The problem is magnified on social media platforms, with their reach and potential for virality ahead of political elections. A prominent example is China’s foreign interference effort surrounding the debate over Hong Kong’s independence.

An increasing number of social media platforms have launched integrity measures to increase content transparency and counter the integrity risks of a state-controlled media outlet proliferating potential disinformation. In 2018, YouTube began rolling out an information panel feature to provide additional context on state-controlled and publicly funded media outlets. These information panels, or disclaimers, are really warning labels that make the viewer aware of the potential political influence of a government on the information shown in the video. The labels do not provide any additional context on the veracity of the content or whether it was fact-checked. On desktop, they appear alongside a hyperlink to the Wikipedia entry of the media outlet. As of this writing, the feature applies to 27 governments, including the United States government.

Source: DW News, Massive explosion in Beirut, Last Accessed on Dec 31, 2020 at https://youtu.be/PLOwKTY81y4?t=7

The researchers focused on whether these warning labels would mitigate the effects that misinformation in videos of the Russian state-controlled media outlet RT (Russia Today) has on viewers’ perceptions. RT evades deplatforming by complying with YouTube’s terms of service. This turned the RT channel into an influential resource for the Russian government to undermine the American public’s trust in established American media outlets and the United States government when they reported on Russian interference in the 2016 U.S. presidential elections. An RT video downplaying the Russian influence operation was used for the study and shown to participants without a label, with the real label identifying RT’s affiliation with the Russian government, or with a superimposed warning label carrying the same language and Wikipedia hyperlink. This surfaced the following findings:

  1. Disinformation spread by RT does impact viewers’ perceptions, and effectively so.
  2. Videos without a warning label were more successful in reducing trust in established mainstream media and the government.
  3. Videos with the warning label’s language superimposed onto the video frame itself were most effective at preserving the integrity of viewers’ perceptions.

Source: RT, $4,700 worth of ‘meddling’: Google questioned over ‘Russian interference’, Last accessed on Dec 31, 2020 at https://www.youtube.com/watch?v=wTCSbw3W4EI

The researchers further discovered that small changes in the coloring, design, and placement of the warning label increase the likelihood that viewers notice it and absorb its information. Both conditions must be met, because noticing a label without comprehending its message had no significant impact on understanding the political connection between creator and content.

I’m intrigued by these findings, for the road ahead offers a great opportunity to shape how we distribute and consume information on social media without falling prey to foreign influence operations. Though open questions remain:

  1. Are these warning labels equally effective on other social media platforms, e.g. Facebook, Instagram, Twitter, Reddit, TikTok, etc.? 
  2. Are these warning labels equally effective with other state-controlled media? This study focused on Russia, a large, globally acknowledged state actor. How does a warning label for content by the government of Venezuela or Australia impact the efficacy of misinformation? 
  3. This study seemed focused on the desktop version of YouTube. Are these findings transferable to the mobile version of YouTube?
  4. What is the impact of peripheral content on viewers’ perceptions, e.g. YouTube’s recommendations showing videos in the sidebar that all claim RT is a hoax versus videos that all give RT independent credibility?
  5. The YouTube channels of C-SPAN and NPR did not appear to display a warning label on their videos. Yet the United States is among the 27 governments currently listed in YouTube’s policy. What are the criteria for being considered a publisher, publicly funded, or state-controlled? How are these criteria met or affected by a government, e.g. by passing certain broadcasting legislation or making a declaration?
  6. Lastly, the cultural and intellectual background of the target audience is particularly interesting. Here is an opportunity to research the impact of warning labels on participants of different political ideologies, economic circumstances, and age groups in contrast to their actual civic engagement before, during, and after an election.