On Tyranny

A pocket guide for civil disobedience to save democracy.

Democracy requires action. Timothy Snyder’s “Twenty Lessons From The Twentieth Century” inspires action. In his short pocket guide, Snyder offers civic lessons ranging from taking responsibility for the face of the world to political awareness all the way to what it really means to be a patriot. His theme is ‘those who do not learn from history are doomed to repeat it’. It struck me as an ideal guide to hand out at demonstrations or town hall meetings. His ideas for civic measures are worth recounting, for they aim to protect the integrity of democracy. That being said, most of his lessons should be working knowledge for every citizen.

Twitter And Tear Gas

Zeynep Tufekci takes an insightful look at the intersection of protest movements and social media.

Ever since I read Gustave Le Bon’s “The Crowd”, I’ve been fascinated with crowd psychology and social networks. In “Twitter And Tear Gas – The Power And Fragility Of Networked Protest”, Zeynep Tufekci connects the elements of protest movements with 21st-century technology. In her work, she describes movements as

“attempts to intervene in the public sphere through collective, coordinated action. A social movement is both a type of (counter) public itself and a claim made to a public that a wrong should be righted or a change should be made.”

In times of far-reaching social media platforms, restricted online forums, and end-to-end encrypted private group chats, the means to organize a protest movement have drastically changed. 

“Modern networked movements can scale up quickly and take care of all sorts of logistical tasks without building any substantial organizational capacity before the first march or protest. (…) The Gezi Park moment, going from almost zero to a massive movement within days clearly demonstrates the power of digital tools. However, with this speed comes weakness, some of it unexpected. First, the new movements find it difficult to make tactical shifts because they lack both the culture and the infrastructure for making collective decisions. Often unable to change course after the initial, speedy expansion phase, they exhibit a ‘tactical freeze’. Second, although their ability (as well as their desire) to operate without defined leadership protects them from co-optation or “decapitation,” it also makes them unable to negotiate with adversaries or even inside the movement itself. Third, the ease with which current social movements form often fails to signal an organizing capacity powerful enough to threaten those in authority.”

While these movements often catch the general public by surprise, their success really comes down to timing and commitment by a group of decentralized actors. These actors, who come from all walks of life, seek to connect with others as rapidly as possible by leveraging the unrestricted reach of social media, which creates ties with a variety of supporters. Tufekci points out

“people who seek political change, the networking that takes place among people with weak ties is especially important. People with strong ties already share similar views (…). Weaker ties may be far-flung and composed of people with varying political and social ties. Also, weak ties may create bridges to other clusters of people in a way strong ties do not.”

Protest movements predating social media often shared similarities with multi-day music festivals, overnight camps, or even military training exercises. They instill a sense of camaraderie that attracts a certain type of individual. Today’s protest movements differ in that they can erupt quickly, but fall apart as fast as they came to be. Still

“many people are drawn to protest camps because of the alienation they feel in their ordinary lives as consumers. Exchanging products without money is like reverse commodity fetishism: for many, the point is not the product being exchanged but the relationship that is created.”

In addition, the speed at which modern movements operate serves as an invitation for individuals disconnected from broader society, or for those who simply prefer a short-lived special operation to right a policy wrong over the long-term work required to build and maintain relationships powerful enough to organically drive a change of policy.

“Some online communities not only are distant from offline communities but also have little or no persistence or reputational impact. (…) Social scientists call this the “stranger-on-a-train” effect, describing the way people sometimes open up more to anonymous strangers than to the people they see around every day. (…) Such encounters can even be more authentic and liberating.”

Tufekci spends much time describing the evolution of social interactions in a networked space and the social inertia that must be overcome to pick up momentum, but she also offers some insights on the defensive considerations that make a protest movement work. First and foremost, a protest movement garners attention online, which in turn creates an influx of supporters. It will also attract opposition from private individuals, political opponents, and current political leaders. Those in power had previously relied upon, and in some countries still rely upon, censorship and suppression of information. Twitter and other social media platforms have disrupted this control over the narrative:

“To be effective, censorship in the digital era requires a reframing of the goals of censorship not as a total denial of access, which is difficult to achieve, but as a denial of attention, focus, and credibility. In the networked public sphere, the goal of the powerful often is not to convince people of the truth of a particular narrative or to block a particular piece of information from getting out, but to produce resignation, cynicism, and a sense of disempowerment among the people.”

I apologize for using a wealth of quotes from her book, but it’s best described there, in her own words. Protest movements are here to stay. Understanding how democratic nations evolve their policies, right political wrongs, and influence authoritarian nations through subtle policy, online protest, and real-world tear gas confrontation will help us make more informed decisions as we pick our political battles. Zeynep Tufekci has put together a well-researched account that helps make sense of the most important, controversial online protest movements, from Occupy Gezi and Occupy Wall Street to the Egyptian Revolution, the Arab Spring, Black Lives Matter, MeToo, and the March For Our Lives. There are two noticeable drawbacks to this otherwise excellent book. First, the chapters appear uncoordinated and are too long; the reader can’t take a breather without fear of losing the thread. Second, her examples are chronologically disconnected from the actual movements. While this helps to illustrate certain points, I found it confusing. Twitter And Tear Gas has its own website. Check it out at https://www.twitterandteargas.org/ or reach out to the author on Twitter @zeynep

Falsehoods And The First Amendment

Our society is built around freedom of expression. How can regulators mitigate the negative consequences of misinformation without restricting speech? And should we strive for a legal solution rather than improving upon our social contract? In this paper, constitutional law professor, bestselling author, and former administrator of the Office of Information and Regulatory Affairs Cass Sunstein reviews the status of falsehoods under our current constitutional regime for regulating speech and offers his perspective on controlling libel and other false statements of fact.

tl;dr

What is the constitutional status of falsehoods? From the standpoint of the First Amendment, does truth or falsity matter? These questions have become especially pressing with the increasing power of social media, the frequent contestation of established facts, and the current focus on “fake news,” disseminated by both foreign and domestic agents in an effort to drive politics in the United States and elsewhere in particular directions. In 2012, the Supreme Court ruled for the first time that intentional falsehoods are protected by the First Amendment, at least when they do not cause serious harm. But in important ways, 2012 seems like a generation ago, and the Court has yet to give an adequate explanation for its conclusion. Such an explanation must begin with the risk of a “chilling effect,” by which an effort to punish or deter falsehoods might also in the process chill truth. But that is hardly the only reason to protect falsehoods, intentional or otherwise; there are several others. Even so, the various arguments suffer from abstraction and high-mindedness; they do not amount to decisive reasons to protect falsehoods. These propositions bear on old questions involving defamation and on new questions involving fake news, deepfakes, and doctored videos.

Make sure to read the full paper titled Falsehoods and the First Amendment by Cass R. Sunstein at https://jolt.law.harvard.edu/assets/articlePDFs/v33/33HarvJLTech387.pdf 

(Source: Los Angeles Times)

As a democracy, why should we bother to protect misinformation? We already prohibit various kinds of falsehoods, including perjury, false advertising, and fraud. Why not extend these regulations to online debates, deepfakes, and the like? Sunstein offers a basic truth of democratic systems: freedom of expression is a core tenet of self-government, enshrined in the First Amendment. People need to be free to say what they think, even if what they think is false. A society that punishes people for spreading falsehoods inevitably creates a chilling effect for those who (want to) speak the truth. At the same time, the possibility of criminal prosecution for spreading misinformation should not itself chill public discussion about that misinformation. Of course, the line is drawn at a clear and present danger, manifested in misinformation that creates real-world harm. The dilemma for regulators lies in the difficult task of identifying that clear and present danger and real-world harm. It’s not a binary right-versus-wrong question but rather a right-versus-another-right dilemma.

Sunstein points out a few factors that make it so difficult to strike an acceptable balance between restrictions and free speech. A prominent concern is collateral censorship, also known as official fallibility: a government that censors what it deems false may end up restricting truth as well. Government officials may act in self-interest to preserve their status, which inevitably invites the risk of censorship of critical voices. Even if the government correctly identifies and isolates misinformation, who bears the burden of proof? How thoroughly must it be demonstrated that a false statement of fact is in fact false and presents a clear danger causally linked to real-world harm? As mentioned earlier, any ban on speech may impose a chilling effect on people who aim to speak the truth but fear government retaliation. In some cases, misinformation may even help magnify the truth: it offers a clear contrast that allows people to make up their minds. Learning falsehoods from others also increases the chances of learning what other people think, how they process and label misinformation, and where they ultimately stand. The free flow of information is another core tenet of democratic systems; it is therefore preferable to have all information in the open so people can pick and choose whichever they believe. Lastly, a democracy may consider counterspeech the preferred method for dealing with misinformation. Studies have shown that media literacy, fact-checking labels, and accuracy cues help people better assess misinformation and its social value. Banning a falsehood, however, would drive the false information and its creators underground. Isn’t it better to find common ground than to silence people?

With all this in mind, falsehoods should generally be permitted; enforcement against them should be the exception and nuanced on a case-by-case basis. Sunstein shares his ideas for protecting people from falsehoods without producing excessive chilling effects from the potential threat of costly lawsuits. First, there should be limits on monetary damages, and damage schedules should be tailored to specific scenarios. Second, a general right to correct or retract misinformation should pre-empt proceedings seeking damages. And third, online environments may benefit from notice-and-takedown protocols similar to the existing copyright practice under the Digital Millennium Copyright Act (DMCA). Germany’s Network Enforcement Act (NetzDG) is a prominent example of notice-and-takedown regulation aimed at harmful, but not necessarily false, speech. I think a well-functioning society must work towards a social contract that fosters intrinsic motivation and curiosity to seek and speak the truth. Lies should not get a platform, but they cannot be outlawed either. If the legal system is used to adjudicate misinformation, it should be done expeditiously, with few formal, procedural hurdles. The burden of proof has to be on the plaintiff, and the bar for false statements of fact must be calibrated against the reach of the defendant; influencers and public figures should have less freedom to spread misinformation because their reach is far more damaging than that of a John Doe. Lastly, shifting regulatory enforcement onto carriers or social media platforms is tantamount to holding the construction worker who built a marketplace responsible for what is sold there: it fails to identify the bad actor, which is the person disseminating the misinformation. Perhaps enforcement against misinformation can be done through crowdsourced communities, accuracy cues at the point of submission, or supporting information on a given topic. Here are a few noteworthy decisions for further reading:

The Future Of Political Elections On Social Media

Should private companies decide which politicians people will hear about? How can tech policy make our democracy stronger? What is the role of social media and journalism in an increasingly polarized society? Katie Harbath, a former director for global elections at Facebook, discusses these questions in a lecture about politics, policy, and democracy. Her unparalleled experience as a political operative, combined with her decade-long experience working on political elections across the globe, makes her a leading intellectual voice in shaping the future of civic engagement online. In her lecture honoring the legacy of former Wisconsin State senator Paul Offner, she shares historical context on the evolution of technology and presidential election campaigns. She also talks about the impact of the 2016 election and the post-truth reality online that came with the election of Donald Trump. In her concluding remarks she offers some ideas for future regulation of technology to strengthen civic integrity as well as our democracy, and she answers audience questions during the Q&A.

tl;dr

As social media companies face growing scrutiny among lawmakers and the general public, the La Follette School of Public Affairs at the University of Wisconsin–Madison welcomed Katie Harbath, who spent the past 10 years at Facebook, most recently as a global public policy director, for a livestreamed public presentation. Harbath’s presentation focused on her experiences and thoughts on the future of social media, especially how tech companies are addressing civic integrity issues such as free speech and hate speech, misinformation, and political advertising.

Make sure to watch the full lecture titled Politics and Policy: Democracy in the Digital Age at https://lafollette.wisc.edu/outreach-public-service/events/politics-and-policy-democracy-in-the-digital-age (or below)

Timestamps

03:04 – Opening remarks by Susan Webb Yackee
05:19 – Introduction of the speaker by Amber Joshway
06:59 – Opening remarks by Katie Harbath
08:24 – Historical context of tech policy
14:39 – The promise of technology and the 2016 Facebook Election
17:31 – 2016 Philippine presidential election
18:55 – Post-truth politics and the era of Donald J. Trump
20:04 – Social media for social good
20:27 – 2020 US presidential elections 
22:52 – The Capitol attacks, deplatforming and irreversible change
23:49 – Legal aspects of tech policy
24:37 – Refresh Section 230 CDA and political advertising
26:03 – Code aspects of tech policy
28:00 – Developing new social norms
30:41 – More diversity, more inclusion, more openness to change
33:24 – Tech policy has no finishing line
34:48 – Technology as a force for social good and closing remarks

Q&A

(Click on the question to watch the answer)

1. In a digitally democratized world how can consumers exercise their influence over companies to ensure that online platforms are free of bias?

2. What should we expect from the congressional hearing on disinformation?

3. Is Facebook a platform or a publisher?

4. Is social media going to help us to break the power of money in politics?

5. How have political campaigns changed over time?

6. What is the relationship between social media and the ethics of journalism?

7. Will the Oversight Board truly impact Facebook’s content policy?

8. How is Facebook handling COVID-19 related misinformation?

9. What is Facebook’s approach to moderating content vs encryption/data privacy?

10. Does social media contribute to social fragmentation (polarization)? If so, how can social media be a solution for reducing polarization?

11. What type of regulation should we advocate for as digitally evolving voters?

12. What are Katie’s best and worst career memories? What’s next for Katie post Facebook?

Last but not least: Katie mentioned a number of books (and a blog) as recommended reads, which I will list below:

Demystifying Foreign Election Interference

The Office of the Director of National Intelligence (ODNI) released a declassified report detailing efforts by foreign actors to influence and interfere in the 2020 U.S. presidential election. The key findings of the report: Russia sought to undermine confidence in our democratic processes and to support then-President Donald J. Trump. Iran launched similar efforts, but to diminish Trump’s chances of getting reelected. And China stayed out of it altogether.

(Source: ODNI)

Make sure to read the full declassified report titled Intelligence Community Assessment of Foreign Threats to the 2020 U.S. Federal Elections released by the Office of the Director of National Intelligence at https://www.odni.gov/index.php/newsroom/reports-publications/reports-publications-2021/item/2192-intelligence-community-assessment-on-foreign-threats-to-the-2020-u-s-federal-elections

Background

On September 12, 2018, then-President Donald J. Trump issued Executive Order 13848 to address foreign interference in U.S. elections. In essence, it authorizes an interagency review to determine whether interference has occurred. In the event of foreign interference in a U.S. election, the directive orders the creation of an impact report to trigger sanctions against (1) foreign individuals and (2) nation states. A comprehensive breakdown of the directive, including the process of imposing sanctions, can be found here. I will only focus on the findings of the interagency review laid out in the Intelligence Community Assessment (ICA) pursuant to EO 13848 (1)(a). The ICA is limited to intelligence reporting and other information available as of December 31, 2020.

Findings

Before his own election in 2016, during his presidency, and beyond the 2020 presidential election, the former President fed American voters unsubstantiated claims of foreign election interference that would disadvantage his reelection chances. In Trump’s mind, China sought to undermine his chances of being reelected, while he downplayed the role of Russia and Iran. The recently released ICA directly contradicts Trump’s claims. Here’s the summary per country:

Russia

  • Russia conducted influence operations, authorized by Vladimir Putin, targeting the integrity of the 2020 presidential elections
  • Russia supported then incumbent Donald J. Trump and aimed to undermine confidence in then candidate Joseph R. Biden
  • Russia attempted to exploit socio-political divisions through spreading polarized narratives without leveraging persistent cyber efforts against critical election infrastructure

The ICA finds a theme of Russian intelligence officials pushing misinformation about President Biden through U.S. media organizations, officials, and prominent individuals. Such influence operations follow the basic structure of money laundering: (1) create and disseminate a false and misleading narrative, (2) conceal its source through layering across multiple media outlets involving independent (unaware) actors, and (3) integrate the damning narrative into the nation state’s official communications after the fact. A prominent example was the false claim of corrupt ties between President Biden and Ukraine, which began spreading as early as 2014.

Russian attempts to sow discord among the American people took place through narratives that amplified misinformation about the election process and its systems, e.g. undermining the integrity of mail-in ballots or highlighting technical failures and isolated instances of misconduct. More broadly, topics such as pandemic-related lockdown measures, racial injustice, and alleged censorship of conservatives were exploited to polarize the affected groups. While these efforts required Russia’s offensive cyber units to take action, the evidence for a persistent cyber influence operation was inconclusive. The ICA categorized Russian actions as general intelligence gathering to inform Russian foreign policy rather than a specific targeting of critical election infrastructure.

Iran

  • Iran conducted influence operations targeting the integrity of the 2020 presidential elections, likely authorized by Supreme Leader Ali Khamenei
  • Unlike Russia, Iran did not support either candidate but aimed to undermine confidence in then incumbent Donald J. Trump
  • Iran did not interfere in the 2020 presidential elections as defined as activities targeting technical aspects of the election

The ICA finds Iran leveraged influence tactics similar to Russia’s, targeting the integrity of the election process, presumably in an effort to steer the public’s attention away from Iran and towards domestic issues such as pandemic-related lockdown measures, racial injustice, and alleged censorship of conservatives. However, Iran relied more heavily on offensive cyber-enabled operations. These included aggressive spoofed emails, disguised as coming from the Proud Boys group, intended to intimidate liberal and left-leaning voters. Spear-phishing emails sent to former and current officials sought sensitive information and access to critical infrastructure. A high volume of inauthentic social media accounts, some dating back to 2012, was used to create divisive political narratives.

China

  • China did not conduct influence operations or efforts to interfere in the 2020 presidential elections

The ICA finds China did not actively interfere in the 2020 presidential elections. While the rationale in this assessment is largely based on political reasoning and foreign policy objectives, the report provides no data points for me to evaluate. It also offers no insights into the role of the Chinese technology platforms repeatedly targeted by the former President. A minority view by the National Intelligence Officer for Cyber (NIO) holds that China did deploy some offensive cyber operations to counter anti-Chinese policies. Former Director of National Intelligence John Ratcliffe leads this minority view, expressed in a scathing memorandum that concludes the ICA fell short in its analysis with regard to China.

Recommendations

The ICA offers several insights into a long, strenuous election cycle. Its sober findings help to reformulate U.S. foreign policy and redefine domestic policy objectives. While the report cannot detail all available intelligence and other information, it offers guidance for shaping future policies. For example:

  1. Cybersecurity – increased efforts to update critical election infrastructure have probably played a key role in the decrease in offensive cyber operations. Government and private actors must continue to focus on cybersecurity, practice cyber hygiene, and conduct digital audits to improve cyber education
  2. Media Literacy – increased efforts to educate the public about political processes, including private actors educating their users about potential abuse on their platforms. Continuing programs to depolarize ideologically charged groups through empathy and regulation are a cornerstone of a more perfect union

Additional and more detailed recommendations to improve the resilience of American elections and democratic processes can be found in the Joint Report of the Department of Justice and the Department of Homeland Security on Foreign Interference Targeting Election Infrastructure or Political Organization, Campaign, or Candidate Infrastructure Related to the 2020 U.S. Federal Elections