Falsehoods And The First Amendment

Our society is built around freedom of expression. How can regulators mitigate the negative consequences of misinformation without restricting speech? And should we strive for a legal solution rather than improve upon our social contract? In this paper, constitutional law professor, bestselling author, and former administrator of the Office of Information and Regulatory Affairs Cass Sunstein reviews the status of falsehoods under our current constitutional regime for regulating speech and offers his perspective on how to control libel and other false statements of fact.

tl;dr

What is the constitutional status of falsehoods? From the standpoint of the First Amendment, does truth or falsity matter? These questions have become especially pressing with the increasing power of social media, the frequent contestation of established facts, and the current focus on “fake news,” disseminated by both foreign and domestic agents in an effort to drive politics in the United States and elsewhere in particular directions. In 2012, the Supreme Court ruled for the first time that intentional falsehoods are protected by the First Amendment, at least when they do not cause serious harm. But in important ways, 2012 seems like a generation ago, and the Court has yet to give an adequate explanation for its conclusion. Such an explanation must begin with the risk of a “chilling effect,” by which an effort to punish or deter falsehoods might also in the process chill truth. But that is hardly the only reason to protect falsehoods, intentional or otherwise; there are several others. Even so, the various arguments suffer from abstraction and high-mindedness; they do not amount to decisive reasons to protect falsehoods. These propositions bear on old questions involving defamation and on new questions involving fake news, deepfakes, and doctored videos.

Make sure to read the full paper titled Falsehoods and the First Amendment by Cass R. Sunstein at https://jolt.law.harvard.edu/assets/articlePDFs/v33/33HarvJLTech387.pdf 


As a democracy, why should we bother to protect misinformation? We already prohibit various kinds of falsehoods, including perjury, false advertising, and fraud. Why not extend these regulations to online debates, deepfakes, and the like? Sunstein offers a basic truth of democratic systems: freedom of expression is a core tenet of self-government, enshrined in the First Amendment. People need to be free to say what they think, even if what they think is false. A society that punishes people for spreading falsehoods inevitably chills those who (want to) speak the truth. The mere prospect of criminal prosecution for spreading misinformation should not itself chill public discussion about that misinformation. Of course, the dividing line is misinformation that presents a clear and present danger of real-world harm. The dilemma for regulators lies in the difficult task of identifying that clear and present danger and that real-world harm. It is not a binary choice between right and wrong but a conflict between one right and another.

Sunstein points out a few factors that make it so difficult to strike an acceptable balance between restrictions and free speech. A prominent concern is collateral censorship, a consequence of official fallibility: a government censoring what it deems false may end up restricting truth as well. Government officials may also act in self-interest to preserve their status, which invites the censorship of critical voices. Even if the government correctly identifies and isolates misinformation, who carries the burden of proof? How conclusively must it be demonstrated that a statement of fact is indeed false and causally linked to a clear danger of real-world harm? As mentioned earlier, any ban on speech may impose a chilling effect on people who aim to speak the truth but fear government retaliation. In some cases, misinformation may even help magnify the truth: it offers a clear contrast against which people can make up their minds. Hearing falsehoods from others also reveals what those people think, how they process and label misinformation, and where they ultimately stand. The free flow of information is another core tenet of democratic systems; it is therefore preferable to have all information in the open so people can pick and choose what they believe. Lastly, a democracy may consider counterspeech the preferred method for dealing with misinformation. Studies have shown that media literacy, fact-checking labels, and accuracy cues help people better assess misinformation and its social value. Banning a falsehood, by contrast, would drive the false information and its creators underground. Isn't it better to find common ground than to silence people?

With all this in mind, the default should be to permit falsehoods; enforcement against them should be the exception, weighed case by case. Sunstein shares ideas for protecting people from falsehoods without producing excessive chilling effects through the threat of costly lawsuits. First, monetary damages should be capped and damage schedules tailored to specific scenarios. Second, a general right to correct or retract misinformation should pre-empt proceedings seeking damages. And third, online environments may benefit from notice-and-takedown protocols similar to existing copyright practice under the Digital Millennium Copyright Act (DMCA); a rough sketch of how such a flow might look in code follows after this paragraph. Germany's Network Enforcement Act (NetzDG) is a prominent example of notice-and-takedown regulation aimed at harmful, but not necessarily false, speech. I think a well-functioning society must work towards a social contract that fosters intrinsic motivation and curiosity to seek and speak the truth. Lies should not get a platform, but they cannot be outlawed either. If courts are asked to adjudicate misinformation, they should do so expeditiously, with few formal, procedural hurdles. The burden of proof has to be on the plaintiff, and the bar for false statements of fact must be calibrated to the reach of the defendant: influencers and public figures should have less freedom to spread misinformation because their reach is far more damaging than that of John Doe. Lastly, shifting regulatory enforcement onto carriers or social media platforms is tantamount to holding the construction worker who built a marketplace responsible for what is sold in it; it fails to identify the bad actor, the person disseminating the misinformation. Perhaps misinformation can instead be policed through crowdsourced communities, accuracy cues at the point of submission, or supporting information on a given topic.
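
Since notice-and-takedown is ultimately a protocol, a small sketch may make the mechanics concrete. This is my own illustration, loosely modeled on the DMCA's counter-notice flow; the `Post` and `Notice` types, the 14-day window, and the reinstatement rule are simplifying assumptions, not anything specified in the paper or in either statute.

```python
# A minimal sketch (my own illustration, assuming DMCA-style mechanics)
# of a notice-and-takedown flow for contested posts.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

COUNTER_NOTICE_WINDOW = timedelta(days=14)  # assumed response deadline

def _now() -> datetime:
    return datetime.now(timezone.utc)

@dataclass
class Post:
    post_id: str
    visible: bool = True

@dataclass
class Notice:
    post: Post
    claim: str                     # the notifier's stated grievance
    filed_at: datetime = field(default_factory=_now)
    countered: bool = False

def file_notice(post: Post, claim: str) -> Notice:
    """A notice provisionally takes the contested post down."""
    post.visible = False
    return Notice(post=post, claim=claim)

def file_counter_notice(notice: Notice) -> None:
    """The uploader contests the notice within the allowed window."""
    if _now() - notice.filed_at <= COUNTER_NOTICE_WINDOW:
        notice.countered = True

def resolve(notice: Notice) -> None:
    """Reinstate contested posts; uncontested takedowns become final."""
    if notice.countered:
        # The dispute would now move to a formal proceeding; meanwhile
        # the post goes back up, mirroring the DMCA's counter-notice logic.
        notice.post.visible = True

# Example: a takedown that is contested and therefore reinstated.
post = Post("p1")
notice = file_notice(post, "claims the video is doctored")
file_counter_notice(notice)
resolve(notice)
print(post.visible)  # True
```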

Learn To Discern: How To Take Ownership Of Your News Diet

I am tired of keeping up with the news these days. The sheer volume of information is intimidating: filtering relevant news from political noise is hard enough, and it is only the first step before analyzing the information for its integrity and accuracy. I certainly struggle to identify subtle misinformation when faced with it. That's why I became interested in the psychological triggers woven into the news, hoping to better understand my own decision-making and conclusions. Pennycook and Rand wrote an excellent research paper on the human psychology of fake news.

tl;dr

We synthesize a burgeoning literature investigating why people believe and share false or highly misleading news online. Contrary to a common narrative whereby politics drives susceptibility to fake news, people are ‘better’ at discerning truth from falsehood (despite greater overall belief) when evaluating politically concordant news. Instead, poor truth discernment is associated with lack of careful reasoning and relevant knowledge, and the use of heuristics such as familiarity. Furthermore, there is a substantial disconnect between what people believe and what they share on social media. This dissociation is largely driven by inattention, more so than by purposeful sharing of misinformation. Thus, interventions can successfully nudge social media users to focus more on accuracy. Crowdsourced veracity ratings can also be leveraged to improve social media ranking algorithms.


Make sure to read the full paper titled The Psychology of Fake News by Gordon Pennycook and David G. Rand at https://www.sciencedirect.com/science/article/pii/S1364661321000516

This recent research paper, by psychologists at the University of Regina and the Sloan School of Management at the Massachusetts Institute of Technology, takes a closer look at the sources of political polarization, hyperpartisan news, and the underlying psychology that influences how we decide whether news is accurate or misinformation. It answers the question of why people fall for misinformation on social media. Lessons drawn from this research will help build effective tools to intercept and mitigate misinformation online, and will advance our understanding of the human psychology at work when we interact with information on social media. And while the topic could fill entire libraries, the authors limit their scope to individual examples of misinformation, excluding organized, coordinated campaigns of inauthentic behavior that spread disinformation.

So, Why Do People Fall For Fake News?

There are two fundamental concepts that explain the psychological dynamics at play when people face misinformation. Truth discernment captures the extent to which people believe accurate news more than known-to-be-false information about the same event; it is rooted in active recognition and critical analysis of the information and measures sensitivity to truth rather than overall belief. The second concept is truth acceptance. Here the accuracy of the news is not assessed separately; instead, people average or combine all available information, true or false, into an overall opinion about the veracity of the news, which commonly results in a biased perception (a small numeric sketch contrasting the two measures follows below). Other concepts related to this question look at motives. Political motivations can influence people's reasoning through their partisan identity: when faced with news consistent with their political beliefs, people regard the information as true; when faced with news inconsistent with those beliefs, they regard it as false. Loyalty to a political ideology can become so strong that people accept an apparent falsehood for the sake of the party. Interestingly, the researchers found that political partisanship carries much less weight than the actual veracity of the news when people assess information: misinformation in harmony with people's political beliefs is rated as less trustworthy than accurate information that runs against those beliefs. They also discovered that people tend to be better at analyzing information that aligns with their political beliefs, which helps them discern truth from falsehood. But if partisanship alone does not make people fall for misinformation, which characteristics do?
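
Because truth discernment and truth acceptance are easy to conflate, here is a small numeric sketch of the two measures. The ratings and the scoring scale are my own illustration, not the paper's methodology.

```python
# Illustrative only: one reader's accuracy ratings (0-1 scale) for
# headlines whose actual truth value is known.
from statistics import mean

true_news_ratings = [0.9, 0.8, 0.7]   # belief in actually true headlines
false_news_ratings = [0.6, 0.5, 0.4]  # belief in actually false headlines

# Truth discernment: how much MORE the reader believes true news than
# false news; zero would mean no sensitivity to truth at all.
discernment = mean(true_news_ratings) - mean(false_news_ratings)

# Truth acceptance: averages everything together, ignoring accuracy,
# so a credulous reader scores high regardless of what is true.
overall_belief = mean(true_news_ratings + false_news_ratings)

print(f"discernment:    {discernment:.2f}")    # 0.30
print(f"overall belief: {overall_belief:.2f}")  # 0.65
```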

“People who are more reflective are less likely to believe false news content – and are better at discerning between truth and falsehood – regardless of whether the news is consistent or inconsistent with their partisanship”

Well, this brings us back to truth discernment. Belief in misinformation is commonly associated with overconfidence, lack of reflection, zealotry, delusionality, or overclaiming, where an individual acts on completely fabricated information as a self-proclaimed expert. All of these factors indicate a lack of analytical thinking. On the opposite side of the spectrum, people determine the veracity of information through cognitive reflection and by tapping into relevant existing knowledge. This can be general political knowledge, a basic understanding of established scientific theories, or simple online media literacy.

“Thus, when it comes to the role of reasoning, it seems that people fail to discern truth from falsehood because they do not stop to reflect sufficiently on their prior knowledge (or have insufficient or inaccurate prior knowledge) – and not because their reasoning abilities are hijacked by political motivations.” 

The researchers found that the truth has little impact on sharing intentions. They describe three types of information-sharing on social media:

  • Confusion-based sharing: a genuine, albeit mistaken, belief in the veracity of the information shared
  • Preference-based sharing: political ideology, or related motives such as virtue signaling, placed above the truth of the information shared, accepting misinformation as collateral damage
  • Inattention-based sharing: people intend to share only accurate information but are distracted by the social media environment

Steps To Own What You Know

If prior knowledge is a critical factor in identifying misinformation, then familiarity with accurate information goes a long way. It helps you determine whether the information presented is the information you already know or a slightly manipulated version of it. Be familiar with social media products: what does virality look like on platform XYZ? Is the uploader a verified actor? What is the source of the news? In general, sources are a critical signal for determining veracity; the more credible and established a source, the likelier the information is well-researched and accurate. Finally, red flags for misinformation are emotional headlines, provocative captions, and shocking images.

Challenges To Identify Misinformation

Truth is not a binary metric. To determine the veracity of news, a piece of information, which may be fabricated in part or laced with inaccuracies, has to be compared against established, known information. The accuracy and precision, and thus the overall quality, of a machine learning classifier for misinformation therefore hinge on the quality of the training data and on how well that data covers the platform where the classifier will be deployed (a toy example follows after this paragraph). Another challenge is the ever-changing landscape of misinformation. Misinformation evolves rapidly, coalescing into conspiracy theories that may be (mistakenly) supported by established influencers and institutions. This makes it harder to discern the elements of a news story and thus to determine its accuracy. Inoculation (deliberate exposure to misinformation to improve recognition abilities) is in part ineffective because people fail to stop, reflect, and consider the accuracy of the information at all. Successful interventions to minimize misinformation may therefore start with efforts to slow down interactions on social media, for instance by changing the user interface to introduce friction and prompts that induce active reflection. Lastly, human fact-checking does not scale, for many reasons: time, accuracy, integrity. Leveraging a community-based (crowdsourced) fact-checking model might be an alternative until a more automated solution is ready; Twitter has recently begun experimenting with such a product, called Birdwatch.
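
To make the training-data dependence concrete, here is a toy baseline classifier evaluated on accuracy and precision. The headlines, labels, and model choice are illustrative assumptions on my part, not anything from the paper; a production system would need a large, curated corpus that matches the platform it serves.

```python
# A minimal sketch of a baseline misinformation classifier whose quality
# is bounded by its (here, tiny and hypothetical) labeled training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical labeled headlines; real systems need thousands of
# carefully curated, platform-specific examples.
headlines = [
    "Peer-reviewed study finds vaccine side effects are rare",
    "City council approves new budget after public hearing",
    "Central bank holds interest rates steady, minutes show",
    "Researchers publish replication of landmark physics result",
    "Miracle cure doctors don't want you to know about",
    "Secret memo proves the moon landing was staged",
    "This one trick erases all debt overnight, banks furious",
    "Celebrity reveals shocking flat-earth evidence",
]
labels = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = credible, 1 = misinformation

X_train, X_test, y_train, y_test = train_test_split(
    headlines, labels, test_size=0.25, random_state=42, stratify=labels
)

# TF-IDF features + logistic regression: a common baseline, nothing more.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print("accuracy: ", accuracy_score(y_test, predictions))
print("precision:", precision_score(y_test, predictions, zero_division=0))
```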

This research paper didn’t unearth breakthrough findings or new material. Rather, it helped me learn more about the dynamics of human psychology when exposed to a set of information. Looking at the individual concepts people use to determine the accuracy of information, the underlying motives that drive our attention, and the dynamics of when we decide to share news made this paper a worthwhile read. Its concluding remarks on improving the technical environment, leveraging technology to facilitate a more reflective, conscious experience of news on social media, leave me optimistic for better products to come.

Left Of Launch

The Perfect Weapon is an intriguing account of history’s most cunning cyberwarfare operations. I learned about the incremental evolution of cyberspace as the fifth domain of war and how policymakers, military leaders and the private technology sector continue to adapt to this new threat landscape.  

Much has been written about influence operations and cybercriminals, but few accounts draw the link between national security, cyberspace, and foreign policy as clearly. The stories told in The Perfect Weapon touch upon the Russian interference in the 2016 presidential elections, the 2015 hack of the Ukrainian power grid, the 2014 Sony hack, the 2013 revelations by Edward Snowden, and many other notable breaches of cybersecurity. These stories are no longer news, but they help explain America’s 21st-century vulnerabilities.

Chapter 8, titled “The Fumble,” left a particular mark on me. In it, Sanger details the handling of Russian hackers infiltrating the computer and server networks of the Democratic National Committee. The sheer lethargy officials demonstrated over months on end, including Obama’s failure to openly address the ongoing Russian cyber influence operations ahead of the elections, was nothing particularly new, yet I still felt outraged by what now seems obvious. The chapter illustrates governance shortcomings that we as a society need to overcome in order to respond to cyberattacks and build better cyber defense mechanisms.

Left of Launch is a strategy that leverages cyberwarfare or other infrastructure sabotage to prevent ballistic missiles from being launched.

But the most valuable insights for me came from the book’s cross-cutting between the cybersecurity domain and the public policy domain. It showed me how much work is left to be done to educate our elected officials, our leaders, and ourselves about a growing threat landscape in cyberspace. While technology regulation is a partisan issue, only bipartisan solutions will yield impactful results.

David E. Sanger is a great journalist, bestselling author and an excellent writer. His storytelling is concise, easy to read and accessible for a wide audience. Throughout the book, I never felt that Sanger allowed himself to get caught up in the politics of it but rather maintained a refreshing neutrality. His outlook is simple: we need to redefine our sense of national security and come up with an international solution for cyberspace. We need to think broadly about the consequences of cyber-enabled espionage and cyberattacks against critical infrastructures. And we need to act now.