Our society is built around freedom of expression. How can regulators mitigate the negative consequences of misinformation without restricting speech? And should we strive for a legal solution rather than improving upon our social contract? In this paper, Cass Sunstein, constitutional law professor, bestselling author, and former administrator of the Office of Information and Regulatory Affairs, reviews the constitutional status of falsehoods under our current regime for regulating speech and offers his perspective on how to control libel and other false statements of fact.
tl;dr
What is the constitutional status of falsehoods? From the standpoint of the First Amendment, does truth or falsity matter? These questions have become especially pressing with the increasing power of social media, the frequent contestation of established facts, and the current focus on “fake news,” disseminated by both foreign and domestic agents in an effort to drive politics in the United States and elsewhere in particular directions. In 2012, the Supreme Court ruled for the first time that intentional falsehoods are protected by the First Amendment, at least when they do not cause serious harm. But in important ways, 2012 seems like a generation ago, and the Court has yet to give an adequate explanation for its conclusion. Such an explanation must begin with the risk of a “chilling effect,” by which an effort to punish or deter falsehoods might also in the process chill truth. But that is hardly the only reason to protect falsehoods, intentional or otherwise; there are several others. Even so, the various arguments suffer from abstraction and high-mindedness; they do not amount to decisive reasons to protect falsehoods. These propositions bear on old questions involving defamation and on new questions involving fake news, deepfakes, and doctored videos.
Make sure to read the full paper titled Falsehoods and the First Amendment by Cass R. Sunstein at https://jolt.law.harvard.edu/assets/articlePDFs/v33/33HarvJLTech387.pdf

As a democracy, why should we bother to protect misinformation? We already prohibit various kinds of falsehoods, including perjury, false advertising, and fraud. Why not extend these regulations to online debates, deepfakes, and the like? Sunstein offers a basic truth of democratic systems: freedom of expression is a core tenet of self-government, and it is enshrined in the First Amendment. People need to be free to say what they think, even if what they think is false. A society that punishes people for spreading falsehoods inevitably creates a chilling effect for those who (want to) speak the truth. The mere prospect of criminal prosecution for spreading misinformation should not be allowed to chill public discussion of the underlying issue. Of course, the dividing line is whether the misinformation presents a clear and present danger that creates real-world harm. The dilemma for regulators lies in the difficult task of identifying such a clear and present danger and the resulting harm. It is not a binary question of right versus wrong, but rather one right weighed against another.
Sunstein points out several factors that make it so difficult to strike an acceptable balance between restrictions and free speech. A prominent concern is collateral censorship, also known as official fallibility: a government that censors what it deems false may end up restricting truth as well. Government officials may act out of self-interest to preserve their status, which inevitably invites the risk of censoring critical voices. Even if the government correctly identifies and isolates misinformation, who bears the burden of proof? How thoroughly must it be demonstrated that a false statement of fact is indeed false and causally linked to real-world harm? As mentioned earlier, any ban on speech may impose a chilling effect on people who aim to speak the truth but fear government retaliation. In some cases, misinformation may even help magnify the truth: it offers a clear contrast against which people can make up their minds. Hearing falsehoods from others also reveals what other people think, how they process and label misinformation, and where they ultimately stand. The free flow of information is another core tenet of democratic systems; it is therefore preferable to have all information in the open so people can pick and choose what they believe. Lastly, a democracy may consider counterspeech the preferred method for dealing with misinformation. Studies have shown that media literacy, fact-checking labels, and accuracy cues help people better assess misinformation and its social value. Banning a falsehood, by contrast, would drive the false information and its creators underground. Isn't it better to find common ground than to silence people?
With all this in mind, the default should be to permit falsehoods; enforcement against them should be the exception, handled with nuance on a case-by-case basis. Sunstein shares his ideas for protecting people from falsehoods without producing excessive chilling effects through the threat of costly lawsuits. First, monetary damages should be capped, with damage schedules tailored to specific scenarios. Second, a general right to correct or retract misinformation should preempt proceedings seeking damages. And third, online environments may benefit from notice-and-takedown protocols similar to the existing copyright practice under the Digital Millennium Copyright Act (DMCA). Germany's Network Enforcement Act (NetzDG) is a prominent example of notice-and-takedown regulation aimed at harmful, but not necessarily false, speech.

I think a well-functioning society must work toward a social contract that fosters intrinsic motivation and curiosity to seek and speak the truth. Lies should not get a platform, but they cannot be outlawed either. If the legal system is asked to adjudicate misinformation, it should do so expeditiously and with few formal procedural hurdles. The burden of proof has to rest on the plaintiff, and the bar for false statements of fact must be calibrated to the reach of the defendant; influencers and public figures should have less latitude to spread misinformation because their reach is far more damaging than that of John Doe. Lastly, shifting regulatory enforcement onto carriers or social media platforms is tantamount to holding the construction worker who built a marketplace responsible for what is sold there: it fails to identify the bad actor, the person disseminating the misinformation. Perhaps misinformation can instead be addressed through crowdsourced moderation, accuracy cues at the point of submission, or supporting information on a given topic. Here are a few noteworthy decisions for further reading:
United States v. Alvarez, 567 U.S. 709 (2012) (Stolen Valor)
Hustler Magazine, Inc. v. Falwell, 485 U.S. 46 (1988) (Recovering Damages)
Gertz v. Robert Welch, Inc., 418 U.S. 323 (1974) (Libel of Private Individuals)
New York Times Co. v. Sullivan, 376 U.S. 254 (1964) (Libelous Ad)
Dennis v. United States, 341 U.S. 494 (1951) (Convicting Communists)
Schenck v. United States, 249 U.S. 47 (1919) (Clear and Present Danger)