Zuckerberg’s Ugly Truth Isn’t So Ugly

A review of the 2021 book “An Ugly Truth: Inside Facebook’s Battle for Domination” by Sheera Frenkel and Cecilia Kang. The truth is far more complex.

Writing this review didn’t come easily. I spent five years helping to mitigate and solve Facebook’s thorniest problems. When the book was published, I perceived it as an attack on Facebook orchestrated by the New York Times, a stock-listed company and direct competitor in the attention and advertising market. Today, I know that my perception then was compromised by Meta’s relentless internal corporate propaganda.

Similar to Chaos Monkeys, An Ugly Truth tells a story limited to the information available at the time. The book claims unprecedented access to internal executive leadership reporting directly to Mark Zuckerberg and Sheryl Sandberg. It focuses on the period roughly between 2015 and 2020, arguably Facebook’s most challenging years. Despite a constant flow of news reporting about Facebook’s shortcomings, the book for the most part stays focused on the executive leadership decisions that got the company into hot water in the first place. Across 14 well-structured, polished chapters, the authors build a case of desperation: in an increasingly competitive market environment, Facebook needs to innovate and grow its user numbers to beat earnings expectations and satisfy shareholders. Yet the pursuit of significance eroded the better judgment of Facebook’s executive leadership team and eventually drowned out the rational voices, the protective and concerned opinions of genuine leaders, in favor of the self-serving voices of staff interested only in advancing at any cost.

To illustrate this point, the authors tell the story of former Chief Security Officer Alex Stamos, who persistently called out data privacy and security shortcomings:

Worst of all, Stamos told them (Zuckerberg and Sandberg), was that despite firing dozens of employees over the last eighteen months for abusing their access, Facebook was doing nothing to solve or prevent what was clearly a systemic problem. In a chart, Stamos highlighted how nearly every month, engineers had exploited the tools designed to give them easy access to data for building new products to violate the privacy of Facebook users and infiltrate their lives. If the public knew about these transgressions, they would be outraged […]

His calls, however, often went unanswered or, worse, prompted other executives threatened by Stamos’s findings to take hostile measures.

By December, Stamos, losing patience, drafted a memo suggesting that Facebook reorganize its security team so that instead of sitting on their own, members were embedded across the various parts of the company. […] Facebook had decided to take his advice, but rather than organizing the new security team under Stamos, Facebook’s longtime vice president of engineering, Pedro Canahuati, was assuming control of all security functions. […] The decision felt spiteful to Stamos: he advised Zuckerberg to cut engineers off from access to user data. No team had been more affected by the decision than Canahuati’s, and as a result, the vice president of engineering told colleagues that he harbored a grudge against Stamos. Now he would be taking control of an expanded department at Stamos’s expense.

Many more of those stories will never be told. Engineers and other employees, much smaller fish than Stamos, who raised ethical concerns about security and integrity were routinely silenced, ignored, and “managed out” – Facebook’s preferred method of dealing with staff who refused to drink the Kool-Aid and toe the line. Throughout the book, the authors maintain a neutral voice, yet it becomes very clear how difficult the decisions were for executive leadership. It seems as though leading Facebook is the real-world equivalent of the Kobayashi Maru, an everyday no-win scenario. Certainly, I can sympathize with the pressure Mark, Sheryl, and others must have felt during those times.

Take the case of Donald John Trump, the 45th President of the United States. His Facebook Page has a reach of 34 million followers (at the time of this writing). On January 6, 2021, his account actively incited his millions of followers to blame Vice President Mike Pence for his lost bid for reelection. History went on to witness the attack on the United States Capitol. Democracy and our liberties were under attack that day. And how did Mark Zuckerberg and Sheryl Sandberg respond on behalf of Facebook? First, silence. Second, indecision. Should Trump remain on the platform? Are we going to suspend his account temporarily? Indefinitely? Eventually, Facebook’s leadership punted the decision to the puppet regime of the Oversight Board, which returned the decision power, citing a lack of existing policies that would govern such a situation. While everyone else avoided the headlights, Facebook’s executive leadership froze like a deer caught in them. Yes, Zuckerberg’s philosophy on speech has evolved over time. Trump challenged this evolution.

Throughout Facebook’s seventeen-year history, the social network’s massive gains have repeatedly come at the expense of consumer privacy and safety and the integrity of democratic systems. […] And the platform is built upon a fundamental, possibly irreconcilable dichotomy: its purported mission is to advance society by connecting people while also profiting off them. It is Facebook’s dilemma and its ugly truth.

The book contains many more interesting stories. There was a wealth of internal leaks desperately trying to steer Facebook’s leadership back onto its original course. There were the infamous Brett Kavanaugh hearings, which highlighted the political affiliations and ideology of Facebook executive Joel Kaplan, who publicly stood by Kavanaugh through Christine Blasey Ford’s sexual assault allegations despite the outrage of Facebook’s female employees. Myanmar saw horrific human rights abuses enabled by and perpetrated through the platform. Nancy Pelosi, Speaker of the U.S. House of Representatives and a Bay Area representative since 1987, was humiliated when Facebook fumbled its response to a doctored video of one of her speeches, manipulated to make her sound slurred. And the list goes on and on.

The book is worth reading. The detail and minutiae deployed to report accurately and convincingly make for a rich, slow-burning read. That being said, Facebook has been dying since 2015. Users are leaving the platform and deleting their accounts. While Instagram and WhatsApp carry the company’s advertising revenue for the time being with stronger performances abroad, the five years of Facebook’s executive leadership covered in this book point toward an undeniable conclusion: it failed.

NPR’s Terry Gross interviewed the authors Sheera Frenkel and Cecilia Kang on Fresh Air. The interview further demonstrates the dichotomy of writing about the leadership of one of the most influential and controversial corporations in the world. You can listen to the full episode here.


Why Are We Polarized?

Are we bound to follow tribal instincts when logic should lead us across the political aisle?

When I hear that the American political system isn’t broken but working exactly as designed, I can’t help but wonder how this can be true in times of all-encompassing social media, rapidly shrinking attention spans, and increasingly unequal economic opportunity. Yet Ezra Klein’s book Why We’re Polarized claims exactly that, and more: this working political system is polarized by us even as we are polarized by it. As confusing as that sounds at first, Klein does a fantastic job of elaborating his argument across ten chapters and 268 pages of convincing research and easy-to-read prose.

Frankly, I found the general topic challenging to comprehend. Klein’s book is neither a clear-cut psychological review of polarization nor a deep dive into America’s governance and democratic institutions. It comes across as a hybrid of history lessons, democratic ideas, and political media management. In light of such a muddle I tend to gravitate to first principles: what is polarization?

According to Klein, “the logic of polarization is to appeal to a more polarized public, (so) political institutions and actors behave in more polarized ways. As political institutions and actors become more polarized, they further polarize the public.”

Explaining polarization with polarization isn’t helpful. After searching for adequate definitions, I found myself torn between constitutional polarization, political polarization, and the iterative sense of polarization. Interpreting Klein’s logic, polarization may be a deviation from core political beliefs toward ideological extremes in an effort to reach a new audience. That in turn perpetuates more extreme behavior by political actors and institutions. As Klein argues:

“This sets off a feedback cycle: to appeal to a yet more polarized public, institutions must polarize further; when faced with yet more polarized institutions, the public polarizes further, and on and on.”

It’s not an ideal beginning to a complex story, but the book makes the most of it. Across the first few chapters, Klein dives into the history of the American political system, mainly how Democrats turned liberal and Republicans became conservative. When it comes to group identity, the book digs deeper into the psychology of us voters.

“We became more consistent in the party we vote for not because we came to like our party more– indeed, we’ve come to like the parties we vote for less–but because we came to dislike the opposing party more.”

To put it simply, Klein argues we have a stronger loyalty to our group than to our own ideology, compounded in some cases by a strong repulsion toward the other group’s belief system. Klein continues:

“The human mind is exquisitely tuned to group affiliation and group difference. It takes almost nothing for us to form a group identity, and once that happens, we naturally assume ourselves in competition with other groups. The deeper our commitment to our group becomes, the more determined we become to ensure our group wins.”

There is plenty of well-established scientific research to support this notion. While the psychology of the crowd is one factor in this complex analysis, Klein makes clear that our identity, more than our prior system of beliefs, where we live, or whom we associate with, dictates our sense of loyalty. And no other entity threatens our identity as much as the media. American media, the press, and political journalism are by nature mouthpieces of certain political powers – and always have been. Following the hotly contested presidential election of 2000, the election of America’s first African-American President in 2008, and the steadily widening economic gap between those who repair, clean, transport, deliver, and educate our communities and those who (merely) push paper, our American identity has never been called into question as much as it is today, especially in the policy proposals of aspiring presidential candidates. Klein does not shy away from criticizing the media’s contribution to the skewed, partisan landscape:

“If we (the media) decide to give more coverage to Hillary Clinton’s emails than to her policy proposals–which is what we did–then we make her emails more important to the public’s understanding of her character and the potential presidency than her policy proposals. In doing so, we shape not just the news but the election, and thus the country.”

Overall, though, Klein’s book feels like a warm conversation with someone who is genuinely interested in understanding how we got where we are. He offers a clear diagnosis of the current State of the Union without swaying too far into either political camp, but falls short in offering a pathway forward or even mere suggestions on how to bridge the gap between opposing (political) viewpoints, and therefore groups. Ezra Klein’s advice is “to pay attention to identity. What identity is that news article invoking? What identity is making you defensive? What does it feel like when you get pushed back into an identity? Can you notice when it happens?”

It is an engaging book that provides insight into the political discourse of America beyond New York or California. While it is well written and researched, it feels more like a conversation, a starting point, than a solution or a way forward.

How To Build Community To Influence Elections

Read this book to improve your civic engagement and create a more meaningful neighborhood.

Eitan Hersh is an associate professor of political science at Tufts University. In his latest book “Politics Is For Power: How to Move Beyond Political Hobbyism, Take Action, and Make Real Change” he decries the virtue signaling that is political hobbyism on social media and makes a case for grassroots politics.

Political hobbyism can be described as short-lived current-affairs commentary on social media that results in no real-world change. It delivers a feeling of participation. We have all done it to some extent. Yet Hersh finds that the political left, especially, fails to recognize that real political change is driven by a few select local leaders who listen to the needs of a community. Consistent in-person community outreach builds a stronger community that is aligned rather than divided on overarching public policy programs.

“Political hobbyism is to public affairs what watching SportsCenter is to playing football.”

Source: College-Educated Voters Are Ruining American Politics by Eitan Hersh

Among the many well-told stories in this book, Hersh offers a prominent example in longtime Starbucks CEO Howard Schultz. For a brief moment ahead of the 2020 U.S. presidential election, Schultz entertained an independent bid for the highest office in the country. Remember, Starbucks was built over decades of carefully choosing product ingredients, curating the ambiance of its stores, and hiring local leaders to represent its brand. It was a slow expansion from Seattle to the greater Pacific Northwest and, year by year, to more states across the United States. But when it came to his own campaign bid, Schultz seemingly forgot his patient business acumen and instead threw endless money at cable news and talk shows to make his case in less than eighteen months. Obviously, from the outside and in hindsight, this approach reeks of failure when it took years to build the Starbucks brand nationwide. Why would he seriously believe he could reach the same market plurality in the political domain in just eighteen months? Because politics was only a hobby to Howard Schultz.

“Politics Is For Power” is appropriate for community leaders, new and seasoned neighbors, social justice warriors and keyboard cowboys, and anybody really interested in improving civic engagement in their community. Personally, I loved the idea of spending political donations not on political ads but on support for local community organizers who engage in face-to-face conversations with the local community and actually listen. Crafting impactful social and economic policies is an arduous process that can only succeed if all voices of society have been heard. Furthermore, Hersh creates captivating storylines condensed and spread across each chapter, which really brings home his point that taking action requires getting out the door, talking to your neighbors, and listening.

Lastly, if you’re still reading, I feel it’s necessary to call out editorial ingenuity where it is due: this book has 217 pages, 22 chapters, and 5 parts. Each page is formatted for the reader’s pleasure. Chapters are comprehensive yet no longer than a commute to work. And the parts provide a structure around the argument that highlights the thoughtful content of the book. Kudos, Simon & Schuster!

On Tyranny

A pocket guide to civil disobedience to save democracy.

Democracy requires action. Timothy Snyder’s “On Tyranny: Twenty Lessons from the Twentieth Century” inspires action. In this short pocket guide, Snyder offers civic lessons ranging from taking responsibility for the face of the world to political awareness all the way to what it really means to be a patriot. His theme is ‘those who do not learn from history are doomed to repeat it’. It struck me as an ideal guide to hand out at demonstrations or town hall meetings. His ideas for civic measures are worth recounting, for they aim to protect the integrity of democracy. That being said, most of his lessons should be working knowledge for every citizen.

Falsehoods And The First Amendment

Our society is built around freedom of expression. How can regulators mitigate the negative consequences of misinformation without restricting speech? And should we strive for a legal solution rather than improving upon our social contract? In this paper, constitutional law professor, bestselling author, and former administrator of the Office of Information and Regulatory Affairs Cass Sunstein reviews the status of falsehoods under our current constitutional regime for regulating speech and offers his perspective on controlling libel and other false statements of fact.

tl;dr

What is the constitutional status of falsehoods? From the standpoint of the First Amendment, does truth or falsity matter? These questions have become especially pressing with the increasing power of social media, the frequent contestation of established facts, and the current focus on “fake news,” disseminated by both foreign and domestic agents in an effort to drive politics in the United States and elsewhere in particular directions. In 2012, the Supreme Court ruled for the first time that intentional falsehoods are protected by the First Amendment, at least when they do not cause serious harm. But in important ways, 2012 seems like a generation ago, and the Court has yet to give an adequate explanation for its conclusion. Such an explanation must begin with the risk of a “chilling effect,” by which an effort to punish or deter falsehoods might also in the process chill truth. But that is hardly the only reason to protect falsehoods, intentional or otherwise; there are several others. Even so, the various arguments suffer from abstraction and high-mindedness; they do not amount to decisive reasons to protect falsehoods. These propositions bear on old questions involving defamation and on new questions involving fake news, deepfakes, and doctored videos.

Make sure to read the full paper titled Falsehoods and the First Amendment by Cass R. Sunstein at https://jolt.law.harvard.edu/assets/articlePDFs/v33/33HarvJLTech387.pdf 

(Source: Los Angeles Times)

As a democracy, why should we bother to protect misinformation? We already prohibit various kinds of falsehoods, including perjury, false advertising, and fraud. Why not extend these regulations to online debates, deepfakes, and the like? Sunstein offers a basic truth of democratic systems: freedom of expression is a core tenet of self-government, enshrined in the First Amendment. People need to be free to say what they think, even if what they think is false. A society that punishes people for spreading falsehoods inevitably creates a chilling effect for those who (want to) speak the truth, so the mere possibility of criminal prosecution for spreading misinformation must not be allowed to chill public discussion of that misinformation. The dividing line, of course, is a clear and present danger manifested in misinformation that creates real-world harm. The dilemma for regulators lies in the difficult task of identifying that clear and present danger and the real-world harm. It’s not a binary question of right versus wrong but a dilemma of one right against another.

Sunstein points out a few factors that make it so difficult to strike an acceptable balance between restrictions and free speech. A prominent concern is collateral censorship, also known as official fallibility: a government censoring what it deems false may end up restricting truth as well. Government officials may act in self-interest to preserve their status, which inevitably invites the risk of censorship of critical voices. Even if the government correctly identifies and isolates misinformation, who has the burden of proof? How thoroughly must it be demonstrated that a false statement of fact is in fact false and causally linked to a clear danger of real-world harm? As mentioned earlier, any ban on speech may impose a chilling effect on people who aim to speak the truth but fear government retaliation. In some cases, misinformation may even help magnify the truth: it offers a clear contrast that allows people to make up their minds. Learning falsehoods from others also increases the chances of learning what other people think, how they process and label misinformation, and where they ultimately stand. The free flow of information is another core tenet of democratic systems; it is therefore preferable to have all information in the open so people can pick and choose what they believe. Lastly, a democracy may consider counterspeech the preferred method of dealing with misinformation. Studies have shown that media literacy, fact-checking labels, and accuracy cues help people better assess misinformation and its social value. Banning a falsehood, however, would drive the false information and its creators underground. Isn’t it better to find common ground than to silence people?

With all this in mind, falsehoods should generally be permitted; enforcement against them should be the exception, nuanced on a case-by-case basis. Sunstein shares his ideas for protecting people from falsehoods without producing excessive chilling effects from the threat of costly lawsuits. First, there should be limits on monetary damages, with damage schedules restricted to specific scenarios. Second, a general right to correct or retract misinformation should pre-empt proceedings seeking damages. And third, online environments may benefit from notice-and-takedown protocols similar to existing copyright practice under the Digital Millennium Copyright Act (DMCA). Germany’s Network Enforcement Act (NetzDG) is a prominent example of notice-and-takedown regulation aimed at harmful, but not necessarily false, speech. I think a well-functioning society must work towards a social contract that fosters intrinsic motivation and curiosity to seek and speak the truth. Lies should not get a platform, but they cannot be outlawed either. If the legal domain is used to adjudicate misinformation, it should be done expeditiously and with few formal, procedural hurdles. The burden of proof has to be on the plaintiff, and the bar for false statements of fact must be calibrated against the reach of the defendant, i.e. influencers and public figures should have less freedom to spread misinformation because their reach is far more damaging than that of a John Doe. Lastly, shifting regulatory enforcement onto carriers or social media platforms is tantamount to holding the construction worker who built a marketplace responsible for what is sold there; it fails to target the bad actor, the person disseminating the misinformation. Perhaps enforcement against misinformation can be done through crowdsourced communities, accuracy cues at the point of submission, or supporting information on a given topic. Here are a few noteworthy decisions for further reading:

Nations Fail, But Why?

Is it because the culture of some nations is inferior to that of others? Is it because the natural resources of some nations are less fertile and valuable? Or is it because some nations sit in more advantageous geographical locations? Daron Acemoglu and James A. Robinson argue that the wealth of some nations can be traced back to their institutions – inclusive institutions, to be precise, that enable citizens to partake in the political process and the economic agenda. It’s an argument for a decentralized, democratic control structure with checks and balances to hold elected officials accountable and ensure shared economic benefits. Thus they conclude that nations fail when a ruling elite creates extractive institutions designed to enrich only themselves on the backs of the masses. More democracy, according to the authors, is the answer to our looming political and economic problems. Therefore political leaders must focus on the disenfranchised, the forgotten, those who have been left behind. It’s a conclusion that is hard to argue with.

Altogether, though, this book is disappointing. Among the various economic theories that try to explain the wealth of nations, the authors fail to offer quantifiable definitions for their premise. By failing to define inclusion and extraction, they never show the reader the required elements, political structures, and minimum economic metrics that could be measured or produce reliable data. Instead the authors appear to cherry-pick historical examples to demonstrate the perils of extraction and highlight the benefits of inclusive institutions. Throughout the book this reaches the absurd level of comparing contemporary nations with ancient ones without regard to then-current affairs, social cohesion, trade, or world events. The result is a confusing storyline jumping through unrelated examples from Venice to China to Zimbabwe to Argentina to the United States. I found the repetition of the inclusion-and-extraction argument quite draining, for it seems to appear on every page.

Why Nations Fail is an excellent history book full of examples of the success or failure of governance. The stories alone are well researched, detailed, and certainly a pleasure to read. However, the authors’ explanation for the economic failure of nations is vague and conjectural at best. They fail to explain the origins of power with quantifiable evidence, or how prosperous (or poor) nations manipulate power. Altogether this book would have been awesome if it were reduced to a few hundred pages and made less repetitive.

The Future Of Political Elections On Social Media

Should private companies decide which politicians people will hear about? How can tech policy make our democracy stronger? What is the role of social media and journalism in an increasingly polarized society? Katie Harbath, a former director for global elections at Facebook, discusses these questions in a lecture about politics, policy, and democracy. Her unparalleled experience as a political operative, combined with a decade of working on political elections across the globe, makes her a leading intellectual voice shaping the future of civic engagement online. In her lecture honoring the legacy of former Wisconsin State senator Paul Offner, she shares historical context on the evolution of technology and presidential election campaigns. She also talks about the impact of the 2016 election and the post-truth reality online that came with the election of Donald Trump. In her concluding remarks she offers some ideas for future regulation of technology to strengthen civic integrity as well as our democracy, and she answers questions during a Q&A.

tl;dr

As social media companies face growing scrutiny among lawmakers and the general public, the La Follette School of Public Affairs at the University of Wisconsin–Madison welcomed Katie Harbath, who spent the past 10 years as a global public policy director at Facebook, for a livestreamed public presentation. Harbath’s presentation focused on her experiences and thoughts on the future of social media, especially how tech companies are addressing civic integrity issues such as free and hate speech, misinformation, and political advertising.

Make sure to watch the full lecture titled Politics and Policy: Democracy in the Digital Age at https://lafollette.wisc.edu/outreach-public-service/events/politics-and-policy-democracy-in-the-digital-age (or below)

Timestamps

03:04 – Opening remarks by Susan Webb Yackee
05:19 – Introduction of the speaker by Amber Joshway
06:59 – Opening remarks by Katie Harbath
08:24 – Historical context of tech policy
14:39 – The promise of technology and the 2016 Facebook Election
17:31 – 2016 Philippine presidential election
18:55 – Post-truth politics and the era of Donald J. Trump
20:04 – Social media for social good
20:27 – 2020 US presidential elections 
22:52 – The Capitol attacks, deplatforming and irreversible change
23:49 – Legal aspects of tech policy
24:37 – Refresh Section 230 CDA and political advertising
26:03 – Code aspects of tech policy
28:00 – Developing new social norms
30:41 – More diversity, more inclusion, more openness to change
33:24 – Tech policy has no finishing line
34:48 – Technology as a force for social good and closing remarks

Q&A

(Click on the question to watch the answer)

1. In a digitally democratized world how can consumers exercise their influence over companies to ensure that online platforms are free of bias?

2. What should we expect from the congressional hearing on disinformation?

3. Is Facebook a platform or a publisher?

4. Is social media going to help us to break the power of money in politics?

5. How have political campaigns changed over time?

6. What is the relationship between social media and the ethics of journalism?

7. Will the Oversight Board truly impact Facebook’s content policy?

8. How is Facebook handling COVID-19 related misinformation?

9. What is Facebook’s approach to moderating content vs encryption/data privacy?

10. Does social media contribute to social fragmentation (polarization)? If so, how can social media be a solution for reducing polarization?

11. What type of regulation should we advocate for as digitally evolving voters?

12. What are Katie’s best and worst career memories? What’s next for Katie post-Facebook?

Last but not least: Katie mentioned a number of books (and a blog) as recommended reads, which I list below:

Demystifying Foreign Election Interference

The Office of the Director of National Intelligence (ODNI) released a declassified report detailing efforts by foreign actors to influence and interfere in the 2020 U.S. presidential election. The key finding of the report: Russia sought to undermine confidence in our democratic processes in support of then-President Donald J. Trump. Iran launched similar efforts, but to diminish Trump’s chances of getting reelected. And China stayed out of it altogether.

(Source: ODNI)

Make sure to read the full declassified report titled Intelligence Community Assessment of Foreign Threats to the 2020 U.S. Federal Elections released by the Office of the Director of National Intelligence at https://www.odni.gov/index.php/newsroom/reports-publications/reports-publications-2021/item/2192-intelligence-community-assessment-on-foreign-threats-to-the-2020-u-s-federal-elections

Background

On September 12, 2018, then-President Donald J. Trump issued Executive Order 13848 to address foreign interference in U.S. elections. In essence, it authorizes an interagency review to determine whether interference has occurred. In the event of foreign interference in a U.S. election, the directive orders the creation of an impact report that can trigger sanctions against (1) foreign individuals and (2) nation-states. A comprehensive breakdown of the directive, including the process of imposing sanctions, can be found here. I will only focus on the findings of the interagency review laid out in the Intelligence Community Assessment (ICA) pursuant to EO 13848 (1)(a). The ICA is limited to intelligence reporting and other information available as of December 31, 2020.

Findings

Before his own election in 2016, during his presidency, and beyond the 2020 presidential election, the former President fed American voters unsubstantiated claims of foreign election interference that would supposedly disadvantage his reelection chances. In Trump’s mind, China sought to undermine his chances of being reelected, while he downplayed the roles of Russia and Iran. The recently released ICA directly contradicts Trump’s claims. Here’s the summary per country:

Russia

  • Russia conducted influence operations targeting the integrity of the 2020 presidential elections authorized by Vladimir Putin
  • Russia supported then incumbent Donald J. Trump and aimed to undermine confidence in then candidate Joseph R. Biden
  • Russia attempted to exploit socio-political divisions by spreading polarizing narratives, without leveraging persistent cyber efforts against critical election infrastructure

The ICA finds a recurring theme of Russian intelligence officials pushing misinformation about President Biden through U.S. media organizations, officials, and prominent individuals. Such influence operations follow the basic structure of money laundering: (1) creating and disseminating a false or misleading narrative, (2) concealing its source by layering it through multiple media outlets involving independent (unaware) actors, and (3) integrating the damning narrative into the nation-state’s official communication after the fact. A prominent example was the false claim of corrupt ties between President Biden and Ukraine, which began spreading as early as 2014.

Russian attempts to sow discord among the American people relied on narratives that amplified misinformation about the election process and its systems, e.g. undermining the integrity of mail-in ballots or highlighting technical failures and isolated cases of misconduct. More broadly, topics such as pandemic-related lockdown measures, racial injustice, and alleged censorship of conservatives were exploited to polarize the affected groups. While these efforts required Russia’s offensive cyber units to take action, the evidence for a persistent cyber influence operation was not conclusive. The ICA categorized Russian actions as general intelligence gathering to inform Russian foreign policy rather than a targeted campaign against critical election infrastructure.

Iran

  • Iran conducted influence operations targeting the integrity of the 2020 presidential elections likely authorized by Supreme Leader Ali Khamenei
  • Unlike Russia, Iran did not support either candidate but aimed to undermine confidence in then incumbent Donald J. Trump
  • Iran did not interfere in the 2020 presidential elections as defined as activities targeting technical aspects of the election

The ICA finds Iran leveraged influence tactics similar to Russia’s, targeting the integrity of the election process, presumably in an effort to steer the public’s attention away from Iran and toward domestic issues such as pandemic-related lockdown measures, racial injustice, and alleged censorship of conservatives. However, Iran relied more heavily on cyber-enabled offensive operations. These included aggressive spoofed emails disguised as messages from the Proud Boys group, sent to intimidate liberal and left-leaning voters. Spear-phishing emails sent to former and current officials aimed to obtain sensitive information and access to critical infrastructure. A high volume of inauthentic social media accounts was used to create divisive political narratives. Some of these accounts dated back to 2012.

China

  • China did not conduct influence operations or efforts to interfere in the 2020 presidential elections

The ICA finds China did not actively interfere in the 2020 presidential elections. While the rationale in the assessment is largely based on political reasoning and foreign policy objectives, the report provides no data points for me to evaluate. Nor does it offer insights into the role of the Chinese technology platforms repeatedly targeted by the former President. A minority view held by the National Intelligence Officer for Cyber maintains that China did deploy some offensive cyber operations to counter anti-China policies. Former Director of National Intelligence John Ratcliffe backs this minority view in a scathing memorandum concluding that the ICA fell short in its analysis with regard to China.

Recommendations

The ICA offers several insights into a long, strenuous election cycle. Its sober findings help to reformulate U.S. foreign policy and redefine domestic policy objectives. While the report cannot detail all available intelligence and other information, it offers some guidance for shaping future policies. For example:

  1. Cybersecurity – increased efforts to update critical election infrastructure have probably played a key role in the decrease in offensive cyber operations. Government and private actors must continue to focus on cybersecurity, practice cyber hygiene, and conduct digital audits to improve cyber education
  2. Media Literacy – increased efforts to educate the public about political processes. This includes private actors educating their users about potential abuse on their platforms. Continuing programs to depolarize ideologically charged groups through empathy and regulation is a cornerstone of a more perfect union

Additional and more detailed recommendations to improve the resilience of American elections and democratic processes can be found in the Joint Report of the Department of Justice and the Department of Homeland Security on Foreign Interference Targeting Election Infrastructure or Political Organization, Campaign, or Candidate Infrastructure Related to the 2020 U.S. Federal Elections

Threat Assessment: Chinese Technology Platforms

The American University Washington College of Law and the Hoover Institution at Stanford University created a working group to understand and assess the risks posed by Chinese technology companies in the United States. They propose a framework to better assess and evaluate these risks by focusing on the interconnectivity of threats posed by China to the US economy, national security and civil liberties.

tl;dr

The Trump administration took various steps to effectively ban TikTok, WeChat, and other Chinese-owned apps from operating in the United States, at least in their current forms. The primary justification for doing so was national security. Yet the presence of these apps and related internet platforms presents a range of risks not traditionally associated with national security, including data privacy, freedom of speech, and economic competitiveness, and potential responses raise multiple considerations. This report offers a framework for both assessing and responding to the challenges of Chinese-owned platforms operating in the United States.

Make sure to read the full report titled Chinese Technology Platforms Operating In The United States by Gary P. Corn, Jennifer Daskal, Jack Goldsmith, John C. Inglis, Paul Rosenzweig, Samm Sacks, Bruce Schneier, Alex Stamos, Vincent Stewart at https://www.hoover.org/research/chinese-technology-platforms-operating-united-states 

(Source: New America)

China has experienced consistent growth since opening its economy in the late 1970s. With its economy at roughly fourteen times its former size, China’s growth trajectory dwarfs that of the US economy, which roughly doubled, with the S&P 500 as its most rewarding driver at roughly a fivefold increase. Alongside economic power comes a thirst for global expansion far beyond the Asia-Pacific region. China’s foreign policy seeks to advance the Chinese one-party model of authoritarian capitalism, which could pose a threat to human rights, democracy, and the basic rule of law. US political leaders see these developments as a threat to their own foreign policy of primacy, but perhaps more importantly as a threat to the Western ideology deeply rooted in individual liberties. Needless to say, over the years every administration, regardless of political affiliation, has put the screws on China. A recent example is the presidential executive order addressing the threat posed by the social media video app TikTok. Given the authoritarian model of governance and the Chinese government’s sphere of control over Chinese companies, their expansion into the US market raises concerns about access to critical data and data protection, and about cyber-enabled attacks on critical US infrastructure, among a wide range of other threats to national security. For example:

Internet Governance: China is pursuing regulation to shift the internet from open to closed and from decentralized to centralized control. The US government has failed to adequately engage international stakeholders in order to maintain an open internet, and has instead authorized large data collection programs that emulate Chinese surveillance.

Privacy, Cybersecurity and National Security: The internet’s continued democratization encourages more social media and e-commerce platforms to integrate and connect features for users, enabling multi-surface products. Mass data collection, weak product cybersecurity, and the absence of broader data protection regulations can be exploited to collect data on domestic users, their behavior, and their travel patterns abroad. They can also be exploited to influence or control members of government agencies through targeted intelligence or espionage. The key consideration here is aggregated data, which even in the absence of identifiable actors can be used to create viable intelligence. China has ramped up its offensive cyber operations beyond cyber-enabled trade and IP theft and possesses the capabilities and cyber weaponry to destabilize national security in the United States.

Necessity And Proportionality 

When considering whether to mitigate the threat to national security by taking action against Chinese-owned or -controlled communications technology, including tech products manufactured in China, the working group suggests a case-by-case analysis. They attempt to address the challenge of accurately identifying the specific risk in an ever-changing digital environment with a framework of necessity and proportionality. Technology standards change at a breathtaking pace. Data processing reaches new levels of intimacy through the use of artificial intelligence and machine learning. Thoroughly assessing, vetting, and weighing the tolerance for specific risks is at the core of this framework, in order to calibrate the chosen response and avoid potential collateral consequences.

The working group’s framework of necessity and proportionality reminded me of a classic Lean Six Sigma structure with a strong focus on understanding the threat to national security. Naturally, as a first step they suggest accurately identifying the threat’s nature, credibility, imminence, and the chances of the threat becoming a reality. I found this first step incredibly important because a failure to identify a threat will likely lead to false attribution and undermine every subsequent step. In the context of technology companies, the obvious challenge is data collection, data integrity, and detection systems able to tell the difference; a Chinese actor may deploy a cyber ruse in concert with the Chinese government to obfuscate its intentions. Following the principle of proportionality, step two looks into the potential collateral consequences for the United States, its strategic partners, and most importantly its citizens. Policymakers must be aware of the unintended path a policy decision may take once a powerful adversary like China starts its propaganda machine. This step therefore requires policymakers to define thresholds for when the collateral costs of a mitigation measure outweigh the need to act. In particular, inalienable rights such as freedom of expression, freedom of the press, and freedom of assembly must be upheld at all times, as they are fundamental American values. To quote the immortal Molly Ivins: “Many a time freedom has been rolled back – and always for the same sorry reason: fear.” The third and final step concerns mitigation measures. In other words: what are we going to do about it? The working group landed on two critical factors: data and compliance. The former might be restricted, redirected, or recoded to adhere to national security standards. The latter might be enforced through audits that not only identify vulnerabilities but also instill built-in cybersecurity and foster an amicable working relationship.
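
Reading the report, I found it helps to see the three steps laid out as a checklist. The sketch below is my own Python illustration, not the working group’s formalism; all names, weights, and thresholds are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    NO_ACTION = "no action"
    COMPLIANCE_AUDIT = "mandate audits and built-in cybersecurity"
    DATA_CONTROLS = "restrict, relocate, or re-encrypt data flows"
    RESTRICT_OPERATIONS = "restrict or ban specific operations"

@dataclass
class ThreatAssessment:
    """Step 1 (necessity): what is the threat, and how real is it?"""
    nature: str         # e.g. "bulk collection of US user data"
    credibility: float  # 0..1, how well-evidenced the threat is
    imminence: float    # 0..1, how soon it could materialize
    likelihood: float   # 0..1, chance it becomes reality

@dataclass
class ProportionalityCheck:
    """Step 2: collateral consequences of acting."""
    harm_to_speech: float          # 0..1, burden on expression, press, assembly
    harm_to_partners: float        # 0..1, spillover onto allies and citizens
    rights_threshold: float = 0.5  # above this, the measure is presumptively off the table

def recommend(threat: ThreatAssessment, prop: ProportionalityCheck) -> list[Action]:
    """Step 3 (mitigation): the least intrusive measures that necessity still justifies."""
    necessity = threat.credibility * threat.imminence * threat.likelihood
    if necessity < 0.2 or prop.harm_to_speech > prop.rights_threshold:
        return [Action.NO_ACTION]  # weak evidence or a burden on fundamental rights stops the measure
    actions = [Action.COMPLIANCE_AUDIT]
    if necessity > 0.5:
        actions.append(Action.DATA_CONTROLS)
    if necessity > 0.8 and prop.harm_to_partners < prop.rights_threshold:
        actions.append(Action.RESTRICT_OPERATIONS)
    return actions

# Example: a credible but not imminent data-collection concern ends up with an audit only.
print(recommend(ThreatAssessment("bulk collection of US user data", 0.9, 0.4, 0.8),
                ProportionalityCheck(harm_to_speech=0.2, harm_to_partners=0.3)))
```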

The Biden administration faces a daunting challenge: reviewing and developing appropriate cyber policies that address the growing threat from Chinese technology companies in a coherent manner consistent with American values. Only a broad policy response that is tailored to specific threats and focused on stronger cybersecurity and stronger data protection will yield equitable results. International alliances, alongside increased collaboration to develop better privacy and cybersecurity measures, will lead to success. However, the US must focus on its own strengths first and leverage its massive private sector to identify specific product capabilities, and therefore threats and attack vectors, before taking short-sighted, irreversible actions.

An Economic Approach To Analyze Politics On YouTube

YouTube’s recommendation algorithm is said to be a gateway that introduces viewers to extremist content and a stepping stone towards online radicalization. However, two other factors are equally important when analyzing political ideologies on YouTube: the novel psychological effects of audio-visual content and the ability to monetize. This paper contributes to the field of political communication by offering an economic framework to explain behavioral patterns of right-wing radicalization. It attempts to answer how YouTube is used by right-wing creators and audiences and offers a way forward for future research.

tl;dr

YouTube is the most used social network in the United States and the only major platform that is more popular among right-leaning users. We propose the “Supply and Demand” framework for analyzing politics on YouTube, with an eye toward understanding dynamics among right-wing video producers and consumers. We discuss a number of novel technological affordances of YouTube as a platform and as a collection of videos, and how each might drive supply of or demand for extreme content. We then provide large-scale longitudinal descriptive information about the supply of and demand for conservative political content on YouTube. We demonstrate that viewership of far-right videos peaked in 2017.


Make sure to read the full paper titled Right-Wing YouTube: A Supply and Demand Perspective by Kevin Munger and Joseph Phillips at https://journals.sagepub.com/doi/full/10.1177/1940161220964767

YouTube is unique in combining Google’s powerful content discovery algorithms, i.e. recommendations designed to keep attention on the platform, with a type of content that is arguably the most immersive and versatile: video. The resulting product is highly effective at distributing a narrative, which has led journalists and academics to categorize YouTube as an important tool for online radicalization. In particular, right-wing commentators make use of YouTube to spread political ideologies ranging from conservative views to far-right extremism. However, the researchers argue firmly that the ability to create and manage committed audiences around a political ideology, audiences who mutually create and reinforce extreme views, is not only contagious enough to impact less committed audiences but pure fuel for online radicalization.

Radio replaced the written word. Television replaced the spoken word. And online audio-visual content will replace the necessity to observe and understand. YouTube offers an unlimited library across all genres, all topics, and all public figures, ranging from user-generated content to six-figure Hollywood productions. Its 24/7 availability and immersive setup, which incentivizes commenting and creating videos, allow YouTube to draw in audiences through much stronger psychological triggers than its mostly text-based competitors Facebook, Twitter, and Reddit. Moreover, YouTube transcends national borders. It enables political commentary from abroad, from American expats to foreigners to exiled politicians and expelled opposition figures. In particular, the controversial presidency of Donald Trump prompted political commentators in Europe and elsewhere to comment on (and influence) the political landscape, voters, and domestic policies of the United States. This is important to acknowledge because YouTube has more users in the United States than any other social network, including Facebook and Instagram.

Monetizing The Right

YouTube has proven valuable to “Alternative Influence Networks”: in essence, potent political commentators and small productions that collaborate in direct opposition to mass media, both with regard to reporting ethics and political ideology. Albeit relatively unknown to the general populace, they draw consistent, committed audiences and tend to base their content around conservative and right-wing political commentary. There is some evidence in psychological research that conservatives tend to respond more to emotional content than liberals.

As such, the supply side on YouTube is fueled by the easy and efficient means of creating political content. The production cost of a video is usually just the equipment. The time required to shoot a video on a social issue is roughly the length of the video itself; in comparison, drafting a text-based political commentary on the same issue can take several days. YouTube’s recommendation system, in conjunction with tailored targeting of certain audiences and social classes, enables right-wing commentators to reach like-minded individuals and build massive audiences. The monetization methods include the following (a rough sketch of how they might add up follows the list):

  • Ad revenue from display, overlay, and video ads (not including product placement or sponsored videos)
  • Channel memberships
  • Merchandise
  • Highlighted messages in Super Chat & Super Stickers
  • Partial revenue of YouTube Premium service
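
To make these streams concrete, here is a minimal sketch with entirely hypothetical monthly figures for a single channel; it simply adds up the sources listed above and shows each one’s share.

```python
# Entirely hypothetical monthly figures (USD) for a single channel,
# combining the on-platform revenue streams listed above.
monthly_revenue = {
    "ad_revenue": 1_200.00,         # display, overlay, and video ads
    "channel_memberships": 450.00,  # recurring member fees
    "merchandise": 300.00,          # merch shelf sales
    "super_chat_stickers": 150.00,  # highlighted live-chat messages
    "premium_share": 80.00,         # partial YouTube Premium revenue
}

total = sum(monthly_revenue.values())
print(f"Total on-platform revenue: ${total:,.2f}")
for stream, amount in sorted(monthly_revenue.items(), key=lambda kv: -kv[1]):
    print(f"  {stream:22s} {amount / total:6.1%}")
```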

While YouTube has expanded its policy enforcement against extremist content, conservative and right-wing creators have adapted to the shrinking set of monetization methods on YouTube by increasingly relying on crowdfunded donations, product placement, or the sale of products through affiliate marketing or their own distribution networks. Perhaps the most convincing reason for right-wing commentators to flock to YouTube, however, is the ability to build a large audience from scratch without the need for legitimacy or credentials.

The demand side on YouTube is more difficult to determine. Following active audience theory, users would have made a deliberate choice to click on right-wing content, to search for it, and to continue to engage with it over time. The researchers demonstrate that it isn’t that simple. Many social and economic factors drive middle-class Democrats to adopt more conservative and extreme views. For example, the economic decline of blue-collar employment and a broken educational system, in conjunction with increasing social isolation and a lack of future prospects, contribute to susceptibility to extremist content and, ultimately, radicalization. The researchers rightfully argue that it is difficult to determine the particular drivers that make an individual seek out and watch right-wing content on YouTube. Those who do watch or listen to a right-wing political commentator tend to seek affirmation and validation of their fringe ideologies.

“the novel and disturbing fact of people consuming white nationalist video media was not caused by the supply of this media radicalizing an otherwise moderate audience, but merely reflects the novel ease of producing all forms of video media, the presence of audience demand for white nationalist media, and the decreased search costs due to the efficiency and accuracy of the political ecosystem in matching supply and demand.”
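
Because the framework is economic, a toy model helps fix the intuition in the quote above. The sketch below is my own simplification, not the paper’s model: each viewer watches a video when its appeal to them exceeds the cost of finding it, so lowering search costs (better matching and recommendations) raises viewership even when the supply of videos and the underlying demand stay fixed.

```python
import random

random.seed(0)

def views(n_viewers: int, appeal_mean: float, search_cost: float) -> int:
    """Toy model: a viewer watches when their personal appeal draw exceeds the search cost."""
    watched = 0
    for _ in range(n_viewers):
        appeal = random.gauss(appeal_mean, 0.3)  # heterogeneous demand across viewers
        if appeal > search_cost:
            watched += 1
    return watched

# Same audience, same content appeal; only the cost of finding the content changes.
for cost in (0.9, 0.5, 0.1):
    print(f"search cost {cost:.1f}: {views(10_000, 0.4, cost)} views")
```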

While I believe this paper deserves much more attention and readers should discover its research questions by studying the paper itself, I find it helpful to list the authors’ research questions here, together with my takeaways, to make it easier for readers to prioritize this study:

Research Question 1: What technological affordances make YouTube distinct from other social media platforms, and distinctly popular among the online right? 

Answer 1: YouTube is a media company; media on YouTube is videos; YouTube is powered by recommendations.

Research Question 2: How have the supply of and demand for right-wing videos on YouTube changed over time?

Answer 2.1: YouTube viewership of the extreme right has been in decline since mid-2017, well before YouTube changed its algorithm to demote far-right content in January 2019.

Answer 2.2: The bulk of the growth in terms of both video production and viewership over the past two years has come from the entry of mainstream conservatives into the YouTube marketplace.

This paper offers insights into the supply side of right-wing content and gives a rationale for why people tend to watch it. It contributes to understanding how right-wing content spreads across YouTube. An active comment section indicates higher engagement rates, which are unique to right-wing audiences. These interactions facilitate a communal experience between creator and audience. Increased policy enforcement effectively disrupted this communal experience. Nevertheless, the researchers found evidence that those who return to create or watch right-wing content are likely to engage intensely with it as well. Future research may investigate the actual power of YouTube’s recommendation algorithm. While this paper focused on right-wing content, the opposing end of the political spectrum, including the extreme left, is increasingly utilizing YouTube to proliferate its political commentary. Personally, I am curious to better understand the influence of foreign audiences on domestic issues and how YouTube is diluting the local populace with foreign activist voices.