Zuckerberg’s Ugly Truth Isn’t So Ugly

A review of the 2021 book “An Ugly Truth: Inside Facebook’s Battle for Domination” by Sheera Frenkel and Cecilia Kang. The truth is far more complex.

Writing this review didn’t come easily. I spent five years helping to mitigate and solve some of Facebook’s thorniest problems. When the book was published, I perceived it as an attack on Facebook orchestrated by the New York Times, a publicly traded company and direct competitor in the attention and advertising market. Today, I know that my perception at the time was compromised by Meta’s relentless internal corporate propaganda.

Similar to Chaos Monkeys, An Ugly Truth tells a story limited to the information available at the time. The authors claim unprecedented access to executives reporting directly to Mark Zuckerberg and Sheryl Sandberg. The book focuses on the period roughly between 2015 and 2020, arguably Facebook’s most challenging era. Despite a constant flow of news reporting about Facebook’s shortcomings, the book remains, for the most part, focused on the executive leadership decisions that got the company into hot water in the first place. Across 14 well-structured and well-written chapters, the authors build a case of desperation: in an increasingly competitive market, Facebook needs to innovate and grow its user numbers to beat earnings expectations and satisfy shareholders. Yet the pursuit of significance clouded the better judgment of Facebook’s executive leadership team, and the rational, protective, and concerned voices of genuine leaders were eventually drowned out by the self-serving voices of staff interested only in advancing at any cost.

To illustrate this point, the authors tell the story of former Chief Security Officer Alex Stamos, who persistently called out data privacy and security shortcomings:

Worst of all, Stamos told them (Zuckerberg and Sandberg), was that despite firing dozens of employees over the last eighteen months for abusing their access, Facebook was doing nothing to solve or prevent what was clearly a systemic problem. In a chart, Stamos highlighted how nearly every month, engineers had exploited the tools designed to give them easy access to data for building new products to violate the privacy of Facebook users and infiltrate their lives. If the public knew about these transgressions, they would be outraged […]

His calls, however, often went unanswered or, worse, invited other executives threatened by Stamos’s findings to take hostile measures.

By December, Stamos, losing patience, drafted a memo suggesting that Facebook reorganize its security team so that instead of sitting on their own, members were embedded across the various parts of the company. […] Facebook had decided to take his advice, but rather than organizing the new security team under Stamos, Facebook’s longtime vice president of engineering, Pedro Canahuati, was assuming control of all security functions. […] The decision felt spiteful to Stamos: he advised Zuckerberg to cut engineers off from access to user data. No team had been more affected by the decision than Canahuati’s, and as a result, the vice president of engineering told colleagues that he harbored a grudge against Stamos. Now he would be taking control of an expanded department at Stamos’s expense.

Many more such stories will never be told. Engineers and other employees, much smaller fish than Stamos, who raised ethical concerns about security and integrity were routinely silenced, ignored, and “managed out” – Facebook’s preferred method of dealing with staff who refused to drink the Kool-Aid and toe the line. Throughout the book, the authors maintain a neutral voice, yet it becomes very clear how difficult the decisions were for executive leadership. It seemed as though leading Facebook were the real-world equivalent of the Kobayashi Maru – an everyday, no-win scenario. Certainly, I can sympathize with the pressure Mark, Sheryl, and others must have felt during those times.

Take the case of Donald John Trump, the 45th President of the United States. His Facebook Page has a reach of 34 million followers (at the time of this writing). On January 6, 2021, his account actively urged those millions of followers to view Vice President Mike Pence as the reason for his lost bid for reelection. History went on to witness the attack on the United States Capitol. Democracy and our liberties were under attack that day. And how did Mark Zuckerberg and Sheryl Sandberg respond on behalf of Facebook? First, silence. Second, indecision. Should Trump remain on the platform? Should his account be suspended temporarily? Indefinitely? Eventually, Facebook’s leadership punted the decision to the puppet regime of the Oversight Board, which returned the decision power, citing a lack of existing policies that would govern such a situation. While everyone else avoided the headlights, Facebook’s executive leadership froze like a deer in them. Yes, Zuckerberg’s philosophy on speech has evolved over time. Trump challenged this evolution.

Throughout Facebook’s seventeen-year history, the social network’s massive gains have repeatedly come at the expense of consumer privacy and safety and the integrity of democratic systems. […] And the platform is built upon a fundamental, possibly irreconcilable dichotomy: its purported mission is to advance society by connecting people while also profiting off them. It is Facebook’s dilemma and its ugly truth.

The book contains many more interesting stories. There was a wealth of internal leaks, desperate attempts to steer Facebook’s leadership back onto its original course. There were the infamous Brett Kavanaugh hearings, which highlighted the political affiliations and ideology of Facebook executive Joel Kaplan, who publicly stood by Kavanaugh through the sexual assault allegations brought by Christine Blasey Ford, despite the outrage of Facebook’s female employees. Myanmar saw horrific human rights abuses enabled by and perpetrated through the platform. Nancy Pelosi, the Speaker of the U.S. House of Representatives and a Bay Area representative since 1987, was humiliated when Facebook fumbled the removal of a manipulated video of one of her speeches, slowed down to make her sound slurred. And the list goes on and on.

The book is worth reading. The detail and minutiae the authors bring to their reporting are rich and slow-burning. That being said, Facebook has been dying since 2015. Users are leaving the platform and deleting Facebook. While Instagram and WhatsApp carry the company’s advertising revenue for the time being with stronger performances abroad, the five years of Facebook’s executive leadership covered in this book point toward an undeniable conclusion: it failed.

NPR’s Terry Gross interviewed the authors Sheera Frenkel and Cecilia Kang on Fresh Air. The interview further demonstrates the dichotomy of writing about the leadership of one of the most influential and controversial corporations in the world. You can listen to the full episode here.

Who Holds The Pen? 

Richard Stengel’s memoir illustrates the complexity of modern government.

Richard Stengel served as Under Secretary of State for Public Diplomacy and Public Affairs under the 68th Secretary of State, John Kerry. In his memoir “Information Wars – How We Lost The Global Battle Against Disinformation & What We Can Do About It” he recounts his time working for the Obama administration. Arguably, the Obama administration was a forward-leaning government calibrated to modern technology and with a pulse on current affairs. Stengel captures the struggles that even a modern government must overcome, from protocol and etiquette at meetings to the clearance procedures for social media use and other technology. Recounting his efforts to drive the democratic narrative online, combating bad actors in the process, Stengel observed:

“One of the things I’d noticed in government is that people who had never been in media, who had never written a story or produced one, […] who didn’t understand audiences or what they liked, seemed to think it was easy to create content. People had the illusion that because they consumed something, they understood how it worked.”

This fallacy applies to many more segments of society, not just government. It illustrates how the public misunderstands technology, forgetting that policy decisions and strategy at scale, affecting thousands if not millions of people, are incredibly tough to fine-tune and nuanced at every level. Stengel offers an example of counter-messaging the Islamist terrorist group Boko Haram on social media through the Center for Strategic Counterterrorism Communications (CSCC). Boko Haram had kidnapped some 276 girls from a secondary school in Nigeria. The idea was simple: show support for the kidnapped girls in an online campaign. Stengel approved the content for the campaign. Ten days later, he found out the content had been objected to by the Africa bureau. After the content was updated with the Africa bureau’s feedback, it was approved but stalled in the clearance process because the Bureau of Intelligence and Research objected to those changes. Ten days of silence on social media is tantamount to a lifetime of non-existence. Stengel went on to learn that things he expected to take hours would take days; things he expected to take days would take weeks; things he expected to take weeks would take months. Many more government departments default to “no” than to “yes.” It really made me think about new ways to improve government. But it is also an urgent reminder that government needs disruption.

Another interesting lesson from this book concerns the balance between diplomacy, career development, and leadership. His interactions with Secretary of State John Kerry testify to Stengel’s business acumen despite working for the government. About Kerry, Stengel notes:

“He’s permanently leaning forward. That was his attitude about the world as well. To plunge in, to move forward, to engage. There’s no knot he doesn’t think he can untie, no breach that he can’t heal. For him, the cost of doing nothing was always higher than that of trying something.”

It’s almost bittersweet to read these lines of optimism considering the slow pace at which the State Department moved during the heyday of ISIS, al-Qaeda, and Boko Haram, all the way up to the Russian influence operation to undermine the 2016 U.S. presidential election. Then again, Stengel captures the predicament of the government at the time when he writes:

“What few of us understood at that point was that our opponents– Russia as well as ISIS –wanted us to get into a back-and-forth with them. It validated what they were doing, brought us down to their level, and besides, we weren’t as good at it as they were. They won when they got us to respond in kind.”

Engagement and impressions are everything online. Capturing our attention is the success metric for effective influence operations. This can be an overt diplomatic endeavor, like the Iran nuclear deal, which sought to bring the United States and Iran a step closer together, or a clandestine operation, like ‘Glowing Symphony’, which sought to deplatform ISIS and eradicate its narrative online.

Information Wars deserved a more accurate title. Other than that, I found Stengel’s memoir quite illuminating when it comes to government processes and how the State Department aligns itself with the current administration. A journalist by trade and former managing editor of Time magazine, Stengel writes in a simple, narrative style. The density could have been better; it sometimes feels like a magazine. Across seven parts and numerous chapters, a lot of personal anecdotes and experiences dilute the lessons of this book. Without them, this 314-page memoir could have been both a concise nonfiction account of influence operations and a concise memoir of his life.

When Did Truth Die?

Michiko Kakutani offers an eloquent compilation that explains the decay of veracity in the United States. But perhaps more importantly, it skillfully weaves together almost a century of painful lessons from history, literature, and politics.

The Death of Truth was highly scrutinized by media publishers, book critics, and the wider literary community at the time of its publication. Google the reviews. As the title suggests, The Death of Truth – Notes on Falsehood in the Age of Trump by Michiko Kakutani argues that truth should be added to the list of casualties of the former Trump administration. Reading this book at the end of 2021, almost exactly one year after Joe Biden became the 46th President of the United States and almost three and a half years after its initial release, I can’t help but view it as a compilation of essays that are really bite-sized opinion pieces. This makes for an immersive, moving reading experience, but it also renders the message of The Death of Truth the very same polemic it seeks to quash. Admittedly, a provocative diagnosis of our current political landscape can hardly be written in the total absence of partisanship.

Kakutani brilliantly threads her analysis by starting with a historical review of culture wars and past regimes’ handling of truth. She gradually escalates her storyline into the twenty-first century, with humanity’s dependency on social media, the algorithmic subversion of political decision-making, and foreign actors exploiting the American focus on self-pursuit at the expense of civic responsibilities. In her epilogue, Kakutani warns of the continued erosion of democratic institutions. We, the people, must protect the institutions that hold up the roof of democracy. At the same time, there will be no easy remedies or shortcuts to fix our polarized cultural divisions. Times like these require deft civil disobedience by the many who publicly reject the cynicism and resignation pursued by the totalitarian few.

People who are likely to read this book are unlikely to learn something new, but I believe it’s still worth it for the extensive reading resources provided by Kakutani. Her remarkably colorful writing style and sobering outlook on the future state of veracity in the United States won’t disappoint either. NPR’s Michael Schaub nailed it when he wrote: “The Death of Truth is a slim volume that’s equally intriguing and frustrating, an uneven effort from a writer who is, nonetheless, always interesting to read.”

Twitter And Tear Gas

Zeynep Tufekci takes an insightful look at the intersection of protest movements and social media.

Ever since I read Gustave Le Bon’s “The Crowd”, I’ve been fascinated with crowd psychology and social networks. In “Twitter And Tear Gas – The Power And Fragility Of Networked Protests” Zeynep Tufekci connects the elements of protest movements with 21st-century technology. In her work, she describes movements as

“attempts to intervene in the public sphere through collective, coordinated action. A social movement is both a type of (counter) public itself and a claim made to a public that a wrong should be righted or a change should be made.”

In times of far-reaching social media platforms, restricted online forums, and end-to-end encrypted private group chats, the means to organize a protest movement have drastically changed. 

“Modern networked movements can scale up quickly and take care of all sorts of logistical tasks without building any substantial organizational capacity before the first march or protest. (…) The Gezi Park moment, going from almost zero to a massive movement within days clearly demonstrates the power of digital tools. However, with this speed comes weakness, some of it unexpected. First, the new movements find it difficult to make tactical shifts because they lack both the culture and the infrastructure for making collective decisions. Often unable to change course after the initial, speedy expansion phase, they exhibit a ‘tactical freeze’. Second, although their ability (as well as their desire) to operate without defined leadership protects them from co-optation or “decapitation,” it also makes them unable to negotiate with adversaries or even inside the movement itself. Third, the ease with which current social movements form often fails to signal an organizing capacity powerful enough to threaten those in authority.”

While these movements often catch the general public by surprise, it really does come down to timing and commitment by a group of decentralized actors. These actors, who come from all walks of life, seek to connect with others as rapidly as possible by leveraging the unrestricted powers of social media. Social media creates ties with a variety of supporters. Tufekci points out:

“people who seek political change, the networking that takes place among people with weak ties is especially important. People with strong ties already share similar views (…). Weaker ties may be far-flung and composed of people with varying political and social ties. Also, weak ties may create bridges to other clusters of people in a way strong ties do not.”

Protest movements predating social media often shared similarities with multi-day music festivals, overnight camps, or even military training exercises. They instilled a sense of camaraderie, which attracts a certain type of individual. Today’s protest movements differ in that they can erupt quickly but fall apart just as fast as they came to be. Still,

“many people are drawn to protest camps because of the alienation they feel in their ordinary lives as consumers. Exchanging products without money is like reverse commodity fetishism: for many, the point is not the product being exchanged but the relationship that is created.”

In addition, the speed at which modern movements operate serves as an invitation for individuals disconnected from broader society, or for individuals who simply prefer a short-lived special operation to right a policy wrong over the long-term work required to build and maintain relationships powerful enough to drive policy change organically.

“Some online communities not only are distant from offline communities but also have little or no persistence or reputational impact. (…) Social scientists call this the “stranger-on-a-train” effect, describing the way people sometimes open up more to anonymous strangers than to the people they see around every day. (…) Such encounters can even be more authentic and liberating.”

Tufekci spends much time describing the evolution of social interactions in a networked space and the social inertia that must be managed in order to pick up momentum, but she also offers some insights on the defensive considerations that make a protest movement work. First and foremost, a protest movement garners attention online, which in turn creates an influx of supporters. It will also attract opposition from private individuals, political opponents, and current political leaders. Those in power previously relied upon, and in some countries still rely upon, censorship and suppression of information. Twitter and other social media platforms have disrupted this control over the narrative:

“To be effective, censorship in the digital era requires a reframing of the goals of censorship not as a total denial of access, which is difficult to achieve, but as a denial of attention, focus, and credibility. In the networked public sphere, the goal of the powerful often is not to convince people of the truth of a particular narrative or to block a particular piece of information from getting out, but to produce resignation, cynicism, and a sense of disempowerment among the people.”

I apologize for using a wealth of quotes from her book, but it’s best described there, in her own words. Protest movements are here to stay. Understanding how democratic nations evolve their policies, right political wrongs, and influence authoritarian nations through subtle policy, online protest, and real-world tear-gas confrontation will help us make more informed decisions as we pick our political battles. Zeynep Tufekci has put together a well-researched account that helps make sense of the most important and controversial protest movements, from Gezi Park and Occupy Wall Street to the Egyptian revolution and the Arab Spring to Black Lives Matter, MeToo, and the March For Our Lives. There are two noticeable drawbacks to this otherwise excellent book. First, the chapters appear uncoordinated within the book and are too long; the reader can’t take a breather without feeling they are losing the thread. Second, her examples are presented out of chronological order relative to the actual movements. While this helps to illustrate certain points, I found it confusing. Twitter And Tear Gas has its own website. Check it out at https://www.twitterandteargas.org/ or reach out to the author on Twitter @zeynep

Learn To Discern: How To Take Ownership Of Your News Diet

I am tired of keeping up with the news these days. The sheer volume of information is intimidating. Filtering relevant news from political noise is a challenge in itself, and only then begins the process of analyzing the information for integrity and accuracy. I certainly struggle to identify subtle misinformation when faced with it. That’s why I became interested in the psychological triggers woven into the news, to better understand my own decision-making and conclusions. Pennycook and Rand wrote an excellent research paper on the human psychology of fake news.

tl;dr

We synthesize a burgeoning literature investigating why people believe and share false or highly misleading news online. Contrary to a common narrative whereby politics drives susceptibility to fake news, people are ‘better’ at discerning truth from falsehood (despite greater overall belief) when evaluating politically concordant news. Instead, poor truth discernment is associated with lack of careful reasoning and relevant knowledge, and the use of heuristics such as familiarity. Furthermore, there is a substantial disconnect between what people believe and what they share on social media. This dissociation is largely driven by inattention, more so than by purposeful sharing of misinformation. Thus, interventions can successfully nudge social media users to focus more on accuracy. Crowdsourced veracity ratings can also be leveraged to improve social media ranking algorithms.


Make sure to read the full paper titled The Psychology of Fake News by Gordon Pennycook and David G. Rand at https://www.sciencedirect.com/science/article/pii/S1364661321000516

This recent research paper by psychologists at the University of Regina and the Sloan School of Management at the Massachusetts Institute of Technology takes a closer look at the sources of political polarization, hyperpartisan news, and the underlying psychology that shapes our judgments of whether news is accurate or misinformation. The authors answer the question of why people fall for misinformation on social media. Lessons drawn from this research will help build effective tools to intercept and mitigate misinformation online, and will further advance our understanding of human psychology when interacting with information on social media. And while the topic could fill entire libraries, the authors limited their scope to individual examples of misinformation rather than organized, coordinated campaigns of inauthentic behavior, excluding the spread of disinformation.

So, Why Do People Fall For Fake News?

Two fundamental concepts explain the psychological dynamics at play when we face misinformation. Truth discernment captures the extent to which people believe accurate news more than known-to-be-false information about the same event; it is rooted in active recognition and critical analysis of the information. The second concept is overall belief, or truth acceptance: here the accuracy of the news is not the deciding factor. Instead of critically analyzing the information, people average or combine all available information, true or false, to form an opinion about the veracity of the news, which commonly results in a biased perception. Other related concepts look at motives. Political motivations can influence people’s willingness to reason along the lines of their partisan identity: news consistent with their political beliefs is regarded as true, while news inconsistent with those beliefs is regarded as false. Loyalty to a political ideology can become so strong that it overrides an apparent falsehood for the sake of party loyalty. Interestingly, the researchers found that political partisanship carries much less weight than the actual veracity of news when people assess information: misinformation that aligns with people’s political beliefs is judged less trustworthy than accurate information that contradicts them. They also discovered that people tend to be better at analyzing information that aligns with their political beliefs, which helps them discern truth from falsehood. But if people rarely fall for misinformation consistent with their political beliefs, which characteristics make people fall for misinformation at all?
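To make the distinction concrete, here is a minimal sketch in Python (my own illustration using made-up rating data, not code from the paper) that computes truth discernment versus overall belief from hypothetical accuracy ratings of true and false headlines.

```python
# Hypothetical 0-1 accuracy ratings one participant gave to headlines.
true_headline_ratings = [0.8, 0.7, 0.9, 0.6]    # ratings of actually true news
false_headline_ratings = [0.4, 0.5, 0.3, 0.6]   # ratings of actually false news

def mean(xs):
    return sum(xs) / len(xs)

# Truth discernment: how much MORE the true items are believed than the false ones.
discernment = mean(true_headline_ratings) - mean(false_headline_ratings)

# Overall belief: average credulity across all items, regardless of veracity.
overall_belief = mean(true_headline_ratings + false_headline_ratings)

print(f"truth discernment: {discernment:.2f}")   # higher = better at telling truth from falsehood
print(f"overall belief:    {overall_belief:.2f}") # higher = more credulous overall
```

Two readers can show the same overall belief while differing sharply in discernment, which is why the two measures are treated separately.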

“People who are more reflective are less likely to believe false news content – and are better at discerning between truth and falsehood – regardless of whether the news is consistent or inconsistent with their partisanship”

Well, this brings us back to truth discernment. Belief in misinformation is commonly associated with overconfidence, lack of reflection, zealotry, delusionality, or overclaiming, where an individual acts on completely fabricated information as a self-proclaimed expert. All of these factors indicate a lack of analytical thinking. At the opposite end of the spectrum, people determine the veracity of information through cognitive reflection and by tapping into their relevant existing knowledge. This can be general political knowledge, a basic understanding of established scientific theories, or simple online media literacy.

“Thus, when it comes to the role of reasoning, it seems that people fail to discern truth from falsehood because they do not stop to reflect sufficiently on their prior knowledge (or have insufficient or inaccurate prior knowledge) – and not because their reasoning abilities are hijacked by political motivations.” 

The researchers found that the truth has little impact on sharing intentions. They describe three types of information-sharing on social media:

  • Confusion-based sharing: a genuine belief in the veracity of the information shared (even though the person is mistaken)
  • Preference-based sharing: political ideology, or related motives such as virtue signaling, is placed above the truth of the information shared, accepting misinformation as collateral damage
  • Inattention-based sharing: people intend to share only accurate information but are distracted by the social media environment

Steps To Own What You Know

If prior knowledge is a critical factor in identifying misinformation, then familiarity with accurate information goes a long way. Being aware of what you already know is critical for determining whether the information presented is the information you already know or a slightly manipulated version of it. Be familiar with social media products: What does virality look like on platform XYZ? Is the uploader a verified actor? What is the source of the news? In general, sources are a critical signal for determining veracity; the more credible and established a source, the likelier the information is well researched and accurate. Finally, red flags for misinformation include emotional headlines, provocative captions, and shocking images.

Challenges To Identify Misinformation

Truth is not a binary metric. To determine the veracity of news, a piece of information may need to be checked for outright falsification, for embedded inaccuracies, and against established, known information. The accuracy and precision, or overall quality, of a machine learning classifier for misinformation therefore hinges on the clarity of the training data and on how well that data reflects the content circulating on the platform where the classifier will be deployed. Another challenge is the ever-changing landscape of misinformation: it evolves rapidly, coalesces into conspiracy theories, and may be (mistakenly) supported by established influencers and institutions. This makes it harder to discern the elements of a news story and, with that, to determine its accuracy. Inoculation (deliberate exposure to misinformation to improve recognition abilities) is partly ineffective because people fail to stop, reflect, and consider the accuracy of the information at all. Successful interventions to minimize misinformation may therefore start with efforts to slow down interactions on social media, for example by changing the user interface to introduce friction and prompts that induce active reflection. Lastly, human fact-checking does not scale, for many reasons: time, accuracy, integrity, and so on. A community-based (crowd-sourced) fact-checking model might be an alternative until a more automated solution is ready. Twitter has recently begun experimenting with this type of crowd-sourced product; its platform is called Birdwatch.
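To make the training-data point tangible, here is a minimal, hypothetical sketch of such a classifier (an illustration assuming scikit-learn and invented headlines, not a system described in the paper); its output is only as trustworthy as the labels it is trained on and the match between the training set and what actually circulates on the target platform.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training headlines labeled 1 (accurate) or 0 (misleading).
headlines = [
    "Senate passes infrastructure bill after bipartisan vote",
    "Miracle fruit cures all known diseases overnight",
    "Central bank raises interest rates by a quarter point",
    "Celebrity secretly replaced by body double, insiders say",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(headlines, labels)

# Predicted probabilities for [misleading, accurate] on an unseen headline.
print(model.predict_proba(["New study finds coffee reverses aging overnight"]))
```

Noisy or unrepresentative labels degrade such a model silently, which is one reason the paper points to crowdsourced veracity ratings as a complement to ranking algorithms.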

This research paper didn’t unearth breakthrough findings or new material. Rather, it helped me learn more about the dynamics of human psychology when exposed to a set of information. Looking at the individual concepts people use to determine the accuracy of information, the underlying motives that drive our attention, and the dynamics of when we decide to share news made this paper a worthwhile read. Its concluding remarks on improving the technical environment, leveraging technology to facilitate a more reflective, conscious experience of news on social media, leave me optimistic for better products to come.

The Future Of Political Elections On Social Media

Should private companies decide which politicians people will hear about? How can tech policy make our democracy stronger? What is the role of social media and journalism in an increasingly polarized society? Katie Harbath, a former director for global elections at Facebook, discusses these questions in a lecture about politics, policy, and democracy. Her experience as a political operative, combined with a decade of work on political elections across the globe, makes her a leading intellectual voice shaping the future of civic engagement online. In her lecture honoring the legacy of former Wisconsin State Senator Paul Offner, she shares historical context on the evolution of technology and presidential election campaigns. She also talks about the impact of the 2016 election and the post-truth reality online that came with the election of Donald Trump. In her concluding remarks she offers some ideas for future regulation of technology to strengthen civic integrity as well as our democracy, and she answers questions during the Q&A.

tl;dr

As social media companies face growing scrutiny among lawmakers and the general public, the La Follette School of Public Affairs at University of Wisconsin–Madison welcomed Katie Harbath, a former global public policy director at Facebook for the past 10 years, for a livestreamed public presentation. Harbath’s presentation focused on her experiences and thoughts on the future of social media, especially how tech companies are addressing civic integrity issues such as free and hate speech, misinformation and political advertising.

Make sure to watch the full lecture titled Politics and Policy: Democracy in the Digital Age at https://lafollette.wisc.edu/outreach-public-service/events/politics-and-policy-democracy-in-the-digital-age (or below)

Timestamps

03:04 – Opening remarks by Susan Webb Yackee
05:19 – Introduction of the speaker by Amber Joshway
06:59 – Opening remarks by Katie Harbath
08:24 – Historical context of tech policy
14:39 – The promise of technology and the 2016 Facebook Election
17:31 – 2016 Philippine presidential election
18:55 – Post-truth politics and the era of Donald J. Trump
20:04 – Social media for social good
20:27 – 2020 US presidential elections 
22:52 – The Capitol attacks, deplatforming and irreversible change
23:49 – Legal aspects of tech policy
24:37 – Refresh Section 230 CDA and political advertising
26:03 – Code aspects of tech policy
28:00 – Developing new social norms
30:41 – More diversity, more inclusion, more openness to change
33:24 – Tech policy has no finishing line
34:48 – Technology as a force for social good and closing remarks

Q&A

(Click on the question to watch the answer)

1. In a digitally democratized world how can consumers exercise their influence over companies to ensure that online platforms are free of bias?

2. What should we expect from the congressional hearing on disinformation?

3. Is Facebook a platform or a publisher?

4. Is social media going to help us to break the power of money in politics?

5. How have political campaigns changed over time?

6. What is the relationship between social media and the ethics of journalism?

7. Will the Oversight Board truly impact Facebook’s content policy?

8. How is Facebook handling COVID-19 related misinformation?

9. What is Facebook’s approach to moderating content vs encryption/data privacy?

10. Does social media contribute to social fragmentation (polarization)? If so, how can social media be a solution for reducing polarization?

11. What type of regulation should we advocate for as digitally evolving voters?

12. What are Katie’s best and worst career memories? What’s next for Katie post Facebook?

Last but not least: Katie mentioned a number of books (and a blog) as recommended reads, which I list below:

Cyber Security and the Financial System

The financial sector is a highly regulated marketplace. Deepfakes, or artificially generated synthetic media, are commonly associated with political disinformation but have so far rarely been linked to the financial system. The Carnegie Endowment for International Peace issued a scintillating working paper series titled “Cyber Security and the Financial System” covering a wide range of cutting-edge issues, from the European framework for Threat Intelligence-Based Ethical Red Teaming (TIBER) to assessing cyber resilience measures for financial organizations to global policies to combat the manipulation of financial data. Jon Bateman’s contribution, titled “Deepfakes and Synthetic Media in the Financial System: Assessing Threat Scenarios,” takes a closer look at how deepfakes can impact the financial system.

tl;dr

Rapid advances in artificial intelligence (AI) are enabling novel forms of deception. AI algorithms can produce realistic “deepfake” videos, as well as authentic-looking fake photos and writing. Collectively called synthetic media, these tools have triggered widespread concern about their potential in spreading political disinformation. Yet the same technology can also facilitate financial harm. Recent months have seen the first publicly documented cases of deepfakes used for fraud and extortion. Today the financial threat from synthetic media is low, so the key policy question is how much this threat will grow over time. Leading industry experts diverge widely in their assessments. Some believe firms and regulators should act now to head off serious risks. Others believe the threat will likely remain minor and the financial system should focus on more pressing technology challenges. A lack of data has stymied the discussion. In the absence of hard data, a close analysis of potential scenarios can help to better gauge the problem. In this paper, ten scenarios illustrate how criminals and other bad actors could abuse synthetic media technology to inflict financial harm on a broad swath of targets. Based on today’s synthetic media technology and the realities of financial crime, the scenarios explore whether and how synthetic media could alter the threat landscape.

Make sure to read the full paper titled Deepfakes and Synthetic Media in the Financial System: Assessing Threat Scenarios by Jon Bateman at https://carnegieendowment.org/2020/07/08/deepfakes-and-synthetic-media-in-financial-system-assessing-threat-scenarios-pub-82237

(Source: Daily Swig)

Deepfakes are a variation of manipulated media. In essence, a successful deepfake requires a sample data set of an original that is used to train a deep learning algorithm. The algorithm learns to alter the training data to the point that another algorithm is unable to distinguish whether the presented result is altered training data or the original. Think of it as a police sketch artist who creates a facial composite based on eyewitness accounts: the more data and time the artist has to render a draft, the higher the likelihood of a successful sketch. In this paper, the term deepfake refers to a subset of synthetic media, including video, images, and voice created through artificial intelligence.
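The generator-versus-discriminator dynamic described above is the core of a generative adversarial network (GAN). Below is a minimal, hypothetical PyTorch sketch of that training loop on toy one-dimensional “originals” instead of real images or voice; it illustrates the idea only and is not code from Bateman’s paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_samples(n):
    # "Originals": samples from the distribution the generator must learn to mimic.
    return torch.randn(n, 1) * 0.5 + 2.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # Discriminator: learn to tell originals (label 1) from generated fakes (label 0).
    real = real_samples(32)
    fake = generator(torch.randn(32, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: learn to produce fakes the discriminator accepts as originals.
    fake = generator(torch.randn(32, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, generated samples should cluster near the "original" mean of 2.0.
print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```

The same adversarial pressure, scaled up to images, video, or voice, is what makes the resulting fakes hard for both algorithms and humans to distinguish from originals.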

The financial sector is particularly vulnerable in the know-your-customer space. It is a unique entry point for malicious actors to submit manipulated identity verification or deploy deepfake technology to fool authenticity mechanisms. While anti-fraud tools are an industry-wide standard to prevent impersonation and identity theft, the arrival of cheaper, more readily available deepfake technology marks a turning point for the financial sector. Deepfakes may be used to leverage a blend of false or hacked personally identifiable information (PII) to gain access to or open bank accounts, initiate financial transactions, or redistribute private equity assets. Bateman focuses on two categories of synthetic media that are most relevant for the financial sector: (1) narrowcast synthetic media, which encompasses one-off, tailored manipulated data deployed directly to the target via private channels, and (2) broadcast synthetic media, which is designed for mass audiences and deployed directly or indirectly via publicly available channels, e.g. social media.

An example of the first variation is a cybercrime that took place in 2019. The chief executive of a UK-based energy company received a phone call from what he believed to be his boss, the CEO of the parent corporation based in Germany. In reality, the voice of the German CEO was an impersonation created with artificial intelligence and publicly available voice recordings (speeches, transcripts, etc.). The voice directed the UK CEO to immediately initiate a financial transaction to pay a Hungarian supplier. This type of attack is also known as deepfake voice phishing (vishing). The fabricated directions resulted in the fraudulent transfer of $234,000. An example of the second variation is commonly found in pump-and-dump schemes on social media. These could range from malicious actors creating false, incriminating deepfakes of key personnel of a stock-listed company to artificially lower the stock price, to synthetic media that misrepresents product results to push the stock price higher and garner interest from potential investors. Building on the two categories of synthetic media, Bateman presents ten scenarios layered into four stages: (1) targeting individuals, e.g. identity theft or impersonation; (2) targeting companies, e.g. payment fraud or stock manipulation; (3) targeting financial markets, e.g. creating malicious flash crashes through state-sponsored hacking or cybercriminals backed by a foreign government; and (4) targeting central banks and financial regulators, e.g. regulatory astroturfing.

In conclusion, Bateman finds that, at this point in time, deepfakes aren’t potent enough to destabilize global financial systems in mature, healthy economies. They are more threatening, however, to individuals and businesses. To take precautions against malicious actors armed with deepfake technology, a number of resiliency measures can be implemented. Broadcast synthetic media is most potent when it amplifies and prolongs already existing crises or scandals; aside from building trust with key audiences, a potential remedy is the readiness to counter false narratives with evidence. To protect other companies from threats that would erode trust in the financial sector, industry-wide sharing of information on cyber attacks is a viable option to mitigate coordinated criminal activity. Lastly, the technology landscape is improving its integrity measures at a rapid pace. A multi-stakeholder response bringing together leaders from the financial sector and the technology sector, experts on consumer behavior, and policymakers will help create more effective regulations to combat deepfakes in the financial system.

Demystifying Foreign Election Interference

The Office of the Director of National Intelligence (ODNI) released a declassified report detailing efforts by foreign actors to influence and interfere in the 2020 U.S. presidential elections. The key finding of the report: Russia sought to undermine confidence in our democratic processes to support then President Donald J. Trump. Iran launched similar efforts but to diminish Trump’s chances of getting reelected. And China stayed out of it altogether.  

(Source: ODNI)

Make sure to read the full declassified report titled Intelligence Community Assessment of Foreign Threats to the 2020 U.S. Federal Elections released by the Office of the Director of National Intelligence at https://www.odni.gov/index.php/newsroom/reports-publications/reports-publications-2021/item/2192-intelligence-community-assessment-on-foreign-threats-to-the-2020-u-s-federal-elections

Background

On September 12, 2018, then President Donald J. Trump issued Executive Order 13848 to address foreign interference in U.S. elections. In essence, it authorizes an interagency review to determine whether interference has occurred. In the event of foreign interference in a U.S. election, the directive orders the creation of an impact report that can trigger sanctions against (1) foreign individuals and (2) nation-states. A comprehensive breakdown of the directive, including the process of imposing sanctions, can be found here. I will focus only on the findings of the interagency review laid out in the Intelligence Community Assessment (ICA) pursuant to EO 13848 (1)(a). The ICA is limited to intelligence reporting and other information available as of December 31, 2020.

Findings

Before his own election in 2016, during his presidency, and beyond the 2020 presidential election, the former president bombarded American voters with unsubstantiated claims of foreign election interference that would disadvantage his reelection chances. In Trump’s mind, China sought to undermine his chances of being reelected, while he downplayed the roles of Russia and Iran. The recently released ICA directly contradicts Trump’s claims. Here’s the summary per country:

Russia

  • Russia conducted influence operations targeting the integrity of the 2020 presidential elections, authorized by Vladimir Putin
  • Russia supported then-incumbent Donald J. Trump and aimed to undermine confidence in then-candidate Joseph R. Biden
  • Russia attempted to exploit socio-political divisions by spreading polarizing narratives, without leveraging persistent cyber efforts against critical election infrastructure

The ICA finds a theme of Russian intelligence officials pushing misinformation about President Biden through U.S. media organizations, officials, and prominent individuals. Such influence operations follow the basic structure of money laundering: (1) creation and dissemination of a false or misleading narrative, (2) concealment of its source through layering across multiple media outlets involving independent (unaware) actors, and (3) integration of the damning narrative into the nation-state’s official communications after the fact. A recurring theme was the false claim of corrupt ties between President Biden and Ukraine, which began spreading as early as 2014.

Russian attempts to sow discord among the American people worked through narratives that amplified misinformation about the election process and its systems, e.g. undermining the integrity of mail-in ballots or highlighting technical failures and isolated instances of misconduct. More broadly, topics such as pandemic-related lockdown measures, racial injustice, and the alleged censorship of conservatives were exploited to polarize the affected groups. While these efforts required Russia’s offensive cyber units to take action, the evidence for a persistent cyber influence operation was not conclusive. The ICA categorized Russian actions as general intelligence gathering to inform Russian foreign policy rather than as specific targeting of critical election infrastructure.

Iran

  • Iran conducted influence operations targeting the integrity of the 2020 presidential elections, likely authorized by Supreme Leader Ali Khamenei
  • Unlike Russia, Iran did not support either candidate but aimed to undermine confidence in then-incumbent Donald J. Trump
  • Iran did not interfere in the 2020 presidential elections, defined as activities targeting the technical aspects of the election

The ICA finds that Iran leveraged influence tactics similar to Russia’s, targeting the integrity of the election process, presumably in an effort to steer the public’s attention away from Iran and toward domestic issues such as pandemic-related lockdown measures, racial injustice, and the alleged censorship of conservatives. However, Iran relied more notably on offensive cyber-enabled operations. These included aggressive spoofed emails, disguised as coming from the Proud Boys, meant to intimidate liberal and left-leaning voters. Spear-phishing emails sent to former and current officials aimed to obtain impactful information and access to critical infrastructure. A high volume of inauthentic social media accounts was used to create divisive political narratives; some of these accounts dated back to 2012.

China

  • China did not conduct influence operations or efforts to interfere in the 2020 presidential elections

The ICA finds China did not actively interfere in the 2020 presidential elections. While the rationale for this assessment is largely based on political reasoning and foreign policy objectives, the report provides no data points for me to evaluate. The report also does not offer insights into the role of the Chinese technology platforms repeatedly targeted by the former president. A minority view held by the National Intelligence Officer for Cyber (NIO) maintains that China did deploy some offensive cyber operations to counter anti-Chinese policies. Former Director of National Intelligence John Ratcliffe backed this minority view in a scathing memorandum concluding that the ICA fell short in its analysis with regard to China.

Recommendations

The ICA offers several insights into a long, strenuous election cycle. Its sober findings help to reformulate U.S. foreign policy and redefine domestic policy objectives. While the report cannot detail all available intelligence and other information, it offers some guidance for shaping future policies. For example:

  1. Cybersecurity – increased efforts to update critical election infrastructure have probably played a key role in the decrease in offensive cyber operations. Government and private actors must continue to focus on cybersecurity, practice cyber hygiene, and conduct digital audits to improve cyber education
  2. Media Literacy – increased efforts to educate the public about political processes. This includes private actors educating their users about potential abuse on their platforms. Continuing programs to depolarize ideologically charged groups through empathy and regulation is a cornerstone of a more perfect union

Additional and more detailed recommendations to improve the resilience of American elections and democratic processes can be found in the Joint Report of the Department of Justice and the Department of Homeland Security on Foreign Interference Targeting Election Infrastructure or Political Organization, Campaign, or Candidate Infrastructure Related to the 2020 U.S. Federal Elections

An Economic Approach To Analyze Politics On YouTube

YouTube’s recommendation algorithm is said to be a gateway that introduces viewers to extremist content and a stepping stone toward online radicalization. However, two other factors are equally important when analyzing political ideologies on YouTube: the novel psychological effects of audio-visual content and the ability to monetize. This paper contributes to the field of political communication by offering an economic framework to explain behavioral patterns of right-wing radicalization. It attempts to answer how YouTube is used by right-wing creators and audiences and offers a way forward for future research.

tl;dr

YouTube is the most used social network in the United States and the only major platform that is more popular among right-leaning users. We propose the “Supply and Demand” framework for analyzing politics on YouTube, with an eye toward understanding dynamics among right-wing video producers and consumers. We discuss a number of novel technological affordances of YouTube as a platform and as a collection of videos, and how each might drive supply of or demand for extreme content. We then provide large-scale longitudinal descriptive information about the supply of and demand for conservative political content on YouTube. We demonstrate that viewership of far-right videos peaked in 2017.


Make sure to read the full paper titled Right-Wing YouTube: A Supply and Demand Perspective by Kevin Munger and Joseph Phillips at https://journals.sagepub.com/doi/full/10.1177/1940161220964767

YouTube is unique in combining Google’s powerful content discovery algorithms, i.e. recommending content to keep attention on the platform, with a type of content that is arguably the most immersive and versatile: video. The resulting product is highly effective at distributing a narrative, which has led journalists and academics to categorize YouTube as an important tool for online radicalization. In particular, right-wing commentators use YouTube to spread political ideologies ranging from conservative views to far-right extremism. However, the researchers make a firm argument that the ability to create and manage committed audiences around a political ideology, audiences that mutually create and reinforce extreme views, is not only contagious enough to affect less committed audiences but pure fuel for online radicalization.

Radio replaced the written word. Television replaced the spoken word. And online audio-visual content will replace the necessity to observe and understand. YouTube offers an unlimited library across all genres, all topics, and all public figures, ranging from user-generated content to six-figure Hollywood productions. Its 24/7 availability and immersive setup, which incentivizes commenting and creating videos, allow YouTube to draw in audiences with much stronger psychological triggers than its mostly text-based competitors Facebook, Twitter, and Reddit. Moreover, YouTube transcends national borders. It enables political commentary from abroad, from American expats to foreigners to exiled politicians and expelled opposition figures. In particular, the controversial presidency of Donald Trump prompted political commentators in Europe and elsewhere to comment on (and influence) the political landscape, voters, and domestic policies of the United States. This is important to acknowledge because YouTube has more users in the United States than any other social network, including Facebook and Instagram.

Monetizing The Right

YouTube has proven valuable to “Alternative Influence Networks”: in essence, potent political commentators and small productions that collaborate in direct opposition to mass media, both in reporting ethics and in political ideology. Albeit relatively unknown to the general populace, they draw consistent, committed audiences and tend to base their content around conservative and right-wing political commentary. There is some evidence in psychological research that conservatives tend to respond more to emotional content than liberals.

As such, the supply side on YouTube is fueled by the easy and efficient means of creating political content. The production cost of a video is usually limited to the equipment. The time required to shoot a video on a social issue is exactly as long as the video itself; in comparison, drafting a text-based political commentary on the same issue can take several days. YouTube’s recommendation system, in conjunction with tailored targeting of certain audiences and social classes, enables right-wing commentators to reach like-minded individuals and build massive audiences. The monetization methods include:

  • Ad revenue from display, overlay, and video ads (not including product placement or sponsored videos)
  • Channel memberships
  • Merchandise
  • Highlighted messages in Super Chat & Super Stickers
  • Partial revenue of YouTube Premium service

While YouTube has expanded its policy enforcement against extremist content, conservative and right-wing creators have adapted to the reduced monetization options by relying increasingly on crowdfunded donations, product placement, and the sale of products through affiliate marketing or their own distribution networks. Perhaps the most convincing factor drawing right-wing commentators to YouTube is, however, the ability to build a large audience from scratch without the need for legitimacy or credentials.

The demand side on YouTube is more difficult to determine. Following active audience theory, users would have made a deliberate choice to click on right-wing content, to search for it, and to continue engaging with it over time. The researchers demonstrate that it isn’t quite that simple. Many social and economic factors drive middle-class democrats to adopt more conservative and extreme views. For example, the economic decline of blue-collar employment and a broken educational system, in conjunction with increasing social isolation and a lack of future prospects, contribute to susceptibility to extremist content and, ultimately, radicalization. The researchers rightfully argue that it is difficult to determine the particular drivers that lead an individual to seek out and watch right-wing content on YouTube. Those who do watch or listen to a right-wing political commentator tend to seek affirmation and validation of their fringe ideologies.

“the novel and disturbing fact of people consuming white nationalist video media was not caused by the supply of this media radicalizing an otherwise moderate audience, but merely reflects the novel ease of producing all forms of video media, the presence of audience demand for white nationalist media, and the decreased search costs due to the efficiency and accuracy of the political ecosystem in matching supply and demand.”

While I believe this paper deserves much more attention and readers should discover its research questions in the process of studying it, I find it helpful to provide the authors’ research questions here, together with my takeaways, to make it easier for readers to prioritize this study:

Research Question 1: What technological affordances make YouTube distinct from other social media platforms, and distinctly popular among the online right? 

Answer 1: YouTube is a media company; media on YouTube is videos; YouTube is powered by recommendations.

Research Question 2: How have the supply of and demand for right-wing videos on YouTube changed over time?

Answer 2.1: YouTube viewership of the extreme right has been in decline since mid-2017, well before YouTube changed its algorithm to demote far-right content in January 2019.

Answer 2.2: The bulk of the growth in terms of both video production and viewership over the past two years has come from the entry of mainstream conservatives into the YouTube marketplace.

This paper offers insights into the supply side of right-wing content and gives a rationale for why people tend to watch it. It contributes to our understanding of how right-wing content spreads across YouTube. An active comment section indicates higher engagement rates, which are unique to right-wing audiences. These interactions facilitate a communal experience between creator and audience, and increased policy enforcement effectively disrupted that experience. Nevertheless, the researchers found evidence that those who return to create or watch right-wing content are likely to engage intensely with it as well. Future research may investigate the actual power of YouTube’s recommendation algorithm. While this paper focused on right-wing content, the opposite end of the political spectrum, including the extreme left, is increasingly utilizing YouTube to proliferate its political commentary. Personally, I am curious to better understand the influence of foreign audiences on domestic issues and how YouTube is diluting the local populace with foreign activist voices.

Threat Mitigation In Cyberspace

Richard A. Clarke and Robert K. Knake provide a detailed rundown of the evolution and legislative history of cyberspace. The two leading cybersecurity experts encourage innovative cyber policy solutions to mitigate cyberwar, protect our critical infrastructure and help citizens to prevent cybercrime.

The Fifth Domain, commonly referred to as cyberspace, poses new challenges for governments, companies, and citizens. Clarke and Knake discuss the historic milestones that led to modern cybersecurity and cyber policy. With detailed accounts of how governments implement security layers in cyberspace, gripping examples of cybersecurity breaches, and innovative solutions for policymakers, this book ended up rather dense in content – a positive signal for someone interested in cybersecurity, but fairly heavy for everybody else. Some of the content circulated widely in the news media; other content is intriguing and thought-provoking. While the policy solutions in this book aren’t ground-breaking, the authors provide fuel for policymakers and the public to take action on securing data and, perhaps more importantly, to start developing transparent, effective cyber policies that account for new, emerging technologies in machine learning and quantum computing. Personally, I found the hardcover edition too clunky and expensive. Six parts over 298 pages, however, made reading this book a breeze.