Redefining The Media

LiveLeak was a video-sharing website for uncensored depictions of violence, human rights violations, and other world events – often shocking content. Earlier this year, the website reportedly shut down. Surprisingly, there is little detailed academic research on LiveLeak. While the available research centers on internet spectatorship of violent content, this 2014 research paper discusses the communication structures created by LiveLeak, arguably redefining social media as we know it.

tl;dr

This research examines a video-sharing website called LiveLeak to analyze the possibilities of democratic and horizontal social mobilization via Internet technology. In this sense, we take into consideration Gilles Deleuze and Félix Guattari’s philosophical conceptualization of the “rhizome,” which provides a new approach to the activities of online communities. In light of this concept and its anti-hierarchical approach, we discuss the potential of network communication models such as LiveLeak in terms of emancipating the use of media and democratic communication. By analyzing the contextual traffic on LiveLeak for a randomly chosen week (the first week of December 2013), we argue that this video-sharing website exhibits a rhizomatic characteristic.

Make sure to read the full paper titled An Alternative Media Experience: LiveLeak by Fatih Çömlekçi and Serhat Güney at https://www.researchgate.net/publication/300453215_An_Alternative_Media_Experience_LiveLeak

(Source: Olhar Digital)

LiveLeak has often been referred to as the dark side of YouTube. Like YouTube, LiveLeak had features to engage its existing userbase and attract new users, among them Recent Items, Channels, and Forums, as well as immersive features such as Yoursay, Must See, and Entertainment. Its central difference from mainstream video-sharing websites was the absence of content moderation. A few exceptions, however, were made for illegal content, racist remarks, doxing, and obvious propaganda of terrorist organizations. LiveLeak first attained notoriety when it shared a pixelated cellphone video of the execution of former dictator Saddam Hussein.

Gilles Deleuze and Félix Guattari described arborescence, a tree model of thinking and information sharing whereby a seed idea grows into a concept that can be traced back to the seed, as the fundamental mode of Western logic and philosophy. In our postmodern world, they argue, arborescence no longer works; instead, they offer the concept of rhizomatic structures. In essence, a rhizomatic structure is a decentralized social network with no single point of entry, no particular core, and no particular form of unity. Fatih Çömlekçi and Serhat Güney describe rhizomes as a

“swarm of ants moving along in an endless plateau by lines. These lines can be destroyed by an external intervention at one point but they will continue marching in an alternative and newly formed way/route”

Most social media networks are built around arborescence: a user creates an account, connects with friends, and all interactions can be traced back to a single point of entry. LiveLeak resembled rhizomes. Content that circulated on its platform did not necessarily have a single point of entry. It was detached from the uploader and often shared with little context. It could therefore trigger social mobilization around particular content among all kinds of users, some with their real-life personas, most anonymous, but none connected in an arborescent way.

Another interesting feature of LiveLeak was its reversal of the flow of information. Western media outlets define the news. LiveLeak disrupted this power structure by, for example, leaking unredacted, uncensored footage of atrocities committed by the Assad regime during the Syrian Civil War. Moreover, users in third-world countries were able to share footage from local news channels that was not visible in the mainstream media. Taken together, these features enabled LiveLeak to support social movements such as the Arab Spring and the Ukrainian Revolution. Arguably, the video-sharing platform influenced public opinion about police brutality in the United States, fueling the Black Lives Matter movement. Undoubtedly, it contributed to a less whitewashed, more sharply defined picture of reality. LiveLeak played a seminal role in establishing our modern approach to content moderation on social media networks.

An Ode To Diplomacy

There are few books that have taught me more about the strategic decisions behind U.S. foreign policy than The Back Channel. Bill Burns’ account is both a history lesson and an upbeat reminder of the value of diplomacy.

The Back Channel by Bill Burns is a well-written, historical memoir of one of the finest career diplomats in the foreign service. Exceptionally clear-eyed, balanced, and insightful in both voice and content, Burns walks the reader through his three decades of foreign service. Starting out as the most junior officer at the U.S. embassy in Jordan under then Secretary of State George Shultz, Burns quickly made a name for himself in the Baker State Department through his consistency, his ability to mediate and deliver, and his foreign language skills, including Arabic, French, English, and Russian. In describing “events” at the State Department, Burns strikes a perfect balance, weighing the intellectual depth of his strategic thinking against the contours of U.S. foreign policy. He offers a rare insight into the mechanics of diplomacy and the pursuit of American interests. For example, Burns illustrates that the administration’s focus in Libya was on changing the behavior, not the regime, of Qaddafi. Sanctions and political isolation had already curtailed Qaddafi’s sphere of influence, and American and British delegations, supported by the weapons of mass destruction (WMD) interdiction in the Mediterranean, were able to convince Qaddafi to give up the terrorism and WMD business. “He needed a way out, and we gave him a tough but defensible one. That’s ultimately what diplomacy is all about – not perfect solutions, but outcomes that cost far less than war and leave everyone better off than they would otherwise have been.”

I was too young to remember German reunification, but I vividly remember the Yeltsin era, its mismanaged economic policy, and the correlating demise of the Russian ruble, which sent millions of Russians into poverty. When Al-Qaeda attacked the United States on September 11, I was glued to the news coverage for days and weeks on end – forever changing my worldview and national identity. Burns’ memoir offered me a new, liberating viewpoint on these events; it helped me connect their impact on the foreign policy stage with the subsequent decisions of world leaders. This really manifests in his description of Obama’s long game in a post-primacy world:

“Statesmen rarely succeed if they don’t have a sense of strategy – a set of assumptions about the world they seek to navigate, clear purposes and priorities, means matched to ends, and the discipline required to hold all those pieces together and stay focused. They also, however, have to be endlessly adaptable – quick to adjust to the unexpected, massage the anxieties of allies and partners, maneuver past adversaries, and manage change rather than be paralyzed by it. (…) Playing the long game is essential, but it’s the short game – coping with stuff that happens unexpectedly – that preoccupies policymakers and often shapes their legacies.”

But aside from candid leadership lessons and rich historical insights, what makes The Back Channel so captivating is its upbeat and fervent case for diplomacy. Burns goes out of his way to detail the daily grind required to serve and succeed in the State Department:

“As undersecretary, and then later as deputy secretary, I probably spent more time with my colleagues in the claustrophobic, windowless confines of the White House Situation Room than I did with anyone else, including my own family. (…) Our job was to propose, test, argue, and, when possible, settle policy debates and options, or tee them up for the decision of cabinet officials and the president. None of the president’s deputy national security advisors, however, lost sight of the human element of the process. (…) We were, after all, a collection of human beings, not an abstraction – always operating with incomplete information, despite the unceasing waves of open-source and classified intelligence washing over us; often trying to choose between bad and worse options.”

Moreover, Burns offers lessons for aspiring career diplomats:

“Effective diplomats (also) embody many qualities, but at their heart is a crucial trinity: judgment, balance, and discipline. All three demand a nuanced grasp of history and culture, mastery of foreign languages, hard-nosed facility in negotiations, and the capacity to translate American interests in ways that other governments can see as consistent with their own – or at least in ways that drive home the cost of alternative courses. (…) What cannot be overstated, however, is the importance of sound judgment in a world of fallible and flawed humans – weighing ends and means, anticipating the unintended consequences of well-intentioned actions, and measuring the hard reality of limits against the potential of American agency.”

All taken together, The Back Channel is a must-read of the highest quality for anyone interested in U.S. foreign policy or diplomacy. I would even argue that the shrewd political observations it captures make it a valuable read with regard to domestic policy or current affairs, though a modicum of international policy awareness is still required. The Back Channel’s only drawback is its predominant focus on American interests in the Middle East and Europe. I can’t help but wonder what the United States would look like today had its political leadership opted for a strategy of offshore balancing instead of a grand strategy of primacy, more focused on pressing domestic issues such as trade or immigration with our immediate neighbors Canada, Mexico, and northern Latin America. I’m curious to hear Burns’ thoughts on this. Perhaps he’ll cover this arena after finishing his term as director of the Central Intelligence Agency.

Ballistic Books: Ultimate Frisbee

Ballistic Books is a series presenting literature of interest. Each edition is dedicated to a specific topic. I have found it challenging to discover great literature and to distinguish it from the merely good. With this series, I aim to mitigate that challenge.

tl;dr

Ultimate Frisbee is arguably The Greatest sport ever invented by man. Ultimate is a fast-moving, low-contact sport with elements of American football and basketball, yet completely self-officiated even at the highest level. On June 4, the American Ultimate Disc League will return to play after nearly two years of forced hiatus due to the Covid-19 pandemic. I’ve been playing ultimate throughout my entire adult life, starting at university and continuing beyond. Unlike other team sports, ultimate frisbee attracts a special type of counterculture individual. This creates a hodgepodge of interesting, competitive, and intelligent people who are fun to be around. Playing tournaments is tantamount to living through ten music festivals in one weekend: high-intensity games during the day, high-intensity parties at night. And when all is said and done, we travel home with a bunch of new friends and lasting memories. There are a number of instructional books on gameplay, tactics, etc., but few books address the novel experience that is ultimate frisbee. In this post, I’ll focus on storytelling only. Ultimate Glory represents the foundations of “the ultimate life”. Gessner’s story is fascinating, and he has a wonderful way of describing the ultimate experience for a wide audience. I have read it before and I will read it again. Universe Point is a collection of short stories and articles mostly written for Skyd Magazine. Cramer has been a pillar of the ultimate community for decades, contributing to many ultimate discussions and topics with his avid writing style. It’s on my summer reading list. The Ultimate Outsider is the only unknown on my list. It seems to be a novel or adaptation. I found it on Amazon searching for non-instruction books about ultimate. Immediately, I recognized the wonderful writing style, which leaves me eager to get my hands on a copy this summer.

1. Ultimate Glory: Frisbee, Obsession, and My Wild Youth by David Gessner

David Gessner is an American author, publisher, and lecturer. You can find David Gessner on Twitter @DavidGessner

2. Universe Point: A Book About Ultimate by Kevin Cramer

Kevin Cramer is an American author and screenplay writer.

3. The Ultimate Outsider by Alexander Rummelhart

Alexander Rummelhart is an American author and teacher. You can find Alexander Rummelhart on Twitter @UBER_IHUC

Ultimate: The Greatest Sport Ever Invented by Man by Pasquale Anthony Leonardo and Cade Beaulieu receives my honorable mention, not least because it has its own awesome website, but because it seems to be an incarnation of the spirit that makes ultimate frisbee so incredibly addictive: delusions of grandeur, not-so-serious dry jokes, and extremely serious competitive spirit. You can find Pasquale Anthony Leonardo on Twitter @leobasq

Here’s one of my favorite highlight reels of ultimate frisbee:

Falsehoods And The First Amendment

Our society is built around freedom of expression. How can regulators mitigate the negative consequences of misinformation without restricting speech? And should we strive for a legal solution rather than improving upon our social contract? In this paper, constitutional law professor, bestselling author, and former administrator of the Office of Information and Regulatory Affairs Cass Sunstein reviews the status of falsehoods under our current constitutional regime for regulating speech and offers his perspective on controlling libel and other false statements of fact.

tl;dr

What is the constitutional status of falsehoods? From the standpoint of the First Amendment, does truth or falsity matter? These questions have become especially pressing with the increasing power of social media, the frequent contestation of established facts, and the current focus on “fake news,” disseminated by both foreign and domestic agents in an effort to drive politics in the United States and elsewhere in particular directions. In 2012, the Supreme Court ruled for the first time that intentional falsehoods are protected by the First Amendment, at least when they do not cause serious harm. But in important ways, 2012 seems like a generation ago, and the Court has yet to give an adequate explanation for its conclusion. Such an explanation must begin with the risk of a “chilling effect,” by which an effort to punish or deter falsehoods might also in the process chill truth. But that is hardly the only reason to protect falsehoods, intentional or otherwise; there are several others. Even so, the various arguments suffer from abstraction and high-mindedness; they do not amount to decisive reasons to protect falsehoods. These propositions bear on old questions involving defamation and on new questions involving fake news, deepfakes, and doctored videos.

Make sure to read the full paper titled Falsehoods and the First Amendment by Cass R. Sunstein at https://jolt.law.harvard.edu/assets/articlePDFs/v33/33HarvJLTech387.pdf 

(Source: Los Angeles Times)

As a democracy, why should we bother to protect misinformation? We already prohibit various kinds of falsehoods, including perjury, false advertising, and fraud. Why not extend these regulations to online debates, deepfakes, etc.? Sunstein offers a basic truth of democratic systems: freedom of expression is a core tenet of self-government, enshrined in the First Amendment. People need to be free to say what they think, even if what they think is false. A society that punishes people for spreading falsehoods inevitably creates a chilling effect for those who want to speak the truth; the mere possibility of criminal prosecution for spreading misinformation can chill public discussion well beyond the misinformation itself. Of course, the dividing line is a clear and present danger manifested in misinformation that creates real-world harm. The dilemma for regulators lies in the difficult task of identifying that clear and present danger and real-world harm. It’s not a binary dilemma of right versus wrong, but of one right versus another.

Sunstein points out a few factors that make it so difficult to strike an acceptable balance between restrictions and free speech. A prominent concern is collateral censorship, a consequence of official fallibility: a government sets out to censor what it deems false but ends up restricting truth as well. Government officials may act in self-interest to preserve their status, which inevitably invites censorship of critical voices. Even if the government correctly identifies and isolates misinformation, who has the burden of proof? In how much detail must it be demonstrated that a false statement of fact is in fact false and presents a clear danger causally linked to real-world harm? As mentioned earlier, any ban on speech may impose a chilling effect on people who aim to speak the truth but fear government retaliation. In some cases, misinformation may even help magnify the truth: it offers a clear contrast that allows people to make up their own minds. Hearing falsehoods from others also increases the chances of learning what other people think, how they process and label misinformation, and where they ultimately stand. The free flow of information is another core tenet of democratic systems; it is therefore preferable to have all information in the open so people can pick and choose what they believe. Lastly, a democracy may consider counterspeech the preferred method of dealing with misinformation. Studies have shown that media literacy, fact-checking labels, and accuracy cues help people better assess misinformation and its social value. Banning a falsehood, however, would drive the false information and its creators underground. Isn’t it better to find common ground than to silence people?

With all this in mind, falsehoods should be permitted in most cases; enforcement against them should be the exception, nuanced on a case-by-case basis. Sunstein shares his ideas for protecting people from falsehoods without producing excessive chilling effects through the threat of costly lawsuits. First, there should be caps on monetary damages, and damage schedules should be tailored to specific scenarios. Second, a general right to correct or retract misinformation should pre-empt proceedings seeking damages. Third, online environments may benefit from notice-and-takedown protocols similar to existing copyright practice under the Digital Millennium Copyright Act (DMCA). Germany’s Network Enforcement Act (NetzDG) is a prominent example of notice-and-takedown regulation aimed at harmful, but not necessarily false, speech. I think a well-functioning society must work towards a social contract that fosters intrinsic motivation and curiosity to seek and speak the truth. Lies should not get a platform, but they cannot be outlawed either. If the legal domain is sought to adjudicate misinformation, it should be done expeditiously with few formal, procedural hurdles. The burden of proof has to be on the plaintiff, and the bar for false statements of fact must be calibrated against the reach of the defendant, i.e. influencers and public figures should have less freedom to spread misinformation because their reach is far more damaging than that of a John Doe. Lastly, shifting regulatory enforcement onto carriers or social media platforms is tantamount to holding the construction worker who built a marketplace responsible for what is sold there – it fails to identify the bad actor: the person disseminating the misinformation. Perhaps enforcement against misinformation can be done through crowdsourced communities, accuracy cues at the point of submission, or supporting information on a given topic. Here are a few noteworthy decisions for further reading:

Learn To Discern: How To Take Ownership Of Your News Diet

I am tired of keeping up with the news these days. The sheer volume of information is intimidating. It creates the challenge of filtering relevant news from political noise, only to then begin a process of analyzing the information for its integrity and accuracy. I certainly struggle to identify subtle misinformation when faced with it. That’s why I became interested in the psychological triggers woven into the news, to better understand my decision-making and conclusions. Pennycook and Rand wrote an excellent research paper on the human psychology of fake news.

tl;dr

We synthesize a burgeoning literature investigating why people believe and share false or highly misleading news online. Contrary to a common narrative whereby politics drives susceptibility to fake news, people are ‘better’ at discerning truth from falsehood (despite greater overall belief) when evaluating politically concordant news. Instead, poor truth discernment is associated with lack of careful reasoning and relevant knowledge, and the use of heuristics such as familiarity. Furthermore, there is a substantial disconnect between what people believe and what they share on social media. This dissociation is largely driven by inattention, more so than by purposeful sharing of misinformation. Thus, interventions can successfully nudge social media users to focus more on accuracy. Crowdsourced veracity ratings can also be leveraged to improve social media ranking algorithms.


Make sure to read the full paper titled The Psychology of Fake News by Gordon Pennycook and David G. Rand at https://www.sciencedirect.com/science/article/pii/S1364661321000516

This recent research paper by psychologists at the University of Regina and the Sloan School of Management at the Massachusetts Institute of Technology takes a closer look at the sources of political polarization, hyperpartisan news, and the underlying psychology that influences our decisions about whether news is accurate or misinformation. It answers the question of why people fall for misinformation on social media. Lessons drawn from this research will be helpful in building effective tools to intercept and mitigate misinformation online. It will further advance our understanding of human psychology when interacting with information on social media. And while the topic could fill entire libraries, the authors limited their scope to individual examples of misinformation rather than organized, coordinated campaigns of inauthentic behavior, excluding the spread of disinformation.

So, Why Do People Fall For Fake News?

There are two fundamental concepts that explain the psychological dynamics at play when people face misinformation. Truth discernment aims to establish a belief in the relative accuracy of news that is greater than for known-to-be-false information on the same event. Basically, this concept is rooted in active recognition and critical analysis of the information, capturing people’s overall beliefs. The other concept used to explain why people fall for misinformation is truth acceptance. Under it, the accuracy of news is not a factor; instead of critically analyzing the information, people average or combine all available information, true or false, to form an opinion about the veracity of the news. This commonly results in a biased perception of news.

Other concepts related to this question look at motives. Political motivations can shape people’s reasoning through their partisan, political identity. In other words, news consistent with their political beliefs is regarded as true; news inconsistent with their political beliefs is regarded as false. Loyalty to a political ideology can become so strong that it overrides an apparent falsehood for the sake of party loyalty. Interestingly, the researchers found that political partisanship carries much less weight than the actual veracity of news when people assess information: participants rated misinformation in harmony with their political beliefs as less trustworthy than accurate information that ran against them. They also discovered that people tend to be better at analyzing information that is in harmony with their political beliefs, which helps them discern truth from falsehood. But if people rarely fall for misinformation merely because it is consistent with their political beliefs, which characteristics do make people fall for it?

“People who are more reflective are less likely to believe false news content – and are better at discerning between truth and falsehood – regardless of whether the news is consistent or inconsistent with their partisanship”

Well, this brings us back to truth discernment. Belief in misinformation is commonly associated with overconfidence, lack of reflection, zealotry, delusionality, or overclaiming, where an individual acts on completely fabricated information as a self-proclaimed expert. All of these factors indicate a lack of analytical thinking. On the opposite side of the spectrum, people determine the veracity of information through cognitive reflection and by tapping into relevant existing knowledge. This can be general political knowledge, a basic understanding of established scientific theories, or simple online media literacy.

“Thus, when it comes to the role of reasoning, it seems that people fail to discern truth from falsehood because they do not stop to reflect sufficiently on their prior knowledge (or have insufficient or inaccurate prior knowledge) – and not because their reasoning abilities are hijacked by political motivations.” 

The researchers also found that truth has little impact on sharing intentions. They describe three types of information sharing on social media:

  • Confusion-based sharing: a genuine belief in the veracity of the information shared (even though the person is mistaken)
  • Preference-based sharing: political ideology, or related motives such as virtue signaling, is placed above the truth of the information shared, accepting misinformation as collateral damage
  • Inattention-based sharing: people intend to share only accurate information but are distracted by the social media environment

Steps To Own What You Know

If prior knowledge is a critical factor in identifying misinformation, then familiarity with accurate information goes a long way. An awareness of familiar information is critical to determine whether the information presented is the information you already know or a slightly manipulated version. Be familiar with social media products: What does virality look like on platform XYZ? Is the uploader a verified actor? What is the source of the news? In general, sources are a critical signal for determining veracity. The more credible and established a source, the likelier the information is well-researched and accurate. Finally, red flags for misinformation include emotional headlines, provocative captions, and shocking images.

Challenges To Identify Misinformation

Truth is not a binary metric. To determine the veracity of news, a piece of information – which may be fabricated outright or merely laced with inaccuracies – must be compared against established, known information. The accuracy and precision, or overall quality, of a machine learning classifier for misinformation therefore hinges on the clarity of the provided training data combined with the depth of exposure on the platform where the classifier will be deployed. Another challenge to consider is the ever-changing landscape of misinformation. Misinformation evolves rapidly, coalescing into conspiracy theories, and may be (mistakenly) supported by established influencers and institutions. This makes it harder to discern the elements of a news story, which undermines the chances of determining accuracy. Inoculation (deliberate exposure to misinformation to improve recognition abilities) is in part ineffective because people fail to stop, reflect, and consider the accuracy of the information at all. Successful interventions to minimize misinformation may therefore start with efforts to slow down interactions on social media. This can be achieved by changing the user interface to introduce friction and prompts that induce active reflection. Lastly, human fact-checking is not scalable, for many reasons: time, accuracy, integrity, etc. Leveraging a community-based (crowdsourced) fact-checking model might be an alternative until a more automated solution is ready. Twitter has recently begun experimenting with these types of crowdsourced products; its platform is called Birdwatch.
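Birdwatch’s actual ranking reportedly weighs raters against one another to bridge viewpoints, so the following is only a loose, hypothetical sketch of the crowdsourced idea: aggregate community ratings of fact-checking notes, one vote per rater per note. All names and data here are invented for illustration.

```python
from collections import defaultdict

def score_notes(ratings):
    """Naively score community fact-checking notes.

    ratings: iterable of (note_id, rater_id, helpful) tuples.
    Returns a dict mapping each note_id to the share of distinct
    raters who marked the note helpful. Duplicate votes by the
    same rater on the same note are ignored.
    """
    seen = set()
    counts = defaultdict(lambda: [0, 0])  # note_id -> [helpful votes, total votes]
    for note_id, rater_id, helpful in ratings:
        if (note_id, rater_id) in seen:
            continue  # enforce one vote per rater per note
        seen.add((note_id, rater_id))
        counts[note_id][1] += 1
        if helpful:
            counts[note_id][0] += 1
    return {note: h / total for note, (h, total) in counts.items()}
```

A production system would additionally need to guard against brigading, for instance by requiring rater diversity before a note is surfaced, which is exactly where the paper’s point about crowd quality comes in.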

This research paper didn’t unearth breakthrough findings or new material. Rather, it helped me learn more about the dynamics of human psychology when exposed to a set of information. Looking at the individual concepts people use to determine the accuracy of information, the underlying motives that drive our attention, and the dynamics of when we decide to share news made this paper a worthwhile read. Its concluding remarks on improving the technical environment, leveraging technology to facilitate a more reflective, conscious experience of news on social media, leave me optimistic for better products to come.

Left Of Launch

The Perfect Weapon is an intriguing account of history’s most cunning cyberwarfare operations. I learned about the incremental evolution of cyberspace as the fifth domain of war, and how policymakers, military leaders, and the private technology sector continue to adapt to this new threat landscape.

Much has been written about influence operations and cyber criminals, but few accounts draw so clear a link between national security, cyberspace, and foreign policy. Some of the stories told in The Perfect Weapon touch upon the Russian interference in the 2016 presidential election, the 2015 hack of the Ukrainian power grid, the 2014 Sony hack, the 2013 revelations by Edward Snowden, and many other notable breaches of cybersecurity. None of this is news anymore, but it helps to understand America’s 21st-century vulnerabilities.

Chapter 8, titled “The Fumble,” left a particular mark on me. In it, Sanger details the handling of Russian hackers infiltrating the computer and server networks of the Democratic National Committee. The sheer lethargy officials demonstrated over months on end, including Obama’s failure to openly address the ongoing cyber influence operations perpetrated by the Russians ahead of the election, was nothing particularly new, yet I still felt outraged by what now seems obvious. The chapter illustrates some governance shortcomings that we as a society need to overcome in order to respond to cyberattacks and build better cyber defense mechanisms.

Left of Launch is a strategy to leverage cyberwarfare or other infrastructure sabotage to prevent ballistic missiles from being launched.

But the greatest insights for me came from the book’s cross-cutting between the cyberspace and cybersecurity domain and the public policy domain. It showed me how much work is still left to be done to educate our elected officials, our leaders, and ourselves about a growing threat landscape in cyberspace. While technology regulation is a partisan issue, only bipartisan solutions will yield impactful results.

David E. Sanger is a great journalist, bestselling author, and excellent writer. His storytelling is concise, easy to read, and accessible to a wide audience. Throughout the book, I never felt that Sanger allowed himself to get caught up in the politics of it all; rather, he maintains a refreshing neutrality. His outlook is simple: we need to redefine our sense of national security and come up with an international solution for cyberspace. We need to think broadly about the consequences of cyber-enabled espionage and cyberattacks against critical infrastructure. And we need to act now.

Tales Of Invincible Frogmen

Men In Green Faces is a gripping fictional combat novel. It shows the cruelty, intensity, but also the strategic intelligence and psychological resilience needed to prevail in war.

The story follows Gene Michaels and his team of highly trained, elite commandos on their tour of duty during the Vietnam War. They are stationed on Seafloat, a floating Mobile Advanced Tactical Support Base (MATSB), somewhere in the Mekong Delta at the southern tip of Vietnam. Throughout their deployment, the team goes on different missions, roaming the thick tropical jungle in search of specific targets and evading enemy positions. With each mission, the reader learns a little more about the complex, individual characters. They’re not just warriors devoid of emotions, but men who live and struggle through the atrocities of war – far away from home and their families.

Men in Green Faces is a dialogue-heavy fictional combat novel. It’s the kind of book that poses a situation you’ll want to discuss with someone else or, if you’re feeling adventurous, one that makes you want to enlist in the Navy right away. I learned about this book when Jonny Kim shared that his motivation to become a U.S. Navy SEAL was partly inspired by it. To illustrate why this book is so powerful, I’ll leave you with the below excerpt from one of the team’s early missions to extract a potential target for interrogation:

“Almost without a sound, the squad, already in file formation, came on line and dropped down to conceal themselves within the foliage. The last thing they wanted was contact. Through the bushes and trees Gene caught movement. It was one lone VC (Viet Cong) in black pajamas, talking to himself even as he strolled closer to their location. Not another person in sight. Just ten feet farther to the left, and the VC would have seen their tracks in the mud. The squad was dead quiet. Their personal discipline never faltered in combat. Almost mesmerized, Gene watched the VC strolling closer. The man passed Doc without detection, then Cruz and Alex. He came within eighteen inches of Brian, who was still in Gene’s position. The VC, carrying an AK-47 over his shoulder, holding it by its barrel, continued to talk to himself, just walking along within inches now of Jim. Jim grabbed the VC, slapped a hand over his mouth, and took him down. There was virtually no sound. Before Gene realized he’d moved, he had the VC’s AK-47 in his hand and the rest of the squad had backed in around the three of them, ensuring 360-degree security. Gene positioned his 60 inches from the VC’s head. The man’s eyes were stretched wide, almost popping from their sockets. He knew about the men in green faces, and it showed.”

Nations Fail, But Why?

Is it because the culture of some nations is inferior to that of others? Is it because the natural resources of some nations are less fertile and valuable? Or is it because some nations are in more advantageous geographical locations? Daron Acemoglu and James A. Robinson argue that the wealth of some nations can be traced back to their institutions – inclusive institutions, to be precise, that enable their citizens to partake in the political process and the economic agenda. It’s an argument for a decentralized, democratic control structure with checks and balances to hold elected officials accountable and ensure shared economic benefits. Thus they conclude that nations fail when a ruling elite creates extractive institutions designed to enrich only themselves on the backs of the masses. More democracy, according to the authors, is the answer to our looming political and economic problems. Therefore political leaders must focus on the disenfranchised, the forgotten – those who have been left behind. It’s a conclusion hard to contend with.

Altogether, though, this book is disappointing. Among the various economic theories that try to explain the wealth of nations, the authors’ stands out for failing to create quantifiable definitions for its premise. Because they never define inclusion and extraction, the reader never learns what required elements, political structures and (minimum) economic metrics can be measured or produce reliable data. Instead the authors appear to cherry-pick historical examples to demonstrate the perils of extraction and highlight the benefits of inclusive institutions. Throughout the book this reaches an absurd level, comparing contemporary nations with ancient ones without regard to then-current affairs, social cohesion, trade or world events. This creates a confusing storyline that jumps through unrelated examples from Venice to China to Zimbabwe to Argentina to the United States. I found the repetition of their inclusion-versus-extraction argument quite draining, for it seems to appear on every page.

Why Nations Fail is an excellent history book full of examples of the success or failure of governance. The stories alone are well-researched, detailed and certainly a pleasure to read. However, the authors’ explanation for the economic failure of nations is vague and, at best, conjecture. They fail to explain the origins of power with quantifiable evidence, or how prosperous (or poor) nations wield that power. Altogether this book would have been awesome if it were trimmed to a few hundred pages and made less repetitive.

The Future Of Political Elections On Social Media

Should private companies decide which politicians people hear about? How can tech policy make our democracy stronger? What is the role of social media and journalism in an increasingly polarized society? Katie Harbath, a former director for global elections at Facebook, discusses these questions in a lecture about politics, policy and democracy. Her unparalleled experience as a political operative, combined with her decade-long experience working on political elections across the globe, makes her a leading intellectual voice shaping the future of civic engagement online. In her lecture honoring the legacy of former Wisconsin state senator Paul Offner, she shares historical context on the evolution of technology and presidential election campaigns. She also talks about the impact of the 2016 election and the post-truth reality online that came with the election of Donald Trump. In her concluding remarks she offers some ideas for future regulation of technology to strengthen civic integrity as well as our democracy, and she answers questions during the Q&A.

tl;dr

As social media companies face growing scrutiny among lawmakers and the general public, the La Follette School of Public Affairs at University of Wisconsin–Madison welcomed Katie Harbath, a former global public policy director at Facebook for the past 10 years, for a livestreamed public presentation. Harbath’s presentation focused on her experiences and thoughts on the future of social media, especially how tech companies are addressing civic integrity issues such as free and hate speech, misinformation and political advertising.

Make sure to watch the full lecture titled Politics and Policy: Democracy in the Digital Age at https://lafollette.wisc.edu/outreach-public-service/events/politics-and-policy-democracy-in-the-digital-age (or below)

Timestamps

03:04 – Opening remarks by Susan Webb Yackee
05:19 – Introduction of the speaker by Amber Joshway
06:59 – Opening remarks by Katie Harbath
08:24 – Historical context of tech policy
14:39 – The promise of technology and the 2016 Facebook Election
17:31 – 2016 Philippine presidential election
18:55 – Post-truth politics and the era of Donald J. Trump
20:04 – Social media for social good
20:27 – 2020 US presidential elections 
22:52 – The Capitol attacks, deplatforming and irreversible change
23:49 – Legal aspects of tech policy
24:37 – Refresh Section 230 CDA and political advertising
26:03 – Code aspects of tech policy
28:00 – Developing new social norms
30:41 – More diversity, more inclusion, more openness to change
33:24 – Tech policy has no finishing line
34:48 – Technology as a force for social good and closing remarks

Q&A

(Click on the question to watch the answer)

1. In a digitally democratized world how can consumers exercise their influence over companies to ensure that online platforms are free of bias?

2. What should we expect from the congressional hearing on disinformation?

3. Is Facebook a platform or a publisher?

4. Is social media going to help us to break the power of money in politics?

5. How have political campaigns changed over time?

6. What is the relationship between social media and the ethics of journalism?

7. Will the Oversight Board truly impact Facebook’s content policy?

8. How is Facebook handling COVID-19 related misinformation?

9. What is Facebook’s approach to moderating content vs encryption/data privacy?

10. Does social media contribute to social fragmentation (polarization)? If so, how can social media be a solution for reducing polarization?

11. What type of regulation should we advocate for as digitally evolving voters?

12. What are Katie’s best and worst career memories? What’s next for Katie post-Facebook?

Last but not least: Katie mentioned a number of books (and a blog) as recommended reads, which I will list below:

Cyber Security and the Financial System

The financial sector is a highly regulated marketplace. Deepfakes, or artificially generated synthetic media, are associated with political disinformation but have not yet been linked to the financial system. The Carnegie Endowment for International Peace issued a scintillating working paper series titled “Cyber Security and the Financial System” covering a wide range of cutting-edge issues, from the European framework for Threat Intelligence-Based Ethical Red Teaming (TIBER) to assessing cyber resilience measures for financial organizations to global policies to combat manipulation of financial data. Jon Bateman’s contribution, titled “Deepfakes and Synthetic Media in the Financial System: Assessing Threat Scenarios,” takes a closer look at how deepfakes can impact the financial system.

tl;dr

Rapid advances in artificial intelligence (AI) are enabling novel forms of deception. AI algorithms can produce realistic “deepfake” videos, as well as authentic-looking fake photos and writing. Collectively called synthetic media, these tools have triggered widespread concern about their potential in spreading political disinformation. Yet the same technology can also facilitate financial harm. Recent months have seen the first publicly documented cases of deepfakes used for fraud and extortion. Today the financial threat from synthetic media is low, so the key policy question is how much this threat will grow over time. Leading industry experts diverge widely in their assessments. Some believe firms and regulators should act now to head off serious risks. Others believe the threat will likely remain minor and the financial system should focus on more pressing technology challenges. A lack of data has stymied the discussion. In the absence of hard data, a close analysis of potential scenarios can help to better gauge the problem. In this paper, ten scenarios illustrate how criminals and other bad actors could abuse synthetic media technology to inflict financial harm on a broad swath of targets. Based on today’s synthetic media technology and the realities of financial crime, the scenarios explore whether and how synthetic media could alter the threat landscape.

Make sure to read the full paper titled Deepfakes and Synthetic Media in the Financial System: Assessing Threat Scenarios by Jon Bateman at https://carnegieendowment.org/2020/07/08/deepfakes-and-synthetic-media-in-financial-system-assessing-threat-scenarios-pub-82237

(Source: Daily Swig)

Deepfakes are a variation of manipulated media. In essence, a successful deepfake requires a sample data set of an original, which is used to train a deep learning algorithm. The algorithm learns to alter the training data to the point that a second algorithm can no longer distinguish the altered output from the original. Think of it as a police sketch artist who creates a facial composite based on eyewitness accounts. The more data and time the artist has to render a draft, the higher the likelihood of a successful mugshot sketch. In this paper, the term deepfake refers to a subset of synthetic media, including video, images and voice, created through artificial intelligence.
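The two-algorithm dynamic described above can be sketched as a toy adversarial training loop in Python. This is a minimal, illustrative GAN-style example on 1-D data, not anything from Bateman’s paper: the distributions, learning rates and parameter names are all assumptions chosen for clarity. A generator learns to mimic “real” data while a discriminator tries to tell real from fake:

```python
import numpy as np

# Toy adversarial loop: a generator mimics a 1-D Gaussian while a
# discriminator learns to separate real from generated samples.
# All values here are illustrative assumptions, not from the paper.
rng = np.random.default_rng(0)
REAL_MEAN, REAL_STD = 4.0, 1.25

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w, b = 1.0, 0.0   # generator: g(z) = w*z + b, z ~ N(0, 1)
a, c = 0.0, 0.0   # discriminator: D(x) = sigmoid(a*x + c) = P(x is real)

lr, batch, steps = 0.03, 64, 2000
for _ in range(steps):
    z = rng.standard_normal(batch)
    fake = w * z + b
    real = rng.normal(REAL_MEAN, REAL_STD, batch)

    # Discriminator step: maximize log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(a * real + c), sigmoid(a * fake + c)
    a -= lr * np.mean(-(1 - d_real) * real + d_fake * fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step: minimize -log D(fake) (non-saturating loss)
    d_fake = sigmoid(a * fake + c)
    dx = -(1 - d_fake) * a          # gradient w.r.t. each fake sample
    w -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

samples = w * rng.standard_normal(1000) + b
print(f"generated mean ~ {samples.mean():.2f} (target {REAL_MEAN})")
```

After training, the generated samples cluster near the real mean: the generator has learned to produce data the discriminator can no longer reliably reject, which is the same principle that, at far greater scale, produces convincing fake video and audio.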

The financial sector is particularly vulnerable in the know-your-customer space. It’s a unique entry point for malicious actors to submit manipulated identity verification or deploy deepfake technology to fool authentication mechanisms. While anti-fraud prevention tools are an industry-wide standard to prevent impersonation or identity theft, the onset of cheaper, more readily available deepfake technology marks a turning point for the financial sector. Deepfakes may leverage a blend of false or hacked personally identifiable information (PII) to gain access to or open bank accounts, initiate financial transactions, or redistribute private equity assets. Bateman focuses on the two categories of synthetic media most relevant for the financial sector: (1) narrowcast synthetic media, which encompasses one-off, tailored manipulated data deployed directly to the target via private channels, and (2) broadcast synthetic media, which is designed for mass audiences and deployed directly or indirectly via publicly available channels, e.g. social media. An example of the first variation is a cybercrime that took place in 2019. The chief executive officer of a UK-based energy company received a phone call from what he believed to be his boss, the CEO of the parent corporation based in Germany. In reality, the voice of the German CEO was an impersonation created by artificial intelligence from publicly available voice recordings (speeches, transcripts etc.). The voice directed the UK CEO to immediately initiate a financial transaction to pay a Hungarian supplier. This type of attack is also known as deepfake voice phishing (vishing). The fabricated directions resulted in the fraudulent transfer of $243,000. An example of the second variation is commonly found in widespread pump-and-dump schemes on social media.
These could range from malicious actors creating false, incriminating deepfakes of key personnel of a stock-listed company to artificially lower the stock price, to creating synthetic media that misrepresents product results to manipulate a higher stock price and garner more interest from potential investors. Building on the two categories of synthetic media, Bateman presents ten scenarios layered into four stages: (1) targeting individuals, e.g. identity theft or impersonation; (2) targeting companies, e.g. payment fraud or stock manipulation; (3) targeting financial markets, e.g. creating malicious flash crashes through state-sponsored hackers or cybercriminals backed by a foreign government; and (4) targeting central banks and financial regulators, e.g. regulatory astroturfing.

In conclusion, Bateman finds that, at this point in time, deepfakes aren’t potent enough to destabilize global financial systems in mature, healthy economies. They are more threatening, however, to individuals and businesses. To take precautions against malicious actors wielding deepfake technology, a number of resiliency measures can be implemented. Broadcast synthetic media is potent at amplifying and prolonging existing crises or scandals, so aside from building trust with key audiences, a potential remedy is the readiness to counter false narratives with evidence. To protect companies from threats that would decrease trust in the financial sector, industry-wide sharing of information on cyberattacks is a viable option to mitigate coordinated criminal activity. Lastly, the technology landscape is improving its integrity at a rapid pace. A multi-stakeholder response bringing together leaders from the financial sector and the technology sector, experts on consumer behavior, and policymakers will help create more efficient regulations to combat deepfakes in the financial system.