Men in Green Faces is a gripping fictional combat novel. It depicts the cruelty and intensity of war, but also the strategic intelligence and psychological resilience needed to prevail in it.
The story follows Gene Michaels and his team of highly trained, elite commandos on their tour of duty during the Vietnam War. They are stationed on Seafloat, a floating Mobile Advanced Tactical Support Base (MATSB) somewhere in the Mekong Delta at the southern tip of Vietnam. Throughout their deployment, the team goes on missions roaming the thick tropical jungle in search of specific targets while evading enemy positions. With each mission, the reader learns a little more about the complex individual characters. They’re not emotionless warriors; they live through and struggle with the atrocities of war, far from home and their families.
Men in Green Faces is a dialogue-heavy fictional combat novel. It’s the kind of book that poses a situation you’ll want to discuss with someone else or, if you’re adventurous, that makes you want to enlist in the Navy right away. I learned about this book when Jonny Kim shared that his motivation to become a U.S. Navy SEAL was partly inspired by it. To illustrate why this book is so powerful, I’ll leave you with an excerpt from one of the team’s early missions to extract a potential target for interrogation:
“Almost without a sound, the squad, already in file formation, came on line and dropped down to conceal themselves within the foliage. The last thing they wanted was contact. Through the bushes and trees Gene caught movement. It was one lone VC (Viet Cong) in black pajamas, talking to himself even as he strolled closer to their location. Not another person in sight. Just ten feet farther to the left, and the VC would have seen their tracks in the mud. The squad was dead quiet. Their personal discipline never faltered in combat. Almost mesmerized, Gene watched the VC strolling closer. The man passed Doc without detection, then Cruz and Alex. He came within eighteen inches of Brian, who was still in Gene’s position. The VC, carrying an AK-47 over his shoulder, holding it by its barrel, continued to talk to himself, just walking along within inches now of Jim. Jim grabbed the VC, slapped a hand over his mouth, and took him down. There was virtually no sound. Before Gene realized he’d moved, he had the VC’s AK-47 in his hand and the rest of the squad had backed in around the three of them, ensuring 360-degree security. Gene positioned his 60 inches from the VC’s head. The man’s eyes were stretched wide, almost popping from their sockets. He knew about the men in green faces, and it showed.”
Why are some nations rich while others are poor? Is it because the culture of some nations is inferior to that of others? Is it because their natural resources are less fertile and valuable? Or is it because some nations occupy more advantageous geographical locations? Daron Acemoglu and James A. Robinson argue that the wealth of nations can be traced back to their institutions – inclusive institutions, to be precise, that enable citizens to partake in the political process and the economic agenda. It’s an argument for a decentralized, democratic control structure with checks and balances that hold elected officials accountable and ensure shared economic benefits. Conversely, they conclude that nations fail when a ruling elite creates extractive institutions designed to enrich only themselves on the backs of the masses. More democracy, according to the authors, is the answer to our looming political and economic problems. Political leaders must therefore focus on the disenfranchised and the forgotten – those who have been left behind. It’s a conclusion hard to contend with.
Altogether, though, this book is disappointing. Among the various economic theories that try to explain the wealth of nations, the authors fail to offer quantifiable definitions for their premise. Because inclusion and extraction are never defined, the reader never learns which elements, political structures or minimum economic metrics could be measured to produce reliable data. Instead the authors appear to cherry-pick historical examples to demonstrate the perils of extraction and highlight the benefits of inclusive institutions. This reaches an absurd level when contemporary nations are compared with ancient ones without regard to then-current affairs, social cohesion, trade or world events, creating a confusing storyline that jumps through unrelated examples from Venice to China to Zimbabwe to Argentina to the United States. I found the repetition of the inclusiveness-and-extraction argument quite draining, for it seems to appear on every page.
Why Nations Fail is an excellent history book full of examples of the success or failure of governance. The stories alone are well researched, detailed and certainly a pleasure to read. However, the authors’ explanation for the economic failure of nations is vague, and conjecture at best. They never establish, with quantifiable evidence, where power originates and how prosperous (or poor) nations wield it. Altogether this book would have been excellent had it been a few hundred pages shorter and less repetitive.
Should private companies decide which politicians people hear about? How can tech policy make our democracy stronger? What is the role of social media and journalism in an increasingly polarized society? Katie Harbath, a former director for global elections at Facebook, discusses these questions in a lecture about politics, policy and democracy. Her experience as a political operative, combined with a decade of work on elections across the globe, makes her a leading voice in shaping the future of civic engagement online. In her lecture honoring the legacy of former Wisconsin state senator Paul Offner, she shares historical context on the evolution of technology and presidential election campaigns. She also discusses the impact of the 2016 election and the post-truth reality online that came with the election of Donald Trump. In her concluding remarks she offers ideas for future regulation of technology to strengthen civic integrity as well as our democracy, and she closes by answering questions in a Q&A.
As social media companies face growing scrutiny from lawmakers and the general public, the La Follette School of Public Affairs at the University of Wisconsin–Madison welcomed Katie Harbath, who spent the past 10 years as a global public policy director at Facebook, for a livestreamed public presentation. Harbath’s presentation focused on her experiences and her thoughts on the future of social media, especially how tech companies are addressing civic integrity issues such as free speech, hate speech, misinformation and political advertising.
03:04 – Opening remarks by Susan Webb Yackee
05:19 – Introduction of the speaker by Amber Joshway
06:59 – Opening remarks by Katie Harbath
08:24 – Historical context of tech policy
14:39 – The promise of technology and the 2016 Facebook Election
17:31 – 2016 Philippine presidential election
18:55 – Post-truth politics and the era of Donald J. Trump
20:04 – Social media for social good
20:27 – 2020 US presidential elections
22:52 – The Capitol attacks, deplatforming and irreversible change
23:49 – Legal aspects of tech policy
24:37 – Refresh Section 230 CDA and political advertising
26:03 – Code aspects of tech policy
28:00 – Developing new social norms
30:41 – More diversity, more inclusion, more openness to change
33:24 – Tech policy has no finishing line
34:48 – Technology as a force for social good and closing remarks
The financial sector is a highly regulated marketplace. Deepfakes, or artificially generated synthetic media, are commonly associated with political disinformation but have not yet been closely linked to the financial system. The Carnegie Endowment for International Peace issued a scintillating working paper series titled “Cyber Security and the Financial System,” covering a wide range of cutting-edge issues, from the European framework for Threat Intelligence-Based Ethical Red Teaming (TIBER) to cyber resilience measures for financial organizations to global policies to combat manipulation of financial data. Jon Bateman’s contribution, titled “Deepfakes and Synthetic Media in the Financial System: Assessing Threat Scenarios,” takes a closer look at how deepfakes can impact the financial system.
Rapid advances in artificial intelligence (AI) are enabling novel forms of deception. AI algorithms can produce realistic “deepfake” videos, as well as authentic-looking fake photos and writing. Collectively called synthetic media, these tools have triggered widespread concern about their potential to spread political disinformation. Yet the same technology can also facilitate financial harm. Recent months have seen the first publicly documented cases of deepfakes used for fraud and extortion. Today the financial threat from synthetic media is low, so the key policy question is how much this threat will grow over time. Leading industry experts diverge widely in their assessments: some believe firms and regulators should act now to head off serious risks, while others believe the threat will likely remain minor and the financial system should focus on more pressing technology challenges. A lack of data has stymied the discussion. In the absence of hard data, a close analysis of potential scenarios can help gauge the problem. In this paper, ten scenarios illustrate how criminals and other bad actors could abuse synthetic media technology to inflict financial harm on a broad swath of targets. Based on today’s synthetic media technology and the realities of financial crime, the scenarios explore whether and how synthetic media could alter the threat landscape.
Deepfakes are a variation of manipulated media. In essence, a successful deepfake requires a sample data set of an original, which is used to train a deep learning algorithm. The algorithm learns to alter the training data to the degree that another algorithm can no longer distinguish whether the presented result is altered training data or the original. Think of it as a police sketch artist creating a facial composite from eyewitness accounts: the more data and time the artist has to render a draft, the higher the likelihood of a successful sketch. In this paper, the term deepfake refers to a subset of synthetic media – videos, images and voice – created through artificial intelligence.
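The adversarial loop described above – one algorithm altering data until a second algorithm can no longer flag it – can be illustrated with a deliberately simplified toy sketch. Here the “discriminator” is just a mean-comparison test and the “generator” is a single tunable number; real deepfake systems use deep neural networks for both roles, and every name and number below is an illustrative assumption, not something from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the "original" sample data the forger wants to imitate
real = rng.normal(loc=4.0, scale=1.0, size=1000)

def looks_fake(sample: np.ndarray, reference: np.ndarray) -> bool:
    """Crude 'discriminator': flags a sample whose mean strays from the reference mean."""
    return abs(sample.mean() - reference.mean()) > 0.1

gen_mean = 0.0  # the 'generator' starts far from the real distribution
for step in range(200):
    fake = rng.normal(loc=gen_mean, scale=1.0, size=1000)
    if not looks_fake(fake, real):
        break  # the forgery is no longer distinguishable by this detector
    # nudge the generator toward whatever statistic gave it away
    gen_mean += 0.1 * np.sign(real.mean() - fake.mean())
```

With the fixed seed, `gen_mean` climbs from 0.0 toward the real mean (about 4.0) until the detector stops flagging the forgeries. A real adversarial setup updates both the generator and the discriminator by gradient descent instead of this hand-rolled nudge, which is exactly why deepfake quality improves as detection improves.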
The financial sector is particularly vulnerable in the know-your-customer space. It’s a unique entry point for malicious actors to submit manipulated identity verification or deploy deepfake technology to fool authentication mechanisms. While anti-fraud prevention tools are an industry-wide standard against impersonation and identity theft, the advent of cheaper, more readily available deepfake technology marks a turning point for the financial sector. Deepfakes may leverage a blend of false or hacked personally identifiable information (PII) to gain access to or open bank accounts, initiate financial transactions, or redistribute private equity assets. Bateman focuses on the two categories of synthetic media most relevant to the financial sector: (1) narrowcast synthetic media, one-off, tailored manipulated data deployed directly to the target via private channels, and (2) broadcast synthetic media, designed for mass audiences and deployed directly or indirectly via publicly available channels, e.g. social media. An example of the first category is a cybercrime that took place in 2019. The chief executive of a UK-based energy company received a phone call from what he believed was his boss, the CEO of the parent corporation in Germany. In reality, the voice was an impersonation created with artificial intelligence and publicly available voice recordings (speeches, transcripts, etc.). The caller directed the UK CEO to immediately pay a Hungarian supplier, and the fabricated instructions resulted in a fraudulent transfer of $234,000. This type of attack is also known as deepfake voice phishing (vishing). An example of the second category is commonly found in widespread pump-and-dump schemes on social media.
These could range from malicious actors creating false, incriminating deepfakes of key personnel at a listed company to artificially depress the stock price, to synthetic media that misrepresents product results to inflate the stock price and garner interest from potential investors. Building on the two categories of synthetic media, Bateman presents ten scenarios layered into four stages: (1) targeting individuals, e.g. identity theft or impersonation; (2) targeting companies, e.g. payment fraud or stock manipulation; (3) targeting financial markets, e.g. malicious flash crashes caused by state-sponsored hackers or cybercriminals backed by a foreign government; and (4) targeting central banks and financial regulators, e.g. regulatory astroturfing.
In conclusion, Bateman finds that, at this point in time, deepfakes aren’t potent enough to destabilize the global financial system in mature, healthy economies. They are more threatening, however, to individuals and businesses. To guard against malicious actors armed with deepfake technology, a number of resiliency measures can be implemented. Broadcast synthetic media can amplify and prolong existing crises or scandals; aside from building trust with key audiences, a potential remedy is the readiness to counter false narratives with evidence. To protect companies from threats that would erode trust in the financial sector, industry-wide sharing of information on cyber attacks is a viable option to mitigate coordinated criminal activity. Lastly, the technology landscape is improving at a rapid pace. A multi-stakeholder response that brings together leaders from the financial and technology sectors, experts on consumer behavior, and policymakers will help create more efficient regulations to combat deepfakes in the financial system.