Cyber Security and the Financial System

The financial sector is a highly regulated marketplace. Deepfakes, or artificially generated synthetic media, are commonly associated with political disinformation, but their potential to harm the financial system has received far less attention. The Carnegie Endowment for International Peace issued a scintillating working paper series titled “Cyber Security and the Financial System” covering a wide range of cutting-edge issues, from the European framework for Threat Intelligence-Based Ethical Red Teaming (TIBER) to assessing cyber resilience measures for financial organizations to global policies to combat manipulation of financial data. Jon Bateman’s contribution, titled “Deepfakes and Synthetic Media in the Financial System: Assessing Threat Scenarios,” takes a closer look at how deepfakes can impact the financial system.

tl;dr

Rapid advances in artificial intelligence (AI) are enabling novel forms of deception. AI algorithms can produce realistic “deepfake” videos, as well as authentic-looking fake photos and writing. Collectively called synthetic media, these tools have triggered widespread concern about their potential in spreading political disinformation. Yet the same technology can also facilitate financial harm. Recent months have seen the first publicly documented cases of deepfakes used for fraud and extortion. Today the financial threat from synthetic media is low, so the key policy question is how much this threat will grow over time. Leading industry experts diverge widely in their assessments. Some believe firms and regulators should act now to head off serious risks. Others believe the threat will likely remain minor and the financial system should focus on more pressing technology challenges. A lack of data has stymied the discussion. In the absence of hard data, a close analysis of potential scenarios can help to better gauge the problem. In this paper, ten scenarios illustrate how criminals and other bad actors could abuse synthetic media technology to inflict financial harm on a broad swath of targets. Based on today’s synthetic media technology and the realities of financial crime, the scenarios explore whether and how synthetic media could alter the threat landscape.

Make sure to read the full paper titled Deepfakes and Synthetic Media in the Financial System: Assessing Threat Scenarios by Jon Bateman at https://carnegieendowment.org/2020/07/08/deepfakes-and-synthetic-media-in-financial-system-assessing-threat-scenarios-pub-82237

(Source: Daily Swig)

Deepfakes are a variation of manipulated media. In essence, a successful deepfake requires a sample data set of an original, which is used to train a deep learning algorithm. The algorithm learns to alter the training data to such a degree that a second algorithm can no longer distinguish whether the presented result is altered training data or the original. Think of it as a police sketch artist who creates a facial composite based on eyewitness accounts: the more data and time the artist has to render a draft, the higher the likelihood of a convincing sketch. In this paper, the term deepfake refers to a subset of synthetic media, including video, images, and voice, created through artificial intelligence.
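
To make that adversarial training loop concrete, below is a minimal, illustrative PyTorch sketch of a generative adversarial network (GAN), the class of models behind most deepfakes. It substitutes a toy one-dimensional distribution for face or voice data; the layer sizes, learning rates, and step count are assumptions chosen for brevity, not parameters from the paper.

```python
# A minimal GAN training loop, assuming PyTorch is installed.
# The "original" data is a toy 1-D Gaussian standing in for face or
# voice samples; sizes and learning rates are illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: turns random noise into candidate fakes (the "sketch artist").
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: tries to tell real samples from generated ones.
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid()
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0  # samples of the "original"
    fake = generator(torch.randn(64, 8))   # the forger's attempt

    # 1) Train the discriminator to separate real from fake.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to make the discriminator call fakes real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# Once trained, discriminator(fake) hovers near 0.5: it can no longer
# tell altered data from the original, which is the deepfake property.
```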

The financial sector is particularly vulnerable in the know-your-customer (KYC) space. It is a prime entry point for malicious actors to submit manipulated identity documents or deploy deepfake technology to fool authentication mechanisms. While anti-fraud tools are an industry-wide standard to prevent impersonation and identity theft, the advent of cheaper, more readily available deepfake technology marks a turning point for the financial sector. Deepfakes may leverage a blend of false or hacked personally identifiable information (PII) to gain access to or open bank accounts, initiate financial transactions, or redistribute private equity assets.

Bateman focuses on the two categories of synthetic media most relevant to the financial sector: (1) narrowcast synthetic media, one-off, tailored manipulated data deployed directly to the target via private channels, and (2) broadcast synthetic media, designed for mass audiences and deployed directly or indirectly via publicly available channels, e.g. social media.

An example of the first category is a cybercrime that took place in 2019. The chief executive of a UK-based energy company received a phone call from what he believed was his boss, the CEO of the parent corporation in Germany. The voice on the call was in fact an impersonation created with artificial intelligence and publicly available voice recordings (speeches, transcripts, etc.). It directed the UK CEO to immediately pay a Hungarian supplier, and the fabricated instructions resulted in a fraudulent transfer of roughly $243,000. This type of attack is also known as deepfake voice phishing (vishing).

An example of the second category is commonly found in pump-and-dump schemes on social media. These could range from malicious actors creating false, incriminating deepfakes of key personnel of a listed company to artificially depress its stock price, to synthetic media that misrepresents product results to inflate the stock price and garner more interest from potential investors.

Building on these two categories, Bateman presents ten scenarios grouped into four classes of targets: (1) targeting individuals, e.g. identity theft or impersonation; (2) targeting companies, e.g. payment fraud or stock manipulation; (3) targeting financial markets, e.g. engineering malicious flash crashes through state-sponsored hackers or cybercriminals backed by a foreign government; and (4) targeting central banks and financial regulators, e.g. regulatory astroturfing.
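
The paper itself does not prescribe specific controls, but a standard mitigation against vishing-style payment fraud is out-of-band verification: an instruction received over one channel is not executed until it is confirmed over an independent, pre-registered channel. The sketch below illustrates that rule; the channel names, threshold, and confirm_via() hook are hypothetical.

```python
# Hypothetical out-of-band verification rule for payment instructions.
# Channel names, the threshold, and confirm_via() are assumptions for
# illustration; they are not drawn from Bateman's paper.
from dataclasses import dataclass

CALLBACK_THRESHOLD = 10_000  # assumed amount above which we always verify

@dataclass
class PaymentRequest:
    requester: str
    beneficiary: str
    amount: float
    channel: str  # e.g. "phone", "email", "erp_system"

def confirm_via(channel: str, request: PaymentRequest) -> bool:
    """Placeholder: confirm over an independent, pre-registered channel,
    e.g. a callback to a number on file, not the number that called in."""
    print(f"Verifying {request.amount:,.0f} to {request.beneficiary} via {channel}")
    return True  # in practice, block until a human confirms

def execute_payment(request: PaymentRequest) -> bool:
    # A cloned voice can pass a phone call, but not a second channel.
    if request.channel in {"phone", "email"} or request.amount > CALLBACK_THRESHOLD:
        if not confirm_via("registered_callback", request):
            return False
    print(f"Payment of {request.amount:,.0f} to {request.beneficiary} executed.")
    return True

# The 2019 vishing case above would have been stopped at the callback step.
execute_payment(PaymentRequest("parent-company CEO", "Hungarian supplier",
                               243_000, "phone"))
```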

In conclusion, Bateman finds that, at this point in time, deepfakes are not potent enough to destabilize the financial systems of mature, healthy economies. They are more threatening, however, to individuals and businesses. To take precautions against malicious actors armed with deepfake technology, a number of resiliency measures can be implemented. Broadcast synthetic media is most potent at amplifying and prolonging existing crises or scandals; aside from building trust with key audiences, a potential remedy is the readiness to counter false narratives with evidence. To protect companies from threats that would erode trust in the financial sector, industry-wide sharing of information on cyber attacks is a viable way to mitigate coordinated criminal activity. Lastly, media-integrity technology is improving at a rapid pace. A multi-stakeholder response bringing together leaders from the financial sector, the technology sector, and experts on consumer behavior with policymakers will help create more effective regulations to combat deepfakes in the financial system.

Microtargeted Deepfakes in Politics

The 2019 Worldwide Threat Assessment warned of deepfakes deployed to manipulate public opinion. And while the 2020 U.S. presidential election did not see an onslaught of deepfakes undermining voter confidence, experts agree that the threat remains tangible. A recent study by researchers at the University of Amsterdam investigated the impact of a political deepfake designed to discredit a politician and microtargeted to a specific segment of the electorate.

tl;dr

Deepfakes are perceived as a powerful form of disinformation. Although many studies have focused on detecting deepfakes, few have measured their effects on political attitudes, and none have studied microtargeting techniques as an amplifier. We argue that microtargeting techniques can amplify the effects of deepfakes, by enabling malicious political actors to tailor deepfakes to susceptibilities of the receiver. In this study, we have constructed a political deepfake (video and audio), and study its effects on political attitudes in an online experiment. We find that attitudes toward the depicted politician are significantly lower after seeing the deepfake, but the attitudes toward the politician’s party remain similar to the control condition. When we zoom in on the microtargeted group, we see that both the attitudes toward the politician and the attitudes toward his party score significantly lower than the control condition, suggesting that microtargeting techniques can indeed amplify the effects of a deepfake, but for a much smaller subgroup than expected.

Make sure to read the full paper titled Do (Microtargeted) Deepfakes Have Real Effects on Political Attitudes? by Tom Dobber, Nadia Metoui, Damian Trilling, Natali Helberger, and Claes de Vreese at https://doi.org/10.1177/1940161220944364

Credits: UC Berkeley/Stephen McNally

Deepfakes are a subcategory of modern information warfare. The technology leverages machine learning to generate audio-visual content that imitates original content but differs in both intent and message. Its highly deceptive appearance makes it a potent weapon for influencing public opinion, undermining strategic policies, or disrupting civic engagement. An infamous example depicts former president Obama seemingly hurling expletives at president Trump. Online microtargeting is a form of social media marketing that disseminates advertisements tailored to the specific interests of an identifiable, curated audience. In a political context, microtargeting spreads a campaign message to a specific audience, identified and grouped by shared characteristics, to convince that audience to vote for or against a candidate; a minimal sketch of this selection logic follows the risk list below. There are a number of civic risks associated with deploying deepfakes:

  • Deepfake content is hard to tell apart from original, authentic content. While deepfake videos may signal some nefarious intent to a cautious audience, the potential impact of deepfake audio or deepfake text on voter behavior has not been researched as of this writing
  • Political actors may leverage deepfakes to discredit opponents, undermine news reporting, or equip trailing third-party candidates with enough influence to erode voter confidence
  • Used in a political campaign, deepfakes may be strategically deployed to incite a political scandal or to reframe current affairs and regain control of an election narrative
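
To illustrate the targeting mechanics the study builds on, here is a hypothetical sketch of how a microtargeted audience might be selected from profile attributes, simplified from the study's criteria (religious, conservative voters or past voters of the politician's party). The field names and selection rule are illustrative assumptions, not the researchers' actual pipeline.

```python
# Hypothetical audience-selection rule for a microtargeted deepfake,
# simplified from the study's criteria (religious, conservative voters
# or past voters of the politician's party). Field names are invented.
from dataclasses import dataclass

@dataclass
class VoterProfile:
    user_id: int
    religion: str          # self-identified, e.g. "christian"
    ideology: str          # e.g. "conservative", "progressive"
    past_votes: list[str]  # parties voted for in previous elections

def is_target(p: VoterProfile, party: str = "christian_democrats") -> bool:
    """Select the segment judged most susceptible to the deepfake."""
    religious_conservative = (
        p.religion == "christian" and p.ideology == "conservative"
    )
    return religious_conservative or party in p.past_votes

profiles = [
    VoterProfile(1, "christian", "conservative", ["christian_democrats"]),
    VoterProfile(2, "none", "progressive", ["greens"]),
    VoterProfile(3, "christian", "moderate", ["christian_democrats"]),
]
audience = [p.user_id for p in profiles if is_target(p)]
print(audience)  # -> [1, 3]: only this segment ever sees the deepfake
```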

The study created a deepfake video depicting an interview with a prominent center-right politician from a large Christian democratic party. The manipulated portion of the otherwise original, authentic content shows the politician seemingly making a joke about the crucifixion of Jesus Christ:

“But, as Christ would say: don’t crucify me for it.”

This content was shown to a randomly selected group of Christian voters who had identified religious, conservative beliefs or had voted for this politician in past elections. The researchers found that a deepfake spread without microtargeting affected attitudes toward the politician but not necessarily toward his political party. Deepfakes tailored to a specific audience using political microtargeting techniques, however, amplified the discrediting message, affecting attitudes toward both the politician and the party. Interestingly, staunch supporters of the politician might be shielded from a lasting attitude change by their own motivated reasoning (bias) rooted in the politician’s ideology. For this group, the researchers argue, the discomfort or deviation from previous political ideology conveyed in a deepfake must reach a certain tipping point before staunch supporters respond in line with this study’s results, though the study’s limitations also leave room for unforeseen outcomes.

A roadmap to counter microtargeted deepfakes should include legislators passing regulations to limit online political campaign spending, which would force campaigns to focus their limited financial resources and weed out corporate interests. Second, new regulations should focus on the protection of personally identifiable data. A microtargeting dataset includes location data, personal preferences, website interactions, and more. While this data is valuable in a commercial context, it should be excluded from civic processes such as elections. Academics will have an opportunity to uncover insights on algorithmic bias and improve on the prevailing machine learning approach of training generative adversarial networks on pre-conditioned datasets. Moreover, future research has an opportunity to further investigate the impact of manipulated media on voter education, confidence, and behavior within and outside of political elections.

Here’s one of my favorite deepfake videos: president Trump explaining money laundering to his son-in-law Jared Kushner in a deepfake(d) scene from “Breaking Bad”

Political Warfare Is A Threat To Democracy. And Free Speech Enables It

“I disapprove of what you say, but I will defend to the death your right to say it” is Evelyn Beatrice Hall’s interpretation of Voltaire’s principles. Freedom of expression is often cited as the last frontier before falling into authoritarian rule. But is free speech, our greatest strength, really also our greatest weakness? Hostile authoritarian actors exploit these individual liberties by engaging in layered political warfare to undermine trust in our democratic systems. These often clandestine operations pose an existential threat to our democracy.

tl;dr

The digital age has permanently changed the way states conduct political warfare—necessitating a rebalancing of security priorities in democracies. The utilisation of cyberspace by state and non-state actors to subvert democratic elections, encourage the proliferation of violence and challenge the sovereignty and values of democratic states is having a highly destabilising effect. Successful political warfare campaigns also cause voters to question the results of democratic elections and whether special interests or foreign powers have been the decisive factor in a given outcome. This is highly damaging for the political legitimacy of democracies, which depend upon voters being able to trust in electoral processes and outcomes free from malign influence—perceived or otherwise. The values of individual freedom and political expression practised within democratic states challenge their ability to respond to political warfare. The continued failure of governments to understand this has undermined their ability to combat this emerging threat. The challenges that this new digitally enabled political warfare poses to democracies are set to rise with developments in machine learning and the emergence of digital tools such as ‘deep fakes’.

Make sure to read the full paper titled Political warfare in the digital age: cyber subversion, information operations and ‘deep fakes’ by Thomas Paterson and Lauren Hanley at https://www.tandfonline.com/doi/abs/10.1080/10357718.2020.1734772

MC2 Joseph Millar | Credit: U.S. Navy

This paper’s central theme is the intersection of democratic integrity and political subversion operations. The authors describe an increase in cyber-enabled espionage and political warfare driven by the global spread of the internet, and argue that it has led to an imbalance between authoritarian and democratic state actors. Their argument rests on the notion that individual liberties such as freedom of expression put democratic states at a disadvantage: authoritarian states are observed to choose political warfare and subversion operations more often, while democracies confine themselves to breaching cyber security and conducting cyber espionage. Cyber espionage is defined as

“the use of computer networks to gain illicit access to confidential information, typically that held by a government or other organization”

and is not a new concept. I disagree with the premise of illicit access, because cyberspace specifically enables the free flow of information beyond any local regulation. Illicit is either redundant, since espionage does not necessarily require breaking laws, rules, or customs, or it is duplicative of confidential information, which I interpret as synonymous with classified information, though one might argue about the difference. From a legal perspective, the information does not need to be obtained through illicit access.

With regard to the broader term, I found the definition of political warfare as

“diverse operations to influence, persuade, and coerce nation states, organizations, and individuals to operate in accord with one’s strategic interests without employing kinetic force” 

most appropriate. It demonstrates the depth of political warfare, which encompasses influence and subversion operations outside of physical activity. Subversion operations are defined as 

“a subcategory of political warfare that aims to undermine institutional as well as individual legitimacy and authority”

I disagree with this definition, for it fails to emphasize the difference between political warfare and subversion: both undermine legitimacy and authority. A subversion operation, however, aims specifically to erode and deconstruct a political mandate; it is the logical next step once political warfare has influenced a populace, in order to achieve political power. The authors see the act of subversion culminating in a loss of trust in democratic principles. It leads to voter suppression, reduced voter participation, and decreased, asymmetrical review of electoral laws, but more importantly it challenges the democratic values of a state’s citizens. It is an existential threat to a democracy. It favors authoritarian states detached from the checks and balances usually present in democratic systems. These actors are not limited by law, civic popularity, or reputational capital. Ironically, this bestows a certain freedom upon them to deploy political warfare operations. Democracies, on the other hand, uphold individual liberties such as freedom of expression, freedom of the press, freedom of assembly, and equal treatment under law and due process. As demonstrated during the 2016 U.S. presidential election, a democracy generally struggles to distinguish political warfare initiated by a hostile foreign state from segments of its own population pursuing their strategic objectives by exercising these exact individual freedoms. The Mueller Report, for example,

“stated that the Internet Research Agency (IRA), which had clear links to the Russian Government, used social media accounts and interest groups to sow discord in the US political system through what it termed ‘information warfare’ […] The IRA’s operation included the purchase of political advertisements on social media in the names of US persons and entities, as well as the staging of political rallies inside the United States.”

And it doesn’t stop in America. Russia is deploying influence operations in volatile regions on the African continent, and China has a history of attempting to undermine democratic efforts in Africa. Both states aim to chip away power from former colonial powers such as France, or at least to suppress efforts to democratize regions of Africa. China is also deeply engaged in large-scale political warfare in Southeast Asia, pursuing regional dominance as well as territorial expansion, as observed in the South China Sea. New Zealand and Australia have recorded numerous incidents of attempted Chinese influence operations. Australia faced a real-world political crisis when Labor Senator Sam Dastyari was found to be connected to political donor Huang Xiangmo, who has ties to the Chinese Communist Party, giving China a direct route to influence Australian policy decisions.

The paper concludes with an overview of future challenges posed by political warfare. With more and more computing power readily available, the development of new cyber tools and tactics for political warfare operations is only going to accelerate. Authoritarian states are likely to expand their disinformation playbooks by tapping into fears fueled by conspiracy theories. Advances in machine learning and artificial intelligence will further improve inauthentic behavior online; for example, partisan political bots will become more human-like and harder to discern from real users. Deepfake technology will draw on ever larger sample datasets from the social graph of virtually every person, making it increasingly possible to impersonate individuals to gain access or achieve strategic objectives. Altogether, political warfare poses a greater challenge than cyber-enabled espionage, particularly for democracies. Democracies need to understand this asymmetrical relationship with authoritarian actors and dedicate resources to effective countermeasures against political warfare, without undoing civil liberties in the process.