Cyber Security and the Financial System

The financial sector is a highly regulated marketplace. Deepfakes, or artificially generated synthetic media, are commonly associated with political disinformation, but their implications for the financial system have received far less attention. The Carnegie Endowment for International Peace issued a scintillating working paper series titled “Cyber Security and the Financial System” covering a wide range of cutting-edge issues, from the European framework for Threat Intelligence-Based Ethical Red Teaming (TIBER) to assessing cyber resilience measures for financial organizations to global policies to combat manipulation of financial data. Jon Bateman’s contribution, titled “Deepfakes and Synthetic Media in the Financial System: Assessing Threat Scenarios,” takes a closer look at how deepfakes can impact the financial system.

tl;dr

Rapid advances in artificial intelligence (AI) are enabling novel forms of deception. AI algorithms can produce realistic “deepfake” videos, as well as authentic-looking fake photos and writing. Collectively called synthetic media, these tools have triggered widespread concern about their potential in spreading political disinformation. Yet the same technology can also facilitate financial harm. Recent months have seen the first publicly documented cases of deepfakes used for fraud and extortion. Today the financial threat from synthetic media is low, so the key policy question is how much this threat will grow over time. Leading industry experts diverge widely in their assessments. Some believe firms and regulators should act now to head off serious risks. Others believe the threat will likely remain minor and the financial system should focus on more pressing technology challenges. A lack of data has stymied the discussion. In the absence of hard data, a close analysis of potential scenarios can help to better gauge the problem. In this paper, ten scenarios illustrate how criminals and other bad actors could abuse synthetic media technology to inflict financial harm on a broad swath of targets. Based on today’s synthetic media technology and the realities of financial crime, the scenarios explore whether and how synthetic media could alter the threat landscape.

Make sure to read the full paper titled Deepfakes and Synthetic Media in the Financial System: Assessing Threat Scenarios by Jon Bateman at https://carnegieendowment.org/2020/07/08/deepfakes-and-synthetic-media-in-financial-system-assessing-threat-scenarios-pub-82237

Deepfakes are a variation of manipulated media. In essence, a successful deepfake requires a sample data set of an original, which is used to train a deep learning algorithm. The algorithm learns to alter the training data to the point that a second algorithm can no longer distinguish whether a presented result is altered training data or the original. Think of it as a police sketch artist creating a facial composite from eyewitness accounts: the more data and time the artist has to render a draft, the higher the likelihood of a successful mugshot sketch. In this paper, the term deepfake refers to a subset of synthetic media, including video, images, and voice created through artificial intelligence.
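
To make the generator-versus-discriminator dynamic concrete, here is a minimal, illustrative sketch of an adversarial training loop in PyTorch. The toy one-dimensional data, the tiny networks, and the hyperparameters are assumptions chosen for brevity; real deepfake pipelines operate on images or audio with far larger models, but the feedback loop is the same.

```python
# Minimal sketch of the adversarial training loop behind deepfakes.
# Toy 1-D data stands in for real images or audio; all sizes and
# hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

def real_data(n):   # samples of the "original" to be imitated
    return torch.randn(n, 1) * 0.5 + 2.0

def noise(n):       # random seed input for the generator
    return torch.randn(n, 8)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to separate original from generated data.
    real, fake = real_data(64), generator(noise(64)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to make the discriminator call its output real.
    g_loss = bce(discriminator(generator(noise(64))), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Once the discriminator can no longer tell the two distributions apart, the generator's output passes as original; scaled up to faces and voices, that is a deepfake.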

The financial sector is particularly vulnerable in the know-your-customer (KYC) space. It is a unique entry point for malicious actors to submit manipulated identity verification or deploy deepfake technology to fool authentication mechanisms. While anti-fraud prevention tools are an industry-wide standard to prevent impersonation and identity theft, the advent of cheaper, more readily available deepfake technology marks a turning point for the financial sector. Deepfakes may be used to leverage a blend of false or hacked personally identifiable information (PII) to gain access to or open bank accounts, initiate financial transactions, or redistribute private equity assets.

Bateman focuses on two categories of synthetic media that are most relevant for the financial sector: (1) narrowcast synthetic media, which encompasses one-off, tailored manipulated data deployed directly to the target via private channels, and (2) broadcast synthetic media, which is designed for mass audiences and deployed directly or indirectly via publicly available channels, e.g. social media.

An example of the first category is a cybercrime that took place in 2019. The chief executive officer of a UK-based energy company received a phone call from what he believed was his boss, the CEO of the parent corporation based in Germany. The voice on the call was an impersonation of the German CEO, created with artificial intelligence from publicly available voice recordings (speeches, transcripts, etc.). The voice directed the UK CEO to immediately initiate a financial transaction to pay a Hungarian supplier. This type of attack is also known as deepfake voice phishing (vishing). The fabricated directions resulted in a fraudulent transfer of $243,000.

An example of the second category is commonly found in pump-and-dump schemes on social media. These could range from malicious actors creating false, incriminating deepfakes of key personnel of a stock-listed company to artificially lower the stock price, to creating synthetic media that misrepresents product results to manipulate a higher stock price and garner more interest from potential investors. Building on the two categories of synthetic media, Bateman presents ten scenarios grouped into four target classes: (1) targeting individuals, e.g. identity theft or impersonation; (2) targeting companies, e.g. payment fraud or stock manipulation; (3) targeting financial markets, e.g. creating malicious flash crashes through state-sponsored hackers or cybercriminals backed by a foreign government; and (4) targeting central banks and financial regulators, e.g. regulatory astroturfing.

In conclusion, Bateman finds that, at this point in time, deepfakes are not potent enough to destabilize global financial systems in mature, healthy economies. They are more threatening, however, to individuals and businesses. To take precautions against malicious actors armed with deepfake technology, a number of resiliency measures can be implemented. Broadcast synthetic media is potent at amplifying and prolonging existing crises or scandals; aside from building trust with key audiences, a potential remedy is the readiness to counter false narratives with evidence. To protect companies from threats that would erode trust in the financial sector, industry-wide sharing of information on cyber attacks is a viable option to mitigate coordinated criminal activity. Lastly, the technology landscape is improving its integrity safeguards at a rapid pace. A multi-stakeholder response bringing together leaders from the financial sector and the technology sector, experts on consumer behavior, and policymakers will help create more efficient regulations to combat deepfakes in the financial system.

Manipulated Social Media And The Stock Market

Earlier this year, Elon Musk tweeted that Tesla’s stock price was too high. His Twitter account had a reach of 33.4 million followers (41 million as of this writing). The immediate impact was a midday drop of 9% in Tesla’s stock price compared to the previous day’s close, wiping an estimated $14 billion off the company’s valuation. The episode triggered concerns over a potential violation of his 2018 settlement with the Securities and Exchange Commission, reached after misleading tweets implying he had secured funding to take Tesla private. Presumably, this time the tweet was a response to California’s restrictive shelter-in-place lockdown measures to tackle the coronavirus pandemic. Maybe Elon Musk felt the weight of his responsibilities weigh twice as much during these challenging times. In any case, as an investor, his actions made me think about the power of social media in the hands of an influencer (i.e. a stock promoter). Moreover, it made me think about content policies tailored to protect economic value while safeguarding information integrity. An empirical study by a researcher at the Panthéon-Sorbonne University discusses the effects of illegal price manipulation by way of information operations on social media, specifically Twitter.

tl;dr

Social media can help investors gather and share information about stock markets. However, it also presents opportunities for fraudsters to spread false or misleading statements in the marketplace. Analyzing millions of messages sent on the social media platform Twitter about small capitalization firms, we find that an abnormally high number of messages on social media is associated with a large price increase on the event day and followed by a sharp price reversal over the next trading week. Examining users’ characteristics, and controlling for lagged abnormal returns, press releases, tweets sentiment and firms’ characteristics, we find that the price reversal pattern is stronger when the events are generated by the tweeting activity of stock promoters or by the tweeting activity of accounts dedicated to tracking pump-and-dump schemes. Overall, our findings are consistent with the patterns of a pump-and-dump scheme, where fraudsters/promoters use social media to temporarily inflate the price of small capitalization stocks.

Make sure to read the full paper titled Market Manipulation and Suspicious Stock Recommendations on Social Media by Thomas Renault at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3010850

Social media platforms are now an accepted corporate communication channel, closely monitored by investors and financial analysts. For an investor, social media offers swarm intelligence on trading a company’s stock, access to real-time information about the company, product updates, and other financially relevant information. Both large- and small-cap companies usually maintain a presence on social media, but small-cap companies with low liquidity in particular are vulnerable to stock price manipulation by way of information operations. According to the study, an information-based manipulation involves rumors, misleading or false press releases, stock analyses, price targets, and the like, disseminated in a short, time-sensitive period. Nearly 50% of this disinformation is spread by an influencer. In investment terminology, this is called a pump-and-dump scheme:

“Pump-and-dump schemes involve touting a company’s stock through false or misleading statements in the marketplace in order to artificially inflate (pump) the price of a stock. Once fraudsters stop hyping the stock and sell their shares (dump), the price typically falls.”

The empirical study collected tweets containing the cashtag ticker symbols of more than 5,000 small-cap companies. Over an eleven-month period, 248,748 distinct Twitter users posted 7,196,307 financially relevant tweets. The study adjusted the data for overoptimistic noise traders and financially relevant news reporting. It found that a spike in the volume of tweets about a company’s stock correlates with a spike in trading of that stock, from two days before peak social media activity up to five days after it. Some content carried positive financial signals advocating to buy the stock; other content spread disinformation about the company’s performance to a large, unsophisticated Twitter audience through influencers acting in concert with a network of inauthentic accounts and bots. This was then followed by a price reversal over the ensuing trading days. In the aftermath, the actors behind the scheme went into hibernation or ceased social media activity altogether.
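
A simplified version of this event detection can be sketched with pandas. The column names, the two-sigma volume threshold, and the five-day reversal window below are illustrative assumptions, not the paper’s exact specification.

```python
# Sketch: flag days with abnormally high tweet volume for one ticker and
# compare the event-day return with the return over the next five days.
# The columns ("date", "tweets", "return") are assumed inputs.
import pandas as pd

def flag_events(df: pd.DataFrame, window: int = 60, sigmas: float = 2.0) -> pd.DataFrame:
    df = df.sort_values("date").reset_index(drop=True)
    baseline = df["tweets"].rolling(window, min_periods=window)
    # Abnormal volume: tweets exceed the trailing mean by `sigmas` std devs
    # (shifted by one day so the event itself is not part of its baseline).
    df["abnormal"] = df["tweets"] > baseline.mean().shift(1) + sigmas * baseline.std().shift(1)
    # Cumulative return over the five trading days after each day.
    df["next5_return"] = sum(df["return"].shift(-k) for k in range(1, 6))
    return df.loc[df["abnormal"], ["date", "return", "next5_return"]]
```

On data exhibiting the pump-and-dump footprint described above, the flagged rows would show, on average, a positive event-day return followed by a negative five-day return, the reversal.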

Risk And Opportunity For Social Media Platforms

Information operations to manipulate stock prices are quite common on social media. Because they are hard to detect, only a few are investigated and successfully prosecuted. Consumers who fall victim to stock disinformation tend to exit the stock market altogether. Moreover, a consumer might reduce their footprint on social media after experiencing real-world financial harm; depending on the severity of the loss incurred, this might even lead to litigation against social media platforms. The tools leveraged by bad actors undermine the integrity efforts of social media platforms, which, in some cases or in conjunction with class-action litigation, can lead to greater scrutiny by financial watchdogs pushing for tighter regulations.

To tackle these risks, social media platforms must continue to expand enforcement against coordinated inauthentic behavior to eliminate bot networks used to spread stock disinformation. Developing an account verification system dedicated to financial professionals, analysts, and influencers would support and ease enforcement. Social media platforms should also ease the onboarding of publicly traded companies that want to maintain a presence on social media; this decreases the effects of collateral price reversals. In order to mitigate stock disinformation, social media platforms must develop content policies tailored to balance freedom of expression, including price speculation, against the inherent risk of market-making comments. The latter will hinge on reach and engagement metrics, but also on detailed definitions of financial advice and the time and location of the content. Here, a close working relationship with watchdogs will improve operations. Added friction can help as well: for example, an interstitial outlining the regulatory requirements before posting, a delay before a post goes live, or labels informing the consumer of the financial risks associated with acting on the information in the post. There are obviously more measures that come to mind; this only serves as the start of a conversation.
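
As a hypothetical illustration of the bot-network enforcement mentioned above, the sketch below clusters near-identical cashtag posts and flags clusters that span many accounts within a short burst. The thresholds and the post format are assumptions, not any platform’s actual detection logic.

```python
# Toy heuristic for surfacing possibly coordinated cashtag promotion:
# bucket near-identical messages, then flag buckets posted by many
# distinct accounts inside a short time window. Thresholds are illustrative.
from collections import defaultdict
from datetime import timedelta

def flag_clusters(posts, window_minutes=10, min_accounts=5):
    """posts: iterable of (timestamp, account_id, text) tuples."""
    buckets = defaultdict(list)
    for ts, account, text in posts:
        # Normalize case and whitespace so trivial edits share a bucket.
        key = " ".join(text.lower().split())
        buckets[key].append((ts, account))
    flagged = []
    for key, hits in buckets.items():
        hits.sort()  # chronological order
        accounts = {account for _, account in hits}
        burst = hits[-1][0] - hits[0][0] <= timedelta(minutes=window_minutes)
        if burst and len(accounts) >= min_accounts:
            flagged.append({"text": key, "accounts": sorted(accounts)})
    return flagged
```

In practice such a signal would feed a human review queue rather than trigger automatic takedowns, but it shows that the coordinated posting pattern described in the study is mechanically detectable.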

So, was Elon Musk’s tweet “Tesla stock price too high imo” an information-based market manipulation, a market-making comment, or just an exercise of his free speech?