Militarizing Influence

Our information environment increasingly depends on an inescapable, largely unregulated cyberspace. Because cyberspace transcends national and geographical boundaries, it brings unique challenges, ranging from information accuracy, integrity, and relevancy to the weaponization of information to influence a target audience in pursuit of a diplomatic or economic goal.

tl;dr

This paper proposes the development and inclusion of Information Influence Operations (IIOs) in Cyberspace Operations. IIOs encompass the offensive and defensive use of cyberspace to influence a targeted population. This capability will enable the evolution of strategic messaging in cyberspace and allow response to near-peer efforts in information warfare.

Make sure to read the full paper titled Information Influence Operations: The Future of Information Dominance by Captain David Morin at https://cyberdefensereview.army.mil/CDR-Content/Articles/Article-View/Article/2537080/information-influence-operations-the-future-of-information-dominance/

(Source: DoD/Josef Cole)

The United States Cyber Command (USCYBERCOM) unifies the direction of cyber operations within the Department of Defense. The paper proposes incorporating Information Influence Operations (IIOs) into its capability set. This capability would facilitate the exertion of soft power in pursuit of US national interests. Moreover, IIOs would reduce the need for large-scale operations or the expenditure of critical offensive cyber capabilities.

A notable omission in the paper is a clear definition of IIOs. The ‘Introduction’ suggests that “the unrealized value of cyberspace, and what makes it so dangerous, is it allows direct access to the individual and to the public at large. This access, when used correctly, provides actors in cyberspace the ability to influence public opinion and shape the narrative of ongoing operations.” This, however, conflates cyberspace with the general public media landscape and implies an operator could simply hack Twitter accounts and send out tweets carrying a favorable narrative. Applying the lessons learned from its efforts to counter foreign influence operations, Facebook regards all “coordinated efforts to manipulate or corrupt public debate for a strategic goal” as influence operations. Joint Publication 3-13 defines information operations as “the integrated use of electronic warfare, computer network operations, psychological operations (PSYOP), military deception, and operations security to influence, disrupt, or corrupt adversarial decision making while protecting our own.” A suitable definition combining all three concepts would describe IIOs as “a capability to shape and direct public opinion in order to influence, disrupt, or corrupt adversarial decision making by leveraging software and hardware technology supported by psychological weapons and tactics in the pursuit of a strategic national interest.”

The author could have expanded on the principles of the US combatant command structure and its basic chain of command. This would have helped the reader understand the current discrepancies in ownership of IIOs. As it stands, US combatant commands are organized by geographic focus or functional capability; USCYBERCOM falls into the latter category. A functional command unifies different military branches to achieve its mission, and it remains unclear which branch, if any, currently owns IIOs. Stepping further out of the frame, the author falls short of delivering a convincing rationale for when and where IIOs should be deployed and under whose authority. Cyberspace is predominantly civilian space, created and maintained by privately held servers across the world. Would USCYBERCOM install a permanent Information Influence Operations Center to execute IIOs spanning months and years? Would such action require presidential or congressional approval? Would approved missions stop at servers operated on US soil, or exclude US citizens from manipulation? Would USCYBERCOM release a transparency report detailing the measures taken against foreign and domestic threats and under whose authority? These and other important questions need to be considered when thinking about this consolidation of government power.

But not all is dark and gloomy. The author does support his proposition with further insights. In revisiting Stuxnet, NotPetya, and the Russian effort to divide the US electorate during the 2016 presidential election, he builds a foundation for the argument for a centralized command of IIOs. Two of these events were targeted cyber attacks on critical infrastructure, one allegedly driven by the US and Israel; the third was a carefully curated, multi-year effort to exploit vulnerabilities in the US democratic process. All three demonstrate the power that can be wielded through cyberspace operations, but where I disagree with the author is on the comparability of these unique events and on the implied causal link between cyberspace operations and information influence. Combining cyber attacks that corrupt critical infrastructure with a targeted narrative to redirect the public’s attention is a serious threat to US national security. However, identifying the operator and the motive behind such an attack may reveal domestic, private actors with a merely criminal motive, if attribution is even possible. Take the coordinated social engineering attack on Twitter ahead of the 2020 US presidential election: government accounts from Joe Biden to Barack Obama, along with those of notable public figures such as Elon Musk and Jeff Bezos, were hacked, hijacked, and abused to distribute a bitcoin scam. Should USCYBERCOM have stepped in and taken network control from Twitter, a private business, in order to mitigate and counter the attack?

In the section ‘Influencers’ the author raises valid concerns when he states that “influencers are capable of wielding influence over millions and have used this influence for a multitude of purposes from philanthropy and advertising to political ends.” Online reach is comparable to the circulation of a print newspaper, with one key difference being longevity – the internet never forgets. The unchecked influence of influencers is something our society needs to review and decide upon. Perhaps private businesses will recognize these powers and increase checks and balances for this specific type of user, or automatically guardrail reach to create equity among users.

In the section ‘Operationalizing IIOs’ the author states: “There is little brand loyalty in the online world. Consumers will go elsewhere to find what they need if their preference is slow or unavailable. Influencing and controlling that “someplace else” yields the opportunity to wield influence.” In essence, the author suggests taking advantage of users’ impatience by increasing the time it takes to load a website. Once this latency is in place, an operator may incentivize users to shift their attention to an alternative information source, for example through well-targeted advertising campaigns. As an example, the author offers the case of Amazon losing over $72 million due to a 63-minute outage on Prime Day 2018.
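To make the mechanics concrete, here is a minimal toy model of the idea: as induced latency grows, a fraction of the audience abandons the slowed source and becomes available for redirection. The per-second abandonment rate is a hypothetical assumption of mine for illustration, not a figure from Morin's paper.

```python
# Toy model: how induced latency could shift user attention to an
# alternative information source. The 7%-per-second loss rate is an
# illustrative assumption, not an empirical figure.

def abandonment_rate(latency_seconds: float, per_second_loss: float = 0.07) -> float:
    """Fraction of users expected to leave, assuming a fixed per-second
    loss compounded over the added latency."""
    retained = (1.0 - per_second_loss) ** latency_seconds
    return 1.0 - retained

def expected_redirected_users(audience: int, latency_seconds: float) -> int:
    """Users an operator could plausibly redirect to an alternative
    source at a given induced latency."""
    return round(audience * abandonment_rate(latency_seconds))
```

Under this sketch, three seconds of added latency on a million-user audience already frees up a six-figure pool of attention, which is the lever the author's "someplace else" argument rests on.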

There is research to support increased impatience during e-commerce transactions. However, there is an equal amount of research on brand loyalty, which across markets shows retention rates of about 75% and higher once a customer relationship has been successfully established. An Amazon Prime user, who pays for the privilege of Prime, is unlikely to switch a book order to Barnes & Noble simply because there are a few milliseconds of delay when placing the order; it takes a contrast in price and shipping time to break an established brand loyalty with Amazon. Furthermore, in the author’s example the IIO is directed at an e-commerce transaction. Even in the hypothetical foreign-policy scenario of introducing latency on Alibaba to redirect users to Amazon, decrease economic output or revenue, or serve other feasible US objectives, the author doesn’t really explain how this could favorably influence future behavior.

IIOs offer tremendous potential to support diplomacy while strengthening our national security. Allocating the responsibility to exert and drive information influence to a military institution, however, raises constitutional concerns. It would likely undermine the trust of our allies and chill diplomatic relations with non-allied nations. From a military perspective, an effort to centralize capabilities can reduce the overall cost of cyberspace operations and increase transparency among military stakeholders. On the other hand, all centralized command structures are vulnerable to a single point of failure, which can be devastating when USCYBERCOM faces a sophisticated, superior adversary. In addition, the extended chain of command that comes with centralizing IIOs might slow the response to attacks in cyberspace or to coordinated foreign influence operations by an adversary.

Left Of Launch

The Perfect Weapon is an intriguing account of history’s most cunning cyberwarfare operations. I learned about the incremental evolution of cyberspace as the fifth domain of war and how policymakers, military leaders and the private technology sector continue to adapt to this new threat landscape.  

Much has been written about influence operations and cyber criminals, but few accounts present so clearly the link between national security, cyberspace, and foreign policy. The stories told in The Perfect Weapon touch upon the Russian interference in the 2016 presidential elections, the 2015 hack of the Ukrainian power grid, the 2014 Sony hack, the 2013 revelations by Edward Snowden, and many other notable breaches of cybersecurity. These aren’t news anymore, but they help the reader understand America’s 21st-century vulnerabilities.

Chapter 8, titled “The Fumble”, left a particular mark on me. In it, Sanger details the handling of the Russian hackers who infiltrated the computer and server networks of the Democratic National Committee. The sheer lethargy officials demonstrated over months on end, including Obama’s failure to openly address the ongoing cyber influence operations perpetrated by the Russians ahead of the elections, was nothing particularly new, yet I still felt outraged by what now seems obvious. The chapter illustrates governance shortcomings that we as a society need to overcome in order to address cyberattacks and build better cyber defense mechanisms.

Left of Launch is a strategy that leverages cyberwarfare or other infrastructure sabotage to prevent ballistic missiles from being launched.

But the most valuable insights for me came from the book’s cross-cutting between the cyberspace and cybersecurity domain and the public policy domain. It showed me how much work is left to be done to educate our elected officials, our leaders, and ourselves about a growing threat landscape in cyberspace. While technology regulation is a partisan issue, only bipartisan solutions will yield impactful results.

David E. Sanger is a great journalist, bestselling author and an excellent writer. His storytelling is concise, easy to read and accessible for a wide audience. Throughout the book, I never felt that Sanger allowed himself to get caught up in the politics of it but rather maintained a refreshing neutrality. His outlook is simple: we need to redefine our sense of national security and come up with an international solution for cyberspace. We need to think broadly about the consequences of cyber-enabled espionage and cyberattacks against critical infrastructures. And we need to act now.

Cyber Security and the Financial System

The financial sector is a highly regulated marketplace. Deepfakes, or artificially generated synthetic media, are usually associated with political disinformation and have not yet been strongly linked to the financial system. The Carnegie Endowment for International Peace issued a scintillating working paper series titled “Cyber Security and the Financial System” covering a wide range of cutting-edge issues, from the European framework for Threat Intelligence-Based Ethical Red Teaming (TIBER) to assessing cyber resilience measures for financial organizations to global policies to combat manipulation of financial data. Jon Bateman’s contribution, titled “Deepfakes and Synthetic Media in the Financial System: Assessing Threat Scenarios”, takes a closer look at how deepfakes can impact the financial system.

tl;dr

Rapid advances in artificial intelligence (AI) are enabling novel forms of deception. AI algorithms can produce realistic “deepfake” videos, as well as authentic-looking fake photos and writing. Collectively called synthetic media, these tools have triggered widespread concern about their potential in spreading political disinformation. Yet the same technology can also facilitate financial harm. Recent months have seen the first publicly documented cases of deepfakes used for fraud and extortion. Today the financial threat from synthetic media is low, so the key policy question is how much this threat will grow over time. Leading industry experts diverge widely in their assessments. Some believe firms and regulators should act now to head off serious risks. Others believe the threat will likely remain minor and the financial system should focus on more pressing technology challenges. A lack of data has stymied the discussion. In the absence of hard data, a close analysis of potential scenarios can help to better gauge the problem. In this paper, ten scenarios illustrate how criminals and other bad actors could abuse synthetic media technology to inflict financial harm on a broad swath of targets. Based on today’s synthetic media technology and the realities of financial crime, the scenarios explore whether and how synthetic media could alter the threat landscape.

Make sure to read the full paper titled Deepfakes and Synthetic Media in the Financial System: Assessing Threat Scenarios by Jon Bateman at https://carnegieendowment.org/2020/07/08/deepfakes-and-synthetic-media-in-financial-system-assessing-threat-scenarios-pub-82237

(Source: Daily Swig)

Deepfakes are a variation of manipulated media. In essence, a successful deepfake requires a sample data set of an original, which is used to train a deep learning algorithm. The algorithm learns to alter the training data to the degree that another algorithm is unable to distinguish whether the presented result is altered training data or the original. Think of it as a police sketch artist who creates a facial composite based on eyewitness accounts: the more data and time the artist has to render a draft, the higher the likelihood of a successful mugshot sketch. In this paper, the term deepfake refers to a subset of synthetic media, including video, images, and voice created through artificial intelligence.
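The loop described above – one model altering data until a second model can no longer tell fake from original – is the generative adversarial idea behind most deepfake tooling. Here is a deliberately tiny numpy sketch of that dynamic on one-dimensional data; real deepfake systems use deep networks for both players, so treat this purely as an illustration of the training principle, not as production code.

```python
# Toy adversarial loop: a "generator" (one parameter, theta) imitates a
# real distribution until a crude "discriminator" can no longer tell the
# two apart. Heavily simplified for illustration.
import numpy as np

rng = np.random.default_rng(0)
REAL_MEAN = 4.0  # the "original" the generator tries to imitate

def sample_real(n):
    return rng.normal(REAL_MEAN, 0.5, n)

def sample_fake(theta, n):
    return rng.normal(theta, 0.5, n)

def discriminator_accuracy(real, fake):
    """Crude discriminator: call a sample 'real' if it lies closer to the
    real batch mean than to the fake batch mean."""
    r_mean, f_mean = real.mean(), fake.mean()
    correct = (np.abs(real - r_mean) < np.abs(real - f_mean)).sum()
    correct += (np.abs(fake - f_mean) < np.abs(fake - r_mean)).sum()
    return correct / (len(real) + len(fake))

def train_generator(theta=0.0, steps=200, lr=0.05, batch=64):
    """Each step nudges theta so generated samples drift toward the real
    distribution, making the discriminator's job harder."""
    for _ in range(steps):
        real = sample_real(batch)
        fake = sample_fake(theta, batch)
        theta += lr * (real.mean() - fake.mean())
    return theta
```

Before training, the discriminator separates real from fake almost perfectly; after training, its accuracy collapses toward a coin flip, which is exactly the "unable to distinguish" condition the paragraph above describes.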

The financial sector is particularly vulnerable in the know-your-customer space. It is a unique entry point for malicious actors to submit manipulated identity verification or deploy deepfake technology to fool authenticity mechanisms. While anti-fraud prevention tools are an industry-wide standard against impersonation and identity theft, the advent of cheaper, more readily available deepfake technology marks a turning point for the financial sector. Deepfakes may leverage a blend of false or hacked personally identifiable information (PII) to gain access to or open bank accounts, initiate financial transactions, or redistribute private equity assets. Bateman focuses on the two categories of synthetic media most relevant for the financial sector: (1) narrowcast synthetic media – one-off, tailored manipulated data deployed directly to the target via private channels – and (2) broadcast synthetic media, designed for mass audiences and deployed directly or indirectly via publicly available channels, e.g. social media. An example of the first category is a cybercrime that took place in 2019. The chief executive officer of a UK-based energy company received a phone call from what he believed to be his boss, the CEO of the parent corporation based in Germany. The voice of the German CEO was in fact an impersonation created by artificial intelligence from publicly available voice recordings (speeches, transcripts, etc.). The voice directed the UK CEO to immediately initiate a financial transaction to pay a Hungarian supplier; these fabricated directions resulted in the fraudulent transfer of $243,000. This type of attack is also known as deepfake voice phishing (vishing). Examples of the second category are commonly found in widespread pump-and-dump schemes on social media. These range from malicious actors creating false, incriminating deepfakes of key personnel of a listed company to artificially lower its stock price, to creating synthetic media that misrepresents product results to drive the stock price up and garner more interest from potential investors. Building on the two categories of synthetic media, Bateman presents ten scenarios layered into four stages: (1) targeting individuals, e.g. identity theft or impersonation; (2) targeting companies, e.g. payment fraud or stock manipulation; (3) targeting financial markets, e.g. creating malicious flash crashes through state-sponsored hacking or cybercriminals backed by a foreign government; and (4) targeting central banks and financial regulators, e.g. regulatory astroturfing.

In conclusion, Bateman finds that deepfakes aren’t yet potent enough to destabilize global financial systems in mature, healthy economies; they are more threatening to individuals and businesses. To take precautions against malicious actors armed with deepfake technology, a number of resiliency measures can be implemented. Broadcast synthetic media is potent at amplifying and prolonging existing crises or scandals, so aside from building trust with key audiences, a potential remedy against deepfakes amplifying false narratives is the readiness to create evidence-backed counter-narratives. To protect companies from threats that would decrease trust in the financial sector, industry-wide sharing of information on cyber attacks is a viable option to mitigate coordinated criminal activity. Lastly, the technology landscape is improving its integrity at a rapid rate; a multi-stakeholder response bringing together leaders from the financial sector, the technology sector, and experts on consumer behavior with policymakers will help create more efficient regulations to combat deepfakes in the financial system.

Threat Assessment: Chinese Technology Platforms

The American University Washington College of Law and the Hoover Institution at Stanford University created a working group to understand and assess the risks posed by Chinese technology companies in the United States. They propose a framework to better assess and evaluate these risks by focusing on the interconnectivity of threats posed by China to the US economy, national security and civil liberties.

tl;dr

The Trump administration took various steps to effectively ban TikTok, WeChat, and other Chinese-owned apps from operating in the United States, at least in their current forms. The primary justification for doing so was national security. Yet the presence of these apps and related internet platforms presents a range of risks not traditionally associated with national security, including data privacy, freedom of speech, and economic competitiveness, and potential responses raise multiple considerations. This report offers a framework for both assessing and responding to the challenges of Chinese-owned platforms operating in the United States.

Make sure to read the full report titled Chinese Technology Platforms Operating In The United States by Gary P. Corn, Jennifer Daskal, Jack Goldsmith, John C. Inglis, Paul Rosenzweig, Samm Sacks, Bruce Schneier, Alex Stamos, Vincent Stewart at https://www.hoover.org/research/chinese-technology-platforms-operating-united-states 

(Source: New America)

China has experienced consistent growth since opening its economy in the late 1970s. Having grown roughly fourteenfold, its economy dwarfs the trajectory of the US economy, which roughly doubled over the same period, with the S&P 500 as its most rewarding driver at about a fivefold increase. Alongside economic power comes a thirst for global expansion far beyond the Asia-Pacific region. China’s foreign policy seeks to advance the Chinese one-party model of authoritarian capitalism, which poses a threat to human rights, democracy, and the basic rule of law. US political leaders see these developments as a threat to the US foreign policy of primacy but, perhaps more importantly, as a threat to a Western ideology deeply rooted in individual liberties. Needless to say, over the years every administration, regardless of political affiliation, has put the screws on China; a recent example is the presidential executive order addressing the threat posed by the social media video app TikTok. Given China’s authoritarian model of governance and the government’s sphere of control over Chinese companies, their expansion into the US market raises concerns about access to critical data, data protection, and cyber-enabled attacks on critical US infrastructure, among a wide range of other threats to national security. For example:

Internet Governance: China is pursuing regulation to shift the internet from open to closed and decentralized to centralized control. The US government has failed to adequately engage international stakeholders in order to maintain an open internet but rather has authorized large data collection programs that emulate Chinese surveillance.

Privacy, Cybersecurity and National Security: The internet’s continued democratization encourages more social media and e-commerce platforms to integrate and connect features, enabling multi-surface products. Mass data collection, weak product cybersecurity, and the absence of broader data protection regulations can be exploited to collect data on domestic users, their behavior, and their travel patterns abroad. It can be exploited to influence or control members of government agencies through targeted intelligence or espionage. The key consideration here is aggregated data, which even in the absence of identifiable actors can be used to create viable intelligence. China has ramped up its offensive cyber operations beyond cyber-enabled trade and IP theft and possesses the capabilities and cyber weaponry to destabilize national security in the United States.

Necessity And Proportionality 

To mitigate the threat to national security by taking action against Chinese-owned or -controlled communications technology, including tech products manufactured in China, the working group suggests an individual, case-based analysis. They address the challenge of accurately identifying specific risks in an ever-changing digital environment with a framework of necessity and proportionality. Technology standards change at a breathtaking pace, and data processing reaches new levels of intimacy through artificial intelligence and machine learning. Thoroughly assessing, vetting, and weighing the tolerance for specific risks is at the core of this framework, so that a chosen response can be calibrated to avoid potential collateral consequences.

The working group’s framework of necessity and proportionality reminded me of a classic Lean Six Sigma structure with a strong focus on understanding the threat to national security. As a first step, they suggest accurately identifying the threat’s nature, credibility, imminence, and the chances of it becoming reality. I found this first step incredibly important because a failure to identify the threat will likely lead to false attribution and undermine every subsequent step. In the context of technology companies, the obvious challenges are data collection, data integrity, and detection systems capable of telling the difference – by which I mean that a Chinese actor may deploy a cyber ruse in concert with the Chinese government to obfuscate its intentions. Following the principle of proportionality, the second step examines the potential collateral consequences for the United States, its strategic partners, and, most importantly, its citizens. Policymakers must be aware of the unintended path a policy decision may take once a powerful adversary like China starts its propaganda machine. This step therefore requires policymakers to define thresholds for when the collateral damage of a mitigation measure outweighs the need to act. In particular, inalienable rights such as the freedom of expression, freedom of the press, and freedom of assembly must be upheld at all times, as they are fundamental American values. To quote the immortal Molly Ivins: “Many a time freedom has been rolled back – and always for the same sorry reason: fear.” The third and final step concerns mitigation measures; in other words: what are we going to do about it? The working group landed on two critical factors: data and compliance. The former might be restricted, redirected, or recoded to adhere to national security standards. The latter might be audited not only to identify vulnerabilities but also to instill built-in cybersecurity and foster an amicable working relationship.
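The three steps can be read as a decision routine: identify the threat, weigh proportionality with rights as a hard stop, then choose a mitigation. The sketch below encodes that logic; the class name, field names, thresholds, and output strings are my own shorthand for illustration, not the report's terminology.

```python
# Sketch of the working group's three-step necessity-and-proportionality
# logic as a decision routine. All names and thresholds are illustrative
# assumptions, not taken from the report.
from dataclasses import dataclass

@dataclass
class ThreatAssessment:
    credibility: float           # 0..1: how credible is the threat?
    imminence: float             # 0..1: how imminent is it?
    collateral_cost: float       # 0..1: expected harm to partners/citizens
    infringes_core_rights: bool  # speech, press, or assembly affected?

def recommend(t: ThreatAssessment, act_threshold: float = 0.5) -> str:
    # Step 1: identify the threat; weakly supported threats end the analysis.
    severity = t.credibility * t.imminence
    if severity < 0.1:
        return "monitor"
    # Step 2: proportionality; inalienable rights are a hard stop, and
    # collateral damage must not outweigh the threat itself.
    if t.infringes_core_rights or t.collateral_cost >= severity:
        return "do-not-act"
    # Step 3: mitigation via the report's two levers, data and compliance.
    if severity >= act_threshold:
        return "mitigate: restrict data flows and audit compliance"
    return "mitigate: targeted measures below full restriction"
```

The point of writing it out this way is that the rights check sits before any mitigation branch, mirroring the working group's insistence that proportionality can veto an otherwise justified response.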

The Biden administration faces the daunting challenge of reviewing and developing appropriate cyber policies that address the growing threat from Chinese technology companies in a coherent manner consistent with American values. Only a broad policy response tailored to specific threats and focused on stronger cybersecurity and stronger data protection will yield equitable results. International alliances, alongside increased collaboration to develop better privacy and cybersecurity measures, will lead to success. However, the US must first focus on its own strengths and leverage its massive private sector to identify specific product capabilities – and therefore threats and attack vectors – before taking short-sighted, irreversible actions.

Threat Mitigation In Cyberspace

Richard A. Clarke and Robert K. Knake provide a detailed rundown of the evolution and legislative history of cyberspace. The two leading cybersecurity experts encourage innovative cyber policy solutions to mitigate cyberwar, protect our critical infrastructure, and help citizens prevent cybercrime.

The Fifth Domain, commonly referred to as cyberspace, poses new challenges for governments, companies, and citizens. Clarke and Knake discuss the historic milestones that led to modern cybersecurity and cyber policy. With detailed accounts of how governments implement security layers in cyberspace, gripping examples of cybersecurity breaches, and innovative solutions for policymakers, this book ends up rather dense in content – a positive signal for someone interested in cybersecurity, but fairly heavy for everybody else. Some of the content widely circulated in the news media; other content is intriguing and thought-provoking. While the policy solutions in this book aren’t ground-breaking, the authors provide fuel for policymakers and the public to take action on securing data and, perhaps more importantly, to start developing transparent, effective cyber policies that account for new, emerging technologies in machine learning and quantum computing. Personally, I found the hardcover edition clunky and expensive, but six parts over 298 pages made reading this book a breeze.