Militarizing Influence

Our information environment is increasingly dependent on an inescapable, largely unregulated cyberspace. Because cyberspace transcends national and geographic boundaries, it brings unique challenges, ranging from questions of information accuracy, integrity, and relevance to the weaponization of information to influence a target audience in pursuit of a diplomatic or economic goal.

tl;dr

This paper proposes the development and inclusion of Information Influence Operations (IIOs) in Cyberspace Operations. IIOs encompass the offensive and defensive use of cyberspace to influence a targeted population. This capability will enable the evolution of strategic messaging in cyberspace and allow response to near-peer efforts in information warfare.

Make sure to read the full paper titled Information Influence Operations: The Future of Information Dominance by Captain David Morin at https://cyberdefensereview.army.mil/CDR-Content/Articles/Article-View/Article/2537080/information-influence-operations-the-future-of-information-dominance/

(Source: DoD/Josef Cole)

The United States Cyber Command (USCYBERCOM) unifies the direction of cyber operations within the Department of Defense. The paper proposes to incorporate Information Influence Operations (IIOs) into its capability set. This capability would facilitate the exertion of soft power in pursuit of US national interests. Moreover, IIOs would reduce the need for large-scale operations or the expenditure of critical offensive cyber capabilities.

A notable omission in the paper is a clear definition of IIOs. The ‘Introduction’ suggests that “the unrealized value of cyberspace, and what makes it so dangerous, is it allows direct access to the individual and to the public at large. This access, when used correctly, provides actors in cyberspace the ability to influence public opinion and shape the narrative of ongoing operations.” This, however, appears to conflate cyberspace with the general public media landscape, and it implies an operator could simply hack Twitter accounts and send out some tweets with a favorable narrative. Applying the lessons learned from its efforts to counter foreign influence operations, Facebook views all “coordinated efforts to manipulate or corrupt public debate for a strategic goal” as influence operations. Joint Publication 3-13 defines information operations as “the integrated use of electronic warfare, computer network operations, psychological operations (PSYOP), military deception, and operations security to influence, disrupt, or corrupt adversarial decision making while protecting our own.” A suitable definition combining all three concepts would therefore define IIOs as “a capability to shape and direct public opinion in order to influence, disrupt, or corrupt adversarial decision making by leveraging software and hardware technology supported by psychological weapons and tactics in the pursuit of a strategic national interest.”

The author could have expanded more on the principles of US combatant command structures and their basic chain of command. This would have helped the reader understand the current discrepancies in ownership of IIOs. As it stands, US combatant commands are organized by geographic focus or functional capability; USCYBERCOM falls into the latter category. A functional command unifies different military branches to achieve its mission. It remains unclear which military branch, if any, currently takes ownership of IIOs. Stepping further back, the author falls short of delivering a convincing rationale for when and where IIOs should be deployed and under whose authority. Cyberspace is predominantly civilian space, created and maintained by privately held servers all across the world. Would USCYBERCOM install a permanent Information Influence Operations Center to execute IIOs spanning months and years? Would such action require presidential or congressional approval? Would approved missions stop at servers operated on US soil, or exclude US citizens from manipulation? Would USCYBERCOM release a transparency report detailing the measures taken against foreign and domestic threats and under whose authority? These and other important questions need to be considered when thinking about the consolidation of government power.

But not all is dark and gloomy. The author does support his proposition with a few more insights. In revisiting Stuxnet, NotPetya, and the Russian involvement in dividing the US electorate during the 2016 US presidential election, the author builds a foundation for his argument for a centralized command of IIOs. Two of these events were targeted cyber attacks on critical infrastructure, one allegedly driven by the US and Israel; the third was a carefully curated, multi-year effort to exploit vulnerabilities in the US democratic process. All of these events indeed demonstrate the power that can be wielded through cyberspace operations, but where I disagree with the author is on the comparability of these unique events and the implied causal link between cyberspace and influencing information. Combining cyber attacks that corrupt critical infrastructure with a targeted narrative to redirect the public’s attention is a serious threat to US national security. However, identifying the operator and the motive behind such an attack may reveal domestic, private actors with a merely criminal motive, if attribution is even possible. Take the coordinated social engineering attack on Twitter ahead of the 2020 US presidential election: government accounts from Joe Biden to Barack Obama, as well as the accounts of notable public figures such as Elon Musk and Jeff Bezos, were hacked, hijacked, and abused to distribute a bitcoin scam. Should USCYBERCOM have stepped in and taken network control from Twitter, a private business, in order to mitigate and counter the attack?

In the section ‘Influencers’ the author raises valid concerns when he states that “influencers are capable of wielding influence over millions and have used this influence for a multitude of purposes from philanthropy and advertising to political ends.” Online reach is tantamount to the circulation of a print newspaper, with the difference being longevity – the internet never forgets. The unchecked influence of influencers is something our society needs to review and decide upon. Perhaps private businesses will recognize these powers and increase checks and balances for this specific type of user, or automatically guardrail reach to create equity among users.

In the section ‘Operationalizing IIOs’ the author states, “There is little brand loyalty in the online world. Consumers will go elsewhere to find what they need if their preference is slow or unavailable. Influencing and controlling that “someplace else” yields the opportunity to wield influence.” In essence, the author suggests taking advantage of users’ impatience by increasing the time it takes to load a website. Once this latency is in place, an operator may incentivize users to shift their attention to an alternative information source, for example through well-targeted advertising campaigns. As an example, the author offers the case of Amazon losing over $72 million due to a 63-minute outage on Prime Day 2018.
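To put the Amazon figure in perspective, a quick back-of-envelope calculation, using only the numbers quoted above, shows what even a single minute of induced downtime could cost a high-volume retailer:

```python
# Back-of-envelope: $72M lost over a 63-minute Prime Day outage
# (the figures quoted above) implies the revenue at risk per minute.
lost_revenue = 72_000_000   # dollars, per the author's example
outage_minutes = 63

per_minute = lost_revenue / outage_minutes
print(f"${per_minute:,.0f} per minute")  # roughly $1.1M per minute
```

This is exactly why the author frames latency as an economic pressure point, although, as discussed next, brand loyalty blunts the effect.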

There is research to support increased impatience during ecommerce transactions. However, there is an equal amount of research on brand loyalty, which across markets sees retention rates of about 75% and higher once a customer relationship has been successfully established. An Amazon Prime user, who pays for the privilege of Prime, is unlikely to switch a book order to Barnes & Noble simply because there are a few milliseconds of delay when placing the order; it takes a contrast in price and shipping time to break the established brand loyalty with Amazon. Furthermore, in the author’s example the IIO appears to be directed at an ecommerce transaction. Even in the hypothetical foreign policy scenario of introducing latency to Alibaba to redirect users to Amazon, decrease economic output or revenue, or pursue other feasible US objectives, the author doesn’t really explain how this could favorably influence future behavior.

IIOs offer tremendous potential to support diplomacy while strengthening our national security. Allocating the responsibility to exert and drive information influence to a military institution, however, raises constitutional concerns. It would likely not only undermine the trust of our allies but also chill diplomatic relations with non-allied nations. From a military perspective, an effort to centralize capabilities can reduce the overall cost of cyberspace operations and increase transparency among military stakeholders. On the other hand, all centralized command structures are vulnerable to a single point of failure, which can be devastating when USCYBERCOM faces a sophisticated, superior adversary. In addition, centralizing IIOs might slow the response to attacks in cyberspace, or to coordinated foreign influence operations by an adversary, due to the extended chain of command.

Left Of Launch

The Perfect Weapon is an intriguing account of history’s most cunning cyberwarfare operations. I learned about the incremental evolution of cyberspace as the fifth domain of war and how policymakers, military leaders and the private technology sector continue to adapt to this new threat landscape.  

Much has been written about influence operations and cyber criminals, but few accounts present so clearly the link between national security, cyberspace and foreign policy. Some of the stories told in The Perfect Weapon touch upon the Russian interference in the 2016 presidential election, the 2015 hack of the Ukrainian power grid, the 2014 Sony hack, the 2013 revelations by Edward Snowden and many other notable breaches of cybersecurity. None of this is news anymore, but it helps to understand America’s 21st century vulnerabilities.

Chapter 8, titled “The Fumble,” left a particular mark on me. In it, Sanger details the handling of Russian hackers infiltrating the computer and server networks of the Democratic National Committee. The sheer lethargy officials demonstrated over months on end, including Obama’s failure to openly address the ongoing cyber influence operations perpetrated by the Russians ahead of the elections, was nothing particularly new, yet I still felt outraged by what now seems obvious. The chapter illustrates governance shortcomings that we as a society need to overcome in order to respond to cyberattacks and build better cyber defense mechanisms.

Left of Launch is a strategy to leverage cyberwarfare or other infrastructure sabotage to prevent ballistic missiles from being launched.

But the greatest insights for me came from the book’s cross-cutting between the cyberspace and cybersecurity domain and the public policy domain. It showed me how much work is still left to be done to educate our elected officials, our leaders and ourselves about a growing threat landscape in cyberspace. While technology regulation is a partisan issue, only bipartisan solutions will yield impactful results.

David E. Sanger is a great journalist, bestselling author and an excellent writer. His storytelling is concise, easy to read and accessible for a wide audience. Throughout the book, I never felt that Sanger allowed himself to get caught up in the politics of it but rather maintained a refreshing neutrality. His outlook is simple: we need to redefine our sense of national security and come up with an international solution for cyberspace. We need to think broadly about the consequences of cyber-enabled espionage and cyberattacks against critical infrastructures. And we need to act now.

Cyber Security and the Financial System

The financial sector is a highly regulated marketplace. Deepfakes, or artificially generated synthetic media, are commonly associated with political disinformation but have not yet been strongly linked to the financial system. The Carnegie Endowment for International Peace issued a scintillating working paper series titled “Cyber Security and the Financial System” covering a wide range of cutting-edge issues, from the European framework for Threat Intelligence-Based Ethical Red Teaming (TIBER) to assessing cyber resilience measures for financial organizations to global policies to combat manipulation of financial data. Jon Bateman’s contribution, titled “Deepfakes and Synthetic Media in the Financial System: Assessing Threat Scenarios,” takes a closer look at how deepfakes can impact the financial system.

tl;dr

Rapid advances in artificial intelligence (AI) are enabling novel forms of deception. AI algorithms can produce realistic “deepfake” videos, as well as authentic-looking fake photos and writing. Collectively called synthetic media, these tools have triggered widespread concern about their potential in spreading political disinformation. Yet the same technology can also facilitate financial harm. Recent months have seen the first publicly documented cases of deepfakes used for fraud and extortion. Today the financial threat from synthetic media is low, so the key policy question is how much this threat will grow over time. Leading industry experts diverge widely in their assessments. Some believe firms and regulators should act now to head off serious risks. Others believe the threat will likely remain minor and the financial system should focus on more pressing technology challenges. A lack of data has stymied the discussion. In the absence of hard data, a close analysis of potential scenarios can help to better gauge the problem. In this paper, ten scenarios illustrate how criminals and other bad actors could abuse synthetic media technology to inflict financial harm on a broad swath of targets. Based on today’s synthetic media technology and the realities of financial crime, the scenarios explore whether and how synthetic media could alter the threat landscape.

Make sure to read the full paper titled Deepfakes and Synthetic Media in the Financial System: Assessing Threat Scenarios by Jon Bateman at https://carnegieendowment.org/2020/07/08/deepfakes-and-synthetic-media-in-financial-system-assessing-threat-scenarios-pub-82237

(Source: Daily Swig)

Deepfakes are a variation of manipulated media. In essence, a successful deepfake requires a sample data set of an original, which is used to train a deep learning algorithm. The algorithm learns to alter the training data to a degree that a second algorithm is unable to distinguish whether the presented result is altered training data or the original. Think of it as a police sketch artist who creates a facial composite based on eyewitness accounts: the more data and time the artist has to render a draft, the higher the likelihood of an accurate sketch. In this paper, the term deepfake refers to a subset of synthetic media including videos, images and voice created through artificial intelligence.
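That adversarial dynamic can be sketched in a few lines. The following is a deliberately simplified, hypothetical illustration, not a real deepfake pipeline: a toy "generator" keeps adjusting its output distribution until a toy "discriminator" (here, nothing more than a comparison of sample means) can no longer separate it from the original data.

```python
import random
import statistics

def discriminator(real, fake):
    """Toy discriminator: a separability score based on sample means.
    A score near 0 means the fake data is indistinguishable (by this
    crude test) from the original."""
    return abs(statistics.mean(real) - statistics.mean(fake))

def train_generator(real, steps=200, step_size=0.1):
    """Toy generator: hill-climbs its output mean to fool the discriminator."""
    offset = 5.0  # start far away from the real distribution
    for _ in range(steps):
        fake = [random.gauss(offset, 1.0) for _ in range(200)]
        here = discriminator(real, fake)
        shifted = discriminator(real, [x - step_size for x in fake])
        # move in whichever direction lowers the separability score
        offset += -step_size if shifted < here else step_size
    return offset

random.seed(42)
real_data = [random.gauss(0.0, 1.0) for _ in range(1000)]
learned_mean = train_generator(real_data)
# learned_mean ends up near the real mean of 0.0
```

Real deepfake systems pair two deep neural networks trained jointly (generative adversarial networks), but the back-and-forth is the same: the generator improves until the discriminator's advantage disappears.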

The financial sector is particularly vulnerable in the know-your-customer space. It’s a unique entry point for malicious actors to submit manipulated identity verification or deploy deepfake technology to fool authenticity mechanisms. While anti-fraud prevention tools are an industry-wide standard to prevent impersonation or identity theft, the advent of cheaper, more readily available deepfake technology marks a turning point for the financial sector. Deepfakes may be used to leverage a blend of false or hacked personally identifiable information (PII) to gain access to or open bank accounts, initiate financial transactions, or redistribute private equity assets. Bateman focuses on the two categories of synthetic media most relevant for the financial sector: (1) narrowcast synthetic media, one-off, tailored manipulated data deployed directly to the target via private channels, and (2) broadcast synthetic media, designed for mass audiences and deployed directly or indirectly via publicly available channels, e.g. social media. An example of the first variation is a cybercrime that took place in 2019. The chief executive officer of a UK-based energy company received a phone call from what he believed was his boss, the CEO of the parent corporation based in Germany. In reality, the voice of the German CEO was an impersonation created with artificial intelligence and publicly available voice recordings (speeches, transcripts etc.). The voice directed the UK CEO to immediately initiate a financial transaction to pay a Hungarian supplier. This type of attack is also known as deepfake voice phishing (vishing). The fabricated directions resulted in the fraudulent transfer of $243,000. An example of the second variation is commonly found in widespread pump and dump schemes on social media.
These could range from malicious actors creating false, incriminating deepfakes of key personnel of a stock-listed company to artificially lower the stock price, to creating synthetic media that misrepresents product results to drive the stock price up and garner more interest from potential investors. Building on the two categories of synthetic media, Bateman presents ten scenarios layered into four stages: (1) targeting individuals, e.g. identity theft or impersonation; (2) targeting companies, e.g. payment fraud or stock manipulation; (3) targeting financial markets, e.g. creating malicious flash crashes through state-sponsored hacking or cybercriminals backed by a foreign government; and (4) targeting central banks and financial regulators, e.g. regulatory astroturfing.

In conclusion, Bateman finds that at this point in time, deepfakes aren’t potent enough to destabilize global financial systems in mature, healthy economies. They are more threatening, however, to individuals and businesses. To take precautions against malicious actors wielding deepfake technology, a number of resiliency measures can be implemented. Broadcast synthetic media is potent enough to amplify and prolong existing crises or scandals; aside from building trust with key audiences, a potential remedy to deepfakes amplifying false narratives is the readiness to counter them with evidence-based narratives. To protect companies from threats that would decrease trust in the financial sector, industry-wide sharing of information on cyber attacks is a viable option to mitigate coordinated criminal activity. Lastly, the technology landscape is improving its integrity at a rapid pace. A multi-stakeholder response bringing together leaders from the financial sector, the technology sector and experts on consumer behavior with policymakers will help create more efficient regulations to combat deepfakes in the financial system.

Threat Assessment: Chinese Technology Platforms

The American University Washington College of Law and the Hoover Institution at Stanford University created a working group to understand and assess the risks posed by Chinese technology companies in the United States. They propose a framework to better assess and evaluate these risks by focusing on the interconnectivity of threats posed by China to the US economy, national security and civil liberties.

tl;dr

The Trump administration took various steps to effectively ban TikTok, WeChat, and other Chinese-owned apps from operating in the United States, at least in their current forms. The primary justification for doing so was national security. Yet the presence of these apps and related internet platforms presents a range of risks not traditionally associated with national security, including data privacy, freedom of speech, and economic competitiveness, and potential responses raise multiple considerations. This report offers a framework for both assessing and responding to the challenges of Chinese-owned platforms operating in the United States.

Make sure to read the full report titled Chinese Technology Platforms Operating In The United States by Gary P. Corn, Jennifer Daskal, Jack Goldsmith, John C. Inglis, Paul Rosenzweig, Samm Sacks, Bruce Schneier, Alex Stamos, Vincent Stewart at https://www.hoover.org/research/chinese-technology-platforms-operating-united-states 

(Source: New America)

China has experienced consistent growth since opening its economy in the late 1970s. Having expanded roughly fourteenfold, its growth trajectory dwarfs that of the US economy, which grew about twofold over the same period, with the S&P 500 as its most rewarding driver at about a fivefold increase. Alongside economic power comes a thirst for global expansion far beyond the Asia-Pacific region. China’s foreign policy seeks to advance the Chinese one-party model of authoritarian capitalism, which could pose a threat to human rights, democracy and the basic rule of law. US political leaders see these developments as a threat to the US foreign policy of primacy but, perhaps more importantly, as a threat to the Western ideology deeply rooted in individual liberties. Needless to say, over the years every administration, independent of political affiliation, has put the screws on China. A recent example is the presidential executive order addressing the threat posed by the social media video app TikTok. Given China’s authoritarian model of governance and the Chinese government’s sphere of control over Chinese companies, their expansion into the US market raises concerns about access to critical data, data protection, and cyber-enabled attacks on critical US infrastructure, among a wide range of other threats to national security. For example:

Internet Governance: China is pursuing regulation that would shift the internet from open to closed and from decentralized to centralized control. The US government has failed to adequately engage international stakeholders in order to maintain an open internet, and has instead authorized large data collection programs that emulate Chinese surveillance.

Privacy, Cybersecurity and National Security: The internet’s continued democratization encourages more social media and e-commerce platforms to integrate and connect features for users, enabling multi-surface products. Mass data collection, weak product cybersecurity and the absence of broader data protection regulations can be exploited to collect data on domestic users, their behavior and their travel patterns abroad. They can also be exploited to influence or control members of government agencies through targeted intelligence or espionage. The key consideration here is aggregated data, which even in the absence of identifiable actors can be used to create viable intelligence. China has ramped up its offensive cyber operations beyond cyber-enabled trade or IP theft and possesses the capabilities and cyber weaponry to destabilize national security in the United States.

Necessity And Proportionality 

When considering actions against Chinese-owned or -controlled communications technology, including tech products manufactured in China, to mitigate threats to national security, the working group suggests an individual, case-based analysis. They attempt to address the challenge of accurately identifying specific risks in an ever-changing digital environment with a framework of necessity and proportionality. Technology standards change at a breathtaking pace, and data processing reaches new levels of intimacy with the use of artificial intelligence and machine learning. Thoroughly assessing, vetting and weighing the tolerance for specific risks is at the core of this framework, in order to calibrate a chosen response and avoid potential collateral consequences.

The working group’s framework of necessity and proportionality reminded me of a classic lean six sigma structure with a strong focus on understanding the threat to national security. Naturally, as a first step they suggest accurately identifying the threat’s nature, credibility, imminence and the chances of the threat becoming a reality. I found this first step incredibly important because a failure to identify a threat will likely lead to false attribution and undermine every subsequent step. In the context of technology companies, the obvious challenge lies in data collection, data integrity and the detection systems to tell the difference; a Chinese actor may, for instance, deploy a cyber ruse in concert with the Chinese government to obfuscate its intentions. Following the principle of proportionality, step two looks into the potential collateral consequences for the United States, its strategic partners and, most importantly, its citizens. Policymakers must be aware of the unintended path a policy decision may take once a powerful adversary like China starts its propaganda machine. This step therefore requires policymakers to define thresholds for when the collateral cost of a mitigation measure outweighs the need to act. In particular, inalienable rights such as the freedom of expression, freedom of the press or freedom of assembly must be upheld at all times, as they are fundamental American values. To quote the immortal Molly Ivins: “Many a time freedom has been rolled back – and always for the same sorry reason: fear.” The third and final step concerns mitigation measures, in other words: what are we going to do about it? The working group landed on two critical factors: data and compliance. The former might be restricted, redirected or recoded to adhere to national security standards. The latter might be audited to not only identify vulnerabilities but also instill built-in cybersecurity and foster an amicable working relationship.

The Biden administration faces a daunting challenge: to review and develop appropriate cyber policies that address the growing threat from Chinese technology companies in a coherent manner consistent with American values. Only a broad policy response that is tailored to specific threats and focused on stronger cybersecurity and stronger data protection will yield equitable results. International alliances, alongside increased collaboration to develop better privacy and cybersecurity measures, will lead to success. However, the US must first focus on its own strengths and leverage its massive private sector to identify specific product capabilities, and therefore threats and attack vectors, before taking short-sighted, irreversible actions.

Threat Mitigation In Cyberspace

Richard A. Clarke and Robert K. Knake provide a detailed rundown of the evolution and legislative history of cyberspace. The two leading cybersecurity experts encourage innovative cyber policy solutions to mitigate cyberwar, protect our critical infrastructure and help citizens to prevent cybercrime.

The Fifth Domain, commonly referred to as cyberspace, poses new challenges for governments, companies and citizens. Clarke and Knake discuss the historic milestones that led to modern cybersecurity and cyber policy. With detailed accounts of how governments implement security layers in cyberspace, gripping examples of breaches of cybersecurity and innovative solutions for policymakers, this book ended up rather dense in content – a positive signal for someone interested in cybersecurity, but fairly heavy for everybody else. Some of the content circulated widely in the news media; other content is intriguing and thought-provoking. While the policy solutions in this book aren’t ground-breaking, the authors provide fuel for policymakers and the public to take action on securing data and, perhaps more importantly, to start developing transparent, effective cyber policies that account for new, emerging technologies within machine learning and quantum computing. Personally, I found the hardcover edition too clunky and expensive; its six parts over 298 pages, however, made reading this book a breeze.

The Nuclear Option in Cyberspace

Stuxnet was a malicious computer worm that caused substantial damage to Iran’s nuclear program. It was likely deployed to prevent a conventional military strike against Iran’s nuclear facilities. The December 2015 cyber attacks on Ukrainian critical infrastructure cut power to hundreds of thousands of Ukrainian citizens. They were likely staged to test cyber operations ahead of the 2016 U.S. presidential election. Both cases offer interesting takeaways: (a) offensive cyber operations often empower rather than deter an adversary, and (b) offensive cyber operations that result in a devastating attack on the integrity of the target may draw a response via conventional military means. But where exactly is the threshold for escalating a cyber attack into conventional domains? How can policymakers rethink escalation guidelines without compromising international relations? This paper discusses achieving strategic stability in cyberspace by transferring the concept of a nuclear no-first-use policy into the current U.S. cyber strategy.

tl;dr

U.S. cyber strategy has a hypocrisy problem: it expects its cyberattacks to deter others (defend forward) without triggering escalatory responses outside cyberspace, while it is unclear about what it considers off-limits. A strategic cyber no-first-use declaration, like the one outlined in this article, could help solve risks of inadvertent instability while allowing cyber-​operations to continue.

Make sure to read the full paper titled A Strategic Cyber No-First-Use Policy? Addressing the U.S. Cyber Strategy Problem by Jacquelyn Schneider at https://www.tandfonline.com/doi/full/10.1080/0163660X.2020.1770970

Credit: J.M. Eddins Jr./Air Force

In 2018 the Trump administration adopted its new National Cyber Strategy. These sorts of policy declarations are commonly filled with agreeable generalities, but this National Cyber Strategy, read in conjunction with the 2018 Department of Defense Cyber Strategy, introduced a new, rather reckless cyber posture of forward attack in cyberspace as a means of preemptive cyber defense. Key themes such as:

  • Using cyberspace to amplify military lethality and effectiveness;  
  • Defending forward, confronting threats before they reach U.S. networks;  
  • Proactively engaging in the day-to-day great power competition in cyberspace;  
  • Actively contesting the exfiltration of sensitive DoD information; 

raise important questions of national security. Why does an industrial superpower like the United States feel a need to start a cyber conflict when it could redirect resources toward building effective cyber defense systems? How many successful cyber attacks against critical U.S. infrastructure would it take to justify a forward-leaning cyber defense? What is the long-term impact of charging the military with cyber strategy when the private sector in Silicon Valley is in a much better position to create built-in cybersecurity, and why aren’t resources invested back into the economy to spur cyber innovation? Each of these questions is material for future dissertations. Until then, instead of a defend-forward strategy in cyberspace, a cyber policy of no-first-use might complement securing critical infrastructure while assuring allies that U.S. cyber capabilities are unmatched in the world and merciless if tested.

No-first-use is a concept originating in the world of nuclear warfare. In essence, it means 

“a state declares that although it has nuclear weapons, and will continue to develop and rely on these weapons to deter nuclear strikes, it will not use nuclear weapons first.”

Instead, conventional (non-nuclear) warfare will be utilized to respond to attacks on its sovereignty. These policies are not treaties with legal ramifications if violated. They are neither agreements to ban production of certain weapon systems nor intended as arms control measures. In fact, no-first-use policies often take shape in the form of a public commitment signaling restraint to friends and foes. They are made for strategic stability in a given domain.

No-First-Use Cyber Policy 

Taking the no-first-use concept to cyberspace may yield a national security strategy at low cost and high impact. Cyberspace is by its configuration transient, hard to control, cheap to enter, and actor-independent. For example, a web crawler is at times a spiderbot indexing websites for search engines to produce better search results. At another time, the same web crawler is configured to reconnoiter adversary cyber infrastructure and collect intelligence. Yet another time, the tool may carry a malicious payload while scraping website data. This level of ambiguity introduces a wealth of policy hurdles to overcome when drafting a no-first-use cyber policy. Schneider recommends starting by distinguishing the elements of cyber operations in their strategic context. As mentioned before, some actions in cyberspace are permissible, even expected; other actions using the same technology are not. Now, there is no precedent for a cyber operation so effective at scale that it would compromise its target (state) altogether. For example, no known cyber operation has ever irreparably corrupted the energy infrastructure of a state, destroyed the social security and health data of its citizens, and redirected all government funds, bonds, and securities without a trace, or left the state unable to respond within conventional warfare domains. This means the escalation risk from a cyber operation against critical infrastructure is lower in cyberspace than from an attack with conventional weaponry. Therefore a successful no-first-use cyber policy must focus on the cyber operations that produce the most violent results and effectively disrupt a conventional defense (by disrupting critical infrastructure). 
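The crawler's dual-use ambiguity can be illustrated with a minimal, hypothetical sketch in Python: the same code path serves benign indexing or reconnaissance depending only on configuration, so an outside observer watching identical HTTP requests cannot infer intent. All names and fields here are invented for illustration, not drawn from any real crawler.

```python
# Minimal sketch (hypothetical): one crawler, two intents, chosen by config alone.
from dataclasses import dataclass, field


@dataclass
class CrawlerConfig:
    mode: str = "index"            # "index": build a search index
                                   # "recon": map an adversary's attack surface
    respect_robots: bool = True    # benign crawlers honor robots.txt
    seeds: list = field(default_factory=lambda: ["/"])


def classify_page(config: CrawlerConfig, url: str, body: str) -> dict:
    """Process one fetched page; the *intent* lives in the config, not the code."""
    record = {"url": url, "words": len(body.split())}
    if config.mode == "recon":
        # In recon mode the same parser harvests infrastructure hints instead.
        record["server_banners"] = [
            line for line in body.splitlines()
            if "Server:" in line or "X-Powered-By:" in line
        ]
    return record


indexer = CrawlerConfig(mode="index")
scanner = CrawlerConfig(mode="recon", respect_robots=False)

page = "Welcome\nServer: nginx/1.18\ncontent here"
print(classify_page(indexer, "https://example.com", page))
print(classify_page(scanner, "https://example.com", page))
```

The point of the sketch is that attribution of intent cannot be read off the tool itself, which is exactly the drafting hurdle a no-first-use cyber policy faces.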

Another consideration for an effective no-first-use cyber policy is the rationale of continued development of cyber capabilities. A no-first-use cyber policy does not preclude its parties from actively testing adversaries’ cyber vulnerabilities; it only bars them from exploiting such weaknesses unless the adversary strikes first. 

A strong argument against adopting a no-first-use cyber policy is diplomatic appearances. First, it might signal weakness on the part of U.S. cyber capabilities or indicate to allies that the U.S. will not commit to protecting them if under attack. Second, it may also result in hypocrisy if the U.S., after political changes, launches a first strike in cyberspace while still bound to a no-first-use policy. For Schneider, a successful no-first-use cyber policy 

“credibly convinces other states that the U.S. will restrain itself in cyberspace while it simultaneously conducts counter-cyber operations on a day-to-day basis.”

She also recommends strategic incentives through positive means: information sharing, foreign aid, or exchange of cyber capabilities. The end goal then ought to be strategic deterrence through commitments in cyberspace to restrain high-severity cyber attacks.  

I found the idea of a no-first-use cyber policy captivating, albeit inconceivable to implement at scale in cyberspace. First, even though cyber operations with the potential to black out a state are currently reserved for professional militaries or organized cyber operators in the service of a state actor, I am not convinced that a lone non-state actor is incapable of producing malicious code with equal destructive power. Second, I still see attribution as a roadblock despite improving cyber forensics. Any democracy would see the hypocrisy of mistakenly engaging a non-state actor, or the risk of misidentifying a state actor as the perpetrator. Moreover, current attribution research in cyberspace takes humans with discernible intent as its foundation, when future cyber conflict may be initiated by a rogue or faulty autonomous weapon system under substantial control of an artificial intelligence. Third, any policy without legal or economic ramifications isn't worth considering; effective deterrence is hard to achieve without "skin in the game". Perhaps an alternative to a no-first-use cyber policy would be a first-invest-in-cyber-defense policy: emulate the Paris Climate Accord for cyberspace by creating a normative environment that obligates states to achieve and maintain a minimum of cybersecurity by investing in cyber defense. This way, constant innovation within the private sector reduces vulnerabilities, which will lead to a self-sustaining deterrence.   

How Cyberwarfare Is Used to Influence Public Policy

Cyberspace differs from physical domains. How do we know a hacker's motive or allegiance? Among the many cyber conflicts in cyberspace, only a few escalate into a real-world conflict. Those which do, however, demand a reevaluation of existing policies. This paper argues that current research underrates the second-order impact of cyber-enabled political warfare on public policy. It makes a case for policy makers to consider changes to public policy beyond mere retaliation. Moreover, it offers insights into the complex investigation process tied to cyber operations that fall out of pattern.

tl;dr

At present, most scholarship on the potential for escalation in cyberspace couches analysis in terms of the technological dynamics of the domain for relative power maneuvering. The result has been a conceptualisation of the logic of operation in cyberspace as one of ‘tit-for-tat’ exchanges motivated by attribution problems and limited opportunity for strategic gain. This article argues that this dominant perspective overlooks alternative notions of how cyber tools are used to influence. This, in turn, has largely led scholars to ignore second-order effects – meaning follow-on effects triggered by a more direct outcome of an initial cyber action – on domestic conditions, institutions, and individual stakeholders. This article uses the case of cyber-enabled political warfare targeting the United States in 2016 to show how escalation can occur as a second-order effect of cyber operations. Specifically, the episode led to a re-evaluation of foreign cyber strategy on the part of American defence thinkers that motivated an offensive shift in doctrine by 2018. The episode also directly affected both the political positions taken by important domestic actors and the attitude of parts of the electorate towards interference, both of which have reinforced the commitment of military planners towards assertive cyber actions.

Make sure to read the full paper titled Beyond tit-for-tat in cyberspace: Political warfare and lateral sources of escalation online by Christopher Whyte at https://doi.org/10.1017/eis.2020.2

Credit: Jozsef Hunor Vilhelem

Cyber-enabled political warfare takes place on a daily basis. It is orchestrated by democracies and authoritarian states alike. A prevailing academic school of thought evaluates these cyber operations through a four-pronged framework: 

(1) Common intelligence-gathering
(2) Signal testing
(3) Strategic reconnaissance, which may result in a
(4) Major cyber assault on critical infrastructure

On both sides, attacker and defender, it is incredibly difficult to determine whether a cyber operation is a tolerated everyday occurrence or a prelude to, if not the final, attack against national security. This overwhelming signal-to-noise imbalance has led to a dominant academic perspective that frames cyber operations as an endless loop of retaliatory instances, overlooking clandestine long-term objectives. It begs the question: when does an instance of cybersecurity become a matter of national security? When does a cyber operation escalate into full-on warfare? In this paper, the author frames cyber operations as an instrument to influence public policy beyond the mere breach of cybersecurity after escalation. Through examples of cyber-enabled political warfare, the author makes a case that vulnerabilities in democratic societies originate from a failure to evaluate cyber-enabled political warfare under cyber conflict standards, thereby creating a vacuum in which policy development is skewed toward overstating potential cyber risks in public policy.    

Cyber operations resulting in cyber conflict are here to stay. In an increasingly accessible space of computer science and affordable hardware, nation states as well as hostile fringe groups find ever more fertile ground to develop new generations of cyber tools, pursuing anything from criminal objectives to ideological influence operations that subvert public opinion. Because cyber operations are part of everyday traffic, the first problem is identifying a targeted cyber operation as a departure from regular probes in cyberspace. The aforementioned affordability makes the situation harder to assess, since a cyber operation may originate from a state actor or be a proxy action driven by fringe groups that may or may not adhere to a state actor. Here, states need to decide between tolerance, which may result in a failure to detect a major assault on critical infrastructure, and a measured response, which will always give away signal that an opponent may abuse for future cyber operations. The former carries the risk of escalating into a real-world conflict, whereas the latter carries the risk of setting the stage for a real-world conflict under even less favorable circumstances. For this latter scenario, the author urges consideration of the second-order effects on public policy. In other words, when investigating cyber operations, it is necessary to look beyond the technical means and parse the attack against current affairs. This notion reverberates into the policy development process in the event of a shift in strategic policy.

“What pressure points and vulnerabilities dictate the utility of cyber operations and, subsequently, the shape of potential escalation?”

Democracies delegate the power of the people to elected leaders through an information exchange system that requires integrity. Cyber-enabled political warfare seeks to exploit that integrity by sowing distrust in the political system and its elected leaders. Using the example of the 2016 U.S. presidential elections, the author shows how the cyber operations were not merely a 'tit-for-tat' engagement in support of a particular candidate but were deployed with a strategic, long-term objective to subvert the integrity of U.S. democracy. The disruption of the democratic process took place by 

(1) Identifying a lack of government regulation for social media platforms that have critical reach with the electorate
(2) Understanding flaws in the algorithmic design of information distribution via social media
(3) Increased cyber attacks on private information that carry disruptive elements once published
(4) Increased deflection of attempts to specifically attribute cyber operations, thereby enabling plausible deniability
(5) A domestic political landscape so polarized that it tolerates foreign interference or is even further divided by domestic agents' rhetoric, and 
(6) A foreign actor (Russia) who is willing to exploit these vulnerabilities

Through these various interconnected and standalone stages of cyber-enabled political warfare, the Russians were able to effectively undermine public trust in both political candidates and the democratic process, and beyond that to an extent that triggered a critical reevaluation of U.S. cyber strategy resulting in new public policy. The implication for policy makers is to critically consider the lateral side effects of cyber operations beyond the method employed and the damage done. The potential to influence the decision-making of state leaders might be enhanced by these second-order effects, especially when they are misinterpreted. Aside from attribution, an effective policy response must take a holistic approach beyond closing a vulnerability in national security.    

Political Warfare Is A Threat To Democracy. And Free Speech Enables It

“I disapprove of what you say, but I will defend to the death your right to say it” is an interpretation of Voltaire’s principles by Evelyn Beatrice Hall. Freedom of expression is often cited as the last frontier before falling into authoritarian rule. But is free speech, our greatest strength, really our greatest weakness? Hostile authoritarian actors seem to exploit these individual liberties by engaging in layered political warfare to undermine trust in our democratic systems. These often clandestine operations pose an existential threat to our democracy.   

tl;dr

The digital age has permanently changed the way states conduct political warfare—necessitating a rebalancing of security priorities in democracies. The utilisation of cyberspace by state and non- state actors to subvert democratic elections, encourage the proliferation of violence and challenge the sovereignty and values of democratic states is having a highly destabilising effect. Successful political warfare campaigns also cause voters to question the results of democratic elections and whether special interests or foreign powers have been the decisive factor in a given outcome. This is highly damaging for the political legitimacy of democracies, which depend upon voters being able to trust in electoral processes and outcomes free from malign influence— perceived or otherwise. The values of individual freedom and political expression practised within democratic states challenges their ability to respond to political warfare. The continued failure of governments to understand this has undermined their ability to combat this emerging threat. The challenges that this new digitally enabled political warfare poses to democracies is set to rise with developments in machine learning and the emergence of digital tools such as ‘deep fakes’.

Make sure to read the full paper titled Political warfare in the digital age: cyber subversion, information operations and ‘deep fakes’ by Thomas Paterson and Lauren Hanley at https://www.tandfonline.com/doi/abs/10.1080/10357718.2020.1734772

MC2 Joseph Millar | Credit: U.S. Navy

This paper’s central theme sits at the intersection of democratic integrity and political subversion operations. The authors describe an increase in cyber-enabled espionage and political warfare due to the global spread of the internet. They argue it has led to an imbalance between authoritarian and democratic state actors. Their argument rests on the notion that individual liberties such as freedom of expression put democratic states at a disadvantage compared to authoritarian states. Therefore, authoritarian states are observed to more often choose political warfare and subversion operations, while democracies are confined to breaching cyber security and conducting cyber espionage. Cyber espionage is defined as

“the use of computer networks to gain illicit access to confidential information, typically that held by a government or other organization”

and is not a new concept. I disagree with the premise of illicit access because cyberspace specifically enables the free flow of information beyond any local regulation. ‘Illicit’ is either redundant, since espionage does not necessarily require breaking laws, rules, or customs, or it duplicates ‘confidential information’, which I interpret as synonymous with classified information, though one might argue about the difference. From a legal perspective, the information does not need to be obtained through illicit access.

With regard to the broader term political warfare, I found the definition of political warfare as, 

“diverse operations to influence, persuade, and coerce nation states, organizations, and individuals to operate in accord with one’s strategic interests without employing kinetic force” 

most appropriate. It demonstrates the depth of political warfare, which encompasses influence and subversion operations outside of physical activity. Subversion operations are defined as 

“a subcategory of political warfare that aims to undermine institutional as well as individual legitimacy and authority”

I disagree with this definition, for it fails to emphasize the difference between political warfare and subversion; both undermine legitimacy and authority. A subversion operation, however, is specifically aimed at eroding and deconstructing a political mandate. It is the logical next step after political warfare has influenced a populace in order to achieve political power. The authors see the act of subversion culminating in a loss of trust in democratic principles. It leads to voter suppression, reduced voter participation, and decreased and asymmetrical review of electoral laws, but more importantly it challenges the democratic values of citizens. It is an existential threat to a democracy. It favors authoritarian states detached from the checks and balances that are usually present in democratic systems. These actors are not limited by law, civic popularity, or reputational capital. Ironically, this bestows a certain amount of freedom upon them to deploy political warfare operations. Democracies, on the other hand, uphold individual liberties such as freedom of expression, freedom of the press, freedom of assembly, equal treatment under law, and due process. As demonstrated during the 2016 U.S. presidential elections, a democracy generally struggles to distinguish political warfare initiated by a hostile foreign state from segments of the population pursuing their strategic objectives by leveraging these exact individual freedoms. An example from the Mueller Report 

“stated that the Internet Research Agency (IRA), which had clear links to the Russian Government, used social media accounts and interest groups to sow discord in the US political system through what it termed ‘information warfare’ […] The IRA’s operation included the purchase of political advertisements on social media in the names of US persons and entities, as well as the staging of political rallies inside the United States.”

And it doesn’t stop in America. Russia is deploying influence operations in volatile regions on the African continent. China has a history of attempting to undermine democratic efforts in Africa. Both states aim to chip away at the power of former colonial powers such as France, or at least to suppress efforts to democratise regions in Africa. China is also deeply engaged in large-scale political warfare in Southeast Asia over regional dominance and territorial expansion, as observed in the South China Sea. New Zealand and Australia have recorded numerous incidents of China’s attempted influence operations. Australia faced a real-world political crisis when Labor Senator Sam Dastyari was found to be connected to political donor Huang Xiangmo, who has ties to the Chinese Communist Party, giving China a direct route to influence Australian policy decisions. 

The paper concludes with an overview of future challenges posed by political warfare. With more and more computing power readily available, the development of new cyber tools and tactics for political warfare operations is only going to increase. Authoritarian states are likely to expand their disinformation playbooks by tapping into people’s fears, fueled by conspiracy theories. Developments in machine learning and artificial intelligence will further improve inauthentic behavior online. For example, partisan political bots will become more human-like and harder to discern from real human users. Deep fake technology will tap into ever-larger datasets drawn from the social graph of every human being, making it increasingly possible to impersonate individuals to gain access or achieve certain strategic objectives. Altogether, political warfare poses a greater challenge than cyber-enabled espionage, particularly for democracies. Democracies need to understand the asymmetrical relationship with authoritarian actors and dedicate resources to effective countermeasures against political warfare without undoing civil liberties in the process.