Is Transparency Really Reducing The Impact Of Misinformation?

A recent study investigated YouTube’s efforts to provide more transparency about the ownership of certain YouTube channels. The study concerned YouTube’s disclaimers, displayed under videos, that indicate the content was produced or is funded by a state-controlled media outlet. The study sought to shed light on whether or not these disclaimers are an effective means to reduce the impact of misinformation.

tl;dr

In order to test the efficacy of YouTube’s disclaimers, we ran two experiments presenting participants with one of four videos: a non-political control, an RT video without a disclaimer, an RT video with the real disclaimer, or the RT video with a custom implementation of the disclaimer superimposed onto the video frame. The first study, conducted in April 2020 (n = 580), used an RT video containing misinformation about Russian interference in the 2016 election. The second, conducted in July 2020 (n = 1,275), used an RT video containing misinformation about Russian interference in the 2020 election. Our results show that misinformation in RT videos has some ability to influence the opinions and perceptions of viewers. Further, we find YouTube’s funding labels have the ability to mitigate the effects of misinformation, but only when they are noticed and the information is absorbed by the participants. The findings suggest that platforms should focus on providing increased transparency to users where misinformation is being spread. If users are informed, they can overcome the potential effects of misinformation. At the same time, our findings suggest platforms need to be intentional in how warning labels are implemented to avoid subtlety that may cause users to miss them.

Make sure to read the full article titled State media warning labels can counteract the effects of foreign misinformation by Jack Nassetta and Kimberly Gross at https://misinforeview.hks.harvard.edu/article/state-media-warning-labels-can-counteract-the-effects-of-foreign-misinformation/

Source: RT, 2020 US elections, Russia to blame for everything… again, Last accessed on Dec 31, 2020 at https://youtu.be/2qWANJ40V34?t=164

State-controlled media outlets are increasingly used for foreign interference in civic events. While independent media outlets can be categorized on social media and associated with a political ideology, a state-controlled media outlet generally appears independent or detached from a state-controlled political agenda. Yet such outlets regularly create content aligned with the controlling state’s political objectives and those of its leaders. This deceives the public about their state affiliation and undermines civil liberties. The problem is magnified on social media platforms with their reach and potential for virality ahead of political elections. A prominent example is China’s foreign interference efforts in the referendum on the independence of Hong Kong.

A growing number of social media platforms have launched integrity measures to increase content transparency and counter the integrity risks of state-controlled media outlets spreading potential disinformation. In 2018 YouTube began to roll out an information panel feature to provide additional context on state-controlled and publicly funded media outlets. These information panels or disclaimers are really warning labels that make the viewer aware of the potential political influence of a government on the information shown in the video. These warning labels don’t provide any additional context on the veracity of the content or whether the content was fact-checked. On desktop, they appear alongside a hyperlink leading to the Wikipedia entry of the media outlet. As of this writing the feature applies to 27 governments, including the United States government.

Source: DW News, Massive explosion in Beirut, Last Accessed on Dec 31, 2020 at https://youtu.be/PLOwKTY81y4?t=7

The researchers focused on whether these warning labels would mitigate the effects on viewers’ perceptions created by misinformation shown in videos of the Russian state-controlled media outlet RT (Russia Today). RT evades deplatforming by complying with YouTube’s terms of service. This turned the RT channel into an influential resource for the Russian government to undermine the American public’s trust in established American media outlets and the United States government when reporting on Russian interference in the 2016 U.S. presidential election. An RT video downplaying the Russian influence operation was used for the study and shown to participants with and without a label identifying RT’s affiliation with the Russian government, as well as with a superimposed warning label carrying the same language and Wikipedia hyperlink. This surfaced the following findings:

  1. Disinformation spread by RT does impact viewers’ perceptions and is effective at doing so.
  2. Videos without a warning label were more successful in reducing trust in established mainstream media and the government.
  3. Videos without YouTube’s standard warning label but with a superimposed interstitial carrying the same language were most effective in preserving the integrity of viewers’ perceptions.

Source: RT, $4,700 worth of ‘meddling’: Google questioned over ‘Russian interference’, Last accessed on Dec 31, 2020 at https://www.youtube.com/watch?v=wTCSbw3W4EI

The researchers further discovered that small changes in the coloring, design and placement of the warning label increase the likelihood that viewers notice it and absorb its information. Both conditions must be met, because noticing a label without comprehending its message had no significant impact on understanding the political connection between creator and content.

I’m intrigued by these findings, for the road ahead offers a great opportunity to shape how we distribute and consume information on social media without falling prey to foreign influence operations. Still, open questions remain:

  1. Are these warning labels equally effective on other social media platforms, e.g. Facebook, Instagram, Twitter, Reddit, TikTok, etc.? 
  2. Are these warning labels equally effective with other state-controlled media? This study focused on Russia, a large, globally acknowledged state actor. How does a warning label for content by the government of Venezuela or Australia impact the efficacy of misinformation? 
  3. This study seemed to be focused on the desktop version of YouTube. Are these findings transferable to the mobile version of YouTube?  
  4. What is the impact of peripheral content on viewers’ perception, e.g. YouTube’s recommendations showing videos in the sidebar that all claim RT is a hoax versus videos that all lend RT independent credibility?
  5. The YouTube channels of C-SPAN and NPR did not appear to display a warning label within their videos. Yet the United States is among the 27 countries currently listed in YouTube’s policy. What are the criteria to be considered a publisher, publicly funded or state-controlled? How are these criteria met or impacted by a government, e.g. passing certain broadcasting legislation or declaration?
  6. Lastly, the cultural and intellectual background of the target audience is particularly interesting. Here is an opportunity to research the impact of warning labels on participants of different political ideologies, economic circumstances and age groups in contrast to their actual civic engagement ahead of, during and after an election.

Microtargeted Deepfakes in Politics

The 2019 Worldwide Threat Assessment warned of deepfakes deployed to manipulate public opinion. And while the 2020 U.S. presidential election did not see an onslaught of deepfakes undermining voter confidence, experts agree that the threat remains tangible. A recent study conducted by researchers at the University of Amsterdam investigated the impact of political deepfakes meant to discredit a politician that were microtargeted to a specific segment of the electorate.

tl;dr

Deepfakes are perceived as a powerful form of disinformation. Although many studies have focused on detecting deepfakes, few have measured their effects on political attitudes, and none have studied microtargeting techniques as an amplifier. We argue that microtargeting techniques can amplify the effects of deepfakes, by enabling malicious political actors to tailor deepfakes to susceptibilities of the receiver. In this study, we have constructed a political deepfake (video and audio), and study its effects on political attitudes in an online experiment. We find that attitudes toward the depicted politician are significantly lower after seeing the deepfake, but the attitudes toward the politician’s party remain similar to the control condition. When we zoom in on the microtargeted group, we see that both the attitudes toward the politician and the attitudes toward his party score significantly lower than the control condition, suggesting that microtargeting techniques can indeed amplify the effects of a deepfake, but for a much smaller subgroup than expected.

Make sure to read the full paper titled Do (Microtargeted) Deepfakes Have Real Effects on Political Attitudes? by Tom Dobber, Nadia Metoui, Damian Trilling, Natali Helberger, and Claes de Vreese at https://doi.org/10.1177/1940161220944364

Credits: UC Berkeley/Stephen McNally

Deepfakes are a subcategory of modern information warfare. The technology leverages machine learning to generate audio-visual content that imitates original content but differs in both intent and message. Its highly deceptive appearance renders it a potent weapon to influence public opinion, undermine strategic policies or disrupt civic engagement. An infamous deepfake example depicts former President Obama seemingly calling President Trump an expletive. Online microtargeting is a form of social media marketing that disseminates advertisements tailored to the specific interests of an identifiable, curated audience. Within the political context, microtargeting is used to spread a campaign message to a specific audience, identified and grouped by shared characteristics, to convince that audience to vote for or against a candidate. There are a number of civic risks associated with deploying deepfakes:

  • Deepfake content is hard to tell apart from original and authentic content. While deepfake videos may signal some nefarious intent to a cautious audience, the potential impact of deepfake radio or deepfake text on voter behavior hasn’t been researched as of this writing
  • Political actors may leverage deepfakes to discredit opponents, undermine news reporting or equip trailing third-party candidates with sufficient influence to erode voter confidence
  • Used in a political campaign, deepfakes may be strategically deployed to incite a political scandal or to reframe current affairs and regain control of an election narrative

The study created a deepfake video depicting an interview with a prominent center-right politician from a large Christian democratic party. The manipulated part of the otherwise original and authentic content shows the politician seemingly making a joke about the crucifixion of Jesus Christ:

“But, as Christ would say: don’t crucify me for it.”

This content was shown to a randomly selected group of Christian voters who had identified with religious, conservative beliefs or had voted for this politician in past elections. The researchers found that deepfakes spread without microtargeting impacted attitudes toward the politician but not necessarily toward his political party. However, deepfakes tailored to a specific audience using political microtargeting techniques amplified the discrediting message of the deepfake, thereby impacting attitudes toward both the politician and his political party. Interestingly, staunch supporters of the politician might be shielded from a lasting attitude change due to their own motivated reasoning (bias) derived from the politician’s ideology. For this group, the researchers argue, a deepfake conveying a sufficient degree of discomfort or deviation from the politician’s previous ideology may reach a tipping point at which even staunch supporters align with the results of this study, though the study’s limitations also leave room for unforeseen outcomes.

A roadmap to counter microtargeted deepfakes should include legislators passing regulations to limit online political campaign spending, which would force campaigns to focus their limited financial resources and weed out corporate interests. Second, new regulations should focus on the protection of personally identifiable data. A microtargeting dataset includes location data, personal preferences, website interactions and more. While this data is valuable within a commercial context, it should be excluded from civic engagements such as elections. Academics will have an opportunity to discover insights on algorithmic bias and improve upon the existing machine learning approach of training generative adversarial networks on pre-conditioned datasets. Moreover, future research has an opportunity to further investigate the impact of manipulated media on voter education, confidence and behavior within and outside of political elections.
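
The machine learning approach mentioned above, training generative adversarial networks, can be made concrete with a minimal sketch. The snippet below is purely illustrative and assumes PyTorch; it trains a toy generator and discriminator on two-dimensional placeholder data rather than the video and audio frames used for actual deepfakes.

```python
import torch
import torch.nn as nn

# Toy "real" data standing in for a pre-conditioned dataset.
real_data = torch.randn(512, 2) * 0.5 + 2.0

# Generator maps random noise to samples; discriminator scores samples as real or fake.
generator = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator update: learn to separate real samples from generated ones.
    fake = generator(torch.randn(64, 16)).detach()
    real = real_data[torch.randint(0, len(real_data), (64,))]
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: produce samples the discriminator labels as real.
    fake = generator(torch.randn(64, 16))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

In actual deepfake pipelines the two networks are typically deep convolutional or autoencoder-based models conditioned on footage of the target person, but the adversarial training loop keeps this same shape.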

Here’s one of my favorite deepfake videos: President Trump explaining money laundering to his son-in-law Jared Kushner in a deepfake(d) scene of “Breaking Bad”.

Trump’s Grand Strategy

Legacy matters these days. As President-elect Joe Biden is about to take office, I thought it worth my while to reflect on America’s leadership role in the world. How did Donald Trump fare with international relations? What happened to the immigration ban and the withdrawal of U.S. military forces overseas? Is the world safer because of Trump’s ‘America First‘ rhetoric? This paper sheds light on the contrasting ideologies that governed U.S. foreign policy under Trump.

tl;dr

When a new President is elected in the United States, the first thing analysts do is define that President’s grand strategy; yet, naming Donald Trump’s grand strategy was a difficult task as his pre-election speeches often contradicted traditional US foreign policy norms. Trump’s ambiguous grand strategy combines two US foreign policy strategies: nationalism in the sense that his preference is for unilateral policies prioritising American interests, and a traditional foreign policy approach, as seen in the moves taken against China and Iran. Surprisingly, this grand strategy unintentionally contributes to cooperation in Eurasia, as actors like Russia, China, Turkey, India and the European Union continue to try to balance the threat from the United States instead of competing with each other, while smaller countries are reluctant to challenge the regional powers due to mistrust towards Trump.

Make sure to read the full paper titled Mixing Grand Strategies: Trump and International Security by Murat Ülgül at https://www.tandfonline.com/doi/full/10.1080/03932729.2020.1786928 

Image credit: Barbara Kelley

When Donald Trump assumed office as the 45th President of the United States, the world was facing a known unknown. A mercurial real-estate developer and reality show entertainer was suddenly in a position to reshape America’s international relations. Until then, Trump’s political record consisted of commentary on current affairs and one failed attempt to run for President in 2000. His business record was mixed, with a few successful real estate developments in New York City and a number of unsuccessful business ventures in different industries.

Historically, U.S. foreign policy has been set by the President. Entire presidencies rested on a sophisticated strategy to secure American interests at home and abroad. Following WWII the United States adopted a foreign policy of primacy, which according to Patrick Porter branches into a grand strategy of

  1. Military preponderance 
  2. Allied relationships  
  3. Proliferation of U.S. capitalism 
  4. Control over the proliferation of nuclear weapons

However, Trump’s worldview stands in stark contrast with those of previous administrations. His nationalistic ‘America First’ rhetoric struck a chord in harmony with authoritarian dictatorships. It created concerns among democratic nations about whether President Trump would continue to invest in alliances and build amicable relationships or lead the United States into isolationism. His chaotic leadership style had many scholars speculating whether Trump would recognize the power imbalance between America’s allies and Russia or China. It raised questions whether ‘Make America Great Again’ rhetoric meant a complete withdrawal from the international stage and marked a pivot point in America’s pursuit of primacy as its grand strategy.

“Grand strategy can be defined as a great power’s roadmap to realising its long-term objectives with its actual and/or potential resources”

In this paper, Murat Ülgül reframes the analysis of Trump’s grand strategy by focusing on the complementary elements of nationalist traditionalism rather than its competing positions. Unlike other scholars have suggested, Trump’s grand strategy is not an exclusive continuation of previous “business as usual”. Albeit divisive in rhetoric throughout his pre-election years and time in office, his grand strategy cannot be viewed as raw isolationism. Rather, Ülgül makes a case for a combination of nationalism and traditionalism. Nationalism can be observed in the character and image of Donald Trump himself. Traditionalism leaves its mark in Trump’s choices for his national security advisors, e.g. Michael Flynn, H. R. McMaster and John Bolton, who gained significant influence over Trump throughout his presidency. This unique but ambiguous combination appears to mitigate the negative effects of each individual strategy. Both are conflict-prone strategies, yet the rate of international conflicts steadily decreased during Trump’s tenure. America First has led the United States to delayed or complete disengagement from international contests. All the while, his administration ran a traditional, hawkish narrative that led foreign powers known for pursuing authoritarian objectives to cooperate and resolve their disagreements with America’s allies to guard against a potential fallout with the United States. In other words, the administration continued to influence global policy without military leverage or engagement. Nevertheless, its impact is waning. As a result of this grand strategy, the United States has suffered some reputational damage, as fewer countries retain faith in America’s ability to manage international relations or to serve as a beacon of democracy.

While this paper goes into more depth than I can summarize here, I found the idea of a mixed grand strategy not as new as the paper suggests. Prior to WWII, the United States practiced a calibrated offshore balancing. In 2016, Stephen Walt suggested a deliberate withdrawal from conflict areas in favor of an intentional engagement of strategic partners. Walt’s propositions imply an element of deliberation in U.S. foreign policy that never seemed to register with Trump, which makes Ülgül’s argument even more convincing. It further helps to see some positives from this oddball presidency as it disappears from the international (relations) stage.

How Cyberwarfare Is Used to Influence Public Policy

Cyberspace differs from physical domains. How do we know a hacker’s motive or allegiance? Among the many conflicts in cyberspace, only a few escalate into real-world conflict. Those which do, however, demand a reevaluation of existing policies. This paper argues that current research underrates the second-order impact of cyber-enabled political warfare on public policy. It makes a case for policy makers to consider changes to public policy beyond mere retaliation. Moreover, it offers insights into the complex investigative process tied to cyber operations that fall out of pattern.

tl;dr

At present, most scholarship on the potential for escalation in cyberspace couches analysis in terms of the technological dynamics of the domain for relative power maneuvering. The result has been a conceptualisation of the logic of operation in cyberspace as one of ‘tit-for-tat’ exchanges motivated by attribution problems and limited opportunity for strategic gain. This article argues that this dominant perspective overlooks alternative notions of how cyber tools are used to influence. This, in turn, has largely led scholars to ignore second-order effects – meaning follow-on effects triggered by a more direct outcome of an initial cyber action – on domestic conditions, institutions, and individual stakeholders. This article uses the case of cyber-enabled political warfare targeting the United States in 2016 to show how escalation can occur as a second-order effect of cyber operations. Specifically, the episode led to a re-evaluation of foreign cyber strategy on the part of American defence thinkers that motivated an offensive shift in doctrine by 2018. The episode also directly affected both the political positions taken by important domestic actors and the attitude of parts of the electorate towards interference, both of which have reinforced the commitment of military planners towards assertive cyber actions.

Make sure to read the full paper titled Beyond tit-for-tat in cyberspace: Political warfare and lateral sources of escalation online by Christopher Whyte at https://doi.org/10.1017/eis.2020.2

Credit: Jozsef Hunor Vilhelem

Cyber-enabled political warfare takes place on a daily basis. It is orchestrated by democracies and authoritarian states alike. A prevailing academic school of thought evaluates these cyber operations along a four-pronged framework:

(1) Common intelligence-gathering
(2) Signal testing
(3) Strategic reconnaissance which may result in a
(4) Major cyber assault on critical infrastructure

On both sides, attacker and defender, it is incredibly difficult to determine whether a cyber operation is a tolerated everyday occurrence or a prelude to, if not the final, attack against national security. This overpowering signal-to-noise imbalance has led to a dominant academic perspective that argues cyber operations are an endless loop of retaliatory instances, overlooking clandestine long-term objectives. It begs the question: when does an instance of cybersecurity become a matter of national security? When does a cyber operation escalate into full-on warfare? In this paper, the author frames cyber operations as an instrument to influence public policy beyond the mere breach of cybersecurity after escalation. Through examples of cyber-enabled political warfare, the author makes a case that vulnerabilities in democratic societies originate from a failure to evaluate cyber-enabled political warfare under cyber conflict standards, thereby creating a vacuum in which policy development is skewed to overstate potential cyber risks.

Cyber operations resulting in cyber conflict are here to stay. In an increasingly accessible space of computer science and affordable hardware, nation states as well as hostile fringe groups find more and more fertile ground to develop new generations of cyber tools to pursue anything from criminal objectives to ideological influence operations that subvert public opinion. Because cyber operations are an everyday occurrence, the first problem is identifying a targeted cyber operation as a departure from regular everyday probes in cyberspace. The aforementioned affordability makes the situation harder to assess, since a cyber operation may originate from a state actor or be a proxy action driven by fringe groups that may or may not be affiliated with a state actor. Here, states need to decide between tolerance, which may result in a failure to detect a major assault on critical infrastructure, and a measured response, which will always give away signal that an opponent may abuse for future cyber operations. Of course, the former carries the risk of escalating into a real-world conflict, whereas the latter carries the risk of setting the stage for a real-world conflict under even less favorable circumstances. In this latter scenario the author argues for considering the second-order effects on public policy. In other words, when investigating cyber operations, it is necessary to look beyond the technical means and read the attack against current affairs. This notion carries into the policy development process in the event of a shift in strategic policy.
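
Separating routine probes from out-of-pattern activity is, at its core, an anomaly detection problem. Purely as an illustration (this is not a method proposed in the paper, and the features are hypothetical), an analyst might score connection logs with an unsupervised model such as scikit-learn’s IsolationForest:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-connection features: bytes transferred, session duration (s),
# distinct ports touched, failed-login count.
rng = np.random.default_rng(0)
routine = rng.normal(loc=[5_000, 30, 2, 0], scale=[1_000, 10, 1, 0.5], size=(1_000, 4))

# Learn what "everyday" traffic looks like, flagging roughly 1% of it as unusual.
model = IsolationForest(contamination=0.01, random_state=0).fit(routine)

new_batch = np.array([
    [5_200, 28, 2, 0],       # looks routine
    [90_000, 600, 40, 25],   # large transfer, many ports, many failed logins
])
print(model.predict(new_batch))  # 1 = in pattern, -1 = out of pattern; likely [ 1 -1]
```

Such a score only says that activity is unusual; as the paper argues, deciding whether the anomaly is tolerable noise or a prelude to escalation remains a political judgment rather than a technical one.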

“What pressure points and vulnerabilities dictate the utility of cyber operations and, subsequently, the shape of potential escalation?”

Democracies delegate the power of the people to elected leaders based on an information exchange system that requires integrity. Cyber-enabled political warfare seeks to exploit that integrity by sowing distrust in the political system and its elected leaders. Using the example of the 2016 U.S. presidential election, the author builds a case for how the cyber operations were not only a ‘tit-for-tat’ engagement in support of a particular candidate but rather were deployed with a strategic, long-term objective to subvert the integrity of U.S. democracy. The disruption of the democratic process took place by

(1) Identifying a lack of government regulation for social media platforms that have critical reach with the electorate
(2) Understanding flaws in the algorithmic design of information distribution via social media
(3) Increased cyber attacks on private information that carry disruptive elements once published
(4) Increased deflection of attempts to specifically attribute cyber operations, thereby enabling plausible deniability
(5) A domestic political landscape that is so polarized that it tolerates foreign interference or is even further divided by domestic agents’ rhetoric and
(6) A foreign actor (Russia) who is willing to exploit these vulnerabilities

Through these various inter-connected and standalone stages of cyber-enabled political warfare, the Russians were able to effectively undermine public trust in both political candidates and the democratic process, to an extent that triggered a critical reevaluation of U.S. cyber strategy resulting in new public policy. The implication for policy makers is to critically consider the lateral side effects of cyber operations beyond the method employed and the damage done. The potential to influence the decision-making of state leaders might be enhanced by these second-order effects, especially when they are misinterpreted. Aside from attribution, an effective policy response must take a holistic approach beyond closing a vulnerability in national security.

A History Of Disinformation And Political Warfare

After political powerhouse Hillary Clinton lost in spectacular fashion to underdog Donald J. Trump in the 2016 U.S. presidential election, the world was flabbergasted to learn of foreign election interference orchestrated by the Russian Internet Research Agency. Its mission: to secretly divide the electorate and skew votes away from Clinton and towards Trump. In order to understand the present, one must know the past. This is the baseline of ‘Active Measures – The Secret History of Disinformation and Political Warfare’ by Johns Hopkins Professor of Strategic Studies Thomas Rid.

I bought this book to study the methodology, strategy and tactics of disinformation and political warfare. To my surprise, the book spends only 11 pages on disinformation. The remaining 424 pages introduce historic examples of influence operations, with the bulk dedicated to episodes of the Cold War. Rid offers insights into the American approach to defending against a communist narrative in a politically divided Germany. He details Soviet influence operations that time and again smeared American democracy and capitalism. The detail spent on the German Ministry of State Security, known as the “Stasi”, is interesting and overwhelming.

While my personal expectation wasn’t met with this book, I learned about retracing historic events to attribute world events to specific nations. It is written for a mass audience and filled with thrilling stories. What is the role of journalistic publications in political warfare? Did Germany politically regress under American and Soviet active measures? Was the constructive vote of no confidence against German chancellor Willy Brandt a product of active measures? Who really spread the claim that the AIDS virus was a failed American experiment? On the downside, this book doesn’t offer many new details on the specifics of disinformation operations. Most contemporary espionage accounts have already been recorded. Defectors told their stories. This makes these stories sometimes bloated and redundant. Nevertheless, I believe that to understand our current affairs, we must connect the dots through the lens of political history. Rid presents the foundations for future research into influence operations.

How COVID19 Will Impact The 2020 US Presidential Election

The way Americans choose their President is complicated. Foreign election interference in previous elections as well as a polarized, partisan political arena at home have only made it more complicated to elect the leader of the free world. In the 2020 U.S. presidential election, the coronavirus pandemic completely upended how presidential campaigns rally supporters and how to run for public office at large. A Postal Service that is underfunded and stripped of its human capital is facing an unprecedented volume of mail-in ballots. This leaves me with the question: What is the impact of COVID on the race for the White House? How are both campaigns using it (or not) to drive their political pitch to capture voters? An answer might be found in a recent data memo by researchers from the University of Leeds – School of Politics and International Studies.

tl;dr

The impact of COVID on the upcoming November 2020 US election will be an important topic in the coming months. In order to contribute to these debates, this data memo, the final in our summer 2020 series on COVID, considers this question based on an analysis of social media discourse in two week-long periods in late May and early July. We find that only a very small proportion of tweets in election-related trends concern both the election and COVID. In the May period, there was much evidence of conspiracy-style and misinformative content, largely attacking the Democrats, the seriousness of COVID and postal-voting. Tweets also showed that the stances of the Presidential nominees towards the coronavirus has emerged as a major point of political differentiation. In the July period, tweets about COVID and the election were dominated by the influence of a new anti-Trump Political Action Committee’s viral videos, with the hashtags associated with these videos found in 2.5% of all tweets in election-related trends across the period. However, this criticism was not mirrored in the wider dataset of election-related or political tweets in election-related trends. Criticism of Trump was frequent across all time periods and samples, but discourse focused far more on Trump especially in the July period in which tweets about Trump outnumbered tweets about Biden 2 to 1. We conclude that these patterns suggest the issue of COVID in the US has become so highly politicised that it is largely only one side of the political spectrum engaging with how COVID will impact the US election. Thus, we must ask going forward not how COVID will impact the process and outcome of the election but rather how COVID will be used as a political and campaign issue in the coming election.

Make sure to read the full data memo titled COVID’s Impact on the US 2020 Election: Insights from Social Media Discourse in the Early Campaign Period by Gillian Bolsover at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3714755 

Chip Somodevilla (Getty Images) & Gerry Broome (AP Photo)

We had a good run in the first three months of 2020. Then the COVID pandemic spread across the globe. Most industrial nations had to shut down their economies, with rigorous lockdown and shelter-in-place policies paralyzing their citizens and economies alike. COVID also impacted the democratic processes of many developed nations. For example, New Zealand rescheduled its national election to late 2020. Elections for the Hong Kong city legislature were postponed until 2021. At least 88 elections were impacted in one way or another by COVID. This data memo focuses on the 2020 U.S. presidential election, which will not only decide a highly polarized presidential campaign between incumbent President Donald Trump and his Democratic challenger Joe Biden, but also elect the entire U.S. House of Representatives, a third of the U.S. Senate and decide numerous races for public office. It takes a snapshot from the campaign trail, comparing social media data discussing the election and COVID in conjunction with either candidate, and it demonstrates not only how the pandemic impacts the 2020 U.S. presidential election but also how COVID is used as a political weapon to advance campaign objectives.

For most people across the globe, mid-March marks the beginning of indoor restrictions, facemasks, social distancing and the loss of what little certainty our socio-economic environments had to offer. It was a pivot point for political operatives, who in past elections were able to rely on a candidate’s charisma at in-person political rallies. COVID forced both campaigns to adhere to public health guidelines. Republican and Democratic campaign events were either cancelled or moved indoors and online. Supporters and candidates alike were confined to choppy Zoom calls on small screens. Studies have shown that national crises usually create an increase in favorability for an incumbent president, e.g. the popularity of George W. Bush spiked following the terrorist attacks on September 11. They also tend to benefit Republican or conservative leaders more, particularly in conjunction with a patriotic narrative. This effect is known as the ‘Rally-Round-The-Flag’ effect. However, its impact is subject to media coverage and proliferation of the political narrative that is spun up by the incumbent’s campaign. And Trump, an experienced social media operative, stood to benefit from the COVID pandemic: the U.S. economy was in good shape in Q1 2020, the tactical assassination of Qasem Soleimani did not start another war in the Middle East, nor did the impeachment proceedings cause any apparent political damage to his reelection campaign. The administration’s track record could have been far worse. Despite these advantages, however, Trump’s often erratic behavior prompted social media platforms to apply content policies to public figures and politicians more restrictively. Twitter started labelling Trump’s tweets as misinformation or removed tweets for violating Twitter’s content policies.

Source: https://twitter.com/realDonaldTrump/status/1265255835124539392

The researchers focused on a snapshot of both campaigns by analyzing Twitter data in the months of May and July. To avoid limitations or cognitive biases, the researchers took sample data from all trending topics within the U.S. They found a small number of trends solely focused on the election, while a large number of trends concerned general politics. COVID was found to be a significant topic of discussion. However, the researchers found little evidence of trends directly discussing the connection between COVID and the election. A few example cases demonstrated hyper-partisan, authoritarian labelling to divide an in-group from an out-group, for example the unrestricted proliferation of baseless conspiracy claims against the Democratic party in conjunction with misinformation, as seen below:

Source: https://twitter.com/realjameswoods/status/1264686760509882376

Other examples appeared to offer evidence but misinterpreted it, thereby shifting responsibility away from the administration:

Source: https://twitter.com/BasedSavannah/status/1265021232392650753

The majority of these hyper-partisan examples played out on the conservative end of the political spectrum. Democrats were found to capitalise on fewer opportunities to link the administration’s failure to mitigate COVID with the election, because Democrats treated the pandemic as a serious health issue demanding a scalable policy solution rather than a campaign topic to be trifled with. This imbalance between the two political campaigns created further polarization of the issues: Democrats moving campaign activity online was used by Republicans as further ‘evidence’ for a conspiracy. Social media users reacted to this behavior by retracting support for both candidates. Anti-Biden content dropped, while sentiment towards Trump remained at a consistently negative level. Trump, however, generated twice as much traffic, which drove more voter attention to his campaign. Despite consistent criticism of the administration’s handling of the pandemic, the negative sentiment towards Trump did not create tangible support for Biden. And visibility can also be an indicator of success in the election; as the incumbent, Trump has the advantage over Biden. The researchers therefore conclude that the hyper-partisan narrative surrounding COVID and the election has been successful in using COVID as a political weapon to undermine the solutions-oriented approach driven by the Democratic party, which further entrenched both political camps at extreme ends, making this election even more polarized. In summary, it can be argued that Republicans and Trump utilized COVID as a means for political gain, while Democrats focused more on the issue of crisis management, which conservatives spun into a polarised partisan attack detached from scientific realities.

Political Warfare Is A Threat To Democracy. And Free Speech Enables It

“I disapprove of what you say, but I will defend to the death your right to say it” is an interpretation of Voltaire’s principles by Evelyn Beatrice Hall. Freedom of expression is often cited as the last frontier before falling into authoritarian rule. But is free speech, our greatest strength, really our greatest weakness? Hostile authoritarian actors seem to exploit these individual liberties by engaging in layered political warfare to undermine trust in our democratic systems. These often clandestine operations pose an existential threat to our democracy.   

tl;dr

The digital age has permanently changed the way states conduct political warfare—necessitating a rebalancing of security priorities in democracies. The utilisation of cyberspace by state and non-state actors to subvert democratic elections, encourage the proliferation of violence and challenge the sovereignty and values of democratic states is having a highly destabilising effect. Successful political warfare campaigns also cause voters to question the results of democratic elections and whether special interests or foreign powers have been the decisive factor in a given outcome. This is highly damaging for the political legitimacy of democracies, which depend upon voters being able to trust in electoral processes and outcomes free from malign influence—perceived or otherwise. The values of individual freedom and political expression practised within democratic states challenges their ability to respond to political warfare. The continued failure of governments to understand this has undermined their ability to combat this emerging threat. The challenges that this new digitally enabled political warfare poses to democracies is set to rise with developments in machine learning and the emergence of digital tools such as ‘deep fakes’.

Make sure to read the full paper titled Political warfare in the digital age: cyber subversion, information operations and ‘deep fakes’ by Thomas Paterson and Lauren Hanley at https://www.tandfonline.com/doi/abs/10.1080/10357718.2020.1734772

MC2 Joseph Millar | Credit: U.S. Navy

This paper’s central theme is at the intersection of democratic integrity and political subversion operations. The authors describe an increase in cyber-enabled espionage and political warfare due to the global spread of the internet. They argue it has led to an imbalance between authoritarian and democratic state actors. Their argument rests on the notion that individual liberties such as freedom of expression put democratic states at a disadvantage compared to authoritarian states. As a result, authoritarian states are observed to more often choose political warfare and subversion operations, while democracies are confined to breaching cybersecurity and conducting cyber espionage. Cyber espionage is defined as

“the use of computer networks to gain illicit access to confidential information, typically that held by a government or other organization”

and is not a new concept. I disagree with the premise of illicit access because cyberspace specifically enables the free flow of information beyond any local regulation. ‘Illicit’ is either redundant, since espionage does not necessarily require breaking laws, rules or customs, or it is duplicative of ‘confidential information’, which I interpret as synonymous with classified information, though one might argue about the difference. From a legal perspective, the information does not need to be obtained through illicit access.

With regard to the broader term political warfare, I found the definition of political warfare as, 

“diverse operations to influence, persuade, and coerce nation states, organizations, and individuals to operate in accord with one’s strategic interests without employing kinetic force” 

most appropriate. It demonstrates the depth of political warfare, which encompasses influence and subversion operations outside of physical activity. Subversion operations are defined as 

“a subcategory of political warfare that aims to undermine institutional as well as individual legitimacy and authority”

I disagree with this definition, for it fails to emphasize the difference between political warfare and subversion – both undermine legitimacy and authority. However, a subversion operation is specifically aimed at eroding and deconstructing a political mandate. It is the logical next step after political warfare has influenced a populace in order to achieve political power. The authors see the act of subversion culminating in a loss of trust in democratic principles. It leads to voter suppression, reduced voter participation, and decreased and asymmetrical review of electoral laws, but more importantly it challenges citizens’ democratic values. It is an existential threat to a democracy. It favors authoritarian states detached from the checks and balances that are usually present in democratic systems. These actors are not limited by law, civic popularity or reputational capital. Ironically, this bestows a certain amount of freedom upon them to deploy political warfare operations. Democracies, on the other hand, uphold individual liberties such as freedom of expression, freedom of the press, freedom of assembly, equal treatment under law and due process. As demonstrated during the 2016 U.S. presidential election, a democracy generally struggles to distinguish political warfare initiated by a hostile foreign state from certain segments of the population pursuing their strategic objectives by leveraging these exact individual freedoms. An example from the Mueller Report

“stated that the Internet Research Agency (IRA), which had clear links to the Russian Government, used social media accounts and interest groups to sow discord in the US political system through what it termed ‘information warfare’ […] The IRA’s operation included the purchase of political advertisements on social media in the names of US persons and entities, as well as the staging of political rallies inside the United States.”

And it doesn’t stop in America. Russia is deploying influence operations in volatile regions on the African continent. China has a history of attempting to undermine democratic efforts in Africa. Both states aim to chip away power from former colonial powers such as France, or at least to suppress efforts to democratise regions in Africa. China is also deeply engaged in large-scale political warfare in the Southeast Asian region over regional dominance as well as territorial expansion, as observed in the South China Sea. New Zealand and Australia recorded numerous incidents of China’s attempted influence operations. Australia faced a real-world political crisis when Australian Labor Senator Sam Dastyari was found to be connected to political donor Huang Xiangmo, who has ties to the Chinese Communist Party, giving China a direct route to influence Australian policy decisions.

The paper concludes with an overview of future challenges posed by political warfare. With more and more computing power readily available, the development of new cyber tools and tactics for political warfare operations is only going to increase. Authoritarian states are likely to expand their disinformation playbooks by tapping into the fears of people fueled by conspiracy theories. Developments in machine learning and artificial intelligence will further improve inauthentic behavior online. For example, partisan political bots will become more human-like and harder to discern from real human users. Deepfake technology will improve by tapping into larger datasets from the social graph of every human being, making it increasingly possible to impersonate individuals to gain access or achieve certain strategic objectives. Altogether, political warfare poses a greater challenge than cyber-enabled espionage, particularly for democracies. Democracies need to understand the asymmetrical relationship with authoritarian actors and dedicate resources to effective countermeasures to political warfare without undoing civil liberties in the process.

Essential Politics: Watch This Ahead Of The 2020 Election

I put together a list of my favorite political documentaries. The focus is on the upcoming election. It’s not a complete list, and it never will be. It takes the pulse ahead of the conclusion of a polarized campaign year. Maybe it will help you unwind before you cast your vote (if you haven’t already)!

tl;dr

Make sure to vote on November 3rd. Also make sure to watch The Choice 2020 and Dark Money. Both documentaries offer clear, unfiltered insights into the minds of Joe Biden and Donald Trump, and both are designed to make you critically reflect on the impact your vote has on the health of our democracy. Lastly, I recommend taking a look at The Circus too. It’s an entertaining yet jaw-dropping show about the inner workings of campaigns. It’s a reminder that as long as not everybody can afford to run for office, our democracy needs to improve.

Get Me Roger Stone (2017)

This 1 ½ hour long documentary film details the rise and fall of Republican political operative, former Nixon aide, long-time friend of Donald Trump and convicted felon Roger Stone. The life of Roger Stone is marked by political victories without ever becoming successful in politics himself. A long-time fixture in Republican politics, Roger Stone was never accepted by the Republican establishment. Stone is the centerpiece of this film, showing his eccentric character in interviews about his journey from Richard Nixon to Ronald Reagan to Donald Trump. While the stories told are intriguing in nature, the film allocates too much credit to a marginalized political strategist, who merely deployed shock-and-awe techniques to stun his opposition without much long-term strategy at play.

Watch on Netflix: https://www.netflix.com/title/80114666

Dark Money (2018)

The Supreme Court decision in Citizens United v. FEC was a landmark ruling that paved the way for political interest groups to influence U.S. elections. Dark Money is a 1 ½ hour long documentary film that follows John S. Adams of the Montana Free Press on his journey to investigate how corporate money is undermining American politics. It is a harrowing contemporary film that should rattle all Americans, for it details the existential threat to local journalism, the last check able to properly balance corporate exploitation. This shocking story of intentional deceit of American voters contributes to a larger debate the U.S. electorate must face about how their democracy is eroded when elections are bought and sold.

More info: https://www.darkmoneyfilm.com

Knock Down The House (2019)

This 1 ½ hour long documentary film follows four female Democratic candidates in their 2018 midterm election campaigns for Congress. The candidates are Alexandria Ocasio-Cortez (New York), Amy Vilela (Nevada), Cori Bush (Missouri) and Paula Jean Swearengin (West Virginia), each facing a long-term Democratic incumbent who hasn’t been challenged in decades. The film is a gripping story about the absurdity of election campaigning in American politics. But more importantly, it tells the stories of four women who not only face a long-term incumbent with years of political experience and a large donor base, but also the challenge of overcoming the stigma that a woman cannot be a leader in an executive role. With the documentary following four candidates, I found too much screen time focused on Alexandria Ocasio-Cortez, who has a polarizing nature but really is only one fourth of this larger, systemic issue. Nevertheless, this documentary is a must-see to get an understanding of the critical failures in diversity and equality that America has turned a blind eye to.

More info: https://knockdownthehouse.com

Hillary (2020)

Across four episodes, this 4 ½ hour long documentary series about Hillary Clinton’s unsuccessful 2016 presidential campaign chronicles the roller-coaster emotions of running a national campaign. It reveals an unseen personal side of Hillary Clinton by looking at milestones in her career and personal life that shaped her thinking as a woman, mother and politician. Archival footage is seamlessly interlaced with contemporary interviews. This documentary delivers a perfect balance of Hillary Clinton’s extraordinary intellectual breadth and impact on foreign and domestic political affairs while remaining captivating and worth your while.

Watch on hulu: https://www.hulu.com/series/hillary-793891ec-5bb7-4200-ba93-e3629532d670

The Circus: Inside The Craziest Campaign On Earth, Season 5 (2020)

I am not sure if The Circus needs any introduction. Running for five seasons with currently 84 individual episodes, each 30 minutes or longer, this epic TV documentary series is a real-time political insider show you simply cannot miss out on. In season 5, the show begins in the aftermath of the impeachment trial against Donald Trump, quickly transitioning to the first primary elections of the 2020 U.S. presidential race in Iowa, New Hampshire and Nevada. Then COVID-19 hit the country. The restrictions and nationwide lockdown measures changed this election like none before it. After episode 8 in March 2020, the show stopped due to the lockdown measures, only to resume as “The New Abnormal” with episode 9 in August 2020. The long break changed everything, yet it didn’t change the tension and excitement of a run for the highest office in the land of the free. The hosts of The Circus, John Heilemann, Mark McKinnon and Alex Wagner, do a stellar job of observing the candidates’ campaigns and asking the hard questions while reflecting on the bizarre experience of running for president during a pandemic.

Watch on Showtime: https://www.sho.com/the-circus-inside-the-greatest-political-show-on-earth/season/5 

The Swamp (2020)

This 1 ½ hour long documentary film follows three Republican Congressmen, Matt Gaetz (Florida), Thomas Massie (Kentucky) and Ken Buck (Colorado), in their efforts to leave a mark on the American political landscape. It details the need for fundraising and donations that leave members at the mercy of interest groups and lobbyists. It is another concerning picture of American politics chained to corporate interests. The documentary is hosted by Harvard Law professor Lawrence Lessig, who founded “Rootstrikers” to reduce the corrosive influence of corporate money in American democracy.

Watch on HBO: https://www.hbo.com/documentaries/the-swamp 

The Choice 2020: Trump vs. Biden (2020)

The name says it all for this 2-hour long documentary film by Frontline. Ahead of the most polarized election in U.S. history, the film focuses on interviews with family, friends and foes of both candidates, Joe Biden and Donald Trump. It chronicles how both Biden and Trump handled themselves during times of crisis and how they responded to adversity. It’s about who these men asking for your vote really are.

Watch on PBS: https://www.pbs.org/wgbh/frontline/film/the-choice-2020-trump-vs-biden/

+++++++++++++++++++++++++++++++

Make sure to vote on November 3rd

More info here: https://www.usa.gov/voting

How Political Bots Worsen Polarization

Do you always know who you are dealing with? Probably not. Do you always recognize when you are being influenced? Unlikely. I find it hard to pick up on human signals without succumbing to my own preexisting biases. In other words, maintaining “an open mindset” is easier said than done. A recent study found this to be true in particular for dealing with political bots.

tl;dr

Political bots are social media algorithms that impersonate political actors and interact with other users, aiming to influence public opinion. This research investigates the ability to differentiate bots with partisan personas from humans on Twitter. This online experiment (N = 656) explores how various characteristics of the participants and of the stimulus profiles bias recognition accuracy. The analysis reveals asymmetrical partisan-motivated reasoning, in that conservative profiles appear to be more confusing and Republican participants perform less well in the recognition task. Moreover, Republican users are more likely to confuse conservative bots with humans, whereas Democratic users are more likely to confuse conservative human users with bots. The research discusses implications for how partisan identities affect motivated reasoning and how political bots exacerbate political polarization.

Make sure to read the full paper titled Asymmetrical Perceptions of Partisan Political Bots by Harry Yaojun Yan, Kai-Cheng Yang, Filippo Menczer, James Shanahan at https://journals.sagepub.com/doi/full/10.1177/1461444820942744 

Illustration by C. R. Sasikumar

The modern democratic process is technological information warfare. Voters need to be enticed to engage with information about a candidate, and election campaigns need to ensure accurate information is presented to build and expand an audience, or a voter base. Assurances for the integrity of information do not exist. And campaigns are incentivised to undercut the opponent’s narrative while amplifying their own candidate’s message. Advertisements are a potent weapon in any election campaign. Ad spending for the 2020 U.S. presidential election between Donald Trump and Joe Biden is already at a record high, with total political ad spending for the cycle projected at $10.8 billion. Grassroots campaigns are another potent weapon to decentralize a campaign, mobilize local leaders and impact a particular (untapped) electorate. While the impact of the coronavirus on grassroots efficacy is yet to be determined, these campaigns are critical to solicit game-changing votes.

Which brings me to the central theme of this post and the paper: bots. When ad dollars or a human movement are out of reach, bots are the cheap, fast and impactful alternative. Bots are algorithms that produce a programmed result, automatically or human-induced, with the objective of copying and creating the impression of human behavior. We have all seen or interacted with bots on social media after reaching out to customer service. We have all heard of or received messages from bots trying to set us up with “the chance of a lifetime”. But do we always know when we’re interacting with bots? Are there ways of telling an algorithm apart from a human?

Researchers from Indiana University Bloomington took on these important questions in their paper titled Asymmetrical Perceptions of Partisan Political Bots. It examines the psychological factors that shape our perception and decision-making when interacting with partisan political bots on social media. Political bots are used to impersonate political actors and interact with other users in an effort to influence public opinion. Bots have been known to facilitate the spread of spam and fake news. They have been used and abused to amplify conspiracy theories. And usage leads to improvement: in conjunction with enhanced algorithms and coding, this poses three problems:

(1) Social media users become vulnerable to misreading a bot's actions as human.
(2) A partisan message, a campaign success or sensitive information can be scaled up through networking effects and coordinated automation. A frightening example would be using bots to declare an election won while voters are still casting their ballots. And
(3) political bots are by nature partisan. A highly polarized media landscape offers fertile ground for political bots to exploit biases and political misconceptions. That means susceptibility is not even a prerequisite; mere exposure to a partisan political bot can lay the groundwork for later manipulation or influence of opinion.

Examples of low-ambiguity liberal (left) and low-ambiguity conservative (right) profiles used as stimuli. Identifiable information is blurred.

The research focuses on whether certain individuals or groups are more easily influenced by partisan political bots than others. This recognition task depends on how skillfully individuals or groups can detect a partisan narrative, recognize their own partisan bias, and either navigate through motivated reasoning or drown in it. Motivated reasoning can be seen as in-group favoritism and out-group hostility, i.e., conservatives favor Republicans and dislike Democrats. Contemporary detection methods include (1) network-based detection, i.e., bots are presumed to be inter-connected, so detecting one exposes its connections to other bots; (2) crowdsourcing, i.e., engaging experts in the manual detection of bots; and (3) feature-based detection, i.e., a supervised machine-learning classifier is trained on statistics of labeled political accounts and flags accounts whose metrics match patterns of inauthentic behavior. These methods can be combined to increase detection rates. At this interesting point in history, it is an arms race between writing code for better bots and building systems to identify novel algorithms at scale. This arms race, however, is severely detrimental to democratic processes, as bots are potent enough to deter participants at the opposing end of the political spectrum, or at least undermine their confidence.
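
To illustrate the feature-based approach, here is a minimal sketch assuming a set of hand-labeled accounts and a handful of behavioral features I picked for illustration. It is not the feature set or model of the authors or of any production detection system; it simply shows how a supervised classifier is trained on labeled accounts and then used to score new ones.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Illustrative per-account behavioral features (assumptions for this example).
FEATURE_NAMES = [
    "tweets_per_day",
    "retweet_ratio",
    "followers_to_following",
    "account_age_days",
    "has_default_profile_image",  # 0 or 1
]

def train_bot_classifier(X: np.ndarray, y: np.ndarray) -> RandomForestClassifier:
    """Fit a supervised classifier on labeled accounts (1 = bot, 0 = human)."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42
    )
    clf = RandomForestClassifier(n_estimators=200, random_state=42)
    clf.fit(X_train, y_train)
    # Held-out precision/recall gives a rough sense of detection quality.
    print(classification_report(y_test, clf.predict(X_test)))
    return clf

# Hypothetical usage: score a new account (feature values made up for illustration).
# clf = train_bot_classifier(X_labeled, y_labeled)
# bot_probability = clf.predict_proba([[180.0, 0.95, 0.02, 30.0, 1.0]])[0, 1]
```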

Examples of high-ambiguity liberal (left) and high-ambiguity conservative (right) profiles used as stimuli. Identifiable information is blurred.

The researchers found that knowingly interacting with partisan political bots only magnifies polarization, eroding trust in the opposing party's intentions. A regular user, however, will presumably struggle to discern a political bot from a politically motivated (real) user, which leaves the potential voter base vulnerable to automated manipulation. To counter this manipulation, the researchers focused on identifying the factors that shape human perception when it comes to the ambiguity between real user and political bot, as well as the recognition of the bot's coded partisanship. Active users of social media were more likely to establish a mutual following with a political bot. These users tended to be more conservative, and the political bots they chose to interact with were likely conservative too. Time played a role insofar as active users who took more time to investigate and understand the political bot, which they only saw as a regular-looking, partisan social media account, were less likely to accurately discern real user from political bot. In the results, this showed (1) a higher chance for conservative users to be deceived by conservative political bots and (2) a higher chance for liberal users to mistake conservative (real) users for political bots. The researchers conclude that

users have a moderate capability to identify political bots, but such a capability is also limited due to cognitive biases. In particular, bots with explicit political personas activate the partisan bias of users. ML algorithms to detect social bots provide one of the main countermeasures to malicious manipulation of social media. While adopting third-party bot detection methods is still advisable, our findings also suggest possible bias in human-annotated data used for training these ML models. This also calls for careful consideration of algorithmic bias in future development of artificial intelligence tools for political bot detection.
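
The quoted caveat about bias in human-annotated training data can be checked, at least roughly, by auditing a detector's error rates across partisan groups. The sketch below is a minimal illustration assuming you already have a trained classifier's predictions and a partisanship label for each profile; both inputs are hypothetical and introduced only for the example.

```python
import numpy as np

def false_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Share of genuine human accounts (label 0) wrongly flagged as bots (1)."""
    humans = y_true == 0
    return float(np.mean(y_pred[humans] == 1)) if humans.any() else float("nan")

def audit_by_partisanship(y_true: np.ndarray,
                          y_pred: np.ndarray,
                          partisanship: np.ndarray) -> dict:
    """Compare false positive rates across partisan groups.

    A large gap between, say, conservative and liberal human accounts suggests
    the training labels, the features, or both carry a partisan skew.
    """
    return {
        group: false_positive_rate(y_true[partisanship == group],
                                   y_pred[partisanship == group])
        for group in np.unique(partisanship)
    }

# Hypothetical usage with predictions from any trained bot classifier:
# rates = audit_by_partisanship(y_true, clf.predict(X), profile_partisanship)
# print(rates)  # e.g. {"conservative": 0.18, "liberal": 0.06}
```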

I am intrigued by these findings. Humans tend to struggle to establish trust online. It is surprising and concerning that conservative bots may be perceived as conservative humans by Republicans, while conservative humans may be perceived as bots by Democrats. For a motivated interest group, the potential to sow distrust and polarize public opinion is nearly limitless. While policymakers and technology companies are beginning to address these issues with targeted legislation, it will take a concerted multi-stakeholder approach to mitigate and reverse the polarization spread by political bots.

The Influence of Technology on Presidential Primary Campaigns

Without social media, there would not be a President Trump. We all felt the Bern in 2016. And let's not forget "pizzagate" or "Yes we can." The power of technology has undeniably impacted elections for political office, but how does it influence voters' decisions on election day? Is social media the lone culprit undermining the integrity of our democracy, or does history offer insights of a sobering nature? These and other questions are analyzed by Anthony J. Gaughan of Drake University. Here is a rundown of his paper "The Influence of Technology on Presidential Primary Campaigns."

The paper examines technological innovations impacting Presidential primary campaigns. Supreme Court decisions such as Buckley v. Valeo and Citizens United v. FEC appear to have paved the regulatory playing field towards unrestricted campaign spending. Contrary to popular belief, Presidential primary campaigns are not skewed to the wealthiest candidates. They simply favor the candidate who is most savvy with current technology and can leverage an audience for the benefit of the campaign.

Gaughan starts his analysis as early as 1912, when Presidential primary campaigns relied on the rail network to expose the candidate to crucial constituents. Radio would bring about change by offering a low-cost, easy and broadly accessible medium available to the general public. It would demonstrate that a candidate with an ability to make his audience feel they knew him like nobody else could overcome other limitations of his persona. By the early 1950s, television would enter the scene to influence voters. Presidential candidate John F. Kennedy overcame incredible odds to win the Protestant state of West Virginia, despite being Catholic, by leveraging TV ads that displayed him as a handsome, professional leader who "would not take orders from any Pope, Cardinal, Bishop or Priest." Television gave rise to Roger Ailes and others who would reshape the appearance of political candidates for public office, most notably in the 1968 Presidential primary campaign of Richard Nixon. Nixon's "funny-looking" appearance would be carefully marketed by distributing only selected shots, accompanied by strong soundtracks and professional flattery – in the process beating Nelson Rockefeller and then-Governor of California Ronald Reagan.

Gaughan concludes his analysis with the emergence of the internet. Senator Barack Obama managed to outfox Senator Hillary Clinton in the 2008 Presidential primary campaign by leveraging social media, which allowed his campaign to build a volunteer network, utilize data analytics to identify potential voters, and eventually outperform the Clinton campaign on the majority of voting metrics. The internet also offered a platform for direct communication with his constituents. Obama's success, however, inspired a real-estate businessman and reality-TV celebrity from the other end of the political spectrum: Donald J. Trump. Trump took the Reagan playbook of celebrity fame turned politics and merged it with 21st-century innovations. On Twitter and Facebook, the Trump campaign selectively targeted ads in crucial swing states to gain political momentum with polarizing memes and divisive content. His existing television fame helped with national recognition, but the free coverage generated through the power of social networks put him over the top against established Republican candidates, front and center before the voters.

The paper concludes that the 2016 Presidential primary campaign was a harbinger of things to come. It is not far-fetched to reason that internet communication will continue to boost political speech across new platforms like TikTok or through new mediums such as virtual or augmented reality. Political candidates entering a primary race can leverage these tools by hiring campaign staff who are native to social media communications and able to read the pulse not only of millennial and adolescent voters but of the party's voter base beyond retirement age and everybody in between. New tools to analyze, scale and engage audiences, which most platforms offer as part of the advertising deal, have the power to enable political novices to make a bid for office. From a regulatory point of view, legislators must revisit campaign spending rules to level the playing field for the networking effects that come with social media. In the interest of voters and of fair, democratic elections, it may be advisable to focus future legislation not on campaign spending in the sense of financial assets but on the actual reach of the audience, including the means used to facilitate that reach.