Is Transparency Really Reducing The Impact Of Misinformation?

A recent study investigated YouTube’s efforts to provide more transparency about the ownership of certain YouTube channels. The study concerned the disclaimers YouTube displays beneath videos to indicate that the content was produced or is funded by a state-controlled media outlet. The study sought to shed light on whether these disclaimers are an effective means of reducing the impact of misinformation.

tl;dr

In order to test the efficacy of YouTube’s disclaimers, we ran two experiments presenting participants with one of four videos: a non-political control, an RT video without a disclaimer, an RT video with the real disclaimer, or the RT video with a custom implementation of the disclaimer superimposed onto the video frame. The first study, conducted in April 2020 (n = 580), used an RT video containing misinformation about Russian interference in the 2016 election. The second, conducted in July 2020 (n = 1,275), used an RT video containing misinformation about Russian interference in the 2020 election. Our results show that misinformation in RT videos has some ability to influence the opinions and perceptions of viewers. Further, we find YouTube’s funding labels have the ability to mitigate the effects of misinformation, but only when they are noticed and the information is absorbed by the participants. The findings suggest that platforms should focus on providing increased transparency to users where misinformation is being spread. If users are informed, they can overcome the potential effects of misinformation. At the same time, our findings suggest platforms need to be intentional in how warning labels are implemented to avoid subtlety that may cause users to miss them.

Make sure to read the full article titled State media warning labels can counteract the effects of foreign misinformation by Jack Nassetta and Kimberly Gross at https://misinforeview.hks.harvard.edu/article/state-media-warning-labels-can-counteract-the-effects-of-foreign-misinformation/

Source: RT, 2020 US elections, Russia to blame for everything… again, Last accessed on Dec 31, 2020 at https://youtu.be/2qWANJ40V34?t=164

State-controlled media outlets are increasingly used for foreign interference in civic events. While independent media outlets can be categorized on social media and associated with a political ideology, state-controlled media outlets generally present themselves as independent and detached from any state-directed political agenda. Yet they regularly create content aligned with the political objectives of the controlling state and its leaders. This deceives the public about their state affiliation and undermines civil liberties. The problem is magnified on social media platforms, with their reach and potential for virality ahead of political elections. A prominent example is China’s foreign interference efforts targeting the pro-democracy movement in Hong Kong.

A growing number of social media platforms have launched integrity measures to increase content transparency and counter the risks of state-controlled media outlets spreading potential disinformation. In 2018, YouTube began to roll out an information panel feature to provide additional context on state-controlled and publicly funded media outlets. These information panels, or disclaimers, are in effect warning labels that make viewers aware of the potential political influence of a government over the information shown in the video. The warning labels don’t provide any additional context on the veracity of the content or whether the content was fact-checked. On desktop, they appear alongside a hyperlink leading to the Wikipedia entry of the media outlet. As of this writing, the feature applies to 27 governments, including the United States government.

Source: DW News, Massive explosion in Beirut, Last Accessed on Dec 31, 2020 at https://youtu.be/PLOwKTY81y4?t=7

The researchers focused on whether these warning labels would mitigate the effects of misinformation on viewers’ perceptions in videos of the Russian state-controlled media outlet RT (Russia Today). RT evades deplatforming by complying with YouTube’s terms of service. This turned the RT channel into an influential resource for the Russian government to undermine the American public’s trust in established American media outlets and in the United States government’s reporting on Russian interference in the 2016 U.S. presidential election. An RT video downplaying the Russian influence operation was used for the study and shown to participants either without a label, with the standard label identifying RT’s affiliation with the Russian government, or with a warning label superimposed onto the video frame carrying the same language and Wikipedia hyperlink. This surfaced the following findings:

  1. Disinformation spread by RT does impact viewers’ perceptions and is effective at doing so.
  2. Videos without a warning label were more successful in reducing trust in established mainstream media and the government.
  3. Videos without YouTube’s standard warning label but with an interstitial carrying the same language superimposed onto the video frame were most effective in preserving the integrity of viewers’ perceptions.

Source: RT, $4,700 worth of ‘meddling’: Google questioned over ‘Russian interference’, Last accessed on Dec 31, 2020 at https://www.youtube.com/watch?v=wTCSbw3W4EI

The researchers further discovered that small changes in the coloring, design and placement of the warning label increase the likelihood that viewers notice it and absorb its information. Both conditions must be met: noticing a label without comprehending its message had no significant effect on viewers’ understanding of the political connection between creator and content.

I’m intrigued by these findings, for the road ahead offers a great opportunity to shape how we distribute and consume information on social media without falling prey to foreign influence operations. Still, open questions remain:

  1. Are these warning labels equally effective on other social media platforms, e.g. Facebook, Instagram, Twitter, Reddit, TikTok, etc.? 
  2. Are these warning labels equally effective with other state-controlled media? This study focused on Russia, a large, globally acknowledged state actor. How does a warning label for content by the government of Venezuela or Australia impact the efficacy of misinformation? 
  3. This study seemed to be focused on the desktop version of YouTube. Are these findings transferable to the mobile version of YouTube?  
  4. What is the impact of peripheral content on viewers’ perceptions, e.g. YouTube’s recommendations showing videos in the sidebar that all claim RT is a hoax versus videos that all give RT independent credibility?
  5. The YouTube channels of C-SPAN and NPR did not appear to display a warning label within their videos. Yet the United States is among the 27 countries currently listed in YouTube’s policy. What are the criteria to be considered a publisher, publicly funded or state-controlled? How are these criteria met or impacted by a government, e.g. passing certain broadcasting legislation or declaration?
  6. Lastly, the cultural and intellectual background of the target audience is particularly interesting. Here is an opportunity to research the impact of warning labels on participants of different political ideologies, economic circumstances and age groups, in contrast to their actual civic engagement ahead of, during and after an election.

Microtargeted Deepfakes in Politics

The 2019 Worldwide Threat Assessment warned of deepfakes deployed to manipulate public opinion. And while the 2020 U.S. presidential election did not see an onslaught of deepfakes undermining voter confidence, experts agree that the threat remains tangible. A recent study conducted by researchers at the University of Amsterdam investigated the impact of a political deepfake meant to discredit a politician that was microtargeted to a specific segment of the electorate.

tl;dr

Deepfakes are perceived as a powerful form of disinformation. Although many studies have focused on detecting deepfakes, few have measured their effects on political attitudes, and none have studied microtargeting techniques as an amplifier. We argue that microtargeting techniques can amplify the effects of deepfakes, by enabling malicious political actors to tailor deepfakes to susceptibilities of the receiver. In this study, we have constructed a political deepfake (video and audio), and study its effects on political attitudes in an online experiment. We find that attitudes toward the depicted politician are significantly lower after seeing the deepfake, but the attitudes toward the politician’s party remain similar to the control condition. When we zoom in on the microtargeted group, we see that both the attitudes toward the politician and the attitudes toward his party score significantly lower than the control condition, suggesting that microtargeting techniques can indeed amplify the effects of a deepfake, but for a much smaller subgroup than expected.

Make sure to read the full paper titled Do (Microtargeted) Deepfakes Have Real Effects on Political Attitudes? by Tom Dobber, Nadia Metoui, Damian Trilling, Natali Helberger, and Claes de Vreese at https://doi.org/10.1177/1940161220944364

Credits: UC Berkeley/Stephen McNally

Deepfakes are a subcategory of modern information warfare. The technology leverages machine learning to generate audio-visual content that imitates original content but differs in both intent and message. Its highly deceptive appearance renders it a potent weapon to influence public opinion, undermine strategic policies or disrupt civic engagement. An infamous example is the deepfake that depicts former president Obama seemingly calling president Trump expletives. Online microtargeting is a form of social media marketing that disseminates advertisements tailored to the specific interests of an identifiable, curated audience. Within a political context, microtargeting is used to spread a campaign message to a specific audience, identified and grouped by shared characteristics, in order to convince that audience to vote for or against a candidate. There are a number of civic risks associated with deploying deepfakes:

  • Deepfake content is hard to tell apart from original, authentic content. While deepfake videos may signal some nefarious intent to a cautious audience, the potential impact of deepfake audio or deepfake text on voter behavior hasn’t been researched as of this writing.
  • Political actors may leverage deepfakes to discredit opponents, undermine news reporting or equip trailing third-party candidates with sufficient influence to erode voter confidence.
  • Used in a political campaign, deepfakes may be strategically deployed to incite a political scandal or to reframe current affairs and regain control of an election narrative.

The study created a deepfake video depicting an interview with a prominent center-right politician of a large Christian democratic party. The manipulated part of the otherwise original and authentic content shows the politician seemingly making a joke about the crucifixion of Jesus Christ:

“But, as Christ would say: don’t crucify me for it.”

This content was shown to a randomly selected group of Christian voters who had identified as religious or conservative, or had voted for this politician in past elections. The researchers found that deepfakes spread without microtargeting impacted attitudes toward the politician but not necessarily toward his political party. However, deepfakes tailored to a specific audience using political microtargeting techniques amplified the discrediting message of the deepfake, impacting attitudes toward both the politician and his political party. Interestingly, staunch supporters of the politician might be shielded from a lasting attitude change due to their own motivated reasoning (bias) derived from the politician’s ideology. For this group, the researchers argue, a deepfake conveying a sufficient degree of discomfort or deviation from the politician’s previous ideology may reach a tipping point at which staunch supporters align with the results of this study, though the study’s limitations also leave room for unforeseen outcomes.
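To make the targeting mechanism concrete, here is a minimal sketch of how a susceptibility-based segment like the one described above could be selected from user attributes. This is my own illustration with hypothetical fields, not the selection procedure used in the study.

```python
# Illustrative segment selection (hypothetical fields, not the study's procedure):
# pick users whose attributes suggest susceptibility to this particular deepfake.
from dataclasses import dataclass

@dataclass
class UserProfile:
    user_id: str
    religious: bool
    conservative: bool
    voted_for_politician: bool  # supported the depicted politician in past elections

def select_microtarget_segment(users: list[UserProfile]) -> list[str]:
    """Return ids of users matching the susceptibility profile for the deepfake."""
    return [
        u.user_id
        for u in users
        if u.religious or u.conservative or u.voted_for_politician
    ]
```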

A roadmap to counter microtargeted deepfakes should include legislators passing regulations to limit online political campaign spending, which would force campaigns to focus their limited financial resources and weed out corporate interests. Second, new regulations should focus on the protection of personally identifiable data. A microtargeting dataset includes location data, personal preferences, website interactions, and more. While this data is valuable within a commercial context, it should be excluded from civic engagements such as elections. Academics will have an opportunity to surface insights on algorithmic bias and improve upon the prevailing machine learning approach of training generative adversarial networks on pre-conditioned datasets. Moreover, future research has an opportunity to further investigate the impact of manipulated media on voter education, confidence and behavior within and outside of political elections.
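For context on that machine learning approach, the following is a minimal, hedged sketch of the adversarial training loop behind generative adversarial networks. The tiny architecture and data dimensions are placeholders of my own; production deepfake pipelines use far larger audio and video models, but the generator-versus-discriminator principle is the same.

```python
# Minimal GAN training loop (illustrative only): a generator learns to produce
# samples that a discriminator cannot distinguish from real training data.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # placeholder sizes, e.g. flattened 28x28 frames

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch: torch.Tensor) -> None:
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Train the discriminator to separate real frames from generated ones.
    noise = torch.randn(batch_size, latent_dim)
    fake_batch = generator(noise).detach()
    d_loss = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fake_batch), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    noise = torch.randn(batch_size, latent_dim)
    g_loss = bce(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The generator never sees the training data directly; it only learns from the discriminator’s feedback, which is what makes the resulting output increasingly hard to distinguish from authentic material.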

Here’s one of my favorite deepfake videos: president Trump explaining money laundering to his son-in-law Jared Kushner in a deepfake(d) scene of “Breaking Bad”.

Manipulated Social Media And The Stock Market

Earlier this year, Elon Musk tweeted that the Tesla stock price was too high. His Twitter account had a reach of 33.4 million followers (41 million as of this writing). The immediate impact on Tesla’s stock price was a midday trading deficit of 9% compared to the previous day’s closing price. Tesla’s company valuation suffered an estimated loss of $14 billion. The episode triggered concerns over a potential violation of his 2018 settlement with the Securities and Exchange Commission, reached after misleading tweets implying he had secured sufficient capital to take Tesla private. Presumably this time it was in response to California’s restrictive shelter-in-place lockdown measures to tackle the coronavirus pandemic. Maybe Elon Musk felt the weight of his responsibilities weigh twice as much during these challenging times. In any case, as an investor, I saw in his actions the power of social media in the hands of a social media influencer (i.e., a stock promoter). Moreover, it made me think about content policies tailored to protect economic value while safeguarding information integrity. This empirical study, conducted at the Pantheon-Sorbonne University, discusses the effects of illegal price manipulation by way of information operations on social media, specifically Twitter.

tl;dr

Social media can help investors gather and share information about stock markets. However, it also presents opportunities for fraudsters to spread false or misleading statements in the marketplace. Analyzing millions of messages sent on the social media platform Twitter about small capitalization firms, we find that an abnormally high number of messages on social media is associated with a large price increase on the event day and followed by a sharp price reversal over the next trading week. Examining users’ characteristics, and controlling for lagged abnormal returns, press releases, tweets sentiment and firms’ characteristics, we find that the price reversal pattern is stronger when the events are generated by the tweeting activity of stock promoters or by the tweeting activity of accounts dedicated to tracking pump-and-dump schemes. Overall, our findings are consistent with the patterns of a pump-and-dump scheme, where fraudsters/promoters use social media to temporarily inflate the price of small capitalization stocks.

Make sure to read the full paper titled Market Manipulation and Suspicious Stock Recommendations on Social Media by Thomas Renault at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3010850

Credit: Lucas Jackson/Reuters

Social media platforms are now an accepted corporate communication channel, closely monitored by investors and financial analysts. For an investor, social media offers swarm intelligence on trading a company’s stock, access to real-time information about the company, product updates and other financially relevant information. Large and small-cap companies usually maintain a presence on social media. However, small-cap companies with low liquidity in particular are vulnerable to stock price manipulation by way of information operations on social media. According to the study, an information-based manipulation involves rumors, misleading or false press releases, stock analyses or price targets, etc., disseminated in a short, time-sensitive period. Nearly 50% of this disinformation is spread by an influencer. In investment terminology this is called a pump-and-dump scheme:

“Pump-and-dump schemes involve touting a company’s stock through false or misleading statements in the marketplace in order to artificially inflate (pump) the price of a stock. Once fraudsters stop hyping the stock and sell their shares (dump), the price typically falls.”

The empirical study collected tweets containing the cashtag ticker symbols of more than 5,000 small-cap companies. Over an eleven-month period, 248,748 distinct Twitter users posted 7,196,307 financially relevant tweets. The data was adjusted for overoptimistic noise traders and financially relevant news reporting. The study found that a spike in the volume of tweets concerning a company’s stock correlates with a spike in trading of that stock, from two days before peak activity on social media up to five days after it. Some content concerned positive financial signals advocating to buy the stock. Other content concerned disinformation about the company’s performance, spread to a large, unsophisticated Twitter audience by influencers in concert with a network of inauthentic accounts and bots. This was then followed by a price reversal over the ensuing trading days. In the aftermath, the actors behind the scheme went into hibernation or ceased social media activity altogether.
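The detection logic behind this kind of analysis can be approximated in a few lines. The following sketch is my own illustration, not the paper’s code; the column names, window and thresholds are assumptions. It flags days with abnormally high tweet volume for a ticker and checks whether the event-day price jump reverses over the following trading week.

```python
# Illustrative sketch: flag abnormal tweet-volume days per ticker and check for
# a pump-and-dump style price reversal. Columns and thresholds are assumptions.
import pandas as pd

def detect_pump_and_dump(tweets: pd.DataFrame, prices: pd.DataFrame,
                         window: int = 30, sigma: float = 3.0) -> pd.DataFrame:
    """tweets: columns ['date', 'ticker']; prices: columns ['date', 'ticker', 'close']."""
    # Daily tweet volume per ticker.
    volume = (tweets.groupby(['ticker', 'date']).size()
                    .rename('n_tweets').reset_index())

    # An "event" is a day where volume exceeds the trailing mean by `sigma` std devs.
    volume['baseline'] = (volume.groupby('ticker')['n_tweets']
                          .transform(lambda s: s.rolling(window, min_periods=5).mean().shift(1)))
    volume['std'] = (volume.groupby('ticker')['n_tweets']
                     .transform(lambda s: s.rolling(window, min_periods=5).std().shift(1)))
    events = volume[volume['n_tweets'] > volume['baseline'] + sigma * volume['std']]

    # For each event, compare the event-day return with the return over the next 5 trading days.
    rows = []
    for _, ev in events.iterrows():
        p = prices[prices['ticker'] == ev['ticker']].sort_values('date').reset_index(drop=True)
        idx = p.index[p['date'] == ev['date']]
        if len(idx) == 0 or idx[0] < 1 or idx[0] + 5 >= len(p):
            continue
        i = idx[0]
        event_return = p.loc[i, 'close'] / p.loc[i - 1, 'close'] - 1
        reversal_return = p.loc[i + 5, 'close'] / p.loc[i, 'close'] - 1
        rows.append({'ticker': ev['ticker'], 'date': ev['date'],
                     'event_return': event_return, 'reversal_return': reversal_return})

    result = pd.DataFrame(rows, columns=['ticker', 'date', 'event_return', 'reversal_return'])
    # Pump-and-dump pattern: sharp positive event-day return followed by a negative drift.
    return result[(result['event_return'] > 0.05) & (result['reversal_return'] < 0)]
```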

Risk And Opportunity For Social Media Platforms

Information operations to manipulate stock prices are quite common on social media. They are hard to detect, and only a few are investigated and successfully prosecuted. Consumers who fall victim to stock disinformation tend to exit the stock market altogether. Moreover, a consumer might reduce their footprint on social media after experiencing real-world financial harm. Depending on the severity of the loss incurred, this might even lead to litigation against social media platforms. The tools leveraged by bad actors undermine the integrity efforts of social media platforms, which, in some cases or in conjunction with class-action litigation, can lead to greater scrutiny by financial watchdogs pushing for tighter regulation.

To tackle these risks, social media platforms must continue to expand enforcement against coordinated inauthentic behavior to eliminate bot networks used to spread stock disinformation. Developing an account verification system dedicated to financial professionals, analysts and influencers would support and ease enforcement. Social media platforms should also ease the onboarding of publicly traded companies to maintain a presence on social media, which decreases the effects of collateral price reversals. In order to mitigate stock disinformation, social media platforms must develop content policies tailored to balance freedom of expression, including price speculation, with the inherent risk of market-making comments. The latter will hinge on reach and engagement metrics, but also on detailed definitions of financial advice and the time and location of the content. Here, a close working relationship with watchdogs will improve operations. Added friction would also help: for example, an interstitial outlining regulatory requirements before posting, a delayed time of posting, or labels informing the consumer of the financial risks associated with acting on the information in the post (see the sketch below). There are obviously more measures that come to mind. This only serves as the start of a conversation.
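As a rough illustration of such a friction rule, here is a minimal sketch. It is my own assumption of how a policy like this could be encoded; the thresholds, field names and actions are hypothetical, not any platform’s actual implementation.

```python
# Illustrative friction rule (hypothetical thresholds and fields): posts mentioning
# small-cap cashtags from high-reach accounts get a risk label and a posting delay.
import re
from dataclasses import dataclass

CASHTAG = re.compile(r"\$[A-Za-z]{1,5}\b")

@dataclass
class Post:
    author_followers: int
    text: str

@dataclass
class Moderation:
    show_risk_label: bool
    delay_minutes: int
    require_interstitial: bool

def moderate(post: Post, small_cap_tickers: set[str],
             reach_threshold: int = 100_000) -> Moderation:
    tickers = {t.lstrip("$").upper() for t in CASHTAG.findall(post.text)}
    touches_small_cap = bool(tickers & small_cap_tickers)
    high_reach = post.author_followers >= reach_threshold

    return Moderation(
        show_risk_label=touches_small_cap,                              # financial-risk label
        delay_minutes=15 if (touches_small_cap and high_reach) else 0,  # cool-off delay
        require_interstitial=touches_small_cap and high_reach,          # regulatory reminder
    )

# Example: a high-reach account touting an illiquid small-cap stock.
decision = moderate(Post(author_followers=2_000_000, text="$XYZ is going to the moon, buy now!"),
                    small_cap_tickers={"XYZ"})
```

In practice, a rule like this would need to be paired with human review and detection of coordinated account networks to avoid both over- and under-enforcement.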

So, was Elon Musk’s tweet “Tesla stock price too high imo” an information-based market manipulation, a market-making comment or just an exercise of his free speech? 


Understanding America

When I first arrived in New York City, this most portrayed of American cities appeared intimidating with its never-ending concrete jungles, incessant traffic and an overwhelmingly fast-paced populace. It made me wonder, is this the land of the free? Is this what America is like? Fast-forward a couple of decades, and the United States finds itself polarized, divided, void of compassion and insecure about its future. In times like these, I was looking for its identity. An identity forged by openness, not oppression. A dear family member recommended reading Travels with Charley in Search of America by John Steinbeck. It would become a starting point for understanding how we got here.

Steinbeck’s travelogue is composed of simple ingredients: a man and his best friend, a three-quarter-ton pickup truck, and the wide and open roads of America. His best friend, a poodle named Charley, is a main character in this non-fiction novel. His pickup truck, Rocinante, carries a little camper designed for living in. The duo road-trips across rural America, sleeping wherever Rocinante finds a parking spot, and a theme of this philosophical journey is to engage strangers in conversation over a cup of coffee. While this adventure takes place in the America of the 1960s, it is somehow a timeless reflection of America’s soul. In somber passages, Steinbeck describes the struggle of Black Americans for equality. In more uplifting parts, he paints an American identity imbued with the spirit of tall, green sequoias, which have seen all of history’s main events – free of discrimination. It’s a book about America, the beautiful, the ugly and the never-finished. Much as Steinbeck didn’t know his country in the 1960s, I don’t know my country in the 2020s:

“I did not know my own country. I, an American writer, writing about America, was working from memory, and the memory is at best a faulty, warpy reservoir. I had not heard the speech of America, smelled the grass and trees and sewage, seen its hills and water, its color and quality of light. I knew the changes only from books and newspapers. But more than this, I had not felt the country for twenty-five years. In short, I was writing of something I did not know about”

Nevertheless, acknowledging a lack of knowledge is the first step in learning. “Travels with Charley in Search of America” is an important piece of American literature. Its authentic historical account, its poetic beauty and the felt tragedy that is this great American democracy live on in our generation. What will we learn from it?

The Nuclear Option in Cyberspace

Stuxnet was a malicious computer worm that caused substantial damage to Iran’s nuclear program. It was likely deployed to prevent a conventional military strike against Iran’s nuclear facilities. The 2015 cyber attacks on Ukrainian critical infrastructure caused a loss of power for hundreds of thousands of Ukrainian citizens in December. They were likely staged to test cyber operations ahead of the 2016 U.S. presidential election. Both cases offer interesting takeaways: (a) offensive cyber operations often empower rather than deter an adversary, and (b) offensive cyber operations resulting in a devastating attack on the integrity of the target may be met with a response via conventional military means. But where exactly is the threshold for escalating a cyber attack into conventional domains? How can policymakers rethink escalation guidelines without compromising international relations? This paper discusses achieving strategic stability in cyberspace by transferring the concept of a nuclear no-first-use policy into the current U.S. cyber strategy.

tl;dr

U.S. cyber strategy has a hypocrisy problem: it expects its cyberattacks to deter others (defend forward) without triggering escalatory responses outside cyberspace, while it is unclear about what it considers off-limits. A strategic cyber no-first-use declaration, like the one outlined in this article, could help solve risks of inadvertent instability while allowing cyber-​operations to continue.

Make sure to read the full paper titled A Strategic Cyber No-First-Use Policy? Addressing the U.S. Cyber Strategy Problem by Jacquelyn Schneider at https://www.tandfonline.com/doi/full/10.1080/0163660X.2020.1770970

Credit: J.M. Eddins Jr./Air Force

In 2018, the Trump administration adopted its progressive National Cyber Strategy. These sorts of policy declarations are commonly filled with agreeable generalities, but this National Cyber Strategy, read in conjunction with the 2018 Department of Defense Cyber Strategy, introduced a new, rather reckless posture of forward attack in cyberspace as a means of preemptive cyber defense. Key themes, e.g.

  • Using cyberspace to amplify military lethality and effectiveness;  
  • Defending forward, confronting threats before they reach U.S. networks;  
  • Proactively engaging in the day-to-day great power competition in cyberspace;  
  • Actively contesting the exfiltration of sensitive DoD information; 

raise important questions of national security. Why does an industrial superpower like the United States feel a need to start a cyber conflict when it could redirect resources toward building effective cyber defense systems? How many cyber attacks against critical U.S. infrastructure actually succeed, and would that number justify a forward-leaning cyber defense? What is the long-term impact of charging the military with cyber strategy when the private sector in Silicon Valley is in a much better position to create built-in cybersecurity, and why aren’t resources invested back into the economy to spur cyber innovation? Each of these questions is material for future dissertations. Until then, instead of a defend-forward strategy in cyberspace, a cyber policy of no-first-use might complement securing critical infrastructure while assuring allies that U.S. cyber capabilities are unmatched in the world and merciless if tested.

No-first-use is a concept originating in the world of nuclear warfare. In essence, it means 

“a state declares that although it has nuclear weapons, and will continue to develop and rely on these weapons to deter nuclear strikes, it will not use nuclear weapons first.”

Instead, conventional (non-nuclear) warfare will be used to respond to attacks on its sovereignty. These policies are not treaties with legal ramifications if violated. They are neither agreements to ban the production of certain weapon systems nor intended as arms control measures. In fact, no-first-use policies often take the shape of a public commitment signaling restraint to friends and foes. They are made to promote strategic stability in a given domain.

No-First-Use Cyber Policy 

Taking the no-first-use concept to cyberspace may be a national security strategy of low cost and high impact. Cyberspace is by its nature transient, hard to control, cheap to enter and actor-independent. For example, a web crawler is at one time a spiderbot indexing websites so that search engines produce better results. At another time, the same web crawler is configured to reconnoiter adversary cyber infrastructure and collect intelligence. Yet another time, the tool may carry a malicious payload while scraping website data (see the sketch below). This level of ambiguity introduces a wealth of policy hurdles to overcome when drafting a no-first-use cyber policy. Schneider recommends starting by distinguishing the elements of cyber operations in their strategic context. As mentioned before, some actions in cyberspace are permissible, even expected, while other actions using the same technology are not. Now, there is no precedent for a cyber operation so effective at scale that it compromised its target state altogether. For example, no known cyber operation has ever irreparably corrupted the energy infrastructure of a state, destroyed the social security and health data of its citizens, and redirected all government funds, bonds and securities without a trace, or left the state unable to respond within conventional warfare domains. This means the escalation risk from a cyber operation against critical infrastructure is lower in cyberspace compared to an attack with conventional weaponry. Therefore a successful no-first-use cyber policy must focus on the cyber operations that produce the most violent results and effectively disrupt a conventional defense (by disrupting critical infrastructure).
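To make that dual-use ambiguity concrete, here is a minimal crawler sketch of my own; the seed URL is hypothetical. Nothing in the code reveals whether it indexes pages for a search engine, maps an adversary’s infrastructure, or stages reconnaissance for a later payload; the intent lives entirely outside the code.

```python
# Minimal web crawler (illustrative; the seed URL is hypothetical). The exact same
# code could index pages for a search engine, map an adversary's infrastructure,
# or stage reconnaissance for a later payload -- intent is not visible in the code.
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen
import re

LINK = re.compile(r'href="([^"#]+)"')

def crawl(seed: str, max_pages: int = 50) -> dict[str, list[str]]:
    """Breadth-first crawl from `seed`, returning a map of page -> outgoing links."""
    seen, queue, graph = set(), [seed], {}
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="ignore")
        except Exception:
            continue
        links = [urljoin(url, href) for href in LINK.findall(html)]
        # Stay on the seed's domain; a reconnaissance variant might drop this check.
        links = [l for l in links if urlparse(l).netloc == urlparse(seed).netloc]
        graph[url] = links
        queue.extend(links)
    return graph

# crawl("https://example.com")  # hypothetical seed
```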

Another consideration for an effective no-first-use cyber policy is the rationale for the continued development of cyber capabilities. A no-first-use cyber policy does not preclude its parties from actively testing adversaries’ cyber vulnerabilities; it only bars them from exploiting such weaknesses unless the adversary strikes first.

A strong argument against adopting a no-first-use cyber policy is diplomatic appearances. First, it might signal weakness on the part of U.S. cyber capabilities or indicate to allies that the U.S. will not commit to protecting them if they come under attack. Second, it may also result in hypocrisy if the U.S. launches a first strike in cyberspace after political changes while still being bound to a no-first-use policy. For Schneider, a successful no-first-use cyber policy

“credibly convinces other states that the U.S. will restrain itself in cyberspace while it simultaneously conducts counter-cyber operations on a day-to-day basis.”

She also recommends strategic incentives through positive means: information sharing, foreign aid or the exchange of cyber capabilities. The end goal then ought to be strategic deterrence through commitments in cyberspace to restrain high-severity cyber attacks.

I found the idea of a no-first-use cyber policy captivating, albeit hard to imagine being implemented at scale in cyberspace. First, even though cyber operations with the potential to black out a state are currently reserved for professional militaries or organized cyber operators in the service of a state actor, I don’t believe a lone non-state actor is incapable of producing malicious code with equally destructive power. Second, I still see attribution as a roadblock despite improving cyber forensics. Any democracy would see the hypocrisy of mistakenly engaging a non-state actor or the risk of misidentifying a state actor as the perpetrator. Moreover, current attribution research in cyberspace assumes humans with intent as its foundation, while future cyber conflict may be initiated by a rogue or faulty autonomous weapon system under the substantial control of an artificial intelligence. Third, any policy without legal or economic ramifications isn’t worth considering. Effective deterrence is hard to achieve without “skin in the game”. Perhaps an alternative to a no-first-use cyber policy would be a first-invest-in-cyber-defense policy: emulate the Paris Climate Accord for cyberspace by creating a normative environment that obligates states to achieve and maintain a minimum of cybersecurity by investing in cyber defense. This way, constant innovation within the private sector reduces vulnerabilities, which will lead to a self-sustaining deterrence.