An Economic Approach To Analyzing Politics On YouTube

YouTube’s recommendation algorithm is said to be a gateway that introduces viewers to extremist content and a stepping stone toward online radicalization. However, two other factors are equally important when analyzing political ideologies on YouTube: the novel psychological effects of audio-visual content and the ability to monetize it. This paper contributes to the field of political communication by offering an economic framework to explain behavioral patterns of right-wing radicalization. It attempts to answer how YouTube is used by right-wing creators and audiences and offers a way forward for future research.

tl;dr

YouTube is the most used social network in the United States and the only major platform that is more popular among right-leaning users. We propose the “Supply and Demand” framework for analyzing politics on YouTube, with an eye toward understanding dynamics among right-wing video producers and consumers. We discuss a number of novel technological affordances of YouTube as a platform and as a collection of videos, and how each might drive supply of or demand for extreme content. We then provide large-scale longitudinal descriptive information about the supply of and demand for conservative political content on YouTube. We demonstrate that viewership of far-right videos peaked in 2017.


Make sure to read the full paper titled Right-Wing YouTube: A Supply and Demand Perspective by Kevin Munger and Joseph Phillips at https://journals.sagepub.com/doi/full/10.1177/1940161220964767

YouTube is unique in combining Google’s powerful content discovery algorithms, which recommend content to keep attention on the platform, with arguably the most immersive and versatile content format: video. The resulting product is highly effective at distributing a narrative, which has led journalists and academics to categorize YouTube as an important tool for online radicalization. Right-wing commentators in particular use YouTube to spread political ideologies ranging from conservative views to far-right extremism. The researchers make a firm argument, however, that the ability to create and manage committed audiences around a political ideology, audiences that mutually create and reinforce extreme views, is not only highly contagious to less committed audiences but pure fuel for online radicalization.

Radio replaced the written word. Television replaced the spoken word. And online audio-visual content will replace the necessity to observe and understand. YouTube offers an unlimited library across all genres, all topics, and all public figures, ranging from user-generated content to six-figure Hollywood productions. Its 24/7 availability and immersive setup, which incentivizes commenting and creating response videos, allow YouTube to draw in audiences with much stronger psychological triggers than its mostly text-based competitors Facebook, Twitter, and Reddit. Moreover, YouTube transcends national borders. It enables political commentary from abroad, ranging from American expats to foreigners to exiled politicians or expelled opposition figures. The controversial presidency of Donald Trump in particular prompted political commentators in Europe and elsewhere to comment on (and influence) the political landscape, voters, and domestic policies in the United States. This is important to acknowledge because YouTube has more users in the United States than any other social network, including Facebook and Instagram.

Monetizing The Right

YouTube has proven valuable to “Alternative Influence Networks”: in essence, potent political commentators and small productions that collaborate in direct opposition to mass media, both with regard to reporting ethics and political ideology. Albeit relatively unknown to the general populace, they draw consistent, committed audiences and tend to base their content around conservative and right-wing political commentary. There is some evidence in psychological research that conservatives respond more strongly to emotional content than liberals.

As such, the supply side on YouTube is fueled by the easy and efficient means of creating political content. Production costs of a video are usually limited to the equipment. The time required to shoot a video on a social issue is roughly as long as the video itself; in comparison, drafting a text-based political commentary on the same issue can take several days. YouTube’s recommendation system, in conjunction with tailored targeting of certain audiences and social classes, enables right-wing commentators to reach like-minded individuals and build massive audiences. The monetization methods include:

  • Ad revenue from display, overlay, and video ads (not including product placement or sponsored videos)
  • Channel memberships
  • Merchandise
  • Highlighted messages via Super Chat & Super Stickers
  • A share of revenue from the YouTube Premium service
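
To make the ad revenue component concrete, here is a rough back-of-the-envelope sketch; the RPM (revenue per thousand views) figure is an illustrative assumption, not a number from the paper:

```python
# A minimal sketch of ad revenue arithmetic. The $2.50 RPM is a
# hypothetical assumption for illustration only.
def estimate_ad_revenue(monthly_views: int, rpm_usd: float = 2.50) -> float:
    """Estimate monthly ad revenue from views and revenue per mille (RPM)."""
    return monthly_views / 1000 * rpm_usd

# A mid-sized political channel with 2M monthly views:
print(f"${estimate_ad_revenue(2_000_000):,.2f} per month")  # -> $5,000.00 per month
```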

While YouTube has expanded its policy enforcement against extremist content, conservative and right-wing creators have adapted to the shrinking set of monetization methods on YouTube by increasingly relying on crowdfunded donations, product placement, and the sale of products through affiliate marketing or their own distribution networks. Perhaps the most convincing factor drawing right-wing commentators to YouTube, however, is the ability to build a large audience from scratch without the need for legitimacy or credentials.

The demand side on YouTube is more difficult to determine. Following active audience theory, users would have made a deliberate choice to click on right-wing content, to search for it, and to continue to engage with it over time. The researchers of this paper demonstrate that it is not that simple. Many social and economic factors drive middle-class Democrats to adopt more conservative and extreme views. For example, the economic decline of blue-collar employment and a broken educational system, in conjunction with increasing social isolation and a lack of future prospects, contribute to susceptibility to extremist content and, ultimately, radicalization. The researchers rightfully argue that it is difficult to determine the particular drivers that made an individual seek out and watch right-wing content on YouTube. Those who do watch or listen to a right-wing political commentator tend to seek affirmation and validation of their fringe ideologies.

“the novel and disturbing fact of people consuming white nationalist video media was not caused by the supply of this media radicalizing an otherwise moderate audience, but merely reflects the novel ease of producing all forms of video media, the presence of audience demand for white nationalist media, and the decreased search costs due to the efficiency and accuracy of the political ecosystem in matching supply and demand.”

While I believe this paper deserves much more attention and a reader should discover its research questions in the process of studying it, I find it helpful to provide the authors’ research questions here, in conjunction with my takeaways, to make it easier for readers to prioritize this study:

Research Question 1: What technological affordances make YouTube distinct from other social media platforms, and distinctly popular among the online right? 

Answer 1: YouTube is a media company; media on YouTube is videos; YouTube is powered by recommendations.

Research Question 2: How have the supply of and demand for right-wing videos on YouTube changed over time?

Answer 2.1: YouTube viewership of the extreme right has been in decline since mid-2017, well before YouTube changed its algorithm to demote far-right content in January 2019.

Answer 2.2: The bulk of the growth in terms of both video production and viewership over the past two years has come from the entry of mainstream conservatives into the YouTube marketplace.

This paper offers insights into the supply side of right-wing content and gives a rationale for why people tend to watch it. It contributes to understanding how right-wing content spreads across YouTube. An active comment section indicates higher engagement rates, which are unique to right-wing audiences. These interactions facilitate a communal experience between creator and audience, and increased policy enforcement effectively disrupted that communal experience. Nevertheless, the researchers found evidence that those who return to create or watch right-wing content are likely to engage intensely with it. Future research may investigate the actual power of YouTube’s recommendation algorithm. While this paper focused on right-wing content, the opposite end of the political spectrum, including the extreme left, is increasingly utilizing YouTube to proliferate its political commentary. Personally, I am curious to better understand the influence of foreign audiences on domestic issues and how YouTube is diluting the local populace with foreign activist voices.

What Is Parler?

The Capitol riots on January 6, 2021 left four insurgents and one law enforcement officer dead. The alternative social media platform Parler immediately became a focus of the investigation into this violent attack against the democratically elected leaders of the 117th United States Congress. Was a social media platform used to coordinate the insurgents? Did Parler facilitate online radicalization against democratic leaders by allowing extremist content? An independent data analyst used scraped geolocation data from Parler users to create an interactive map that identifies insurgents, tracks their movements, and establishes links to content posted by Parler users as they attempted to disrupt the certification of the Electoral College vote. Another group of researchers wrote a captivating research paper about Parler’s account and content mechanisms. The researchers drew upon a large dataset: analyzing more than 120 million pieces of content from more than 2 million Parler users, they demonstrate the inner workings of account-level and content-level moderation. It is an invaluable read to better understand Parler’s operations and how the platform managed to grow throughout the second half of the Trump presidency, culminating in the Capitol riots. But, perhaps more importantly, their contribution offers insights into Parler’s role in creating a platform for extremist content, its detrimental influence on American politics, and the preventable translation of online extremism into real-world harm.
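
For a sense of what such a geolocation analysis involves, here is a minimal sketch assuming scraped post metadata in JSON Lines form; the file name and field names ("lat", "long", "username", "created_at") are hypothetical, not the actual structure of the scraped Parler data:

```python
# A minimal sketch of turning scraped post metadata into GeoJSON for an
# interactive map. Input format and field names are assumptions.
import json

features = []
with open("parler_posts.jsonl") as f:  # hypothetical scrape output
    for line in f:
        post = json.loads(line)
        if post.get("lat") is not None and post.get("long") is not None:
            features.append({
                "type": "Feature",
                "geometry": {"type": "Point",
                             "coordinates": [post["long"], post["lat"]]},
                "properties": {"user": post.get("username"),
                               "timestamp": post.get("created_at")},
            })

with open("geotagged_posts.geojson", "w") as f:
    json.dump({"type": "FeatureCollection", "features": features}, f)

print(f"{len(features)} geotagged posts written")
```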

tl;dr

Parler is an alternative social network promoting itself as a service that allows its users to “Speak freely and express yourself openly, without fear of being deplatformed for your views.” Because of this promise, the platform became popular among users who were suspended on mainstream social networks for violating their terms of service, as well as those fearing censorship. In particular, the service was endorsed by several conservative public figures, encouraging people to migrate there from traditional social networks. After the events of January 6, 2021, Parler has been progressively deplatformed, with its app being removed from popular mobile stores and the entire website being taken down by its hosting provider. In this paper, we provide the first data-driven characterization of Parler. We collected 120M posts from 2.1M users posted between 2018 and 2020 as well as metadata from 12M user profiles. We find that the platform has witnessed large influxes of new users after being endorsed by popular figures, as well as a reaction to the 2020 US Presidential Election. We also find that discussion on the platform is dominated by conservative topics, President Trump, as well as conspiracy theories like QAnon.

Make sure to read the full paper titled An Early Look at the Parler Online Social Network by Max Aliapoulios, Emmi Bevensee, Jeremy Blackburn, Emiliano De Cristofaro, Gianluca Stringhini, and Savvas Zannettou at https://arxiv.org/abs/2101.03820

Source: https://9to5mac.com/2021/01/11/parler-app-and-website-go-offline/

Is Transparency Really Reducing The Impact Of Misinformation?

A recent study investigated YouTube’s efforts to provide more transparency about the ownership of certain YouTube channels. The study concerned YouTube’s disclaimers, displayed under a video, that indicate the content was produced or funded by a state-controlled media outlet. The study sought to shed light on whether these disclaimers are an effective means of reducing the impact of misinformation.

tl;dr

In order to test the efficacy of YouTube’s disclaimers, we ran two experiments presenting participants with one of four videos: A non-political control, an RT video without a disclaimer, an RT video with the real disclaimer, or the RT video with a custom implementation of the disclaimer superimposed onto the video frame. The first study, conducted in April 2020 (n = 580) used an RT video containing misinformation about Russian interference in the 2016 election. The second conducted in July 2020 (n = 1,275) used an RT video containing misinformation about Russian interference in the 2020 election. Our results show that misinformation in RT videos has some ability to influence the opinions and perceptions of viewers. Further, we find YouTube’s funding labels have the ability to mitigate the effects of misinformation, but only when they are noticed, and the information absorbed by the participants. The findings suggest that platforms should focus on providing increased transparency to users where misinformation is being spread. If users are informed, they can overcome the potential effects of misinformation. At the same time, our findings suggest platforms need to be intentional in how warning labels are implemented to avoid subtlety that may cause users to miss them.

Make sure to read the full article titled State media warning labels can counteract the effects of foreign misinformation by Jack Nassetta and Kimberly Gross at https://misinforeview.hks.harvard.edu/article/state-media-warning-labels-can-counteract-the-effects-of-foreign-misinformation/

Source: RT, 2020 US elections, Russia to blame for everything… again, Last accessed on Dec 31, 2020 at https://youtu.be/2qWANJ40V34?t=164

State-controlled media outlets are increasingly used for foreign interference in civic events. While independent media outlets can be categorized on social media and associated with a political ideology, a state-controlled media outlet generally appears independent and detached from a state-controlled political agenda. Yet such outlets regularly create content aligned with the controlling state’s political objectives and its leaders. This deceives the public about their state affiliation and undermines civil liberties. The problem is magnified on social media platforms, with their reach and potential for virality ahead of political elections. A prominent example is China’s foreign interference efforts around the question of Hong Kong’s independence.

An increasing number of social media platforms have launched integrity measures to increase content transparency and counter the integrity risks of state-controlled media outlets proliferating potential disinformation. In 2018, YouTube began to roll out an information panel feature to provide additional context on state-controlled and publicly funded media outlets. These information panels, or disclaimers, are really warning labels that make viewers aware of the potential political influence of a government on the information shown in a video. The warning labels do not provide any additional context on the veracity of the content or whether the content was fact-checked. On desktop, they appear alongside a hyperlink leading to the Wikipedia entry of the media outlet. As of this writing, the feature applies to 27 governments, including the United States government.

Source: DW News, Massive explosion in Beirut, Last Accessed on Dec 31, 2020 at https://youtu.be/PLOwKTY81y4?t=7

The researchers focused on whether these warning labels would mitigate the effects on viewers’ perceptions created by misinformation shown in videos of the Russian state-controlled media outlet RT (Russia Today). RT evades deplatforming by complying with YouTube’s terms of service. This turned the RT channel into an influential resource for the Russian government to undermine the American public’s trust in established American media outlets and the United States government when they reported on Russian interference in the 2016 U.S. presidential election. An RT video downplaying the Russian influence operation was used for the study and shown to participants in three versions: without a label, with YouTube’s standard label identifying RT’s affiliation with the Russian government, and with a warning label carrying the same language and Wikipedia hyperlink superimposed onto the video frame. This surfaced the following findings:

  1. Disinformation spread by RT does impact viewers’ perceptions, and effectively so.
  2. Videos without a warning label were more successful in reducing trust in established mainstream media and the government.
  3. Videos with the warning label superimposed onto the video frame, rather than in YouTube’s standard placement below the video, were most effective in preserving the integrity of viewers’ perceptions.

Source: RT, $4,700 worth of ‘meddling’: Google questioned over ‘Russian interference’, Last accessed on Dec 31, 2020 at https://www.youtube.com/watch?v=wTCSbw3W4EI

The researchers further discovered that small changes in the coloring, design, and placement of the warning label increase the likelihood that viewers notice it and absorb its information. Both conditions must be met: noticing a label without comprehending its message had no significant impact on understanding the political connection between creator and content.
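
As a rough illustration of the kind of comparison such an experiment reports, here is a minimal analysis sketch; it is not the authors’ code, and the column names ("condition", "noticed", "trust_media") are assumptions about a hypothetical survey export:

```python
# A minimal sketch: compare mean trust scores by experimental condition,
# splitting the labeled condition by whether participants noticed the label.
import pandas as pd
from scipy import stats

df = pd.read_csv("responses.csv")  # hypothetical survey export

# Compare the unlabeled RT video against the non-political control.
control = df.loc[df.condition == "control", "trust_media"]
no_label = df.loc[df.condition == "rt_no_label", "trust_media"]
print(stats.ttest_ind(no_label, control, equal_var=False))

# The label should only help among participants who actually noticed it.
labeled = df[df.condition == "rt_label"]
for noticed, group in labeled.groupby("noticed"):
    t, p = stats.ttest_ind(group["trust_media"], control, equal_var=False)
    print(f"noticed={noticed}: t={t:.2f}, p={p:.3f}")
```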

I’m intrigued by these findings, for the road ahead offers a great opportunity to shape how we distribute and consume information on social media without falling prey to foreign influence operations. Though open questions remain:

  1. Are these warning labels equally effective on other social media platforms, e.g. Facebook, Instagram, Twitter, Reddit, TikTok, etc.?
  2. Are these warning labels equally effective with other state-controlled media? This study focused on Russia, a large, globally acknowledged state actor. How does a warning label for content by the government of Venezuela or Australia impact the efficacy of misinformation?
  3. This study focused on the desktop version of YouTube. Are these findings transferable to the mobile version of YouTube?
  4. What is the impact of peripheral content on viewers’ perceptions, e.g. YouTube’s sidebar recommendations showing videos that all claim RT is a hoax versus videos that all lend RT independent credibility?
  5. The YouTube channels of C-SPAN and NPR did not appear to display a warning label within their videos. Yet the United States is among the 27 countries currently listed in YouTube’s policy. What are the criteria to be considered a publisher, publicly funded, or state-controlled? How are these criteria met or impacted by a government, e.g. by passing certain broadcasting legislation or declarations?
  6. Lastly, the cultural and intellectual background of the target audience is particularly interesting. Here is an opportunity to research the impact of warning labels on participants of different political ideologies, economic circumstances, and age groups, contrasted with their actual civic engagement ahead of, during, and after an election.

Microtargeted Deepfakes in Politics

The 2019 Worldwide Threat Assessment warned of deepfakes deployed to manipulate public opinion. And while the 2020 U.S. presidential election did not see an onslaught of deepfakes undermining voter confidence, experts agree that the threat remains tangible. A recent study conducted by researchers at the University of Amsterdam investigated the impact of political deepfakes meant to discredit a politician when microtargeted to a specific segment of the electorate.

tl;dr

Deepfakes are perceived as a powerful form of disinformation. Although many studies have focused on detecting deepfakes, few have measured their effects on political attitudes, and none have studied microtargeting techniques as an amplifier. We argue that microtargeting techniques can amplify the effects of deepfakes, by enabling malicious political actors to tailor deepfakes to susceptibilities of the receiver. In this study, we have constructed a political deepfake (video and audio), and study its effects on political attitudes in an online experiment. We find that attitudes toward the depicted politician are significantly lower after seeing the deepfake, but the attitudes toward the politician’s party remain similar to the control condition. When we zoom in on the microtargeted group, we see that both the attitudes toward the politician and the attitudes toward his party score significantly lower than the control condition, suggesting that microtargeting techniques can indeed amplify the effects of a deepfake, but for a much smaller subgroup than expected.

Make sure to read the full paper titled Do (Microtargeted) Deepfakes Have Real Effects on Political Attitudes? by Tom Dobber, Nadia Metoui, Damian Trilling, Natali Helberger, and Claes de Vreese at https://doi.org/10.1177/1940161220944364

Credits: UC Berkeley/Stephen McNally

Deepfakes are a subcategory of modern information warfare. The technology leverages machine learning, typically generative adversarial networks (a minimal sketch follows the list below), to generate audio-visual content that imitates original content but differs in both intent and message. Its highly deceptive appearance renders it a potent weapon to influence public opinion, undermine strategic policies, or disrupt civic engagement. An infamous deepfake example depicts former President Obama seemingly calling President Trump expletives. Online microtargeting is a form of social media marketing that disseminates advertisements tailored to the specific interests of an identifiable, curated audience. In a political context, microtargeting is used to spread a campaign message to a specific audience, identified and grouped by shared characteristics, to convince that audience to vote for or against a candidate. There are a number of civic risks associated with deploying deepfakes:

  • Deepfake content is hard to tell apart from original, authentic content. While deepfake videos may signal some nefarious intent to a cautious audience, the potential impact of deepfake audio or deepfake text on voter behavior hasn’t been researched as of this writing
  • Political actors may leverage deepfakes to discredit opponents, undermine news reporting, or equip trailing third-party candidates with sufficient influence to erode voter confidence
  • Used in a political campaign, deepfakes may be strategically deployed to incite a political scandal or to reframe current affairs and regain control of an election narrative
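
To ground the machine learning claim above, here is a minimal sketch of the adversarial training idea behind deepfake generation; it is illustrative only, using toy data rather than the elaborate face-synthesis pipelines used in real deepfakes:

```python
# A minimal GAN sketch: a generator learns to produce samples that a
# discriminator cannot distinguish from "real" data. Real deepfake
# pipelines are far more elaborate; this only shows the core idea.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 256

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()
real_data = torch.randn(batch, data_dim)  # stand-in for real media features

for step in range(1000):
    # Train the discriminator: real samples labeled 1, generated samples 0.
    fake = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(discriminator(real_data), torch.ones(batch, 1)) + \
             bce(discriminator(fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator: try to make the discriminator output 1 on fakes.
    g_loss = bce(discriminator(generator(torch.randn(batch, latent_dim))),
                 torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```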

The study created a deepfake video depicting an interview with a prominent center-right politician from a large Christian democratic party. The manipulated part of the otherwise original and authentic content shows the politician seemingly making a joke about the crucifixion of Jesus Christ:

“But, as Christ would say: don’t crucify me for it.”

This content was shown to a randomly selected group of Christian voters who had identified as religious and conservative or had voted for this politician in past elections. The researchers found that the deepfake, spread without microtargeting, affected attitudes toward the politician but not necessarily toward his political party. Deepfakes tailored to a specific audience using political microtargeting techniques, however, amplified the discrediting message, affecting attitudes toward both the politician and the party. Interestingly, staunch supporters of the politician might be shielded from a lasting attitude change due to their own motivated reasoning (bias) derived from the politician’s ideology. For this group, the researchers argue, a deepfake conveying a sufficient degree of discomfort or deviation from the politician’s previous ideology may reach a tipping point at which even staunch supporters align with the results of this study, though the study’s limitations also leave room for unforeseen outcomes.
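
A minimal sketch of how one might test the amplification claim, assuming a per-participant survey export; this is not the authors’ analysis code, and all column names are hypothetical:

```python
# An OLS model with an interaction term to test whether microtargeting
# amplifies the deepfake's effect on party attitudes.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("experiment.csv")  # hypothetical: one row per participant
# deepfake: 1 if shown the deepfake, 0 if in the control condition
# targeted: 1 if in the religious/conservative microtargeted segment
model = smf.ols("party_attitude ~ deepfake * targeted", data=df).fit()
print(model.summary())
# A negative, significant coefficient on deepfake:targeted would indicate
# the deepfake lowered party attitudes specifically in the targeted group.
```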

A roadmap to counter microtargeted deepfakes should include legislators passing regulations to limit online political campaign spending, which would force campaigns to focus their limited financial resources and weed out corporate interests. Second, new regulations should focus on the protection of personally identifiable data. A microtargeting dataset includes location data, personal preferences, website interactions, and the like. While this data is valuable in a commercial context, it should be excluded from civic engagements such as elections. Academics will have an opportunity to discover insights on algorithmic bias to improve upon the existing machine learning approach of training generative adversarial networks on pre-conditioned datasets. Moreover, future research has an opportunity to further investigate the impact of manipulated media on voter education, confidence, and behavior within and outside of political elections.

Here’s one of my favorite deepfake videos: President Trump explaining money laundering to his son-in-law Jared Kushner in a deepfake(d) scene from “Breaking Bad”.

Manipulated Social Media And The Stock Market

Earlier this year, Elon Musk tweeted that the Tesla stock price was too high. His Twitter account had a reach of 33.4 million followers (41 million as of this writing). The immediate impact on Tesla’s stock price was a midday trading deficit of 9% compared to the previous day’s closing price; Tesla’s company valuation suffered an estimated loss of $14 billion. The episode triggered concerns over a potential violation of his 2018 settlement with the Securities and Exchange Commission, reached after misleading tweets implying he had secured sufficient capital to take Tesla private. Presumably this time it was in response to California’s restrictive shelter-in-place lockdown measures to tackle the coronavirus pandemic. Maybe Elon Musk felt the weight of his responsibilities twice as much during these challenging times. In any case, as an investor, his actions made me think of the power of social media in the hands of a social media influencer (i.e., stock promoter). Moreover, it made me think about content policies tailored to protect economic value while safeguarding information integrity. This empirical study, conducted by researchers of the Pantheon-Sorbonne University, discusses the effects of illegal price manipulation through information operations on social media, specifically Twitter.

tl;dr

Social media can help investors gather and share information about stock markets. However, it also presents opportunities for fraudsters to spread false or misleading statements in the marketplace. Analyzing millions of messages sent on the social media platform Twitter about small capitalization firms, we find that an abnormally high number of messages on social media is associated with a large price increase on the event day and followed by a sharp price reversal over the next trading week. Examining users’ characteristics, and controlling for lagged abnormal returns, press releases, tweets sentiment and firms’ characteristics, we find that the price reversal pattern is stronger when the events are generated by the tweeting activity of stock promoters or by the tweeting activity of accounts dedicated to tracking pump-and-dump schemes. Overall, our findings are consistent with the patterns of a pump-and-dump scheme, where fraudsters/promoters use social media to temporarily inflate the price of small capitalization stocks.

Make sure to read the full paper titled Market Manipulation and Suspicious Stock Recommendations on Social Media by Thomas Renault at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3010850

Credit: Lucas Jackson/Reuters

Social media platforms are now an accepted corporate communication channel, closely monitored by investors and financial analysts. For an investor, social media offers swarm intelligence on trading a company’s stock, access to real-time information about the company, product updates, and other financially relevant information. Large and small-cap companies alike usually maintain a presence on social media. However, small-cap companies with low liquidity in particular are vulnerable to stock price manipulation by way of information operations on social media. According to the researchers of the Pantheon-Sorbonne University, information-based manipulation involves rumors, misleading or false press releases, stock analyses, price targets, and the like, disseminated within a short, time-sensitive period. Nearly 50% of this disinformation is spread by an influencer. Investment terminology calls this a pump-and-dump scheme:

“Pump-and-dump schemes involve touting a company’s stock through false or misleading statements in the marketplace in order to artificially inflate (pump) the price of a stock. Once fraudsters stop hyping the stock and sell their shares (dump), the price typically falls.”

The empirical study collected tweets containing the cashtag ticker symbols of more than 5,000 small-cap companies. Over an eleven-month period, 248,748 distinct Twitter users posted 7,196,307 financially relevant tweets. The researchers adjusted the data for overoptimistic noise traders and financially relevant news reporting. They found that a spike in the volume of tweets concerning a company’s stock correlates with a spike in trading of that stock, from two days before peak activity on social media up to five days after it. Some content conveyed positive financial signals advocating to buy the stock; other content spread disinformation about the company’s performance to a large, unsophisticated Twitter audience through influencers acting in concert with networks of inauthentic accounts and bots. This was then followed by a price reversal over the ensuing trading days. In the aftermath, the actors behind the scheme went into hibernation or ceased social media activity altogether.
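
To illustrate the event-detection logic, here is a minimal sketch, not the paper’s code, that flags abnormal tweet-volume days for each cashtag and inspects the return over the following trading week; the input files and column names are assumptions:

```python
# Flag days where tweet volume exceeds the trailing mean by two standard
# deviations, then compute the cumulative return over the next five
# trading days (the window in which the paper observes the reversal).
import pandas as pd

tweets = pd.read_csv("tweets_daily.csv", parse_dates=["date"])  # date, ticker, n_tweets
prices = pd.read_csv("prices_daily.csv", parse_dates=["date"])  # date, ticker, close

df = tweets.merge(prices, on=["date", "ticker"]).sort_values("date")
for ticker, g in df.groupby("ticker"):
    g = g.set_index("date")
    mean = g["n_tweets"].rolling(30).mean()
    std = g["n_tweets"].rolling(30).std()
    events = g.index[g["n_tweets"] > mean + 2 * std]
    ret = g["close"].pct_change()
    for day in events:
        # Cumulative return over the five trading days after the spike:
        window = ret.loc[day:].iloc[1:6]
        if len(window) == 5:
            print(ticker, day.date(),
                  f"post-event 5-day return: {window.add(1).prod() - 1:+.1%}")
```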

Risk And Opportunity For Social Media Platforms

Information operations to manipulate stock prices are quite common on social media. Hard to detect, only a few are investigated and successfully prosecuted. Consumers who fall victim to stock disinformation tend to exit the stock market altogether. Moreover, a consumer might reduce their footprint on social media after experiencing real-world financial harm. Depending on the severity of the loss incurred, this might even lead to litigation against social media platforms. The tools leveraged by bad actors undermine the integrity efforts of social media platforms, which, in some cases or in conjunction with class-action litigation, can lead to greater scrutiny by financial watchdogs pushing for tighter regulations.

To tackle these risks, social media platforms must continue to expand enforcement against coordinated inauthentic behavior to eliminate bot networks used to spread stock disinformation. Developing an account verification system dedicated to financial professionals, analysts, and influencers would support and ease enforcement. Social media platforms should also ease the onboarding of publicly traded companies to maintain a presence on social media, which decreases the effects of collateral price reversals. In order to mitigate stock disinformation, social media platforms must develop content policies tailored to balance freedom of expression, including price speculation, with the inherent risk of market-making comments. The latter will hinge on reach and engagement metrics, but also on detailed definitions of financial advice and the time and location of the content. Here, a close working relationship with watchdogs will improve operations. Added friction could help as well: for example, an interstitial outlining regulatory requirements before posting, a delayed time of posting, or labels informing the consumer of the financial risks associated with acting on the information in the posting. There are obviously more measures that come to mind; this only serves as the start of a conversation.

So, was Elon Musk’s tweet “Tesla stock price too high imo” an information-based market manipulation, a market-making comment or just an exercise of his free speech?