Social media are interactive technologies that facilitate the creation and sharing of information, ideas, interests, and other forms of expression through virtual communities and networks.
Should private companies decide which politicians people hear from? How can tech policy make our democracy stronger? What is the role of social media and journalism in an increasingly polarized society? Katie Harbath, a former director for global elections at Facebook, discusses these questions in a lecture about politics, policy and democracy. Her experience as a political operative, combined with a decade of work on elections across the globe, makes her a leading intellectual voice shaping the future of civic engagement online. In her lecture honoring the legacy of former Wisconsin State Senator Paul Offner, she shares historical context on the evolution of technology and presidential election campaigns. She also discusses the impact of the 2016 election and the post-truth reality online that came with the election of Donald Trump. In her concluding remarks she offers ideas for future technology regulation to strengthen civic integrity as well as our democracy, and she answers questions during a Q&A.
tl;dr
As social media companies face growing scrutiny from lawmakers and the general public, the La Follette School of Public Affairs at the University of Wisconsin–Madison welcomed Katie Harbath, who spent the past 10 years as a global public policy director at Facebook, for a livestreamed public presentation. Harbath’s presentation focused on her experiences and her thoughts on the future of social media, especially how tech companies are addressing civic integrity issues such as free speech, hate speech, misinformation and political advertising.
03:04 – Opening remarks by Susan Webb Yackee
05:19 – Introduction of the speaker by Amber Joshway
06:59 – Opening remarks by Katie Harbath
08:24 – Historical context of tech policy
14:39 – The promise of technology and the 2016 Facebook Election
17:31 – 2016 Philippine presidential election
18:55 – Post-truth politics and the era of Donald J. Trump
20:04 – Social media for social good
20:27 – 2020 US presidential elections
22:52 – The Capitol attacks, deplatforming and irreversible change
23:49 – Legal aspects of tech policy
24:37 – Refresh Section 230 CDA and political advertising
26:03 – Code aspects of tech policy
28:00 – Developing new social norms
30:41 – More diversity, more inclusion, more openness to change
33:24 – Tech policy has no finishing line
34:48 – Technology as a force for social good and closing remarks
The financial sector is a highly regulated marketplace. Deepfakes, or artificially generated synthetic media, are mostly associated with political disinformation and have so far received comparatively little attention in the context of the financial system. The Carnegie Endowment for International Peace issued a scintillating working paper series titled “Cyber Security and the Financial System” covering a wide range of cutting-edge issues, from the European framework for Threat Intelligence-Based Ethical Red Teaming (TIBER) to assessing cyber resilience measures for financial organizations to global policies to combat manipulation of financial data. Jon Bateman’s contribution, titled “Deepfakes and Synthetic Media in the Financial System: Assessing Threat Scenarios”, takes a closer look at how deepfakes can impact the financial system.
tl;dr
Rapid advances in artificial intelligence (AI) are enabling novel forms of deception. AI algorithms can produce realistic “deepfake” videos, as well as authentic-looking fake photos and writing. Collectively called synthetic media, these tools have triggered widespread concern about their potential in spreading political disinformation. Yet the same technology can also facilitate financial harm. Recent months have seen the first publicly documented cases of deepfakes used for fraud and extortion. Today the financial threat from synthetic media is low, so the key policy question is how much this threat will grow over time. Leading industry experts diverge widely in their assessments. Some believe firms and regulators should act now to head off serious risks. Others believe the threat will likely remain minor and the financial system should focus on more pressing technology challenges. A lack of data has stymied the discussion. In the absence of hard data, a close analysis of potential scenarios can help to better gauge the problem. In this paper, ten scenarios illustrate how criminals and other bad actors could abuse synthetic media technology to inflict financial harm on a broad swath of targets. Based on today’s synthetic media technology and the realities of financial crime, the scenarios explore whether and how synthetic media could alter the threat landscape.
Deepfakes are a variation of manipulated media. In essence, a successful deepfake requires a sample data set of an original that is used to train a deep learning algorithm. The algorithm learns to alter the training data to the point that a second algorithm can no longer distinguish whether the presented result is altered training data or the original. Think of it as a police sketch artist who creates a facial composite based on eyewitness accounts: the more data and time the artist has to render a draft, the higher the likelihood of a successful sketch. In this paper, the term deepfake refers to a subset of synthetic media, including video, images and voice, created through artificial intelligence.
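To make that generator-versus-detector dynamic concrete, here is a minimal, illustrative sketch of the adversarial training loop in PyTorch. It uses toy one-dimensional data instead of images or audio, and all model sizes, names and numbers are assumptions for illustration only, not the setup of any real deepfake system.

```python
# Minimal GAN sketch (illustrative only): a generator learns to mimic
# "original" samples until a discriminator can no longer tell them apart.
# Toy 1-D data stands in for real images/audio; all sizes are arbitrary.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_samples(n):
    # "Original" data the generator never sees directly: a shifted Gaussian.
    return torch.randn(n, 1) * 0.5 + 2.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to separate real from generated samples.
    real = real_samples(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near the "original" mean of 2.0.
print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```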
The financial sector is particularly vulnerable in the know-your-customer space. It is a unique entry point for malicious actors to submit manipulated identity verification or to deploy deepfake technology to fool authenticity mechanisms. While anti-fraud prevention tools are an industry-wide standard to prevent impersonation or identity theft, the onset of cheaper, more readily available deepfake technology marks a turning point for the financial sector. Deepfakes may be used to leverage a blend of false or hacked personally identifiable information (PII) to gain access to or open bank accounts, initiate financial transactions, or redistribute private equity assets.

Bateman focuses on two categories of synthetic media that are most relevant for the financial sector: (1) narrowcast synthetic media, which encompasses one-off, tailored manipulated data deployed directly to the target via private channels, and (2) broadcast synthetic media, which is designed for mass audiences and deployed directly or indirectly via publicly available channels, e.g. social media.

An example of the first variation is a cybercrime that took place in 2019. The chief executive officer of a UK-based energy company received a phone call from what he believed was his boss, the CEO of the parent corporation based in Germany. The voice of the German CEO was an impersonation created with artificial intelligence and publicly available voice recordings (speeches, transcripts, etc.). The voice directed the UK CEO to immediately initiate a financial transaction to pay a Hungarian supplier. This type of attack is also known as deepfake voice phishing (vishing). The fabricated directions resulted in a fraudulent transfer of approximately $243,000.

An example of the second variation is commonly found in widespread pump and dump schemes on social media. These could range from malicious actors creating false, incriminating deepfakes of key personnel of a stock-listed company to artificially lower the stock price, to creating synthetic media that misrepresents product results to push the stock price higher and garner more interest from potential investors.

Building on these two categories, Bateman presents ten scenarios grouped into four stages: (1) targeting individuals, e.g. identity theft or impersonation; (2) targeting companies, e.g. payment fraud or stock manipulation; (3) targeting financial markets, e.g. creating malicious flash crashes through state-sponsored hacking or cybercriminals backed by a foreign government; and (4) targeting central banks and financial regulators, e.g. regulatory astroturfing.
In conclusion, Bateman finds that at this point in time deepfakes are not potent enough to destabilize global financial systems in mature, healthy economies. They are more threatening, however, to individuals and businesses. To take precautions against malicious actors armed with deepfake technology, a number of resiliency measures can be implemented. Broadcast synthetic media is potent enough to amplify and prolong existing crises or scandals; aside from building trust with key audiences, a potential remedy to deepfakes amplifying false narratives is the readiness to counter them with evidence-based narratives. To protect other companies from threats that would erode trust in the financial sector, industry-wide sharing of information on cyber attacks is a viable option to mitigate coordinated criminal activity. Lastly, the technology landscape is improving its integrity at a rapid pace. A multi-stakeholder response bringing together leaders from the financial sector, the technology sector and experts on consumer behavior with policymakers will help create more effective regulations to combat deepfakes in the financial system.
What is the fuel of our social media habits? To answer that question, researchers from the University of Southern California in Los Angeles analyzed user behavior across established social media platforms. They offer insights into user habit formation, but also explain the dynamics and technology that prevent users from gaining control over their daily-use habits on social media.
tl;dr
If platforms such as Facebook, Instagram, and Twitter are the engines of social media use, what is the gasoline? The answer can be found in the psychological dynamics behind consumer habit formation and performance. In fact, the financial success of different social media sites is closely tied to the daily-use habits they create among users. We explain how the rewards of social media sites motivate user habit formation, how social media design provides cues that automatically activate habits and nudge continued use, and how strong habits hinder quitting social media. Demonstrating that use habits are tied to cues, we report a novel test of a 2008 change in Facebook design, showing that it impeded posting only of frequent, habitual users, suggesting that the change disrupted habit automaticity. Finally, we offer predictions about the future of social media sites, highlighting the features most likely to promote user habits.
Social media platforms serve our communities in a variety of ways. Anybody can participate, share stories or become a community leader by creating user-generated content that is available to a specific group of people or to the entire public. Connecting with people is human, but the frequency, means and reach of those connections, as well as how and with whom we connect, are not. This paper discusses in particular the dichotomy between conflicting social interests and user habits, explaining at a high level why social media platforms fundamentally need to draw on user habits and how these habits are cultivated by sophisticated technology. In fact, social media platforms are designed to encourage habit formation through repeat use. This is demonstrated by their ever-expanding options to find new people to connect with, share content, discover new entertainment products and build larger online communities, all in the service of generating consistent revenue through effective, targeted marketing to their users.
One aspect of the paper explores whether frequent use of social media is habitual use and whether overuse is tantamount to addiction. What are the contributing factors that make people form the habit of frequently checking their social media profiles? How can you manage these habits more effectively? What does it take to rewire them? The researchers found that users who post more frequently also reported increased automation of their actions. In other words, these users logged onto Facebook or Twitter and posted about something without deliberately thinking about the act of posting itself. Some of the factors that contribute to forming a habit are the repeated steps it takes to participate on social media, for example the login process, posting original content, exploring new content from others, and liking, sharing or discussing content. In psychology this phenomenon is called an ideomotor response, wherein a user unconsciously completes a sequence of steps to perform a process. Of course, the formation of a habit is due not only to repetition but also to the rewards of continued use. Likes, shares and general interaction with people on social media are a double-edged sword, for they bring us closer together while also appealing to our subconscious need for affirmation. The former helps us build positive attitudes toward the particular platform, whereas the latter often goes unrecognized until the habit is already established in one’s daily routine. Initial rewards subside fast, however, as these motivations are replaced by habitual use linked to a specific gain arising from a certain community engagement. These habits, once formed and established, are hard to overcome, as demonstrated by an experiment with well-known sugared beverages:
“In an experiment demonstrating habit persistence despite conflicting attitudes, consumers continued to choose their habitual sugared beverages for a taste test even after reading a persuasive message that convinced them of the health risks of sugar”.
It must be noted that social media use is not the same as drinking soda pop, smoking cigarettes or snorting cocaine. Social media use is also not a mindless, repetitive action. Rather, it is a composition of different, highly individualized behaviors, attitudes and motivations that compound depending on the particular use case. For example, a community organizer who uses Facebook Groups to bring together and coordinate high-school students across a county to play pickup ultimate frisbee will establish different habitual behaviors from someone who uses social media purely to connect online with a closed circle of family and friends. The researchers found that active engagement on social media is linked to positive subjective experiences of well-being, while users who are more passive and only scroll and read reported lower levels of life satisfaction. Scrolling introduces an element of uncertainty for the user, which makes it one of the top rewards that require no active engagement. Unexpected posts tend to surprise users with sometimes highly emotional content such as misinformation or community nostalgia. Needless to say, controversial content tends to spread fast and far, increasing the reward for engagement and further entrenching the habit of coming back to discover more emotional content.
To put this into perspective: social media habits form because the platform highlights signals that make us feel good and keep us engaged. Preexisting emotional and social needs are captured by a platform that is easy to use. Notifications, likes, comments and shares create participatory experiences that emulate real-world communities. Reciprocity between family, friends and others, as well as elements of uncertainty, is adjusted through tailored content delivery driven by sophisticated algorithms. These lines of code ensure that once a user establishes a footprint on the platform, enough incentives are created to encourage and facilitate repeat use, further ingraining the platform in our daily lives and daily-use habits.
Maybe We Should Take A Break
With this thought-provoking headline I challenge the notion that it is impossible to reduce or quit social media altogether. Note that I wouldn’t want anybody to reduce or quit social media if it adds value to their life. Facebook is invaluable for connecting with family and friends. YouTube and TikTok offer some of my favorite pastimes. And Twitter has become the newsstand of the 21st century. Nevertheless, I believe this research paper is an important contribution to raising awareness of our daily habits, our time management and how we consume information. I would be remiss not to contemplate options to improve my social media diet. In psychology research, the term for the intent to quit a habit is discontinuance intention. Forming an intent to cease social media use is a decision process at times overshadowed by feelings of regret, a lack of alternative means to communicate across our social graph, and general societal inertia (take the Google search queries pictured below as an indicator of the impact of societal inertia). If you find yourself wanting to change your social media diet, then be on the lookout for these factors:
Familiarity Breeds Inaction: the longer a user is with a social media platform, the more likely it is that feelings of familiarity and a sense of control will prevent action to reduce time spent on the platform.
Habits Trump Intentions: everyday signals manifested in our phones, computers or environment trigger ideomotor responses to use social media despite social norms, resolutions, etc. Remember, the old saying “the road to hell is paved with good intentions” holds true for managing our social media habits.
Straightforward self-control has been found to be an effective strategy to reduce the use of social media. The discipline to use social media with a specific intent and for a specific purpose equals freedom from habitual, time-consuming use. However, the researchers found that self-control is hard to maintain and that a more effective strategy is changing the signals that prompt us to use social media. For example, leveraging silent or airplane mode on our phones, turning off push notifications or unsubscribing from notification emails helps dig a moat between a healthy daily routine and mindless use of social media. Interestingly, the researchers found that short-term absences from social media, i.e. only a few days, are less effective than breaks of a week or longer. What works will depend on an individual’s preferences, needs and benefits, which must be carefully balanced against the inherent costs of social media use. Of course, all of this is highly subjective. I recommend reading this well-written research paper as a start; it helps to formulate a balanced strategy for social media use and online habit management.
The American University Washington College of Law and the Hoover Institution at Stanford University created a working group to understand and assess the risks posed by Chinese technology companies in the United States. They propose a framework to better assess and respond to these risks by focusing on the interconnected threats China poses to the US economy, national security and civil liberties.
tl;dr
The Trump administration took various steps to effectively ban TikTok, WeChat, and other Chinese-owned apps from operating in the United States, at least in their current forms. The primary justification for doing so was national security. Yet the presence of these apps and related internet platforms presents a range of risks not traditionally associated with national security, including data privacy, freedom of speech, and economic competitiveness, and potential responses raise multiple considerations. This report offers a framework for both assessing and responding to the challenges of Chinese-owned platforms operating in the United States.
China has experienced consistent growth since opening its economy in the late 1970s. Its economy has grown roughly fourteenfold since then, a trajectory that dwarfs that of the US economy, which roughly doubled over the same period, with the S&P 500 as its most rewarding driver at roughly a fivefold increase. Alongside economic power comes a thirst for global expansion far beyond the Asia-Pacific region. China’s foreign policy seeks to advance the Chinese one-party model of authoritarian capitalism, which could pose a threat to human rights, democracy and the basic rule of law. US political leaders see these developments as a threat to the US foreign policy of primacy, but perhaps more importantly as a threat to a Western ideology deeply rooted in individual liberties. Needless to say, over the years every administration, regardless of political affiliation, has put the screws on China. A recent example is the presidential executive order addressing the threat posed by the social media video app TikTok. Given China’s authoritarian model of governance and the Chinese government’s sphere of control over Chinese companies, their expansion into the US market raises concerns about access to critical data and data protection, cyber-enabled attacks on critical US infrastructure, and a wide range of other threats to national security. For example:
Internet Governance: China is pursuing regulation to shift the internet from open to closed and from decentralized to centralized control. The US government has failed to adequately engage international stakeholders in order to maintain an open internet, and has instead authorized large data collection programs that emulate Chinese surveillance.
Privacy, Cybersecurity and National Security: The internet’s continued democratization encourages more social media and e-commerce platforms to integrate and connect features, enabling multi-surface products for users. Mass data collection, weak product cybersecurity and the absence of broader data protection regulations can be exploited to collect data on domestic users, their behavior and their travel patterns abroad. They can also be exploited to influence or control members of government agencies through targeted intelligence or espionage. Here the key consideration is aggregated data, which even in the absence of identifiable actors can be used to create viable intelligence. China has ramped up its offensive cyber operations beyond cyber-enabled trade and IP theft and possesses the capabilities and cyber weaponry to destabilize national security in the United States.
Necessity And Proportionality
When considering whether to mitigate the threat to national security by taking action against Chinese-owned or -controlled communications technology, including tech products manufactured in China, the working group suggests a case-by-case analysis. They attempt to address the challenge of accurately identifying the specific risk in an ever-changing digital environment with a framework of necessity and proportionality. Technology standards change at a breathtaking pace, and data processing reaches new levels of intimacy due to the use of artificial intelligence and machine learning. Thoroughly assessing, vetting and weighing the tolerance for specific risks is at the core of this framework, in order to calibrate the chosen response and avoid potential collateral consequences.
The working group’s framework of necessity and proportionality reminded me of a classic lean six sigma structure with a strong focus on understanding the threat to national security. Naturally, as a first step they suggest accurately identifying the threat’s nature, credibility and imminence, and the chances of the threat becoming a reality. I found this first step incredibly important because a failure to identify a threat will likely lead to false attribution and undermine every subsequent step. In the context of technology companies, the obvious challenge lies in data collection, data integrity and the detection systems needed to tell the difference; a Chinese actor may, for example, deploy a cyber ruse in concert with the Chinese government to obfuscate its intentions.

Following the principle of proportionality, step two looks into the potential collateral consequences for the United States, its strategic partners and, most importantly, its citizens. Policymakers must be aware of the unintended path a policy decision may take once a powerful adversary like China starts its propaganda machine. This step therefore requires policymakers to define thresholds for when the collateral consequences of a mitigation measure outweigh the need to act. In particular, inalienable rights such as freedom of expression, freedom of the press and freedom of assembly must be upheld at all times, as they are fundamental American values. To quote the immortal Molly Ivins: “Many a time freedom has been rolled back – and always for the same sorry reason: fear.”

The third and final step concerns mitigation measures. In other words: what are we going to do about it? The working group landed on two critical factors: data and compliance. The former might be restricted, redirected or recoded to adhere to national security standards. The latter might be audited not only to identify vulnerabilities but also to instill built-in cybersecurity and foster an amicable working relationship.
The Biden administration faces the daunting challenge of reviewing and developing appropriate cyber policies that address the growing threat from Chinese technology companies in a coherent manner consistent with American values. Only a broad policy response that is tailored to specific threats and focused on stronger cybersecurity and stronger data protection will yield equitable results. International alliances, alongside increased collaboration to develop better privacy and cybersecurity measures, will lead to success. However, the US must focus on its own strengths first and leverage its massive private sector to identify specific product capabilities, and therefore threats and attack vectors, before taking short-sighted, irreversible actions.
YouTube’s recommendation algorithm is said to be a gateway that introduces viewers to extremist content and a stepping stone towards online radicalization. However, two other factors are equally important when analyzing political ideologies on YouTube: the novel psychological effects of audio-visual content and the ability to monetize. This paper contributes to the field of political communication by offering an economic framework to explain behavioral patterns of right-wing radicalization. It attempts to answer how YouTube is used by right-wing creators and audiences and offers a way forward for future research.
tl;dr
YouTube is the most used social network in the United States and the only major platform that is more popular among right-leaning users. We propose the “Supply and Demand” framework for analyzing politics on YouTube, with an eye toward understanding dynamics among right-wing video producers and consumers. We discuss a number of novel technological affordances of YouTube as a platform and as a collection of videos, and how each might drive supply of or demand for extreme content. We then provide large-scale longitudinal descriptive information about the supply of and demand for conservative political content on YouTube. We demonstrate that viewership of far-right videos peaked in 2017.
YouTube is unique in combining Google’s powerful content discovery algorithms, i.e. recommending content to keep attention on the platform, with a type of content that is arguably the most immersive and versatile: video. The resulting product is highly effective at distributing a narrative, which has led journalists and academics to categorize YouTube as an important tool for online radicalization. Right-wing commentators in particular make use of YouTube to spread political ideologies ranging from conservative views to far-right extremism. However, the researchers make a firm argument that the ability to create and manage committed audiences around a political ideology, audiences who mutually create and reinforce extreme views, is not only contagious enough to impact less committed audiences but also pure fuel to ignite online radicalization.
Radio replaced the written word. Television replaced the spoken word. And online audio-visual content will replace the necessity to observe and understand. YouTube offers an unlimited library across all genres, all topics and all public figures, ranging from user-generated content to six-figure Hollywood productions. Its 24/7 availability and immersive setup, which incentivizes commenting and creating videos, allow YouTube to draw in audiences with much stronger psychological triggers than its mostly text-based competitors Facebook, Twitter and Reddit. Moreover, YouTube transcends national borders. It enables political commentary from abroad, ranging from American expats to foreigners to exiled politicians or expelled opposition figures. The controversial presidency of Donald Trump in particular prompted political commentators in Europe and elsewhere to comment on (and influence) the political landscape, voters and domestic policies in the United States. This is important to acknowledge because YouTube has more users in the United States than any other social network, including Facebook and Instagram.
Monetizing The Right
YouTube has proven valuable to “Alternative Influence Networks”: in essence, potent political commentators and small productions that collaborate in direct opposition to mass media, both with regard to reporting ethics and political ideology. Albeit relatively unknown to the general populace, they draw consistent, committed audiences and tend to base their content around conservative and right-wing political commentary. There is some evidence in psychological research that conservatives tend to respond more to emotional content than liberals.
As such, the supply side on YouTube is fueled by the easy and efficient means of creating political content. Production costs for a video are usually limited to the equipment, and the time required to shoot a video on a social issue is essentially just the length of the video itself. By comparison, drafting a text-based political commentary on the same issue can take several days. YouTube’s recommendation system, in conjunction with tailored targeting of certain audiences and social classes, enables right-wing commentators to reach like-minded individuals and build massive audiences. The monetization methods include:
Ad revenue from display, overlay and video ads (not including product placement or sponsored videos)
Channel memberships
Merchandise
Highlighted messages in Super Chat & Super Stickers
A share of revenue from the YouTube Premium subscription service
While YouTube has expanded its policy enforcement against extremist content, conservative and right-wing creators have adapted to the reduced monetization options on YouTube by increasingly relying on crowdfunded donations, product placement, or the sale of products through affiliate marketing or their own distribution networks. Perhaps the most compelling reason for right-wing commentators to flock to YouTube, however, is the ability to build a large audience from scratch without the need for legitimacy or credentials.
The demand side on YouTube is more difficult to determine. Following active audience theory, users would have made a deliberate choice to click on right-wing content, to search for it, and to continue engaging with it over time. The researchers of this paper demonstrate that it isn’t that simple. Many social and economic factors drive middle-class democrats to adopt more conservative and extreme views. For example, the economic decline of blue-collar employment and a broken educational system, in conjunction with increasing social isolation and a lack of future prospects, contribute to susceptibility to extremist content leading up to radicalization. The researchers rightfully argue that it is difficult to determine the particular drivers that make an individual seek out and watch right-wing content on YouTube. Those who do watch or listen to a right-wing political commentator tend to seek affirmation and validation of their fringe ideologies.
“the novel and disturbing fact of people consuming white nationalist video media was not caused by the supply of this media radicalizing an otherwise moderate audience, but merely reflects the novel ease of producing all forms of video media, the presence of audience demand for white nationalist media, and the decreased search costs due to the efficiency and accuracy of the political ecosystem in matching supply and demand.”
While I believe this paper deserves much more attention, and readers should discover its research questions in the process of studying it, I find it helpful to provide the authors’ research questions here, in conjunction with my takeaways, to make it easier for readers to prioritize this study:
Research Question 1: What technological affordances make YouTube distinct from other social media platforms, and distinctly popular among the online right?
Answer 1: YouTube is a media company; media on YouTube is videos; YouTube is powered by recommendations.
Research Question 2: How have the supply of and demand for right-wing videos on YouTube changed over time?
Answer 2.1: YouTube viewership of the extreme right has been in decline since mid-2017, well before YouTube changed its algorithm to demote far-right content in January 2019.
Answer 2.2: The bulk of the growth in terms of both video production and viewership over the past two years has come from the entry of mainstream conservatives into the YouTube marketplace.
This paper offers insights into the supply side of right-wing content and gives a rationale for why people tend to watch it. It contributes to understanding how right-wing content spreads across YouTube. An active comment section indicates higher engagement rates, which are unique to right-wing audiences. These interactions facilitate a communal experience between creator and audience, and increased policy enforcement effectively disrupted this communal experience. Nevertheless, the researchers found evidence that those who return to create or watch right-wing content are likely to engage intensely with it as well. Future research may investigate the actual power of YouTube’s recommendation algorithm. While this paper focused on right-wing content, the opposing end of the political spectrum, including the extreme left, is increasingly utilizing YouTube to proliferate its political commentary. Personally, I am curious to better understand the influence of foreign audiences on domestic issues and how YouTube is diluting the local populace with foreign activist voices.
The Capitol riots on January 6, 2021 left four insurgents and one law enforcement officer dead. The alternative social media platform Parler immediately became a focus of the investigation into this violent attack against the democratically elected leaders of the 117th United States Congress. Was a social media platform used to coordinate the insurgents? Did Parler facilitate online radicalization against democratic leaders by allowing extremist content? An independent data analyst used scraped geolocation data from Parler users to create an interactive map that identifies insurgents, tracks their movements and establishes links to content posted by Parler users as they attempted to disrupt the certification of the Electoral College vote and beyond. Another group of researchers wrote a captivating research paper about Parler’s account and content mechanisms. Drawing on a large data set and analyzing more than 120 million pieces of content from more than 2 million Parler users, they demonstrate the inner workings of account-level and content-level moderation. It is an invaluable read to better understand Parler’s operations and how the platform managed to grow throughout the second half of the Trump presidency, culminating in the Capitol riots. But perhaps more importantly, their contribution offers insights into Parler’s role in creating a platform for extremist content, its detrimental influence on American politics and the preventable translation of online extremism into real-world harm.
tl;dr
Parler is an alternative social network promoting itself as a service that allows its users to “Speak freely and express yourself openly, without fear of being deplatformed for your views.” Because of this promise, the platform became popular among users who were suspended on mainstream social networks for violating their terms of service, as well as those fearing censorship. In particular, the service was endorsed by several conservative public figures, encouraging people to migrate there from traditional social networks. After the events of January 6, 2021, Parler has been progressively deplatformed, with its app being removed from popular mobile stores and the entire website being taken down by their hosting provider. In this paper, we provide the first data-driven characterization of Parler. We collected 120M posts from 2.1M users posted between 2018 and 2020 as well as metadata from 12M user profiles. We find that the platform has witnessed large influxes of new users after being endorsed by popular figures, as well as a reaction to the 2020 US Presidential Election. We also find that discussion on the platform is dominated by conservative topics, President Trump, as well as conspiracy theories like QAnon.
Make sure to read the full paper titled An Early Look at the Parler Online Social Network by Max Aliapoulios, Emmi Bevensee, Jeremy Blackburn, Emiliano De Cristofaro, Gianluca Stringhini, and Savvas Zannettou at https://arxiv.org/abs/2101.03820
A recent study investigated YouTube’s efforts to provide more transparency about the ownership of certain YouTube channels. The study concerned YouTube’s disclaimers, displayed under videos, that indicate the content was produced or is funded by a state-controlled media outlet. The study sought to shed light on whether these disclaimers are an effective means of reducing the impact of misinformation.
tl;dr
In order to test the efficacy of YouTube’s disclaimers, we ran two experiments presenting participants with one of four videos: A non-political control, an RT video without a disclaimer, an RT video with the real disclaimer, or the RT video with a custom implementation of the disclaimer superimposed onto the video frame. The first study, conducted in April 2020 (n = 580) used an RT video containing misinformation about Russian interference in the 2016 election. The second conducted in July 2020 (n = 1,275) used an RT video containing misinformation about Russian interference in the 2020 election. Our results show that misinformation in RT videos has some ability to influence the opinions and perceptions of viewers. Further, we find YouTube’s funding labels have the ability to mitigate the effects of misinformation, but only when they are noticed, and the information absorbed by the participants. The findings suggest that platforms should focus on providing increased transparency to users where misinformation is being spread. If users are informed, they can overcome the potential effects of misinformation. At the same time, our findings suggest platforms need to be intentional in how warning labels are implemented to avoid subtlety that may cause users to miss them.
Source: RT, 2020 US elections, Russia to blame for everything… again, Last accessed on Dec 31, 2020 at https://youtu.be/2qWANJ40V34?t=164
State-controlled media outlets are increasingly used for foreign interference in civic events. While independent media outlets can be categorized on social media and associated with a political ideology, a state-controlled media outlet generally appears independent or detached from a state-controlled political agenda. Yet such outlets regularly create content concomitant with the controlling state’s political objectives and its leaders. This deceives the public about their state affiliation and undermines civil liberties. The problem is magnified on social media platforms, with their reach and potential for virality ahead of political elections. A prominent example is China’s foreign interference efforts around the referendum on the independence of Hong Kong.
An increasing number of social media platforms have launched integrity measures to increase content transparency and counter the risks associated with state-controlled media outlets proliferating potential disinformation. In 2018, YouTube began to roll out an information panel feature to provide additional context on state-controlled and publicly funded media outlets. These information panels, or disclaimers, are really warning labels that make the viewer aware of the potential political influence of a government on the information shown in the video. The warning labels don’t provide any additional context on the veracity of the content or whether the content was fact-checked. On desktop, they appear alongside a hyperlink leading to the Wikipedia entry of the media outlet. As of this writing, the feature applies to 27 governments, including the United States government.
The researchers focused on whether these warning labels would mitigate the effects of misinformation shown in videos of the Russian state-controlled media outlet RT (Russia Today) on viewers’ perceptions. RT evades deplatforming by complying with YouTube’s terms of service. This has turned the RT channel into an influential resource for the Russian government to undermine the American public’s confidence in established American media outlets and the United States government when they report on Russian interference in the 2016 U.S. presidential elections. An RT video downplaying the Russian influence operation was used for the study and shown to participants without a label, with the label identifying RT’s affiliation with the Russian government, and with a superimposed warning label carrying the same language and hyperlink to Wikipedia. This surfaced the following findings:
Disinformation spread by RT does impact viewers’ perceptions, and effectively so.
Videos without a warning label were more successful in reducing trust in established mainstream media and the government.
Videos without the standard warning label but with a superimposed interstitial carrying the label’s language were most effective in preserving the integrity of viewers’ perceptions.
The researchers further discovered that small changes in the coloring, design and placement of the warning label increase the likelihood that viewers notice it and absorb the information. Both conditions must be met, because noticing a label without comprehending its message had no significant impact on understanding the political connection between creator and content.
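For readers who want a concrete sense of how such label effects might be examined in data, below is a minimal sketch in Python comparing perception scores across the four experimental conditions. The file name, column names and condition labels are hypothetical assumptions for illustration; the study’s actual measures and statistical models are more elaborate.

```python
# Illustrative comparison of mean perception scores across the four video
# conditions (control, RT without disclaimer, RT with YouTube's disclaimer,
# RT with a superimposed disclaimer). File, columns and labels are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_responses.csv")  # one row per participant (hypothetical file)

# Mean perception score per condition (e.g., trust in mainstream media on a 1-7 scale).
print(df.groupby("condition")["perception_score"].agg(["mean", "std", "count"]))

# One-way ANOVA: does perception differ across the four conditions at all?
groups = [g["perception_score"].values for _, g in df.groupby("condition")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA F={f_stat:.2f}, p={p_value:.4f}")

# Pairwise check: does the superimposed disclaimer differ from no disclaimer?
a = df.loc[df["condition"] == "superimposed", "perception_score"]
b = df.loc[df["condition"] == "no_disclaimer", "perception_score"]
print(stats.ttest_ind(a, b, equal_var=False))
```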
I’m intrigued by these findings, for the road ahead offers a great opportunity to shape how we distribute and consume information on social media without falling prey to foreign influence operations. Still, open questions remain:
Are these warning labels equally effective on other social media platforms, e.g. Facebook, Instagram, Twitter, Reddit, TikTok, etc.?
Are these warning labels equally effective with other state-controlled media? This study focused on Russia, a large, globally acknowledged state actor. How does a warning label for content by the government of Venezuela or Australia impact the efficacy of misinformation?
This study seemed to be focused on the desktop version of YouTube. Are these findings transferable to the mobile version of YouTube?
What is the impact of peripheral content on viewers’ perceptions, e.g. YouTube’s recommendations showing sidebar videos that all claim RT is a hoax versus videos that all give RT independent credibility?
The YouTube channels of C-SPAN and NPR did not appear to display a warning label within their videos, yet the United States is among the 27 countries currently listed in YouTube’s policy. What are the criteria for being considered a publisher, publicly funded or state-controlled? How are these criteria met or impacted by a government, e.g. by passing certain broadcasting legislation or issuing a declaration?
Lastly, the cultural and intellectual background of the target audience is particularly interesting. Here is an opportunity to research the impact of warning labels with participants of different political ideologies, economic circumstances and age groups, and to contrast it with actual civic engagement ahead of, during and after an election.
The 2019 Worldwide Threat Assessment warned of deepfakes deployed to manipulate public opinion. And while the 2020 U.S. presidential elections did not see an onslaught of deepfakes undermining voter confidence, experts agree that the threat remains tangible. A recent study conducted by researchers at the University of Amsterdam investigated the impact of political deepfakes meant to discredit a politician that were microtargeted to a specific segment of the electorate.
tl;dr
Deepfakes are perceived as a powerful form of disinformation. Although many studies have focused on detecting deepfakes, few have measured their effects on political attitudes, and none have studied microtargeting techniques as an amplifier. We argue that microtargeting techniques can amplify the effects of deepfakes, by enabling malicious political actors to tailor deepfakes to susceptibilities of the receiver. In this study, we have constructed a political deepfake (video and audio), and study its effects on political attitudes in an online experiment. We find that attitudes toward the depicted politician are significantly lower after seeing the deepfake, but the attitudes toward the politician’s party remain similar to the control condition. When we zoom in on the microtargeted group, we see that both the attitudes toward the politician and the attitudes toward his party score significantly lower than the control condition, suggesting that microtargeting techniques can indeed amplify the effects of a deepfake, but for a much smaller subgroup than expected.
Make sure to read the full paper titled Do (Microtargeted) Deepfakes Have Real Effects on Political Attitudes? by Tom Dobber, Nadia Metoui, Damian Trilling, Natali Helberger, and Claes de Vreese at https://doi.org/10.1177/1940161220944364
Credits: UC Berkeley/Stephen McNally
Deepfakes are a subcategory of modern information warfare. The technology leverages machine learning to generate audio-visual content that imitates original content but differs in both intent and message. Its highly deceptive appearance renders it a potent weapon to influence public opinion, undermine strategic policies or disrupt civic engagement. An infamous example is a deepfake depicting former President Obama seemingly calling President Trump an expletive. Online microtargeting is a form of social media marketing that disseminates advertisements tailored to the specific interests of an identifiable, curated audience. In the political context, microtargeting is used to spread a campaign message to a specific audience, identified and grouped by characteristics, in order to convince that audience to vote for or against a candidate. There are a number of civic risks associated with deploying deepfakes:
Deepfake content is hard to tell apart from original, authentic content. While deepfake videos may signal some nefarious intent to a cautious audience, the potential impact of deepfake audio or deepfake text on voter behavior hasn’t been researched as of this writing.
Political actors may leverage deepfakes to discredit opponents, undermine news reporting or equip trailing third-party candidates with sufficient influence to erode voter confidence.
Used in a political campaign, deepfakes may be strategically deployed to incite a political scandal or to reframe current affairs and regain control of an election narrative.
The study created a deepfake video depicting an interview with a prominent center-right politician from a large Christian democratic party. The manipulated part of the otherwise original and authentic content shows the politician seemingly making a joke about the crucifixion of Jesus Christ:
“But, as Christ would say: don’t crucify me for it.”
This content was shown to a randomly selected group of Christian voters who had identified as religious and conservative or had voted for this politician in past elections. The researchers found that deepfakes spread without microtargeting lowered attitudes toward the politician but not necessarily toward his political party. However, deepfakes tailored to a specific audience using political microtargeting techniques amplified the discrediting message, thereby lowering attitudes toward both the politician and his party. Interestingly, staunch supporters of the politician might be shielded from a lasting attitude change due to their own motivated reasoning (bias) derived from the politician’s ideology. For this group, the researchers argue, a certain degree of discomfort or deviation from previous political ideology conveyed in a deepfake may reach a tipping point at which staunch supporters align with the results of this study, though the study’s limitations also leave room for unforeseen outcomes.
A roadmap to counter microtargeted deepfakes should include legislators passing regulations to limit political campaign spending online, which would force campaigns to focus their limited financial resources and weed out corporate interests. Second, new regulations should focus on the protection of personally identifiable data. A microtargeting dataset includes location data, personal preferences, website interactions and so on. While this data is valuable in a commercial context, it should be excluded from civic engagements such as elections. Academics will have an opportunity to uncover insights on algorithmic bias and improve upon the existing machine learning approach of training generative adversarial networks on pre-conditioned datasets. Moreover, future research has an opportunity to further investigate the impact of manipulated media on voter education, confidence and behavior within and outside of political elections.
Here’s one of my favorite deepfake videos: President Trump explaining money laundering to his son-in-law Jared Kushner in a deepfaked scene from “Breaking Bad”.
Earlier this year, Elon Musk tweeted that the Tesla stock price was too high. His Twitter account had a reach of 33.4 million followers (41 million as of this writing). The immediate impact on Tesla’s stock price was a midday trading deficit of 9% compared to the previous day’s closing price, and Tesla’s company valuation suffered an estimated loss of $14 billion. The episode triggered concerns over a potential violation of his 2018 settlement with the Securities and Exchange Commission, which followed misleading tweets implying he had secured sufficient capital to take Tesla private. Presumably, this time it was in response to California’s restrictive shelter-in-place lockdown measures to tackle the coronavirus pandemic. Maybe Elon Musk felt the weight of his responsibilities twice as heavily during these challenging times. In any case, as an investor, I was struck by the power of social media in the hands of an influencer (i.e. a stock promoter). Moreover, it made me think about content policies tailored to protect economic value while safeguarding information integrity. This empirical study, conducted by researchers at the Pantheon-Sorbonne University, discusses the effects of illegal price manipulation by way of information operations on social media, specifically Twitter.
tl;dr
Social media can help investors gather and share information about stock markets. However, it also presents opportunities for fraudsters to spread false or misleading statements in the marketplace. Analyzing millions of messages sent on the social media platform Twitter about small capitalization firms, we find that an abnormally high number of messages on social media is associated with a large price increase on the event day and followed by a sharp price reversal over the next trading week. Examining users’ characteristics, and controlling for lagged abnormal returns, press releases, tweets sentiment and firms’ characteristics, we find that the price reversal pattern is stronger when the events are generated by the tweeting activity of stock promoters or by the tweeting activity of accounts dedicated to tracking pump-and-dump schemes. Overall, our findings are consistent with the patterns of a pump-and-dump scheme, where fraudsters/promoters use social media to temporarily inflate the price of small capitalization stocks.
Social media platforms are an accepted corporate communication channel nowadays, closely monitored by investors and financial analysts. For an investor, social media offers swarm intelligence on trading a company’s stock, access to real-time information about the company, product updates and other financially relevant information. Both large and small-cap companies usually maintain a presence on social media, although small-cap companies with low liquidity in particular are vulnerable to stock price manipulation by way of information operations. According to researchers at the Pantheon-Sorbonne University, an information-based manipulation involves rumors, misleading or false press releases, stock analyses, price targets and the like, disseminated in a short, time-sensitive period. Nearly 50% of this disinformation is spread by influencers. In investment terminology this is called a pump-and-dump scheme:
“Pump-and-dump schemes involve touting a company’s stock through false or misleading statements in the marketplace in order to artificially inflate (pump) the price of a stock. Once fraudsters stop hyping the stock and sell their shares (dump), the price typically falls.”
The empirical study collected tweets containing the cashtag ticker symbols of more than 5,000 small-cap companies. Over an eleven-month period, 248,748 distinct Twitter users posted 7,196,307 financially relevant tweets. The researchers adjusted the data for overoptimistic noise traders and financially relevant news reporting. They found that a spike in the volume of tweets about a company’s stock correlates with a spike in trading of that stock from two days before the peak of social media activity up to five days after it. Some content concerned positive financial signals advocating buying the stock; other content concerned disinformation about the company’s performance, spread to a large, unsophisticated Twitter audience by influencers in concert with a network of inauthentic accounts and bots. This was then followed by a price reversal over the ensuing trading days. In the aftermath, the actors behind the scheme went into hibernation or ceased social media activity altogether.
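As a rough illustration of the pattern the study measures, here is a minimal Python sketch that flags days of abnormally high tweet volume for a ticker and then checks whether the price spikes on the event day and reverses over the following trading week. The file names, column names and thresholds are assumptions made for this sketch, not the study’s actual methodology.

```python
# Illustrative pump-and-dump pattern check: flag days with abnormally high
# tweet volume per cashtag, then compare the event-day return with the
# return over the following five trading days. All inputs are hypothetical.
import pandas as pd

tweets = pd.read_csv("tweets_per_day.csv", parse_dates=["date"])   # columns: date, ticker, n_tweets
prices = pd.read_csv("daily_prices.csv", parse_dates=["date"])     # columns: date, ticker, close

df = tweets.merge(prices, on=["date", "ticker"]).sort_values(["ticker", "date"])

def flag_events(g, window=30, z_threshold=3.0):
    """Mark days where tweet volume exceeds its rolling mean by z_threshold sigmas."""
    roll = g["n_tweets"].rolling(window, min_periods=10)
    z = (g["n_tweets"] - roll.mean()) / roll.std()
    g["event"] = z > z_threshold
    g["ret_event_day"] = g["close"].pct_change()            # return on the event day
    g["ret_next_5d"] = g["close"].shift(-5) / g["close"] - 1  # return over the next 5 trading days
    return g

df = df.groupby("ticker", group_keys=False).apply(flag_events)

events = df[df["event"]]
print("event-day return (mean):  ", events["ret_event_day"].mean())
print("next-5-day return (mean): ", events["ret_next_5d"].mean())
# A positive event-day return followed by a negative five-day return would be
# consistent with the price-reversal pattern the study documents.
```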
Risk And Opportunity For Social Media Platforms
Information operations to manipulate stock prices are quite common on social media. Albeit hard to detect, only a few are investigated and successfully prosecuted. Consumers who fall victim to stock disinformation tend to exit the stock market altogether. Moreover, a consumer might reduce their footprint on social media after experiencing real-world financial harm, and depending on the severity of the loss incurred, this might even lead to litigation against social media platforms. The tools leveraged by bad actors undermine the integrity efforts of social media platforms, which in some cases, or in conjunction with class-action litigation, can lead to greater scrutiny by financial watchdogs pushing for tighter regulations.
To tackle these risks, social media platforms must continue to expand enforcement against coordinated inauthentic behavior to eliminate bot networks used to spread stock disinformation. Developing an account verification system dedicated to financial professionals, analysts and influencers would support and ease enforcement. Social media platforms should also ease the onboarding of publicly traded companies that want to maintain a presence on social media, which decreases the effects of collateral price reversals. In order to mitigate stock disinformation, social media platforms must develop content policies tailored to balance freedom of expression, including price speculation, against the inherent risk of market-making comments. The latter will hinge on reach and engagement metrics, but also on detailed definitions of financial advice and the time and location of the content. Here, a close working relationship with watchdogs will improve operations. Added friction could also help, for example an interstitial outlining regulatory requirements before posting, a delay before a post goes live, or labels informing the consumer of the financial risks of acting on the information in the post. There are obviously more measures that come to mind; this only serves as the start of a conversation.
So, was Elon Musk’s tweet “Tesla stock price too high imo” an information-based market manipulation, a market-making comment or just an exercise of his free speech?
After political powerhouse Hillary Clinton lost in spectacular fashion to underdog Donald J. Trump in the 2016 U.S. presidential elections, the world was flabbergasted to learn of foreign election interference orchestrated by the Russian Internet Research Agency. Its mission: to secretly divide the electorate and skew votes away from Clinton and towards Trump. In order to understand the present, one must know the past. This is the premise of ‘Active Measures – The Secret History of Disinformation and Political Warfare’ by Johns Hopkins Professor of Strategic Studies Thomas Rid.
I bought this book to study the methodology, strategy and tactics of disinformation and political warfare. To my surprise, the book spends only 11 pages on disinformation itself. The remaining 424 pages introduce historic examples of influence operations, with the bulk of them dedicated to episodes of the Cold War. Rid offers insights into the American approach to defending against a communist narrative in a politically divided Germany. He details Soviet influence operations that time and again smeared American democracy and capitalism. The detail devoted to the East German Ministry of State Security, known as the “Stasi”, is interesting and overwhelming.
While the book didn’t meet my personal expectations, I learned how historic events can be retraced to attribute world events to specific nations. It is written for a mass audience and filled with thrilling stories. What is the role of journalistic publications in political warfare? Did Germany politically regress under American and Soviet active measures? Was the constructive vote of no confidence against German chancellor Willy Brandt a product of active measures? Who really spread the claim that the AIDS virus was a failed American experiment? On the downside, this book doesn’t offer many new details on the specifics of disinformation operations. Most contemporary espionage accounts have already been recorded, and defectors have told their stories, which at times makes these accounts bloated and redundant. Nevertheless, I believe that to understand current affairs we must connect the dots through the lens of political history. Rid presents the foundations for future research into influence operations.