The 2019 Worldwide Threat Assessment warned of deepfakes deployed to manipulate public opinion. And while the 2020 U.S. presidential election did not see an onslaught of deepfakes undermining voter confidence, experts agree that the threat remains tangible. A recent study by researchers at the University of Amsterdam investigated the impact of a political deepfake, designed to discredit a politician, that was microtargeted to a specific segment of the electorate.
Deepfakes are perceived as a powerful form of disinformation. Although many studies have focused on detecting deepfakes, few have measured their effects on political attitudes, and none have studied microtargeting techniques as an amplifier. We argue that microtargeting techniques can amplify the effects of deepfakes by enabling malicious political actors to tailor deepfakes to the susceptibilities of the receiver. In this study, we constructed a political deepfake (video and audio) and studied its effects on political attitudes in an online experiment. We find that attitudes toward the depicted politician are significantly lower after seeing the deepfake, but the attitudes toward the politician’s party remain similar to the control condition. When we zoom in on the microtargeted group, we see that both the attitudes toward the politician and the attitudes toward his party score significantly lower than the control condition, suggesting that microtargeting techniques can indeed amplify the effects of a deepfake, but for a much smaller subgroup than expected.
Make sure to read the full paper titled Do (Microtargeted) Deepfakes Have Real Effects on Political Attitudes? by Tom Dobber, Nadia Metoui, Damian Trilling, Natali Helberger, and Claes de Vreese at https://doi.org/10.1177/1940161220944364
Deepfakes are a subcategory of modern information warfare. The technology leverages machine learning to generate audio-visual content that imitates original content but differs in both intent and message. Its highly deceptive appearance renders it a potent weapon to influence public opinion, undermine strategic policies, or disrupt civic engagement. An infamous example is the deepfake depicting former President Obama seemingly calling President Trump expletives. Online microtargeting is a form of social media marketing that disseminates advertisements tailored to the specific interests of an identifiable, curated audience. Within the political context, microtargeting is used to spread a campaign message to a specific audience, identified and grouped by shared characteristics, in order to convince that audience to vote either for or against a candidate. There are a number of civic risks associated with deploying deepfakes:
- Deepfake content is hard to tell apart from original, authentic content. While deepfake videos may signal some nefarious intent to a cautious audience, the potential impact of deepfake audio or deepfake text on voter behavior hasn’t been researched as of this writing
- Political actors may leverage deepfakes to discredit opponents, undermine news reporting or equip trailing third-party candidates with sufficient influence to erode voter confidence
- Used in a political campaign, deepfakes may be strategically deployed to incite a political scandal or to reframe current affairs and regain control of an election narrative
The study created a deepfake video depicting an interview with a prominent center-right politician of a large Christian democratic party. The manipulated part of the otherwise original and authentic footage shows the politician seemingly making a joke about the crucifixion of Jesus Christ:
“But, as Christ would say: don’t crucify me for it.”
This content was shown to a randomly selected group of Christian voters who had identified as religious or conservative, or had voted for this politician in past elections. The researchers found that a deepfake spread without microtargeting lowered attitudes toward the politician but not necessarily toward his political party. However, a deepfake tailored to a specific audience using political microtargeting techniques amplified the discrediting message, lowering attitudes toward both the politician and his party. Interestingly, staunch supporters of the politician might be shielded from a lasting attitude change by their own motivated reasoning, a bias derived from the politician’s ideology. For this group, the researchers argue, a deepfake conveying a sufficient degree of discomfort or deviation from the politician’s previous ideology may reach a tipping point at which even staunch supporters align with the results of this study, though the study’s limitations also leave room for unforeseen outcomes.
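The targeting criteria described above can be sketched as a simple filter over voter records. This is purely illustrative: the field names and data below are assumptions for the sketch, not the study’s actual survey instrument.

```python
# Hypothetical voter records; field names and values are illustrative,
# not taken from the study's dataset.
voters = [
    {"id": 1, "religion": "christian", "conservative": True,  "voted_for_target": False},
    {"id": 2, "religion": "none",      "conservative": False, "voted_for_target": False},
    {"id": 3, "religion": "christian", "conservative": False, "voted_for_target": True},
    {"id": 4, "religion": "none",      "conservative": True,  "voted_for_target": False},
]

def microtarget_segment(records):
    """Select the presumably susceptible segment: religious or conservative
    voters, or those who previously voted for the depicted politician."""
    return [
        v for v in records
        if v["religion"] == "christian" or v["conservative"] or v["voted_for_target"]
    ]

segment = microtarget_segment(voters)
print([v["id"] for v in segment])  # → [1, 3, 4]
```

The point of the sketch is how cheap this step is: once such attributes exist in a dataset, carving out a susceptible subgroup is a one-line filter, which is what makes microtargeting an effective amplifier.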
A roadmap to counter microtargeted deepfakes should include legislators passing regulations to limit online political campaign spending, which would force campaigns to focus their limited financial resources and weed out corporate interests. Second, new regulations should focus on the protection of personally identifiable data. A microtargeting dataset includes location data, personal preferences, website interactions, and more. While this data is valuable in a commercial context, it should be excluded from civic engagements such as elections. Academics will have an opportunity to discover insights into algorithmic bias and to improve upon the existing machine learning approach of training generative adversarial networks with pre-conditioned datasets. Moreover, future research has an opportunity to further investigate the impact of manipulated media on voter education, confidence, and behavior within and outside of political elections.
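The adversarial training idea behind generative adversarial networks, mentioned above, can be illustrated with a toy sketch: a 1-D linear generator tries to mimic real data while a discriminator tries to tell real from generated samples, and both are updated with hand-derived gradients. This is a minimal sketch for intuition only; real deepfake models are deep networks over images and audio, and nothing here reflects the study’s implementation.

```python
import numpy as np

# Toy 1-D GAN sketch. Generator: G(z) = wg*z + bg; Discriminator:
# D(x) = sigmoid(wd*x + bd). "Real" data is drawn from N(4, 1).
rng = np.random.default_rng(0)
wg, bg = rng.normal(), 0.0   # generator parameters
wd, bd = rng.normal(), 0.0   # discriminator parameters
lr = 0.05

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for step in range(2000):
    real = rng.normal(4.0, 1.0, size=32)   # authentic samples
    z = rng.normal(size=32)                # latent noise
    fake = wg * z + bg                     # generated (fake) samples

    # Discriminator update: push D(real) -> 1 and D(fake) -> 0.
    d_real = sigmoid(wd * real + bd)
    d_fake = sigmoid(wd * fake + bd)
    g_real = d_real - 1.0                  # BCE gradient w.r.t. logit, label 1
    g_fake = d_fake                        # BCE gradient w.r.t. logit, label 0
    wd -= lr * np.mean(g_real * real + g_fake * fake)
    bd -= lr * np.mean(g_real + g_fake)

    # Generator update: push D(fake) -> 1 (fool the discriminator).
    d_fake = sigmoid(wd * fake + bd)
    grad_fake = (d_fake - 1.0) * wd        # chain rule through the discriminator
    wg -= lr * np.mean(grad_fake * z)
    bg -= lr * np.mean(grad_fake)

# Samples from the trained generator; with training, their distribution
# drifts toward the real data distribution.
samples = wg * rng.normal(size=1000) + bg
```

The adversarial loop is the essential point: the generator improves only because the discriminator keeps getting better at spotting fakes, which is also why deepfake detectors and generators tend to escalate together.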
Here’s one of my favorite deepfake videos: President Trump explaining money laundering to his son-in-law Jared Kushner in a deepfake(d) scene from “Breaking Bad”