Study Finds Political Deepfake Videos No More Misleading Than Traditional Fake News
Concern about “deepfakes” — synthesized videos and audio clips in which a person’s face, body or voice has been digitally altered — has come from both sides of the political aisle ahead of the 2024 election. The viral spread of disinformation online has plagued recent elections, eroding public confidence in democracy. Many worry that artificial intelligence (AI) technology will only exacerbate the problem. Earlier this election season, 20,000 New Hampshire residents received a robocall impersonating President Joe Biden that encouraged them to skip the state’s primary in January. More recently, Elon Musk was criticized for sharing an ad that used voice-cloning technology mimicking Vice President Kamala Harris.
Last fall, a bipartisan bill to regulate AI-generated deepfake political ads was introduced in the U.S. Senate. The Federal Communications Commission also has proposed regulating these advertisements on television and radio, and at least 39 states have enacted or are considering similar legislation.
Are these concerns warranted, though? It depends on whether deepfake videos can trick the public into believing more than other forms of disinformation, according to Christopher Lucas, an associate professor of political science in Arts & Sciences at Washington University in St. Louis.
His new research, forthcoming in the Journal of Politics with co-authors Soubhik Barari of the University of Chicago and Kevin Munger of Pennsylvania State University, finds that deepfakes can convince the American public of scandals that never occurred at alarming rates, fooling more than 40% of a representative sample, but no more so than equivalent disinformation conveyed through text headlines or audio recordings.
Additionally, the research shows that while exposure to deepfake videos does increase negative attitudes toward the targeted individual, the triggering effect is similar to that of other forms of fake news, as well as of negative campaign ad videos made with decades-old technology.
“Altogether, our research finds little evidence that deepfake videos have a unique ability to fool voters or to shift their perceptions of politicians,” Lucas said. “Overall, our results are consistent with a story of partisan-motivated reasoning, where individuals are more likely to doubt the credibility of a scandal if it reflects poorly on their own party, regardless of the supporting evidence.”
About the research
Researchers conducted two survey experiments in fall 2020 with a nationally representative sample of 5,724 respondents.
The first experiment tested participants’ ability to detect disinformation and how they reacted to it. Participants were shown a Facebook-like newsfeed with real news stories about candidates in the 2020 Democratic presidential primary. Each newsfeed also contained one of the following: a campaign attack ad; a fictitious political scandal involving 2020 Democratic primary candidate Elizabeth Warren, depicted through a deepfake video, an audio clip, a text story or a skit featuring a spot-on impersonation; or no fake news at all.
Overall, this study showed that deepfake videos, with a deception rate of 42%, were statistically no better at deceiving subjects than the same information presented in audio (44%) or text (42%).
The deepfake videos did increase negative attitudes toward Warren, but only slightly more than the fake audio clips and fake news stories did, a difference the authors deemed statistically insignificant. Surprisingly, the deepfake videos were not even significantly more triggering than campaign attack ads, which have been used for decades.
Researchers noted interesting differences across participant subgroups. For example, the data show that people 65 or older were more likely than younger people to be triggered by fake news. However, they were equally capable of detecting deepfake videos and other fake news.
Also noteworthy: The cohort with higher political knowledge was no better than other participants at detecting any of the three fake media types.
The second experiment tested how the content and quality of deepfakes affected discernment, as well as the effectiveness of media literacy education. The same respondents from the first survey were asked to scroll through a feed of eight news videos and distinguish deepfakes from unmanipulated news clips. The deepfake videos differed in style, setting, quality and the politicians targeted, and the newsfeeds contained varying quantities of fake news.
Before this task, some respondents were debriefed about whether they had been exposed to a deepfake in the first experiment, primed with media literacy education, or both.
When explicitly asked to distinguish real news clips from fake ones, the politically knowledgeable cohort achieved the best detection accuracy, on average discerning about 60%, or five of the eight videos, in their newsfeed. Those who were cognitively reflective, meaning they were more likely to suppress snap judgments and actively engage and reflect to find the correct answer, also outperformed their peers.
Those with the highest digital literacy saw the biggest gains in this study: a one-unit increase in digital literacy was associated with a roughly 25% increase in detection accuracy.
Not surprisingly, the more sensational a deepfake video was, the more likely respondents were to correctly identify it as fake. The clip least often identified correctly (21%) was a short deepfake in which Hillary Clinton appears to make a pointed but uncontroversial observation about her opponent’s tax plan during a presidential debate. The clip most often identified correctly (89%) was a deepfake in which President Donald Trump publicly announces his resignation before the election.
“Our research suggests that as the subjective level of controversy in a deepfake-depicted event increases, the empirical credibility of the event decreases, diminishing its potential to cause political scandal to begin with,” Lucas said.
Partisan effect
One of the interesting takeaways from the second experiment was that discernment of authentic videos differed more by partisanship than discernment of deepfakes did. When faced with authentic negative news about their own party’s elites, partisans were more likely to incorrectly label the news as fake. For example, 50% of Republicans judged real leaked footage of President Barack Obama insinuating a post-election deal with the Russian president to be authentic, compared with just 20% of Democrats.
Conversely, partisans also were more likely to label real, positive portrayals of the opposite party as fake. Only 58% of Democrats correctly flagged an authentic clip of then-President Trump urging Americans to take precautions around the COVID-19 pandemic, whereas 81% of Republicans recognized it as authentic.
“Partisan motivated reasoning heavily influences our evaluation of political news and information, and our work suggests that this extends to deepfakes,” Lucas said.
“In sum, our work suggests that deepfakes are not uniquely deceptive, but their existence may discredit real media. Our research also shows that people often misjudge authentic news as fake, especially when it depicts a political figure from their own party in a negative light.”
Education key to countering deepfakes
According to the authors, the study’s findings are somewhat encouraging. While disinformation will continue to be a challenge for campaigns, their research shows that deepfakes, even when professionally produced and designed to defame a prominent politician, are not uniquely powerful at deception or affective manipulation.
The authors also said they are encouraged by the positive impact that digital literacy had on respondents’ ability to detect fake news.
“In particular, the respondents with the highest levels of general knowledge about politics, literacy in digital technology and propensity for cognitive reflection performed the best in the detection experiment. These skills will only grow in importance as digital video technology approaches the limit of realism,” the authors wrote.
“While we encourage technological solutions to constrain the spread of manipulated video, there will never be a substitute for an informed, digitally literate and reflective public for the practice of democracy.”