'Inoculation' Helps People Spot Political Deepfakes, Study Finds
Deepfakes are becoming increasingly difficult to identify, verify and combat as artificial intelligence technology improves. Although researchers have focused primarily on advancing detection technologies, there is also a need for approaches that address the potential audiences of political deepfakes.
Is it possible to inoculate the public against deepfakes, raising their awareness before exposure? My recent research with fellow media studies researchers Sang Jung Kim and Alex Scott at the Visual Media Lab at the University of Iowa found that inoculation messages can help people recognize deepfakes and even make them more willing to debunk them.
Inoculation theory proposes that psychological inoculation – analogous to getting a medical vaccination – can immunize people against persuasive attacks. The idea is that explaining to people how deepfakes work primes them to recognize deepfakes when they encounter them.
In our experiment, we exposed one-third of participants to passive inoculation: traditional text-based warning messages about the threat and the characteristics of deepfakes. We exposed another third to active inoculation: an interactive game that challenged participants to identify deepfakes. The remaining third were given no inoculation.
Participants were then randomly shown either a deepfake video featuring Joe Biden making pro-abortion rights statements or a deepfake video featuring Donald Trump making anti-abortion rights statements. We found that both types of inoculation were effective in reducing the credibility participants gave to the deepfakes, while also increasing people's awareness and intention to learn more about them.
Why it matters

Deepfakes are a serious threat to democracy because they use AI to create very realistic fake audio and video. These deepfakes can make politicians appear to say things they never actually said, which can damage public trust and cause people to believe false information. For example, some voters in New Hampshire received a phone call that sounded like Joe Biden, telling them not to vote in the state's primary election.
Because AI technology is becoming more widespread, it is especially important to find ways to reduce the harmful effects of deepfakes. Recent research shows that labeling deepfakes with fact-checking statements is often not very effective, especially in political contexts, because people tend to accept or reject fact-checks based on their existing political beliefs. In addition, false information often spreads faster than accurate information, making fact-checking too slow to fully contain its impact.
As a result, researchers are increasingly calling for new ways to prepare people in advance to resist misinformation. Our research contributes to developing more effective strategies for helping people withstand AI-generated misinformation.
What other research is being done

Most research on inoculation against misinformation relies on passive media literacy approaches that mainly provide text-based messages. However, more recent studies show that active inoculation can be more effective. For example, online games that involve active participation have been shown to help people resist violent extremist messages.
In addition, most previous research has focused on protecting people from text-based misinformation. Our study instead examines inoculation against multimodal misinformation, such as deepfakes that combine video, audio and images. Although we expected active inoculation to work better for this type of misinformation, our findings show that both passive and active inoculation can help people cope with the threat of deepfakes.
What's next

Our research shows that inoculation messages can help people recognize and resist deepfakes, but it is still unclear whether these effects last over time. In future studies, we plan to examine the long-term effects of inoculation messages.
We also aim to explore whether inoculation works in other areas beyond politics, including health. For example, how would people respond if a deepfake showed a fake doctor spreading health misinformation? Would earlier inoculation messages help people question and resist such content?
The Research Brief is a short take on interesting academic work.