AI-Generated Misinformation Can Create Confusion And Hinder Responses During Emergencies
As more advanced generative AI (genAI) tools become freely accessible, incidents of AI-generated misinformation will increase. During emergencies, when people are stressed and need reliable information, such digital disinformation can cause significant harm by spreading confusion and panic.
People's vulnerability to disinformation stems from their reliance on mental shortcuts during stressful times, which facilitates its spread and acceptance. Emotionally charged, sensational content captures more attention and is shared more frequently on social media.
Based on our research and experience in emergency response and management, AI-generated misinformation during emergencies can cause real damage by disrupting disaster response efforts.
Circulating misinformation
People's motivations for creating, sharing and accepting disinformation during emergencies are complex and diverse. Self-determination theory categorizes these motivations as intrinsic, related to the inherent interest or enjoyment of creating and sharing, and extrinsic, involving outcomes like financial gain or publicity.
The creation of disinformation can be motivated by several factors, including political, commercial or personal gain, prestige, belief, enjoyment, and the desire to harm and sow discord.
People may spread disinformation because they perceive it to be important, they have reduced decision-making capacity, they distrust other sources of information, or because they want to help, fit in, entertain others or self-promote.
On the other hand, accepting disinformation may be influenced by a reduced capacity to analyze information, political affiliations, fixed beliefs and religious fundamentalism.
Misinformation harms
Harms caused by disinformation and misinformation vary in severity and can be categorized as direct, indirect, short-term and long-term.
These harms can take many forms, including threats to people's lives, incomes, sense of security and safety networks.
During emergencies, having access to trustworthy information about hazards and threats is critical. Disinformation, combined with poor collection, processing and understanding of urgent information, can lead to more direct casualties and property damage. Misinformation disproportionately affects vulnerable populations.
CBC News reports on AI-generated imagery of fires circulating in British Columbia.
When individuals receive risk and threat information, they usually check it through vertical (government, emergency management agencies and reputable media) and horizontal (friends, family members and neighbours) networks. The more complex the information, the more difficult and time-consuming the confirmation and validation process is.
And as genAI improves, distinguishing between real and AI-generated information will become more difficult and resource-intensive.
Debunking disinformation
Disinformation can interrupt emergency communications. During emergencies, clear communication plays a major role in public safety and security. In these situations, how people process information depends on how much information they have, their existing knowledge, their emotional responses to risk and their capacity to gather information.
Disinformation intensifies the need for diverse communication channels, credible sources and clear messaging.
Official sources are essential for verification, yet the growing volume of information makes checking for accuracy increasingly difficult. During the COVID-19 pandemic, for example, public health agencies flagged misinformation and disinformation as major concerns.
Digital misinformation circulated during disasters can lead to resources being improperly allocated, conflicting public behaviour and actions, and delayed emergency responses. Misinformation can also lead to unnecessary or delayed evacuations.
In such cases, disaster management teams must contend not only with the crisis, but also with the secondary challenges created by misinformation.
Counteracting disinformation
Research reveals considerable gaps in the skills and strategies that emergency management agencies use to counteract misinformation. These agencies should focus on detecting, verifying and mitigating the creation, sharing and acceptance of disinformation.
This complex issue demands co-ordinated efforts across policy, technology and public engagement:
- Fostering a culture of critical awareness: Educating the public, particularly younger generations, about the dangers of misinformation and AI-generated content is essential. Media literacy campaigns, school programs and community workshops can equip people with the skills to question sources, verify information and recognize manipulation.
- Clear policies for AI-generated content in news: Establishing and enforcing policies on how news agencies use AI-generated images during emergencies can prevent visual misinformation from eroding public trust. This could include mandatory disclaimers, editorial oversight and transparent provenance tracking.
- Strengthening platforms for fact-checking and metadata analysis: During emergencies, social platforms and news outlets need rapid, large-scale fact-checking. Requiring platforms to flag, down-rank or remove demonstrably false content can limit the viral spread of misinformation. Intervention strategies should also be developed to nudge people toward skepticism about information they encounter on social media.
- Clear legal consequences: In Canada, Section 181 of the Criminal Code already makes the intentional creation and spread of false information a criminal offence. Publicizing and enforcing such provisions can act as a deterrent, particularly for deliberate misinformation campaigns during emergencies.
Additionally, identifying, countering and reporting misinformation should be incorporated into emergency management and public education.
AI is rapidly transforming how information is created and shared during crises. In emergencies, this can amplify fear, misdirect resources and erode trust at the very moment clarity is most needed. Building safeguards through education, policy, fact-checking and accountability is essential to ensure AI becomes a tool for resilience rather than a driver of chaos.

