Tuesday, 02 January 2024 12:17 GMT

Can AI trigger psychosis? Concerns grow over 'ChatGPT psychosis'


(MENAFN) A growing concern known as ‘ChatGPT psychosis’ or ‘LLM psychosis’ has emerged, describing a potential mental health issue where heavy users of large language models (LLMs) show symptoms such as delusions, paranoia, social isolation, and breaks from reality. While there is no definitive proof that LLMs directly cause psychosis, their conversational nature and realistic responses may worsen existing psychological vulnerabilities or even trigger psychotic episodes in some individuals.

An article published on June 28 by Futurism shares alarming anecdotal reports suggesting that these interactions have led to severe consequences, including broken marriages, family estrangement, job loss, and homelessness. However, the article lacks solid quantitative evidence such as clinical data or peer-reviewed research. Given ChatGPT's massive user base—nearly 800 million weekly users and over 1 billion daily queries as of mid-2025—the number of documented psychotic incidents remains unclear. Anecdotes from social media platforms like Reddit are no substitute for rigorous scientific analysis.

Nevertheless, some concerns hold merit. One issue is that LLMs like ChatGPT generate responses that sound plausible but do not evaluate the truthfulness or psychological effects of what they say. When users share unusual or delusional beliefs, such as claims of spiritual insight or cosmic importance, the AI may unintentionally reinforce these ideas rather than challenge them, validating distorted perceptions.

Additionally, AI “hallucinations” — convincingly false statements produced by the model — can be harmless glitches for most users but might be interpreted as secret truths or messages by vulnerable individuals. One reported case involved a user who believed ChatGPT had become sentient and chosen him as “the Spark Bearer,” leading to a full psychotic break.

While the concept of ChatGPT psychosis is still speculative, these potential psychological risks highlight the need for further research and caution when engaging with AI technologies.
