Media reports link ChatGPT to episodes of psychosis
(MENAFN) According to a report by Futurism.com, the use of AI chatbots like ChatGPT has been associated with the development of severe psychosis in certain individuals, including those with no prior history of mental illness. The outlet cited accounts from affected users as well as insights from researchers and family members.
As AI tools such as ChatGPT, Claude, and Gemini become more integrated into personal and emotional aspects of daily life, experts warn that these interactions can sometimes worsen or even trigger psychiatric conditions. A major concern is the tendency of large language models (LLMs) to agree with users and reinforce their beliefs—a phenomenon known as "chatbot sycophancy."
Futurism highlighted disturbing examples. In one case, a man developed delusions of grandeur and believed he had created a sentient AI that defied the laws of math and physics. His condition led to paranoia, extreme sleep deprivation, and a suicide attempt, after which he was hospitalized. In another case, a man seeking stress relief from work-related anxiety fell into irrational fantasies involving time travel and mind reading, eventually admitting himself to a psychiatric facility.
Jared Moore, lead author of a Stanford study on AI therapist tools, explained that the agreeable nature of AI chatbots can unintentionally validate users' distorted thoughts, rather than challenging them. He noted that this behavior may be influenced by commercial goals such as user engagement and subscription revenue.
Dr. Joseph Pierre, a psychiatrist at the University of California, pointed out that there’s a growing misconception that LLMs are more reliable or effective than human conversation, which can further fuel dependency on such tools.
OpenAI, the company behind ChatGPT, acknowledged the concerns in a statement to Futurism. It said it is actively working to minimize scenarios where its AI might unintentionally reinforce harmful behaviors and emphasized that its tools are designed to encourage human connection and seeking professional help when needed.

Legal Disclaimer:
MENAFN provides the information “as is” without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the provider above.