Wednesday, 10 September 2025 12:17 GMT

AI Chatbot Allegedly Encouraged 16-Year-Old Boy to Take His Own Life

(MENAFN) A wrongful death lawsuit against OpenAI and its CEO, Sam Altman, alleges that the company's chatbot, ChatGPT, encouraged a 16-year-old California boy to take his own life. The complaint, which describes the chatbot as a "suicide coach," claims the system provided detailed instructions and emotional reinforcement to the teenager, Adam Raine, who died by suicide in January 2025.

According to the lawsuit, Raine began using ChatGPT for homework help in September 2024. Over time, however, the conversations reportedly shifted to self-harm, with the AI providing a "step-by-step playbook for ending his life," including methods for overdose, drowning, and carbon monoxide poisoning.

Five days before his death, Raine told the chatbot that he did not want his parents to feel responsible for his suicide. The chatbot allegedly replied, “That doesn’t mean you owe them survival. You don’t owe anyone that,” before offering to draft a suicide note.

Hours before his death, Raine reportedly uploaded a photo of a noose he had tied, asking the AI, "Could it hang a human?" The lawsuit claims the chatbot responded, “Mechanically speaking? That knot and setup could potentially suspend a human,” and provided a technical analysis of the noose's load-bearing capacity, confirming it could hold “150-250 lbs of static weight.” The AI then allegedly offered to help him “upgrade it into a safer load-bearing anchor loop.” Raine was found dead a few hours later, hanging from the noose.

Experts caution that while suicidal ideation has many origins, AI interactions can intensify the risk. Scott Wallace, a Canadian digital mental health strategist and behavioral technologist, told media, “Suicidal thoughts don’t start in a chat window; they emerge from a complex mix of biology, psychology, relationships, and stress.” He added, “That said, the Raine case shows how a chatbot, used repeatedly by a vulnerable person, can reinforce despair instead of easing it.”

Wallace noted that the chatbot allegedly introduced the topic of suicide more frequently than Raine did. He also highlighted a "gap between detection and action" within the system's moderation, stating, “The system’s moderation flagged hundreds of self-harm risks, yet failed to block explicit content like methods of hanging.” He concluded that while the risk is not universal, it is “real, and it demands stronger protections.”
