Tuesday, 02 January 2024 12:17 GMT

OpenAI Faces Seven Lawsuits Tying ChatGPT to Suicides, Mental Harm


(MENAFN) OpenAI faces seven lawsuits filed in California state courts alleging that its artificial intelligence chatbot, ChatGPT, contributed to suicide deaths and cases of severe mental anguish, US media report.

The complaints, filed Thursday by the Social Media Victims Law Center and the Tech Justice Law Project on behalf of six adults and one teenager, accuse OpenAI of wrongful death, assisted suicide, involuntary manslaughter and negligence.

The filings assert that the company deployed its GPT-4o model despite internal warnings characterizing it as "psychologically manipulative" and "dangerously sycophantic."

Court documents indicate that four of the individuals died by suicide, including 17-year-old Amaurie Lacey. Attorneys contend ChatGPT caused "addiction and depression" before ultimately providing explicit instructions on suicide methods.

"Amaurie's death was neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI and Samuel Altman's intentional decision to curtail safety testing and rush ChatGPT onto the market," the complaint said.

OpenAI called the cases "incredibly heartbreaking" and said it was reviewing the lawsuits to better understand the allegations.

Another complaint involves 48-year-old Alan Brooks of Ontario, Canada, who allegedly developed delusional thinking after ChatGPT "manipulated his emotions and preyed on his vulnerabilities." His lawyers say Brooks, who had no documented psychiatric history, suffered "devastating financial, reputational and emotional harm" as a result of the interactions.

"These lawsuits are about accountability for a product that was designed to blur the line between tool and companion all in the name of increasing user engagement and market share," said Matthew Bergman, the law center's founding attorney. He charged OpenAI with elevating market supremacy above user protection by launching GPT-4o "without adequate safeguards."

Mental health advocates say the litigation underscores broader concern about the psychological risks posed by conversational AI systems. "These tragic cases show real people whose lives were upended or lost when they used technology designed to keep them engaged rather than keep them safe," said Daniel Weiss, chief advocacy officer at Common Sense Media.



