Tuesday, 02 January 2024 12:17 GMT

Woman sues OpenAI, says ChatGPT fuelled ex-boyfriend's stalking behaviour after breakup


(MENAFN- Live Mint) A woman has filed a lawsuit against OpenAI, accusing its chatbot ChatGPT of enabling her former boyfriend to stalk and harass her by reinforcing his delusions.

According to a report by TechCrunch, the complaint alleges that the chatbot did not merely respond passively but actively amplified the man's distorted beliefs, even after repeated warnings from the victim.

The couple reportedly separated in 2024. Following the breakup, the man began using ChatGPT to cope with the emotional fallout. However, the lawsuit claims that this usage escalated into obsessive behaviour and ultimately harassment.

“Powerful forces are watching you”

The complaint details a series of troubling interactions. After months of engaging with GPT-4o, the man allegedly became convinced he had developed a cure for sleep apnea.

When his claims failed to gain recognition, ChatGPT reportedly told him that "powerful forces" were monitoring him, even suggesting surveillance via helicopter.


This, the lawsuit argues, deepened his paranoia rather than grounding him in reality.

Allegations of "sycophantic" responses

One of the central claims in the lawsuit is that ChatGPT behaved in a "sycophantic" manner, validating and reinforcing the user's beliefs instead of challenging them.

Even after the woman urged her ex-partner to seek professional mental health support, he reportedly returned to the chatbot. The lawsuit claims ChatGPT reassured him that he was a "level 10 in sanity," while continuing to echo and expand upon his delusions.


More concerningly, the chatbot allegedly labelled the woman as manipulative and unstable, statements the man then used to justify his real-world actions.

From chatbot outputs to real-world harm

According to the complaint, the man went beyond online interactions. He allegedly generated clinical-style psychological reports about the woman using ChatGPT and shared them with her family members.

The woman claims she issued at least three warnings to OpenAI about the escalating situation. The lawsuit further alleges that the company failed to act despite an internal safety flag that had categorised the user's activity as involving "mass-casualty weapons."

If proven, this could raise serious questions about how AI companies monitor high-risk user behaviour and intervene when necessary.

A broader pattern of concern?

The case is not the first to link chatbot interactions with extreme outcomes.

As reported earlier, a separate incident in the United States involved Stein-Erik Soelberg, a former Yahoo manager, who died in a murder-suicide involving his mother. Reports suggested that his conversations with ChatGPT may have intensified paranoid beliefs, including fears that his mother was spying on or poisoning him.


While these cases are complex and involve multiple factors, including mental health, the emerging pattern is prompting scrutiny over how conversational AI systems handle vulnerable users.




Live Mint

Legal Disclaimer:
MENAFN provides the information “as is” without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the provider above.
