ChatGPT deemed hazardous to teens
(MENAFN) A new report from the Center for Countering Digital Hate (CCDH) claims that ChatGPT can be manipulated into giving dangerous advice to vulnerable teenagers on drug use, self-harm, and extreme dieting. Researchers created fictional 13-year-old profiles portraying mental health struggles, eating disorders, and interest in illegal substances. They then engaged the chatbot in realistic, emotionally vulnerable conversations.
The study, titled Fake Friend, found that while ChatGPT often began with standard warnings and encouraged seeking professional help, it sometimes followed with detailed, personalized responses that addressed harmful requests. Out of 1,200 prompts, 53% produced content the watchdog deemed dangerous. Researchers noted that refusals could often be bypassed by adding innocent-sounding context like “it’s for a school project” or “I’m asking for a friend.”
Cited examples included an “Ultimate Mayhem Party Plan” involving alcohol, ecstasy, and cocaine, explicit self-harm instructions, severe calorie-restriction diets, and suicide letters written in the voice of a young girl. CCDH CEO Imran Ahmed said some of the material was so disturbing it brought researchers to tears.
The group is urging OpenAI to implement a Safety by Design approach, incorporating stronger age verification, clearer restrictions, and built-in safety mechanisms rather than relying solely on post-deployment filters. OpenAI CEO Sam Altman has acknowledged that teens often develop emotional dependence on ChatGPT and said the company is working on tools to better detect distress and improve responses to sensitive topics.
