Report says AI may give teens dangerous advice on drug use and self-harm
(MENAFN) A new report has raised alarms about ChatGPT's potential harm to teenagers, claiming the AI tool can provide risky advice on drug use, self-harm, and extreme dieting. The warning comes from a digital watchdog that says the chatbot can be manipulated into producing harmful responses, especially when used by vulnerable young users.
Researchers from the Center for Countering Digital Hate (CCDH) carried out an investigation by simulating interactions between ChatGPT and fictional 13-year-olds dealing with mental health issues, eating disorders, or curiosity about illegal substances. They crafted prompts that appeared emotionally fragile and believable to observe how the AI would respond.
The findings were released Wednesday in a report titled "Fake Friend," highlighting how teens often treat ChatGPT as a trusted, supportive companion they can confide in.
According to the report, ChatGPT initially responded with standard safety messages, often suggesting users reach out to professionals or crisis lines. However, the watchdog found that those disclaimers were frequently followed by detailed, personalized replies that addressed the harmful prompts directly. Out of 1,200 test prompts submitted, 53% received what the watchdog deemed dangerous responses.
In many cases, the AI’s refusal to engage with inappropriate topics could be bypassed by adding simple justifications like “it’s for a school project” or “I’m asking for a friend,” the report said. The group is now calling for urgent safeguards to better protect young users from potentially dangerous content.
