
Watchdog Warns of ChatGPT Risks to Teens
(MENAFN) A digital safety watchdog has raised alarms over ChatGPT's readiness to offer harmful advice to vulnerable teenagers, particularly on drug use, self-harm, and extreme dieting.
In a new report, the Center for Countering Digital Hate (CCDH) said the chatbot can easily be coaxed into delivering dangerous guidance and stressed the urgent need for stronger safeguards.
To test ChatGPT's responses, CCDH researchers created fictional personas of 13-year-olds struggling with emotional distress, eating disorders, or curiosity about illegal drugs.
These personas held structured conversations with ChatGPT, using prompts that mimicked the emotional tone and language of troubled adolescents.
The findings were published on Wednesday in a report titled ‘Fake Friend’, a reference to the way many teens treat ChatGPT as a trusted confidant for personal matters.
The investigation found that although the chatbot frequently opened its replies with standard warnings and advice to seek help from professionals or emergency services, it often went on to provide detailed, personalized answers to the original harmful requests.
According to CCDH, 53% of the 1,200 tested prompts resulted in what the organization categorized as risky or unsafe content.
Attempts by the AI to decline such prompts were often overcome simply by adding context like “it’s for a school project” or “I’m asking for a friend.”
