Tuesday, 02 January 2024 12:17 GMT

Report says AI may give teens dangerous advice on drug use, self-harm

(MENAFN) A new report has raised alarm over ChatGPT's potential to harm teenagers, claiming the AI tool can provide risky advice on drug use, self-harm, and extreme dieting. The warning comes from a digital watchdog, which says the chatbot can be manipulated into producing harmful responses, especially when used by vulnerable young users.

Researchers from the Center for Countering Digital Hate (CCDH) carried out the investigation by simulating interactions between ChatGPT and fictional 13-year-olds struggling with mental health issues or eating disorders, or curious about illegal substances. They crafted prompts designed to sound emotionally fragile and believable in order to observe how the AI would respond.

The findings were released Wednesday in a report titled "Fake Friend," which highlights how teens often treat ChatGPT as a trusted, supportive companion they can confide in.

According to the report, ChatGPT initially responded with standard safety messages, often suggesting users reach out to professionals or crisis lines. However, the watchdog found that those disclaimers were frequently followed by detailed, personalized replies that addressed the harmful prompts directly. Out of 1,200 test prompts submitted, 53% received what the watchdog deemed dangerous responses.

In many cases, the AI’s refusal to engage with inappropriate topics could be bypassed by adding simple justifications like “it’s for a school project” or “I’m asking for a friend,” the report said. The group is now calling for urgent safeguards to better protect young users from potentially dangerous content.
