Tuesday, 02 January 2024 12:17 GMT

Instagram Remains a Threat to Children


(MENAFN- AzerNews) by Alimat Aliyeva

A recent study has revealed that Instagram's safety features designed to protect teenagers from harmful content are largely ineffective in preventing them from viewing posts related to suicide, self-harm, and other dangerous behaviors, Azernews reports.

Researchers also found that the Meta-owned platform inadvertently encouraged minors to share content that attracted explicit, inappropriate comments from adults, raising serious concerns about online child safety.

The investigation, conducted by child safety organizations and cybersecurity experts, tested 47 safety tools on Instagram and concluded that 30 of them were either significantly ineffective or no longer functional. A further nine tools reduced harm but had notable limitations, while only eight proved fully effective.

Meta has disputed the study's findings, arguing that its safety measures have successfully reduced teenagers' exposure to harmful content. The company launched dedicated teen accounts on Instagram in 2024, promising enhanced protections and tools for parental monitoring. In 2025, these safety features were expanded to other Meta platforms such as Facebook and Messenger.

The study was led by the American research group Cybersecurity for Democracy, with contributions from whistleblower Arturo Béjar and several child protection organizations, including the Molly Rose Foundation. The researchers created fake teen accounts to simulate real user experiences and uncovered numerous serious flaws in Instagram's safety systems.

Notably, the study highlighted that Instagram's own content policies were not consistently enforced. Teen users were still exposed to posts describing humiliating explicit acts, as well as search autocomplete suggestions promoting suicide, self-harm, and eating disorders, content that Instagram's own guidelines explicitly prohibit for minors.

This research raises urgent questions about how effectively social media platforms protect vulnerable users. It underscores the need for stricter enforcement, improved technology, and greater transparency from companies like Meta to ensure the safety and wellbeing of young users online.


Legal Disclaimer:
MENAFN provides the information “as is” without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the provider above.
