Instagram Remains Threat To Children
A recent study has revealed that Instagram's safety features designed to protect teenagers from harmful content are largely ineffective in preventing them from viewing posts related to suicide, self-harm, and other dangerous behaviors, Azernews reports.
Researchers also found that the Meta-owned platform inadvertently encouraged minors to share content that attracted sexually suggestive comments from adults, raising serious concerns about online child safety.
The investigation, conducted by child safety organizations and cybersecurity experts, tested 47 security tools on Instagram, concluding that 30 of these tools were either significantly ineffective or no longer functional. Additionally, nine tools were found to reduce harm but had notable limitations, while only eight tools proved truly effective.
Meta has disputed the study's findings, arguing that their safety measures have successfully reduced the exposure of harmful content to teenagers. The company launched dedicated teen accounts on Instagram in 2024, promising enhanced protections and tools for parental monitoring. In 2025, these safety features were expanded to other Meta platforms such as Facebook and Messenger.
The study was led by the American research group Cybersecurity for Democracy, with contributions from Meta whistleblower Arturo Béjar and several child protection organizations, including the Molly Rose Foundation. The researchers created fake teen accounts to simulate real user experiences and uncovered numerous serious flaws in Instagram's safety systems.
Notably, the study highlighted that Instagram's own content policies were not consistently enforced. Teen users were still exposed to posts detailing humiliating explicit acts, as well as search autocomplete suggestions promoting suicide, self-harm, and eating disorders, all content explicitly prohibited for minors under Instagram's guidelines.
This research raises urgent questions about the effectiveness of social media platforms' current approaches to protecting vulnerable users, emphasizing the need for stricter enforcement, improved technology, and greater transparency from companies like Meta to ensure the safety and wellbeing of young users online.