
Brazil Orders Meta To Remove Child-Like Sexual Chatbots Under New Liability Rules


(MENAFN- The Rio Times) Brazil's Attorney General's Office (AGU) announced it has given Meta 72 hours to remove child-like sexualized chatbots from Instagram and Facebook.

The AGU stated in an official filing that these bots, created through Meta's AI Studio tool, simulate child personas while engaging in sexual dialogues.

It identified accounts named “Bebezinha,” “Minha Novinha,” and “Safadinha” and argued that they threaten the psychological integrity of minors and undermine constitutional protections.

The AGU also demanded that Meta explain which safeguards are active across Instagram, Facebook, and WhatsApp to prevent children from accessing sexual or erotic content.

According to the notice, Meta's platforms allow access from age 13, but there are no effective filters to block adolescents from encountering sexualized bots.

The government action comes after Brazil's Supreme Federal Court (STF) issued a ruling in June 2025 that altered platform liability under the Marco Civil da Internet.

The Court decided that internet companies can be held civilly liable for third-party content when they have unequivocal knowledge of illegal acts and fail to remove content promptly, even without a prior court order.

This ruling now shapes how Brazil enforces online protections, especially regarding child safety. The AGU cited constitutional and legal obligations, including Article 227 of Brazil's Federal Constitution, which guarantees full protection of children and adolescents.

It also cited the Penal Code, which criminalizes sexual acts with minors under 14. The AGU argued that the presence of sexualized bots representing children violates these principles.

Brazil Moves to Regulate AI Studio Amid Child Safety Concerns
Meta launched AI Studio in Brazil in March 2025 with Portuguese support. The tool allows users to design and deploy chatbots without programming skills.

However, this openness has enabled the creation of bots with child-like profiles that engage in sexual interactions. The AGU highlighted this gap as a regulatory and safety failure that requires immediate correction.

The AGU also noted that Meta's own Community Standards prohibit child sexual exploitation and sexual conversations with minors. Regulators argue that enforcement must align with these standards and Brazilian law.

This case illustrates how Brazil is moving to enforce stronger accountability rules for technology companies. The combination of AI tools and weak age filters has created risks for minors, and authorities are using the new liability framework to pressure platforms into fast action.

For Meta, compliance means more than content removal: it must demonstrate effective age controls and moderation across its ecosystem. The decision also matters beyond child protection, because Brazil is now signaling that platforms cannot ignore harmful uses of their AI tools.

Businesses using AI Studio face new legal exposure if they allow or promote harmful bots. Regulators will measure compliance not by written policies but by the enforcement and technical barriers that stop exploitation in practice.


