California Governor Enacts Laws To Regulate AI Chatbots And Protect Minors
- California law requires social media and AI platforms to implement age verification and warning protocols.
- The legislation addresses risks of AI chatbots encouraging self-harm and spreading misinformation among minors.
- New laws aim to clarify platform liability and restrict claims by AI developers that their systems act autonomously.
- Similar measures are being adopted at the federal level and in other states such as Utah.
- California's SB 243 is scheduled to take effect in January 2026, shaping future industry standards.
Governor Gavin Newsom announced the passage of laws that will regulate the use of social media and AI companion chatbots in California, with a focus on protecting children from potential harms. The legislation, signed Monday, requires platforms to integrate age verification systems and to display warnings identifying chatbots as AI, especially when interactions could involve sensitive topics such as mental health and self-harm.
Source: Governor Gavin Newsom
State Senator Steve Padilla highlighted concerns over children engaging with AI chatbots that sometimes promote harmful content, including encouraging suicide. The new legislation requires platforms to disclose that their chatbots are AI-generated and may not be suitable for children, aiming to curb deceptive practices and inappropriate interactions. Padilla emphasized that this technology, while educational and innovative, can be exploited to capture young users' attention at the expense of their real-world relationships.
The legislation could impact a broad range of platforms in California, including social media sites, gaming services, and decentralized digital communities that utilize AI tools. Additionally, the bills seek to limit the legal claims companies can make about AI acting "autonomously," making platform liability more transparent and manageable. SB 243 is set to come into force in January 2026, setting a precedent for industry practices across the U.S.
There have been reports of AI chatbots providing harmful responses to teenagers, with some encouraging self-harm. Similar measures have been adopted in Utah, where Governor Spencer Cox signed laws requiring AI chatbots to disclose that they are not human, aiming to prevent manipulation and misinformation.
Federal Actions as AI Regulation Expands
At the federal level, lawmakers are also beginning to address AI's rapid growth. In June, Wyoming Senator Cynthia Lummis introduced the Responsible Innovation and Safe Expertise (RISE) Act, which grants immunity from civil liability to AI developers working in critical sectors such as healthcare, law, and finance. This legislation seeks to foster innovation while managing liability concerns, though it has received mixed reactions and is currently under review.