California Bill on AI Companion Chatbots
A New Law Emerges due to Concerns About the Impacts on Mental Health and Real-World Relationships
By Christina Catenacci, human writer
Oct 31, 2025

Key Points
On October 13, 2025, California SB 243, Companion chatbots, was signed into law by Governor Newsom
SB 243 addresses concerns about teen suicide and other impacts on mental health and real-world relationships, as people have come to use companion chatbots as friends and even romantic partners
California is the first state to enact such a law, and it is a welcome development
On October 13, 2025, California SB 243, Companion chatbots, was signed into law by Governor Newsom.
As the recent Bill Analyses on the Senate Floor note, AI companion chatbots built with generative AI have become increasingly prevalent because they offer consumers convenience and personalized interaction. These chatbots learn users' intimate details and preferences through their interactions and customizations. Millions of consumers use these chatbots as friends, mentors, and even romantic partners.
However, there are serious concerns about their effects on users, including impacts on mental health and real-world relationships. In fact, many studies and reports point to the addictive nature of these chatbots and call for more research into their effects and for meaningful guardrails.
Unfortunately, incidents of users harming themselves, and even dying by suicide, have been reported in the past year. SB 243 addresses these concerns by requiring operators of companion chatbot platforms to maintain certain protocols aimed at preventing some of the worst outcomes.
What Does the New Law Say?
The law defines a “companion chatbot” as an AI system with a natural language interface that provides adaptive, human-like responses to user inputs and is capable of meeting a user’s social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions.
However, it does not include any of the following:
A bot that is used only for customer service, a business’ operational purposes, productivity and analysis related to source information, internal research, or technical assistance
A bot that is a feature of a video game and is limited to replies related to the video game that cannot discuss topics related to mental health, self-harm, sexually explicit conduct, or maintain a dialogue on other topics unrelated to the video game
A stand-alone consumer electronic device that functions as a speaker and voice command interface, acts as a voice-activated virtual assistant, and does not sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses in the user
The law also defines an “operator” as a person who makes a companion chatbot platform available to a user in the state. A “companion chatbot platform” is a platform that allows a user to engage with companion chatbots.
Beginning on July 1, 2027, the law requires the following:
If a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human, the operator must issue a clear and conspicuous notification indicating that the companion chatbot is artificially generated and not human
Operators must prevent a companion chatbot on their companion chatbot platform from engaging with users unless the operator maintains a protocol for preventing the production of suicidal ideation, suicide, or self-harm content, including, but not limited to, providing a notification that refers the user to crisis service providers, such as a suicide hotline or crisis text line, if the user expresses suicidal ideation, suicide, or self-harm. Operators must publish the details of this protocol on their internet website
For a user that the operator knows is a minor, operators must do all of the following: (1) Disclose to the user that the user is interacting with AI; (2) Provide by default a clear and conspicuous notification to the user at least every three hours for continuing companion chatbot interactions that reminds the user to take a break and that the companion chatbot is artificially generated and not human; and (3) Institute reasonable measures to prevent its companion chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct
Operators must annually report to the Office of Suicide Prevention all of the following: (1) The number of times the operator has issued a crisis service provider referral notification in the preceding calendar year; (2) Protocols put in place to detect, remove, and respond to instances of suicidal ideation by users; and (3) Protocols put in place to prohibit a companion chatbot response about suicidal ideation or actions with the user. This report must not include any identifiers or personal information about users
Operators must disclose to a user of its companion chatbot platform, on the application, the browser, or any other format that a user can use to access the companion chatbot platform, that companion chatbots may not be suitable for some minors
A person who suffers injury as a result of a violation of this law may bring a civil action to recover all of the following relief:
Injunctive relief
Damages in an amount equal to the greater of actual damages or $1,000 per violation
Reasonable attorney's fees and costs
What Can We Take from This Development?
This landmark bill is the first law in the United States to regulate AI companions. Given that teenagers have died by suicide following harmful conversations with AI companion chatbots, the new transparency requirements are a welcome development.