
Newsom Signs SB 53 Into Law

The Transparency in Frontier Artificial Intelligence Act is Born

By Christina Catenacci, human writer

Oct 10, 2025

Key Points


  1. On September 29, 2025, Governor Newsom signed into law Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act


  2. The new provisions impose transparency and safety requirements on large developers of AI models


  3. State-level regulation by other states is likely to follow


On September 29, 2025, Governor Newsom signed into law Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act (Act). Essentially, the bill adds provisions to Division 8 of the Business and Professions Code. These provisions impose transparency and safety requirements on large developers of AI models.


What does the law say?


The following are the main safety and transparency features contained in the Transparency in Frontier Artificial Intelligence Act:


Definitions:


  • Defines a “foundation model” as an AI model that is all of the following: (1) Trained on a broad data set; (2) Designed for generality of output; and (3) Adaptable to a wide range of distinctive tasks


  • Defines a “frontier model” as a foundation model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations. The quantity of computing power must include the computing used for the original training run and for any subsequent fine-tuning, reinforcement learning, or other material modifications the developer applies to a preceding foundation model


  • Defines a “frontier developer” as a person who has trained, or initiated the training of, a frontier model and who has used, or intends to use, at least as much computing power in that training as the technical specifications in the definition of a frontier model require


  • Defines a “large frontier developer” as a frontier developer that, together with its affiliates, collectively had annual gross revenues in excess of $500,000,000 in the preceding calendar year (the numeric thresholds in these definitions are illustrated in the sketch after this list)


  • Defines a “frontier AI framework” as documented technical and organizational protocols to manage, assess, and mitigate catastrophic risks
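
For readers who want the arithmetic made concrete, here is a minimal sketch in Python of the numeric tests in the definitions above. The only figures taken from the Act are the 10^26-operation compute threshold (which aggregates the original training run with any subsequent material modifications) and the $500,000,000 revenue test; every function and variable name is hypothetical, and this is an illustration, not a compliance tool:

```python
# Illustrative sketch of the statutory arithmetic in the definitions above.
# All names are hypothetical.

FRONTIER_COMPUTE_THRESHOLD = 10 ** 26      # integer or floating-point operations
LARGE_DEVELOPER_REVENUE_USD = 500_000_000  # preceding calendar year, with affiliates


def is_frontier_model(original_training_ops: float, modification_ops: float = 0.0) -> bool:
    """The Act counts the original training run plus any subsequent fine-tuning,
    reinforcement learning, or other material modifications."""
    return (original_training_ops + modification_ops) > FRONTIER_COMPUTE_THRESHOLD


def is_large_frontier_developer(annual_gross_revenue_usd: float) -> bool:
    """A frontier developer whose revenues, together with its affiliates,
    exceeded $500,000,000 in the preceding calendar year."""
    return annual_gross_revenue_usd > LARGE_DEVELOPER_REVENUE_USD


# A run of 9e25 operations plus 2e25 of fine-tuning crosses the 1e26 threshold:
print(is_frontier_model(9e25, 2e25))       # True
print(is_large_frontier_developer(6.2e8))  # True ($620,000,000 > $500,000,000)
```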


Requirements:


  • Each large frontier developer must write, implement, comply with, and clearly and conspicuously publish on its internet website a frontier AI framework that applies to its frontier models and describes how it approaches all of the following:
    (1) Incorporating national standards, international standards, and industry-consensus best practices into its frontier AI framework;
    (2) Defining and assessing thresholds used to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk, which may include multiple-tiered thresholds;
    (3) Applying mitigations to address the potential for catastrophic risks based on the results of assessments;
    (4) Reviewing assessments and the adequacy of mitigations as part of the decision to deploy a frontier model or use it extensively internally;
    (5) Using third parties to assess the potential for catastrophic risks and the effectiveness of mitigations of those risks;
    (6) Revisiting and updating the frontier AI framework, including any criteria that trigger updates and how the developer determines when its frontier models are substantially modified enough to require disclosures;
    (7) Cybersecurity practices to secure unreleased model weights from unauthorized modification or transfer by internal or external parties;
    (8) Identifying and responding to critical safety incidents;
    (9) Instituting internal governance practices to ensure implementation of these processes; and
    (10) Assessing and managing catastrophic risk resulting from the internal use of its frontier models, including risks resulting from a frontier model circumventing oversight mechanisms

 

  • Large frontier developers must review and, as appropriate, update their frontier AI frameworks at least once per year


  • Large frontier developers must clearly and conspicuously publish a modified frontier AI framework, together with a justification for the modification, within 30 days of making the modification


  • Before, or concurrently with, deploying a new frontier model or a substantially modified version of an existing frontier model, a frontier developer must clearly and conspicuously publish on its internet website a transparency report containing all of the following (sketched as a data structure after this list): (A) The internet website of the frontier developer; (B) A mechanism that enables a natural person to communicate with the frontier developer; (C) The release date of the frontier model; (D) The languages supported by the frontier model; (E) The modalities of output supported by the frontier model; (F) The intended uses of the frontier model; and (G) Any generally applicable restrictions or conditions on uses of the frontier model


  • Before, or concurrently with, deploying a new frontier model or a substantially modified version of an existing frontier model, a large frontier developer must include in the transparency report required above summaries of all of the following: (A) Assessments of catastrophic risks from the frontier model conducted pursuant to the large frontier developer’s frontier AI framework; (B) The results of those assessments; (C) The extent to which third-party evaluators were involved; and (D) Other steps taken to fulfill the requirements of the frontier AI framework with respect to the frontier model

 

  • A frontier developer that publishes the information described above as part of a larger document, including a system card or model card, is deemed to be in compliance with the above requirements regarding deploying the new frontier model. Moreover, frontier developers are encouraged (but not required) to make disclosures described in this part that are consistent with, or superior to, industry best practices
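
For those who prefer to see the required transparency-report contents as a data structure, here is a minimal sketch, assuming Python 3.9+. The class and field names are hypothetical; the fields simply mirror items (A) through (G) above, plus the additional summaries required of large frontier developers:

```python
from dataclasses import dataclass, field


@dataclass
class TransparencyReport:
    """Hypothetical container mirroring items (A)-(G) required of all frontier developers."""
    developer_website: str            # (A) internet website of the frontier developer
    contact_mechanism: str            # (B) way for a natural person to reach the developer
    release_date: str                 # (C) release date of the frontier model
    supported_languages: list[str]    # (D) languages supported by the model
    output_modalities: list[str]      # (E) modalities of output supported
    intended_uses: list[str]          # (F) intended uses of the model
    use_restrictions: list[str]       # (G) generally applicable restrictions or conditions


@dataclass
class LargeDeveloperTransparencyReport(TransparencyReport):
    """Additional summaries required of large frontier developers."""
    risk_assessment_summaries: list[str] = field(default_factory=list)  # (A) assessments conducted
    assessment_results: list[str] = field(default_factory=list)         # (B) results of those assessments
    third_party_involvement: str = ""                                   # (C) extent of third-party evaluators
    other_framework_steps: list[str] = field(default_factory=list)      # (D) other steps under the framework
```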


The Office of Emergency Services:


  • Defines “catastrophic risk” as a foreseeable and material risk that a frontier developer’s development, storage, use, or deployment of a foundation model will materially contribute to the death of, or serious injury to, more than 50 people, or to more than $1,000,000,000 in damage to, or loss of, property, arising from a single incident involving a foundation model doing any of the following: (A) Providing expert-level assistance in the creation or release of a chemical, biological, radiological, or nuclear weapon; (B) Engaging in conduct with no meaningful human oversight, intervention, or supervision that is either a cyberattack or, if committed by a human, would constitute the crime of murder, assault, extortion, or theft, including theft by false pretense; or (C) Evading the control of its frontier developer or user


  • Defines a “critical safety incident” as any of the following: (1) Unauthorized access to, modification of, or exfiltration of the model weights of a foundation model that results in death, bodily injury, or damage to, or loss of, property; (2) Harm resulting from the materialization of a catastrophic risk; (3) Loss of control of a foundation model causing death or bodily injury; or (4) A foundation model that uses deceptive techniques against the frontier developer to subvert the controls or monitoring of its frontier developer outside of the context of an evaluation designed to elicit this behavior and in a manner that demonstrates materially increased catastrophic risk


  • Large frontier developers must transmit to the Office of Emergency Services a summary of any assessment of catastrophic risk resulting from internal use of their frontier models every three months, or pursuant to another reasonable schedule that the developer specifies and communicates in writing to the Office of Emergency Services, with written updates as appropriate


  • Frontier developers are not allowed to make materially false or misleading statements about catastrophic risk from their frontier models or their management of catastrophic risk. Likewise, large frontier developers are not allowed to make materially false or misleading statements about their implementation of, or compliance with, their frontier AI frameworks


  • When frontier developers publish documents to comply with the above requirements, they may make redactions to those documents to protect trade secrets, cybersecurity, public safety, or the national security of the United States, or to comply with any federal or state law. Where a frontier developer redacts information in a document, it must describe the character and justification of the redaction in any published version of the document, to the extent permitted by the concerns that justify the redaction, and must retain the unredacted information for five years


  • The Office of Emergency Services must establish a mechanism to be used by a frontier developer or a member of the public to report a critical safety incident that includes all of the following: (1) The date of the critical safety incident; (2) The reasons the incident qualifies as a critical safety incident; (3) A short and plain statement describing the critical safety incident; and (4) Whether the incident was associated with internal use of a frontier model


  • Similarly, the Office of Emergency Services must establish a mechanism to be used by a large frontier developer to confidentially submit summaries of any assessments of the potential for catastrophic risk resulting from internal use of its frontier models


  • The Office of Emergency Services must take all necessary precautions to limit access to any reports related to internal use of frontier models to only personnel with a specific need to know the information and to protect the reports from unauthorized access


  • Frontier developers must report any critical safety incident pertaining to one or more of their frontier models to the Office of Emergency Services within 15 days of discovering the incident


  • If a frontier developer discovers that a critical safety incident poses an imminent risk of death or serious physical injury, the frontier developer must disclose that incident within 24 hours to an authority, including any law enforcement agency or public safety agency with jurisdiction, that is appropriate based on the nature of that incident and as required by law (an illustrative deadline calculation follows this list). That said, frontier developers are encouraged (but not required) to report critical safety incidents pertaining to foundation models that are not frontier models


  • The Office of Emergency Services must review critical safety incident reports submitted by frontier developers and may review reports submitted by members of the public


  • The Attorney General or the Office of Emergency Services may transmit reports of critical safety incidents and reports from covered employees. Either entity must strongly consider any risks related to trade secrets, public safety, cybersecurity of a frontier developer, or national security when transmitting reports


  • Beginning January 1, 2027, and annually thereafter, the Office of Emergency Services must produce a report with anonymized and aggregated information about critical safety incidents that it has reviewed since the preceding report. However, the report must not include information that would compromise the trade secrets or cybersecurity of a frontier developer, public safety, or the national security of the United States, or that would be prohibited by any federal or state law. The Office of Emergency Services must transmit the report to the Legislature and to the Governor


  • Frontier developers that intend to comply with the above provisions by meeting a federal law, regulation, or guidance document designated for this purpose by the Office of Emergency Services must declare that intent to the Office of Emergency Services. Once they make this declaration: (A) The frontier developer is deemed in compliance to the extent that it meets the standards of, or complies with the requirements imposed or stated by, the designated federal law, regulation, or guidance document, until the developer declares the revocation of that intent to the Office of Emergency Services or the Office of Emergency Services revokes a relevant regulation; and (B) The failure of a frontier developer to meet the standards of, or comply with the requirements stated by, the designated federal law, regulation, or guidance document constitutes a violation of the Act


  • On or before January 1, 2027, and annually thereafter, the Department of Technology must assess recent relevant evidence and developments and produce a report recommending whether and how to update any of the following definitions: frontier model, frontier developer, and large frontier developer


  • Beginning January 1, 2027, and annually thereafter, the Attorney General must produce a report with anonymized and aggregated information about reports from covered employees. The report must not include information that would compromise the trade secrets or cybersecurity of a frontier developer, the confidentiality of a covered employee, public safety, or the national security of the United States, or that would be prohibited by any federal or state law. The Attorney General must also transmit the report to the Legislature and to the Governor
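
As an illustration of the two reporting windows described in this list (15 days for an ordinary critical safety incident, and 24 hours where the incident poses an imminent risk of death or serious physical injury), here is a minimal sketch of the deadline arithmetic; the function name and inputs are hypothetical:

```python
from datetime import datetime, timedelta


def report_deadline(discovered_at: datetime, imminent_risk: bool) -> datetime:
    """Latest reporting time after a frontier developer discovers a critical safety incident."""
    if imminent_risk:
        # Imminent risk of death or serious physical injury: disclose to an
        # appropriate authority within 24 hours.
        return discovered_at + timedelta(hours=24)
    # Otherwise: report to the Office of Emergency Services within 15 days.
    return discovered_at + timedelta(days=15)


discovered = datetime(2026, 3, 1, 9, 30)
print(report_deadline(discovered, imminent_risk=False))  # 2026-03-16 09:30:00
print(report_deadline(discovered, imminent_risk=True))   # 2026-03-02 09:30:00
```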


Enforcement:


  • The consequences of noncompliance are steep: a large frontier developer that fails to publish or transmit a required document, makes a statement in violation of the Act, fails to report an incident, or fails to comply with its own frontier AI framework is subject to a civil penalty in an amount, dependent upon the severity of the violation, that does not exceed $1,000,000 per violation. A civil penalty must be recovered in a civil action brought only by the Attorney General


CalCompute:


  • Moreover, the Act establishes within the Government Operations Agency a consortium that must develop a framework for the creation of a public cloud computing cluster to be known as CalCompute. At a minimum, the framework must advance the development and deployment of AI that is safe, ethical, equitable, and sustainable by fostering research and innovation that benefits the public and by enabling equitable innovation through expanded access to computational resources


  • Additionally, the consortium must make reasonable efforts to ensure that CalCompute is established within the University of California to the extent possible. CalCompute must include, but not be limited to, all of the following: (1) A fully owned and hosted cloud platform; (2) Necessary human expertise to operate and maintain the platform; and (3) Necessary human expertise to support, train, and facilitate the use of CalCompute


  • On or before January 1, 2027, the Government Operations Agency must submit a report from the consortium to the Legislature with the framework developed for the creation and operation of CalCompute. The report must include all of the following elements:
    (A) A landscape analysis of California’s current public, private, and nonprofit cloud computing platform infrastructure;
    (B) An analysis of the cost to the state to build and maintain CalCompute and recommendations for potential funding sources;
    (C) Recommendations for the governance structure and ongoing operation of CalCompute;
    (D) Recommendations for the parameters for use of CalCompute, including, but not limited to, a process for determining which users and projects will be supported by CalCompute;
    (E) An analysis of the state’s technology workforce and recommendations for equitable pathways to strengthen the workforce, including the role of CalCompute;
    (F) A detailed description of any proposed partnerships, contracts, or licensing agreements with nongovernmental entities, including, but not limited to, technology-based companies, that demonstrates compliance; and
    (G) Recommendations regarding how the creation and ongoing management of CalCompute can prioritize the use of the current public sector workforce

 

  • The consortium must consist of 14 members, as follows: (1) Four representatives of the University of California and other public and private academic research institutions and national laboratories, appointed by the Secretary of Government Operations; (2) Three representatives of impacted workforce labor organizations, appointed by the Speaker of the Assembly; (3) Three representatives of stakeholder groups with relevant expertise and experience, including, but not limited to, ethicists, consumer rights advocates, and other public interest advocates, appointed by the Senate Rules Committee; and (4) Four experts in technology and AI, appointed by the Secretary of Government Operations to provide technical assistance

 

  • If CalCompute is established within the University of California, the University of California may receive private donations for the purposes of implementing CalCompute


Whistleblower Protections:


  • Frontier developers are not allowed to make, adopt, enforce, or enter into a rule, regulation, policy, or contract that prevents a covered employee from disclosing, or retaliates against a covered employee for disclosing, information to the Attorney General, a federal authority, a person with authority over the covered employee, or another covered employee who has authority to investigate, discover, or correct the reported issue, if the covered employee has reasonable cause to believe that the information discloses either of the following: (1) The frontier developer’s activities pose a specific and substantial danger to the public health or safety resulting from a catastrophic risk; or (2) The frontier developer has violated the Act

 

  • Frontier developers are similarly not allowed to enter into a contract that prevents a covered employee from making a disclosure that is protected under the whistleblowing provisions

 

  • Frontier developers must provide a clear notice to all covered employees of their rights and responsibilities under this Act, including by doing either of the following: (1) At all times posting and displaying within any workplace maintained by the frontier developer a notice to all covered employees of their rights under this section, ensuring that any new covered employee receives equivalent notice, and ensuring that any covered employee who works remotely periodically receives an equivalent notice; or (2) At least once each year, providing written notice to each covered employee of the covered employee’s rights under this section and ensuring that the notice is received and acknowledged by all of those covered employees

 

  • Large frontier developers must provide a reasonable internal process through which a covered employee may anonymously disclose information to the large frontier developer if the covered employee believes in good faith that the information indicates that the large frontier developer’s activities present a specific and substantial danger to the public health or safety resulting from a catastrophic risk or that the large frontier developer violated the Act

 

  • Disclosures made, and responses provided, through this process must be shared with officers and directors of the large frontier developer at least once each quarter. However, if a covered employee has alleged wrongdoing by an officer or director of the large frontier developer in a disclosure or response, this rule does not apply with respect to that officer or director

 

  • Courts are authorized to award reasonable attorney’s fees to a plaintiff who brings a successful action for a violation of this section

 

  • Once it has been demonstrated by a preponderance of the evidence that an activity proscribed by this section was a contributing factor in the alleged prohibited action against the covered employee, the frontier developer has the burden of proof to demonstrate by clear and convincing evidence that the alleged action would have occurred for legitimate, independent reasons even if the covered employee had not engaged in activities protected by this section

 

  • Covered employees may petition the superior court in any county where the violation in question is alleged to have occurred, or wherein the person resides or transacts business, for appropriate temporary or preliminary injunctive relief. The court must consider the chilling effect on other covered employees asserting their rights under this section in determining whether temporary injunctive relief is just and proper

 

  • An order authorizing temporary injunctive relief must remain in effect until an administrative or judicial determination or citation has been issued, or until the completion of a review under the Act, whichever is longer, or at a certain time set by the court. Thereafter, a preliminary or permanent injunction may be issued if it is shown to be just and proper. Any temporary injunctive relief shall not prohibit a frontier developer from disciplining or terminating a covered employee for conduct that is unrelated to the claim of the retaliation. The remedies provided by this section are cumulative to each other


As can be seen from the above discussion, it is good news that Governor Newsom was able to enact critical AI-protective legislation, notwithstanding the recent (and failed) attempts by the federal government to impose a moratorium on state-level AI regulation. This appears to signal that more progressive states can create and enforce their own AI legislation without unnecessary interference from the current administration.


Given this fact, it will not be surprising to see states with similar values and approaches to AI boldly moving ahead with their legislative agendas and proposing new AI bills.


What can we take from this development?


Unlike SB 1047 (California’s first legislative attempt at AI safety and transparency), which I wrote about here, SB 53 made it through to the end of the legislative process. It was designed to enhance online safety by installing commonsense guardrails on the development of frontier AI models, helping to build public trust while continuing to spur innovation in these new technologies.


Additionally, the bill helps to advance California’s position as a national and world leader in responsible and ethical AI. California continues to dominate the AI sector: it is the birthplace of AI, and the state is home to 32 of the 50 top AI companies worldwide. Governor Newsom stated:


“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance. AI is the new frontier in innovation, and California is not only here for it – but stands strong as a national leader by enacting the first-in-the-nation frontier AI safety legislation that builds public trust as this emerging technology rapidly evolves.”


As a result of this development, frontier AI developers will need to be transparent, safe, accountable, and responsive. Further, CalCompute, the public computing cluster whose framework the consortium will develop, is intended to advance the development and deployment of AI that is safe, ethical, equitable, and sustainable by fostering research and innovation.


Moreover, the bill fills the gap left by Congress, which so far has not passed broad AI legislation, and provides a model for American states (and Canada) to follow.
