
New York Governor Hochul Signs AI Safety and Transparency Bill into Law

New Law Takes Effect January 1, 2027

By Christina Catenacci, human writer

Jan 23, 2026

Key Points


  1. On December 19, 2025, Governor Hochul signed New York's Senate Bill S6953B into law


  2. The new law requires large developers to take the most basic commonsense steps when training an AI model


  3. New York has joined California, a progressive state at the forefront of AI regulation


On December 19, 2025, Governor Hochul signed New York's Senate Bill S6953B into law. Similar to California’s new AI law, New York’s AI law focuses on safety and transparency by requiring safety reports for powerful frontier artificial intelligence models in order to limit critical harm.


It is interesting to see that New York enacted this AI law notwithstanding President Trump’s recent attempt to thwart the progress of legislative reform via the December 11, 2025 Executive Order, which I wrote about here.


What Is the New AI Law About?


The preamble aptly notes that the law has not kept pace with rapidly developing AI technology. It also observes that even opening a daycare center requires a safety plan. To this end, the new law requires large developers to take the most basic commonsense steps when training an AI model:


  • Have a safety plan to prevent severe risks


  • Conspicuously publish a redacted version of the safety plan


  • Disclose major security incidents so that no one has to make the same mistake twice


It also notes that in 2023, more than a thousand experts, including the CEOs of Google DeepMind, Anthropic, and OpenAI, as well as many world-leading academics, signed a letter stating that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". As a result, the goal is to reduce risks in a targeted and surgical manner by limiting the law to only a small set of very severe risks; the law will not apply to most AI companies. Rather, it removes economic incentives to cut corners or abandon safety plans for companies that cause over $1 billion in damage or hundreds of deaths or injuries.


Accordingly, the law does not address issues involving bias, authenticity, workforce impacts, and other concerns that need to be handled with additional legislation.


What Does the New AI Law Require?


The law defines “critical harm” as the death or serious injury of 100 or more people, or at least $1 billion in damages to rights in money or property, caused by a large developer’s use, storage, or release of a frontier model through either (1) the creation of a chemical, biological, radiological, or nuclear weapon, or (2) an AI model engaging in conduct that acts with no meaningful human intervention and would, if committed by a human, constitute a crime requiring intent, recklessness, or gross negligence (or the solicitation or aiding and abetting of such a crime). However, harm inflicted by a human actor is not considered to be the result of a developer’s activities unless those activities were a substantial factor in bringing about the harm, were reasonably foreseeable, and could have been prevented.
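For readers who find the definition easier to parse as a rule, here is a minimal illustrative sketch in Python of how the harm threshold and the two causal pathways combine. The function and parameter names are my own, not the statute’s, and the sketch omits the human-actor carve-out described above.

```python
def is_critical_harm(deaths_or_serious_injuries: int,
                     damages_usd: float,
                     via_cbrn_weapon_creation: bool,
                     via_autonomous_criminal_conduct: bool) -> bool:
    """Illustrative reading of the 'critical harm' definition.

    Requires both (a) a qualifying level of harm and (b) one of the two
    statutory causal pathways. The carve-out for harm inflicted by a
    human actor is omitted here for brevity.
    """
    harm_threshold_met = (deaths_or_serious_injuries >= 100
                          or damages_usd >= 1_000_000_000)
    qualifying_pathway = via_cbrn_weapon_creation or via_autonomous_criminal_conduct
    return harm_threshold_met and qualifying_pathway
```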

The law also defines a “frontier model” as an AI model trained using more than 10^26 computational operations, the compute cost of which exceeds $100 million, or an AI model produced by applying knowledge distillation to a frontier model, provided that the compute cost exceeds $5 million. To be clear, the law only applies to frontier models deployed in whole or in part in New York.
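Similarly, the two alternative routes to “frontier model” status can be read as a simple compute-and-cost check. The sketch below is illustrative only, and the names are hypothetical rather than drawn from the law.

```python
def is_frontier_model(training_ops: float,
                      compute_cost_usd: float,
                      distilled_from_frontier_model: bool = False) -> bool:
    """Illustrative check of the two alternative 'frontier model' thresholds.

    Route 1: trained using more than 10^26 computational operations with a
             compute cost exceeding $100 million.
    Route 2: produced by knowledge distillation of a frontier model with a
             compute cost exceeding $5 million.
    """
    trained_at_scale = training_ops > 1e26 and compute_cost_usd > 100_000_000
    distilled_at_scale = distilled_from_frontier_model and compute_cost_usd > 5_000_000
    return trained_at_scale or distilled_at_scale


# Example: a distilled model with a $6 million compute cost would qualify.
print(is_frontier_model(training_ops=1e24, compute_cost_usd=6_000_000,
                        distilled_from_frontier_model=True))  # True
```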


Moreover, the law defines a “safety incident” as a known incident of critical harm or an incident that has an increased risk of critical harm, such as: a frontier model autonomously engaging in behaviour other than at the request of a user; theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights of a frontier model; the critical failure of any technical or administrative controls (including controls limiting the ability to modify a frontier model); or unauthorized use of a frontier model.
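Read as a list, the definition groups safety incidents into a handful of categories. The grouping and names below are my own illustrative shorthand, not terms used in the statute.

```python
from enum import Enum, auto

class SafetyIncidentCategory(Enum):
    """Illustrative grouping of the incident types listed in the definition."""
    KNOWN_CRITICAL_HARM = auto()               # an actual incident of critical harm
    UNREQUESTED_AUTONOMOUS_BEHAVIOUR = auto()  # model acts other than at a user's request
    MODEL_WEIGHTS_COMPROMISED = auto()         # theft, misappropriation, malicious use,
                                               # inadvertent release, unauthorized access, or escape
    CONTROL_FAILURE = auto()                   # critical failure of technical or administrative controls
    UNAUTHORIZED_USE = auto()                  # unauthorized use of a frontier model
```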


The following are transparency requirements regarding frontier model training and use, which a large developer must satisfy before deploying the frontier model:


  • Implement a written safety and security protocol


  • Retain an unredacted copy of the safety and security protocol (including records and dates of any updates) for as long as the frontier model is deployed plus five years


  • Conspicuously publish a copy of the safety and security protocol with appropriate redactions and transmit a copy of the redacted protocol to the AG and Division of Homeland Security and Emergency Services—and grant access to the AG


  • Record and, when possible, retain information on the specific tests and test results regarding assessments of the frontier model for as long as the frontier model is deployed plus five years, and implement appropriate safeguards


The following are prohibitions:


  • A large developer must not deploy a frontier model if doing so would create unreasonable risk of critical harm


  • A large developer must not knowingly make false or materially misleading statements or omissions in or regarding documents produced


The AG can bring a civil action for a violation, with a civil penalty not exceeding $10 million for a first violation and not exceeding $30 million for any subsequent violation.


Large developers must also do the following:


  • Conduct an annual review of any required safety and security protocol and, if necessary, make modifications. If modifications are made, the large developer must publish the updated protocol in the same manner as before


  • Disclose each safety incident affecting the frontier model to the AG and Division of Homeland Security and Emergency Services within 72 hours. The disclosure must include: the date of the safety incident; the reasons it qualifies as a safety incident; and a short statement describing the safety incident
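As a rough sketch of what a 72-hour disclosure must contain, the three required items map naturally onto a small record. The class and field names here are hypothetical, chosen for illustration rather than taken from the law.

```python
from dataclasses import dataclass
from datetime import date, datetime, timedelta

@dataclass
class SafetyIncidentDisclosure:
    """Illustrative record of the three items the law requires in a disclosure."""
    incident_date: date        # the date of the safety incident
    qualifying_reasons: str    # why the event qualifies as a safety incident
    description: str           # a short statement describing the safety incident

def disclosure_deadline(discovered_at: datetime) -> datetime:
    """The disclosure must reach the AG and DHSES within 72 hours."""
    return discovered_at + timedelta(hours=72)
```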


What Can We Take from This Development?


New York has joined those states that are at the forefront of AI regulation. In fact, the law has been referred to as a landmark AI safety bill because it aims to protect New Yorkers from AI risks while also supporting innovation.


Indeed, the preamble of the law referred to the January 2025 AI Action Summit’s International AI Safety Report that discussed the myriad AI risks. We can only hope that other progressive states will boldly join California and New York and enact similar safety and transparency legislation.
