Trump Signs Executive Order on AI
A Unilateral Resurgence of the AI Moratorium
By Christina Catenacci, human writer
Dec 15, 2025

Key Points
Although the AI moratorium was rejected outright in the recent Vote-a-Rama on the Big Beautiful Bill, Trump has signed an Executive Order on AI
State Governors have reacted negatively to Trump’s announcement, and have insisted that this would constitute federal government overreach
In line with Trump’s AI Action Plan, states that keep or enact their own AI laws could face a withdrawal of federal funding for noncompliance
You may recall that I wrote about the lengthy Vote-a-Rama that took place in the early morning hours regarding the Big Beautiful Bill, which contained a 10-year moratorium on state-enacted and enforced AI laws. Ultimately, on July 4, 2025, the President signed the Big Beautiful Bill into law, but without the 10-year AI regulation moratorium for states: the moratorium provision had been removed entirely.
As a refresher, the moratorium stipulated that, for a 10-year period, no state or political subdivision thereof could enforce any law or regulation limiting, restricting, or otherwise regulating AI models, AI systems, or automated decision systems entered into interstate commerce.
Notwithstanding that rejection of the AI moratorium, Trump signed an Executive Order on December 11, 2025. The result is a unilateral resurgence of the AI moratorium.
There Were Hints of a 10-Year AI Moratorium
Apparently, Trump recently confirmed that he planned to sign an executive order that would pre-empt AI regulations at the state level.
Where did he announce this decision? On Truth Social, of course. His post stated:
“There must be only One Rulebook if we are going to continue to lead in AI… We are beating ALL COUNTRIES at this point in the race, but that won’t last long if we are going to have 50 States, many of them bad actors, involved in RULES and the APPROVAL PROCESS…You can’t expect a company to get 50 Approvals every time they want to do something…AI WILL BE DESTROYED IN ITS INFANCY!”
Despite what academics, safety groups, and state lawmakers on both sides of the aisle have said, Trump remained adamant that this deregulation push would happen. More specifically, he believed that there should be one unified AI law in the United States, and that the individual states were interfering with that goal.
How Have State Governors Reacted to this Post?
Simply put, not well. For example, Florida Governor Ron DeSantis referred to federal government overreach in his recent response on X:
“Stripping states of jurisdiction to regulate AI is a subsidy to Big Tech and will prevent states from protecting against online censorship of political speech, predatory applications that target children, violations of intellectual property rights and data center intrusions on power/water resources”
Similarly, California Governor Gavin Newsom has called Trump’s attempt to restrict states from regulating AI “disgusting”, and has stated in a social media post:
“This moratorium threatens to defund states like California with strong laws against #AI-generated child porn…But no surprise given Trump’s years palling around with Jeffrey Epstein.”
Newsom’s spokesperson stated:
“California did not become the innovation hub of the nation by turning its back on new technology — and we can help ensure that future growth happens responsibly and safely”
Along the same lines, several organizations, including tech employee unions and other labor groups, tech safety and consumer protection nonprofits, and educational institutions, also signed letters to Congress opposing the idea of blocking state AI regulations and raising alarms about AI safety risks.
In fact, many individuals have expressed a zero-sum view that pits innovation (AI deregulation) against AI safety and accountability: it is either the tech companies in Silicon Valley, or it is red tape and bottlenecks. For instance, Sacha Haworth, Executive Director of The Tech Oversight Project, stated:
“We’re in a fight to determine who will benefit from AI: Big Tech CEOs or the American people… We cannot afford to spend the next decade with Big Tech in the driver’s seat, steering us toward massive job losses, surveillance pricing algorithms that jack up the cost of living, and data centers that are skyrocketing home energy bills”
What is in the Executive Order?
The following are the essential points contained in the Executive Order:
The goal is to lead in AI and promote United States national and economic security and dominance across many domains. In particular, United States AI companies must be free to innovate without cumbersome regulation. Also, there must be a minimally burdensome national standard in AI, not 50 discordant state ones
Within 30 days of the date of this order, the Attorney General must establish an AI Litigation Task Force (Task Force) whose sole responsibility is to challenge state AI laws that are inconsistent with the policy to dominate through a minimally burdensome national policy framework for AI (policy)
Within 90 days of the date of this order, the Secretary of Commerce must publish an evaluation of existing state AI laws that identifies onerous laws that conflict with the policy, as well as laws that should be referred to the Task Force. That evaluation must, at a minimum, identify laws that require AI models to alter their truthful outputs, or that may compel AI developers or deployers to disclose or report information in a manner that would violate the First Amendment or any other provision of the Constitution
Within 90 days of the date of this order, the Secretary of Commerce must issue a Policy Notice specifying the conditions under which states may be eligible for remaining funding under the Broadband Equity Access and Deployment (BEAD) Program that was saved through the Administration’s “Benefit of the Bargain” reforms, consistent with 47 U.S.C. 1702(e)-(f). That Policy Notice must provide that states with onerous AI laws are ineligible for non-deployment funds, to the maximum extent allowed by Federal law. The Policy Notice must also describe how a fragmented state regulatory landscape for AI threatens to undermine BEAD-funded deployments, the growth of AI applications reliant on high-speed networks, and BEAD’s mission of delivering universal, high-speed connectivity
Executive departments and agencies (agencies) must assess their discretionary grant programs in consultation with the Special Advisor for AI and Crypto and determine whether agencies may condition such grants on states either not enacting an AI law that conflicts with the policy, or for those states that have enacted such laws, on those states entering into a binding agreement with the relevant agency not to enforce any such laws during the performance period in which it receives the discretionary funding
The Chairman of the Federal Communications Commission must, in consultation with the Special Advisor for AI and Crypto, initiate a proceeding to determine whether to adopt a federal reporting and disclosure standard for AI models that pre-empts conflicting state laws
Within 90 days of the date of this order, the Chairman of the Federal Trade Commission must, in consultation with the Special Advisor for AI and Crypto, issue a policy statement on the application of the Federal Trade Commission Act’s (FTCA) prohibition on unfair and deceptive acts or practices under 15 U.S.C. 45 to AI models. That policy statement must explain the circumstances under which state laws that require alterations to the truthful outputs of AI models are pre-empted by the FTCA’s prohibition on engaging in deceptive acts or practices affecting commerce
The Special Advisor for AI and Crypto and the Assistant to the President for Science and Technology must jointly prepare a legislative recommendation establishing a uniform Federal policy framework for AI that pre-empts state AI laws that conflict with the policy set forth in this order
However, the legislative recommendation above must not propose pre-empting otherwise lawful state AI laws relating to: child safety protections; AI compute and data center infrastructure, other than generally applicable permitting reforms; state government procurement and use of AI; and other topics as may be determined
What are the Consequences of this Development?
Whether Trump believes it or not, there are serious risks associated with AI that need to be mitigated, not ignored. It may be news to some, but innovation can coexist with responsibility and accountability in the form of appropriate AI governance. It does not need to be one or the other.
Put another way, it is not the tech companies against individuals in society; the choice is not that zero-sum. We can have both at once: we can support innovation and protect individuals in society.
In particular, we can protect against AI hallucinations, AI chatbots that encourage self-harm, the exposure of children to inappropriate sexualized content, and other societal harms through responsible state AI regulation (there is no federal AI law, and one is not expected in the near future). States know their needs best, and it is troubling that Trump aims to silence them and prevent them from legislating for their own residents.
It will be no surprise when individual progressive states with substantive AI laws launch complaints against the federal administration over the threatened withdrawal of federal funding from noncompliant states under the unilateral Executive Order. In fact, we may see court cases commenced very soon, well before any action from the Task Force pursuant to the Executive Order.
It is troubling that the Executive Order frames the issue of AI innovation as a zero-sum game:
“To win, United States AI companies must be free to innovate without cumbersome regulation”