- Newsom Signs Bill SB 53 Into Law | voyAIge strategy
Newsom Signs Bill SB 53 Into Law

The Transparency in Frontier Artificial Intelligence Act is Born

By Christina Catenacci, human writer
Oct 10, 2025

Key Points

- On September 29, 2025, Governor Newsom signed into law Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act
- The added provisions impose transparency and safety obligations on large developers of AI models
- State-level regulation by other states is likely to follow

On September 29, 2025, Governor Newsom signed into law Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act (Act). Essentially, the bill adds provisions to Division 8 of the Business and Professions Code. The purpose of these added provisions is to impose transparency and safety obligations on large developers of AI models.

What does the law say?

The following are the main safety and transparency features contained in the Transparency in Frontier Artificial Intelligence Act:

Definitions:

- Defines a "foundation model" as an AI model that is all of the following: (1) trained on a broad data set; (2) designed for generality of output; and (3) adaptable to a wide range of distinctive tasks
- Defines a "frontier model" as a foundation model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations. This quantity must include computing for the original training run and for any subsequent fine-tuning, reinforcement learning, or other material modifications the developer applies to a preceding foundation model
- Defines a "frontier developer" as a person who has trained, or initiated the training of, a frontier model, with respect to which the person has used, or intends to use, at least as much computing power as the threshold above specifies
- Defines a "large frontier developer" as a frontier developer that, together with its affiliates, collectively had annual gross revenues in excess of $500,000,000 in the preceding calendar year (a minimal sketch applying these two numeric thresholds appears after the requirements list below)
- Defines a "frontier AI framework" as documented technical and organizational protocols to manage, assess, and mitigate catastrophic risks

Requirements:

- Large frontier developers must write, implement, comply with, and clearly and conspicuously publish on their internet websites a frontier AI framework that applies to their frontier models and describes how the large frontier developer approaches all of the following: (1) incorporating national standards, international standards, and industry-consensus best practices into its frontier AI framework; (2) defining and assessing thresholds used to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk, which may include multiple-tiered thresholds; (3) applying mitigations to address the potential for catastrophic risks based on the results of assessments; (4) reviewing assessments and the adequacy of mitigations as part of the decision to deploy a frontier model or use it extensively internally; (5) using third parties to assess the potential for catastrophic risks and the effectiveness of mitigations of catastrophic risks; (6) revisiting and updating the frontier AI framework, including any criteria that trigger updates and how the large frontier developer determines when its frontier models are substantially modified enough to require disclosures; (7) cybersecurity practices to secure unreleased model weights from unauthorized modification or transfer by internal or external parties; (8) identifying and responding to critical safety incidents; (9) instituting internal governance practices to ensure implementation of these processes; and (10) assessing and managing catastrophic risk resulting from the internal use of its frontier models, including risks resulting from a frontier model circumventing oversight mechanisms
- Large frontier developers must review and, as appropriate, update their frontier AI frameworks at least once per year
- Large frontier developers must clearly and conspicuously publish a modified frontier AI framework and a justification for the modification within 30 days
- Before, or concurrently with, deploying a new frontier model or a substantially modified version of an existing frontier model, a frontier developer must clearly and conspicuously publish on its internet website a transparency report containing all of the following: (A) the internet website of the frontier developer; (B) a mechanism that enables a natural person to communicate with the frontier developer; (C) the release date of the frontier model; (D) the languages supported by the frontier model; (E) the modalities of output supported by the frontier model; (F) the intended uses of the frontier model; and (G) any generally applicable restrictions or conditions on uses of the frontier model
- Before, or concurrently with, deploying a new frontier model or a substantially modified version of an existing frontier model, a large frontier developer must also include in that transparency report summaries of all of the following: (A) assessments of catastrophic risks from the frontier model conducted pursuant to the large frontier developer's frontier AI framework; (B) the results of those assessments; (C) the extent to which third-party evaluators were involved; and (D) other steps taken to fulfill the requirements of the frontier AI framework with respect to the frontier model
- A frontier developer that publishes the information described above as part of a larger document, such as a system card or model card, is deemed to be in compliance with these deployment requirements
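Two of these definitions turn entirely on numbers: the 10^26-operation compute threshold (counting the original training run plus any material fine-tuning or reinforcement learning) and the $500,000,000 revenue test. As a rough illustration, and certainly not legal advice, the two tests can be expressed mechanically as follows; the constant and function names are mine, and only the two figures come from the Act.

```python
# Hedged sketch of SB 53's two quantitative thresholds. Names are illustrative.

FRONTIER_COMPUTE_THRESHOLD_OPS = 1e26                 # integer or floating-point operations
LARGE_DEVELOPER_REVENUE_THRESHOLD_USD = 500_000_000   # preceding calendar year

def is_frontier_model(original_training_ops: float,
                      subsequent_modification_ops: float = 0.0) -> bool:
    """The compute test aggregates the original training run and any
    subsequent fine-tuning, reinforcement learning, or other material
    modifications applied to a preceding foundation model."""
    return (original_training_ops + subsequent_modification_ops) > FRONTIER_COMPUTE_THRESHOLD_OPS

def is_large_frontier_developer(is_frontier_developer: bool,
                                annual_gross_revenue_usd: float) -> bool:
    """A frontier developer is 'large' if it and its affiliates collectively
    exceeded $500,000,000 in gross revenues in the preceding calendar year."""
    return is_frontier_developer and annual_gross_revenue_usd > LARGE_DEVELOPER_REVENUE_THRESHOLD_USD

# A 9e25-op base run plus 2e25 ops of fine-tuning crosses the 1e26 bar:
print(is_frontier_model(9e25, 2e25))            # True
print(is_large_frontier_developer(True, 6e8))   # True
```

Note that the aggregation matters: the 9x10^25 base run alone would fall short of the definition, but the fine-tuning compute pushes the total over the threshold.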
Moreover, frontier developers are encouraged (but not required) to make disclosures described in this part that are consistent with, or superior to, industry best practices.

The Office of Emergency Services:

- Defines "catastrophic risk" as a foreseeable and material risk that a frontier developer's development, storage, use, or deployment of a foundation model will materially contribute to the death of, or serious injury to, more than 50 people, or to more than $1,000,000,000 in damage to, or loss of, property, arising from a single incident involving a foundation model doing any of the following: (A) providing expert-level assistance in the creation or release of a chemical, biological, radiological, or nuclear weapon; (B) engaging in conduct with no meaningful human oversight, intervention, or supervision that is either a cyberattack or, if committed by a human, would constitute the crime of murder, assault, extortion, or theft, including theft by false pretense; or (C) evading the control of its frontier developer or user
- Defines a "critical safety incident" as any of the following: (1) unauthorized access to, modification of, or exfiltration of the model weights of a foundation model that results in death, bodily injury, or damage to, or loss of, property; (2) harm resulting from the materialization of a catastrophic risk; (3) loss of control of a foundation model causing death or bodily injury; or (4) a foundation model that uses deceptive techniques against the frontier developer to subvert its controls or monitoring, outside of the context of an evaluation designed to elicit this behavior, and in a manner that demonstrates materially increased catastrophic risk
- Large frontier developers must transmit to the Office of Emergency Services a summary of any assessment of catastrophic risk resulting from internal use of their frontier models every three months, or pursuant to another reasonable schedule specified by the large frontier developer and communicated in writing to the Office of Emergency Services, with written updates as appropriate
- Frontier developers are not allowed to make materially false or misleading statements about catastrophic risk from their frontier models or their management of catastrophic risk. Likewise, large frontier developers are not allowed to make materially false or misleading statements about their implementation of, or compliance with, their frontier AI frameworks
- When frontier developers publish documents to comply with the above requirements, they may make redactions to those documents to protect trade secrets, cybersecurity, public safety, or the national security of the United States, or to comply with any federal or state law
- If a frontier developer redacts information in a document, it must describe the character and justification of the redaction in any published version of the document, to the extent permitted by the concerns that justify the redaction, and must retain the unredacted information for five years
- The Office of Emergency Services must establish a mechanism to be used by a frontier developer or a member of the public to report a critical safety incident, capturing all of the following: (1) the date of the critical safety incident; (2) the reasons the incident qualifies as a critical safety incident; (3) a short and plain statement describing the critical safety incident; and (4) whether the incident was associated with internal use of a frontier model
- Similarly, the Office of Emergency Services must establish a mechanism to be used by a large frontier developer to confidentially submit summaries of any assessments of the potential for catastrophic risk resulting from internal use of its frontier models
- The Office of Emergency Services must take all necessary precautions to limit access to any reports related to internal use of frontier models to only personnel with a specific need to know the information, and to protect the reports from unauthorized access
- Frontier developers must report any critical safety incident pertaining to one or more of their frontier models to the Office of Emergency Services within 15 days of discovering the incident
- If a frontier developer discovers that a critical safety incident poses an imminent risk of death or serious physical injury, the frontier developer must disclose that incident within 24 hours to an authority that is appropriate based on the nature of the incident and as required by law, including any law enforcement agency or public safety agency with jurisdiction. That said, frontier developers are encouraged (but not required) to report critical safety incidents pertaining to foundation models that are not frontier models (the required report fields and deadlines are sketched in code below)
- The Office of Emergency Services must review critical safety incident reports submitted by frontier developers and may review reports submitted by members of the public
- The Attorney General or the Office of Emergency Services may transmit reports of critical safety incidents and reports from covered employees. Either entity must strongly consider any risks related to trade secrets, public safety, the cybersecurity of a frontier developer, or national security when transmitting reports
- Beginning January 1, 2027, and annually thereafter, the Office of Emergency Services must produce a report with anonymized and aggregated information about critical safety incidents reviewed since the preceding report. The report must not include information that would compromise the trade secrets or cybersecurity of a frontier developer, public safety, or the national security of the United States, or that would be prohibited by any federal or state law. The Office of Emergency Services must transmit the report to the Legislature and to the Governor
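The reporting mechanism and deadlines above amount to a small record structure plus some date arithmetic. Here is a minimal sketch in Python; the class and field names are my own shorthand for the statute's four required elements, and nothing about this code is prescribed by the Act or the Office of Emergency Services.

```python
# Hedged sketch: the four fields the Act's incident-report mechanism must
# capture, and the two reporting clocks it describes. Names are illustrative.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class CriticalSafetyIncidentReport:
    incident_date: date      # (1) the date of the critical safety incident
    qualifying_reasons: str  # (2) why the incident qualifies as critical
    plain_statement: str     # (3) a short and plain description
    internal_use: bool       # (4) associated with internal use of a frontier model?

def reporting_deadline(discovered: date, imminent_risk: bool) -> str:
    """Ordinary incidents go to the Office of Emergency Services within 15
    days of discovery; incidents posing an imminent risk of death or serious
    physical injury must be disclosed to an appropriate authority within 24
    hours."""
    if imminent_risk:
        return f"within 24 hours (by {discovered + timedelta(days=1)})"
    return f"within 15 days (by {discovered + timedelta(days=15)})"

print(reporting_deadline(date(2026, 3, 1), imminent_risk=False))
# -> within 15 days (by 2026-03-16)
```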
- Frontier developers who intend to comply with the above provisions concerning the Office of Emergency Services by meeting a designated federal law, regulation, or guidance document must declare that intent to the Office of Emergency Services. Once they make this declaration: (A) the frontier developer is deemed in compliance to the extent that it meets the standards of, or complies with the requirements imposed or stated by, the designated federal law, regulation, or guidance document, until the frontier developer declares the revocation of that intent to the Office of Emergency Services or the Office of Emergency Services revokes a relevant regulation; and (B) the failure by a frontier developer to meet the standards of, or comply with the requirements stated by, the designated federal law, regulation, or guidance document constitutes a violation of the Act
- On or before January 1, 2027, and annually thereafter, the Department of Technology must assess recent relevant evidence and developments, make recommendations about whether and how to update the definitions of frontier model, frontier developer, and large frontier developer, and create a report
- Beginning January 1, 2027, and annually thereafter, the Attorney General must produce a report with anonymized and aggregated information about reports from covered employees. The report cannot include information that would compromise the trade secrets or cybersecurity of a frontier developer, the confidentiality of a covered employee, public safety, or the national security of the United States, or that would be prohibited by any federal or state law. The Attorney General must also transmit the report to the Legislature and to the Governor

Enforcement

The consequences of noncompliance are steep: a large frontier developer that fails to publish or transmit a compliant document that is required to be published or transmitted, makes a statement in violation of the Act, fails to report an incident, or fails to comply with its own frontier AI framework is subject to a civil penalty in an amount dependent upon the severity of the violation, not to exceed $1,000,000 per violation. A civil penalty must be recovered in a civil action brought only by the Attorney General.

CalCompute

Moreover, the Act establishes within the Government Operations Agency a consortium that must develop a framework for the creation of a public cloud computing cluster to be known as CalCompute. The framework must advance the development and deployment of AI that is safe, ethical, equitable, and sustainable by, at minimum, fostering research and innovation that benefits the public and enabling equitable innovation by expanding access to computational resources. Additionally, the consortium must make reasonable efforts to ensure that CalCompute is established within the University of California to the extent possible.
CalCompute must include, but not be limited to, all of the following: (1) a fully owned and hosted cloud platform; (2) necessary human expertise to operate and maintain the platform; and (3) necessary human expertise to support, train, and facilitate the use of CalCompute.

On or before January 1, 2027, the Government Operations Agency must submit a report from the consortium to the Legislature with the framework developed for the creation and operation of CalCompute. The report must include all of the following elements: (A) a landscape analysis of California's current public, private, and nonprofit cloud computing platform infrastructure; (B) an analysis of the cost to the state to build and maintain CalCompute and recommendations for potential funding sources; (C) recommendations for the governance structure and ongoing operation of CalCompute; (D) recommendations for the parameters for use of CalCompute, including, but not limited to, a process for determining which users and projects will be supported by CalCompute; (E) an analysis of the state's technology workforce and recommendations for equitable pathways to strengthen the workforce, including the role of CalCompute; (F) a detailed description of any proposed partnerships, contracts, or licensing agreements with nongovernmental entities, including, but not limited to, technology-based companies, that demonstrates compliance; and (G) recommendations regarding how the creation and ongoing management of CalCompute can prioritize the use of the current public sector workforce.

The consortium must consist of 14 members, as follows: (1) four representatives of the University of California and other public and private academic research institutions and national laboratories, appointed by the Secretary of Government Operations; (2) three representatives of impacted workforce labor organizations, appointed by the Speaker of the Assembly; (3) three representatives of stakeholder groups with relevant expertise and experience, including, but not limited to, ethicists, consumer rights advocates, and other public interest advocates, appointed by the Senate Rules Committee; and (4) four experts in technology and AI to provide technical assistance, appointed by the Secretary of Government Operations.

If CalCompute is established within the University of California, the University of California may receive private donations for the purposes of implementing CalCompute.

Whistleblower Protections:

- Frontier developers are not allowed to make, adopt, enforce, or enter into a rule, regulation, policy, or contract that prevents a covered employee from disclosing, or retaliates against a covered employee for disclosing, information to the Attorney General, a federal authority, a person with authority over the covered employee, or another covered employee who has authority to investigate, discover, or correct the reported issue, if the covered employee has reasonable cause to believe that the information discloses either of the following: (1) the frontier developer's activities pose a specific and substantial danger to the public health or safety resulting from a catastrophic risk; or (2) the frontier developer has violated the Act
- Frontier developers are similarly not allowed to enter into a contract that prevents a covered employee from making a disclosure that is protected under the whistleblowing provisions
- Frontier developers must provide clear notice to all covered employees of their rights and responsibilities under the Act, including by doing either of the following: (1) at all times posting and displaying within any workplace maintained by the frontier developer a notice to all covered employees of their rights under this section, ensuring that any new covered employee receives equivalent notice, and ensuring that any covered employee who works remotely periodically receives an equivalent notice; or (2) at least once each year, providing written notice to each covered employee of the covered employee's rights under this section and ensuring that the notice is received and acknowledged by all of those covered employees
- Large frontier developers must provide a reasonable internal process through which a covered employee may anonymously disclose information to the large frontier developer if the covered employee believes in good faith that the information indicates that the large frontier developer's activities present a specific and substantial danger to the public health or safety resulting from a catastrophic risk, or that the large frontier developer has violated the Act
- The disclosures and responses generated through that process must be shared with officers and directors of the large frontier developer at least once each quarter. However, if a covered employee has alleged wrongdoing by an officer or director in a disclosure or response, this rule does not apply with respect to that officer or director
- Courts are authorized to award reasonable attorney's fees to a plaintiff who brings a successful action for a violation of this section
- Once it has been demonstrated by a preponderance of the evidence that an activity proscribed by this section was a contributing factor in the alleged prohibited action against the covered employee, the frontier developer has the burden of proof to demonstrate by clear and convincing evidence that the alleged action would have occurred for legitimate, independent reasons even if the covered employee had not engaged in activities protected by this section
- Covered employees may petition the superior court in any county where the violation in question is alleged to have occurred, or where the person resides or transacts business, for appropriate temporary or preliminary injunctive relief. The court must consider the chilling effect on other covered employees asserting their rights under this section in determining whether temporary injunctive relief is just and proper
- An order authorizing temporary injunctive relief remains in effect until an administrative or judicial determination or citation has been issued, or until the completion of a review under the Act, whichever is longer, or until a time set by the court. Thereafter, a preliminary or permanent injunction may be issued if it is shown to be just and proper. Temporary injunctive relief shall not prohibit a frontier developer from disciplining or terminating a covered employee for conduct that is unrelated to the claim of retaliation. The remedies provided by this section are cumulative to each other

As can be seen from the above discussion, it is good news that Governor Newsom was able to successfully enact critical AI protective legislation, notwithstanding the recent (and failed) attempts by the federal government to impose a moratorium on state-level AI regulation. It appears that this is a signal that more progressive states can create and enforce their own AI legislation without unnecessary interference from the current administration.
Given this fact, it will not be surprising to see states with similar values and approaches to AI boldly moving ahead with their legislative agendas and proposing new AI bills.

What can we take from this development?

Unlike SB 1047 (California's first legislative attempt at AI safety and transparency), which I wrote about here, SB 53 made it through to the end of the legislative process. It was designed to enhance online safety by installing commonsense guardrails on the development of frontier AI models and to help build public trust while continuing to spur innovation in these new technologies. Additionally, the bill advances California's position as a national, and indeed world, leader in responsible and ethical AI. California continues to dominate the AI sector: it is the birthplace of AI, and the state is home to 32 of the 50 top AI companies worldwide.

Governor Newsom stated: "California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance. AI is the new frontier in innovation, and California is not only here for it – but stands strong as a national leader by enacting the first-in-the-nation frontier AI safety legislation that builds public trust as this emerging technology rapidly evolves."

As a result of this development, frontier AI developers will need to be transparent, safe, accountable, and responsive. Further, the CalCompute consortium will advance the development and deployment of AI that is safe, ethical, equitable, and sustainable by fostering research and innovation. Moreover, the bill fills the gap left by Congress, which so far has not passed broad AI legislation, and provides a model for other American states (and Canada) to follow.
- DeepSeek in Focus | voyAIge strategy
DeepSeek in Focus

What Leaders Need to Know

By Tommy Cooke, fueled by caffeine and curiosity
Feb 14, 2025

Key Points:

- DeepSeek is a major disruptor in the AI market, rapidly gaining adoption due to its affordability and open-source appeal
- Despite being open-source, DeepSeek's data is stored in China, raising security, compliance, and censorship concerns
- Organizations must weigh the benefits of open-source AI against the risks of data privacy, geopolitical scrutiny, and regulatory uncertainty

In just over a year, DeepSeek has gone from an emerging AI model to leaving a lasting imprint on the global AI market. Developed in China as an open-source large language model (LLM), it is rapidly gaining attention. In fact, as of January 2025 it has overtaken ChatGPT as the most downloaded free app on Apple iPhones in the U.S.

DeepSeek's meteoric rise signals a shift in AI adoption trends and in the AI industry itself, and that warrants awareness and conversation among organization leaders. As people gravitate toward alternative AI models outside the traditional Western ecosystem, it is important to understand the what, why, and how of this recent AI phenomenon. As of February 2025, it is critically important to ensure that you are prepared to respond to DeepSeek in your organization. Leaders must accept the likelihood that DeepSeek is already being used by their workforce for work purposes.

DeepSeek is a startup based in Hangzhou, China. It was founded in 2023 by Liang Wenfeng, an entrepreneur who also founded the $7bn USD hedge fund group High-Flyer in 2016. In January 2025, DeepSeek released its latest AI model, DeepSeek R1. It is a free AI chatbot that looks, feels, sounds, and responds very similarly to ChatGPT. Unlike proprietary AI models developed in the West, like ChatGPT, Claude, and Gemini, DeepSeek is freely available for organizations to customize and use at will.

Part of the reason it is making waves is not only how quickly and easily it can be adopted and used, but also that it is significantly cheaper to build than its competitors' designs. While the exact figures are currently being debated, there is general agreement that OpenAI, the company that owns, produced, and maintains ChatGPT, spent at least two to three times more to train its AI models. This point is very important to understand because it explains a lot about economic fallout, the balance of global AI development, market disruption, and accessibility and control.

The implications stretch beyond cost alone. They affect how organizations plan AI adoption, determine their budgets, and structure their technology ecosystems. If AI models can be produced at a fraction of the cost of the development norm while maintaining competitive performance, organizations must consider how this changes their long-term investment in AI. Are proprietary solutions worth the price if open-source alternatives are rapidly closing the gap? As importantly, what are the hidden risks and trade-offs that come with choosing a model purely on affordability?

Security & Compliance Concerns with DeepSeek

DeepSeek's rapid rise comes with critical questions for organizations, especially regarding security, governance, and compliance. First, DeepSeek was developed in China, and that is where its data is stored as well. The Western world is thus concerned about how data are processed, who has access to them, and whether companies using DeepSeek are exposing themselves to regulatory or cybersecurity risks.
For organizations bound by stringent data privacy regulations, this is likely a major red flag.

Second, DeepSeek is receiving considerable criticism for its censorship policies. It will not discuss certain political topics, and it was trained on filtered datasets. This impacts the reliability of its responses and raises concerns about bias in AI-generated content. This alone, at least in part, explains why South Korea, Australia, and Taiwan have banned it.

Third, today's turbulent geopolitical climate means that Western governments are increasingly wary of foreign influence. AI is no exception. DeepSeek is being closely monitored by governments and organizations around the globe, which are asking whether the company and its AI should be restricted or even outright banned. Organizations looking for a cost-effective entry to powerful AI are certainly attracted to and interested in DeepSeek, and they are weighing the long-term viability and potential implications of adopting a tool in the face of regulatory and political scrutiny.

Is Your Staff Using DeepSeek? Guidance for Leaders

Given the incredible rate at which AI is being installed on the personal devices of your employees, with DeepSeek clearly being no exception, there are things we feel strongly that you should consider:

- Audit your AI usage. Find out who in your company is using chatbots, especially DeepSeek, and how. Are employees feeding sensitive data into the model? Have they uploaded business plans, client data, or the personal information of patients or coworkers? Do they understand the risks?
- Assess risk. What do your technology and AI use policies say? Do you have them yet? Has your organization established clear policies and procedures on AI tools that store data outside your legal jurisdiction? Ask yourself: would using DeepSeek put your organization at risk of legal noncompliance or even reputational harm? Who are your external stakeholders and investors? It's critical that you start thinking about their perceptions, expectations, and needs.
- Engage and communicate. One of our clients recently told us that an executive in their organization instructed staff to freely use AI chatbots at will, without discussing the decision and announcement with legal counsel. As you might imagine, this raised many concerns about understanding and mitigating AI-related risks. If you have not done so already, now is the time to clearly articulate your organization's stance on AI to employees, stakeholders, and partners. Organization leaders need to strategize not only how they communicate to staff about AI, but also how that communication fits into broader organizational engagement. What are your employees truly thinking about AI? Do they silently use it to keep up with the creative or time-consuming demands of their jobs? Are they afraid that you will find out and that they will be punished? Do they feel supported by you, and are they willing to provide honest feedback?

How Open is Open-source AI?

Industry observers are debating whether DeepSeek's biggest strength is also its biggest risk: it is open-source. What that means is that companies can see, download, and edit the AI's code (a minimal sketch of what pulling open weights looks like in practice appears at the end of this piece). This opens interesting and valuable doors for many users and organizations. For example, openly readable code means that it is openly verifiable and openly scrutinized.
If something exists in the code that can be deemed a fatal flaw, a security concern, or a path toward bias and harmful inference or decision-making, it can be detected more easily, because the global community of talented volunteer programmers and engineers can find and address any such issues. In theory, this means that managing security, compliance, and governance yields more flexible and transparent control. But the flip side of that control is responsibility: unlike with a proprietary AI vendor, which does not disclose its code or invite public scrutiny of its design, if something goes wrong, it is often your problem to address.

On the other hand, industry observers are also questioning how "open" DeepSeek truly is. By conventional understandings, open-source means that code is openly available for anyone to inspect, modify, and use. However, when it comes to AI, it is much more than code. AI must be trained, and training requires data. DeepSeek does not provide full transparency on what data it was trained on, nor has it been entirely forthcoming about the details of its training process. These points are important because they are forcing organizations and governments to question how far DeepSeek can be trusted.

As an organizational leader, you need to ask yourself: is open-source AI a strategic advantage or a risk? Who controls the AI for your organization? You, or vendors outside of your jurisdiction?

DeepSeek is more than just another AI model. It's a disruptor in the AI industry. Many view marketplace disruption as a positive, something that challenges norms, standards, and best-in-class models. However, there is much more that is potentially disrupted here. Those disruptions are not merely economic and political at the global level; they reach into your organization. Leaders must recognize that AI strategy is no longer just about choosing the most powerful model. It's about choosing the right balance between control, risk, and innovation.
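To make the "open weights" point concrete, here is a minimal sketch of pulling and running an openly published DeepSeek checkpoint with the Hugging Face transformers library. The specific repository id is my assumption (one of the smaller distilled R1 checkpoints); substitute whichever release fits your hardware and, just as importantly, your AI use policy.

```python
# Hedged sketch: downloading open weights and running them locally.
# The repo id below is an assumption; swap in the release you have vetted.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # illustrative choice
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "What should an organizational AI use policy cover?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights download and run locally, prompts never have to leave your environment, which speaks to the data-residency concern discussed above; the questions about training data and training-process transparency remain either way.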
- 10-year Moratorium on AI State Regulation | voyAIge strategy
10-year Moratorium on AI State Regulation

What Could Possibly go Wrong?

By Christina Catenacci, human writer
May 29, 2025

Key Points

- On May 21, 2025, Bill HR 1, the One Big Beautiful Bill Act, was introduced into the 119th Congress. Section 43201 of the bill is concerning, as it would allow for a 10-year moratorium on state enforcement of states' own AI legislation
- At this point, the bill has passed in the House and still needs to pass in the Senate

On May 21, 2025, Bill HR 1, the One Big Beautiful Bill Act, was introduced into the 119th Congress. This is a very lengthy and dense bill; the focus of this article is on Part 2 (Artificial Intelligence and Information Technology Modernization) under Subtitle C (Communications), which contains a few AI provisions that need to be discussed, as they are very concerning.

What is in Part 2?

Section 43201 states that:

- $500,000,000 would be appropriated to the Department of Commerce for fiscal year 2025, out of any funds in the Treasury not otherwise appropriated, to remain available until September 30, 2035, to modernize and secure Federal information technology systems through the deployment of commercial AI, the deployment of automation technologies, and the replacement of antiquated business systems, in accordance with the authorized uses below
- The Secretary of Commerce would be required to use the funds, within the Department of Commerce, to: replace or modernize legacy business systems with state-of-the-art commercial AI systems and automated decision systems; facilitate the adoption of AI models that increase operational efficiency and service delivery; and improve the cybersecurity posture of Federal information technology systems through modernized architecture, automated threat detection, and integrated AI solutions
- No state or political subdivision would be allowed to enforce any law or regulation regulating AI models, AI systems, or automated decision systems during the 10-year period beginning on the date of the enactment of the Act. The exception to this is the Rule of Construction: the moratorium would not prohibit the enforcement of any state law that:
  - removes legal impediments to, or facilitates the deployment or operation of, an AI model, AI system, or automated decision system
  - streamlines licensing, permitting, routing, zoning, procurement, or reporting procedures in a manner that facilitates the adoption of AI models, AI systems, or automated decision systems
  - does not impose any substantive design, performance, data-handling, documentation, civil liability, taxation, fee, or other requirement on AI models, AI systems, or automated decision systems, unless such requirement is imposed under Federal law or, in the case of a requirement imposed under a generally applicable law, is imposed in the same manner on models and systems other than AI models, AI systems, and automated decision systems that provide comparable functions; and
  - does not impose a fee or bond unless that fee or bond is reasonable and cost-based and, under such fee or bond, AI models, AI systems, and automated decision systems are treated in the same manner as other models and systems that perform comparable functions

Why is Section 43201 Concerning?
In the context of AI, while it is encouraging that section 43201 sets aside funds to modernize and secure Federal information technology systems through the deployment of commercial AI, the deployment of automation technologies, and the replacement of antiquated business systems, the proposed provision is troubling because of the 10-year moratorium it contains: states would be prevented from enforcing AI legislation for 10 years unless a law falls under the exception involving the Rule of Construction. More precisely, state AI laws would not be enforceable unless they:

- Remove legal impediments regarding AI models, AI systems, or automated decision systems
- Streamline licensing, permitting, routing, zoning, procurement, or reporting procedures in a manner that facilitates the adoption of AI models, AI systems, or automated decision systems
- Do not impose any substantive design, performance, data-handling, documentation, civil liability, taxation, fee, or other requirement on AI models, AI systems, or automated decision systems, unless that requirement is imposed under Federal law or, if imposed under a generally applicable law, is imposed in the same manner on models and systems other than AI models, AI systems, and automated decision systems that provide comparable functions; and
- Do not impose a fee or bond unless it is reasonable and cost-based and, under the fee or bond, AI models, AI systems, and automated decision systems are treated in the same manner as other models and systems that perform comparable functions

Essentially, all of the progressive AI laws that have been created by forward-thinking states may ultimately be unenforceable, unless the states can jump through the hoops and show that their laws fall within the exception.

What could passing a federal AI bill like this mean?

The implications of states not being able to pass and enforce their own AI laws could be very risky: no other jurisdiction is doing this. Just ask the EU, a jurisdiction that has already created cutting-edge privacy and AI legislation that is known around the world as the gold standard.

Just recently, at the AI Summit in Paris, Vice President JD Vance warned global leaders and tech industry executives that "excessive regulation" could cripple the rapidly growing AI industry, in a rebuke to European efforts to curb AI's risks. Yes, when giving the AI policy speech, he pretty much said that unfettered innovation should trump AI regulation (pun intended). This stood in stark contrast to what was being decided at the AI Summit, where over 60 countries pledged to:

- Promote AI accessibility to reduce digital divides
- Ensure AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all
- Make innovation in AI thrive by enabling conditions for its development and avoiding market concentration, driving industrial recovery and development
- Encourage AI deployment that positively shapes the future of work and labour markets and delivers opportunity for sustainable growth
- Make AI sustainable for people and the planet
- Reinforce international cooperation to promote coordination in international governance

On the other hand, Vance called all of this "excessive regulation". Unfortunately for VP Vance, he will need to take some time to reflect on why certain laws are created when there are rapid, potentially risky advances that could cause harm.
Criticism by Musk and Pelosi: "Robinhood in reverse"

If we zoom out, we see that there are several other troubling proposed provisions, including those that aggravate wealth inequality in the United States. In fact, Nancy Pelosi has referred to the bill as "Republican Robinhood in reverse". Moreover, with cuts to Medicare and Medicaid, it is not surprising that Pelosi would question what is being proposed.

Even Elon Musk has criticized the recent move as "disappointing". Why? The cuts included in the bill help fund tax cuts for the rich and secure the border. More specifically, Musk made a point of saying that the massive spending bill would undermine his work at the Department of Government Efficiency (DOGE).

What is the Status of the Bill?

At this point, the bill has passed in the House but still needs to pass in the Senate. President Donald Trump and Speaker Mike Johnson are hopeful for minimal modifications to the bill in the Senate; however, some believe that there is enough resistance to halt the bill unless there are significant changes. For example, Republican Senator Rand Paul told "Fox News Sunday" that he will not vote for the legislation unless the debt ceiling increase is stripped, stating that it would "explode deficits." The main sticking points right now involve the numerous tax cuts, cuts to Medicaid and requirements that disabled people work, tax deductions at the state and local levels, and cuts to food assistance programs like the Supplemental Nutrition Assistance Program. It is hard to tell whether the bill will pass as is in the Senate; we will keep you posted on the proposed 10-year moratorium.