
From Veto to Victory: California's TFAIA Establishes Blueprint for Frontier AI Governance

Following an earlier unsuccessful attempt, California has passed legislation regulating AI, and frontier AI models in particular, with Governor Newsom signing the Transparency in Frontier Artificial Intelligence Act of 2025 (TFAIA) on September 29. The first attempt, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), was vetoed by Governor Newsom in 2024. Newsom then convened an AI expert group to provide an evidence-based foundation for AI policy decision-making. The group's March 2025 report recommended the development of suitable guardrails to minimize material risks posed by the deployment, use and governance of GenAI tools.

"In pursuing this balance between innovation and safety, California has a unique opportunity to productively shape the AI policy conversation and provide a blueprint for well-balanced policies beyond its borders." (The California Report on Frontier AI Policy, March 2025)

The TFAIA is an attempt to implement this balanced approach. The legislation acknowledges that fundamental questions about AI's trajectory remain unanswered, while recognizing the growing consensus around key risks from "frontier models," defined as those trained using computing power exceeding 10^26 integer or floating-point operations. Importantly, the Act builds in flexibility: California's Department of Technology must assess by January 2027 (and annually thereafter) whether the definitions of "frontier model," "frontier developer," and "large frontier developer" still accurately capture technological developments and align with national and international standards. This ensures the regulation stays current with the rapidly evolving AI landscape.

"The pace of technological change will require adaptive approaches that allow for thresholds to be flexible and readily updated to avoid ossification." (The California Report on Frontier AI Policy, March 2025)

Under the TFAIA, by January 1, 2026, large frontier developers, defined as those with annual gross revenue exceeding $500 million, must publicly release comprehensive AI frameworks detailing their risk management approaches, along with regular transparency reports and critical incident disclosures. These frameworks must explain how developers incorporate national and international standards and best practices. The transparency reports must also include:

  • a mechanism that enables a natural person to communicate with the frontier developer;
  • the modalities of output supported by the frontier model;
  • the intended uses of the frontier model; and
  • any generally applicable restrictions or conditions on uses of the frontier model.

The TFAIA also mandates that developers assess and mitigate catastrophic risks from their AI models. Large frontier developers must submit quarterly summaries of catastrophic risk assessments to the Office of Emergency Services. Non-compliance carries civil penalties of up to $1 million per violation.

The March 2025 California report specifically highlighted the EU AI Act's whistleblower protections as a pragmatic safeguard worth adopting, and this recommendation is now reflected in the TFAIA. The legislation strengthens whistleblower protections, recognizing that those closest to AI model development are often best positioned to identify emerging risks. It prohibits retaliation against employees who disclose information about potential AI model dangers and requires developers to establish anonymous reporting mechanisms for employees to report risks.

The Impact on Frontier Model Deployers

While the TFAIA's primary impact is on frontier AI developers, deployers should take notice. The transparency requirements create new due diligence avenues and obligations for deployers, who will soon have to navigate an enhanced information landscape. Deployers should look to enhance their AI governance processes to integrate these new disclosures. Specifically, we recommend integrating these new frontier developer disclosure requirements into existing AI governance frameworks in the following ways:

  • Update vendor procurement checklists to request transparency documentation that aligns with TFAIA obligations.
  • Establish processes to regularly review developer-provided AI frameworks and transparency reports. 
  • Consider requiring developers to provide timely notice of any changes to their AI frameworks or critical incidents that could impact the deployer's operations. 
  • Monitor when frontier developers make critical incident disclosures and activate your organization’s incident response procedures, as appropriate.
  • Incorporate specific TFAIA compliance representations into vendor contracts.

California's TFAIA represents a significant milestone in AI regulation. By establishing clear frameworks for transparency, risk management, and accountability, the TFAIA provides an evidence-based model for AI governance that other jurisdictions may be inclined to follow. 

As AI regulation continues to evolve across different jurisdictions, companies face an increasingly complex patchwork of AI laws. BCLP's AI State-by-State Legislation Map provides a comprehensive overview of AI legislation developments across the US states. 

"Just as California's technology leads innovation, its governance can also set a trailblazing example with worldwide impact." (The California Report on Frontier AI Policy, March 2025)
