California has long been a center of technological innovation, and it now faces the challenge of leading responsibly in the governance of highly capable general-purpose AI models. These so-called frontier models are growing rapidly in power and influence and are becoming deeply embedded in daily life. In 2024, Governor Gavin Newsom vetoed a proposed law (SB 1047) that would have imposed strict mandates on advanced AI systems, citing, among other concerns, its reliance on model size rather than actual risk. In the wake of that veto, the state has continued to stress the urgency of regulation and has taken an important first step: releasing a comprehensive policy framework aimed at encouraging innovation while addressing emerging risks.
That framework is embodied in a major policy report released on June 17, 2025, which sets out a flexible, evidence-based approach to regulating frontier artificial intelligence systems. "The California Report on Frontier AI Policy" was developed by a multidisciplinary working group of experts from leading academic institutions.
The report acknowledges the opportunities these models offer, from revolutionizing education and healthcare to enabling scientific discovery. At the same time, it considers the serious risks they pose, including biological misuse, disinformation, systemic bias, labor disruption, and potential loss of human control. Ultimately, the report aims to balance innovation with public safety; rather than prescribing rigid rules, it sets out guiding principles that can evolve with the technology.
The report emphasizes that early policy action matters and that clear governance structures can help prevent irreversible harms. Its focus is not on regulating all AI, but specifically on the highly capable models that pose exceptional risks of three types: malicious use (deepfakes, phishing campaigns, cyberattacks, and instructions for building illegal weapons), malfunction (hallucinations, reward hacking, and models that pursue unintended goals), and systemic effects (monopolistic market structures, job displacement, and a flood of synthetic online content).
With these risks in mind, the report proposes a policy framework built around the pillars of transparency, oversight, thresholds, and public participation.
1. Transparency and Disclosure: Trust but Verify
The report's response to these challenges is summed up as "trust but verify." It calls for developers to publicly disclose risks and safety practices, which might include internal red-teaming results, other safety findings, and any evidence of misuse. The state could also establish a public registry of AI incidents, similar to the reporting systems used in aviation and pharmaceuticals, and whistleblower protections would encourage internal accountability.
2. Independent Oversight: Testing and Consequences
Second, the report calls for external testing of frontier models. Rather than rely solely on company-led safety evaluations, independent experts would be tasked with assessing model behavior. These experts would be granted access to models through secure means, allowing them to explore potential misuse scenarios. The goal would be to surface risks that might not emerge during normal use but could cause harm if triggered.
Third, the report recommends legal consequences for negligent behavior. Developers who hide known risks or release models with dangerous capabilities could be subject to liability. At the same time, the report encourages safe harbor provisions for good-faith disclosures. Companies that report incidents and follow best practices would receive some legal protection, and procurement incentives could further encourage compliance.
3. Thresholds for Regulatory Scope: Tiered Obligations
One of the most debated elements of SB 1047 was its reliance on fixed, compute-based thresholds. The report argues that such static thresholds are both too narrow and too broad: they may miss risky smaller models while overregulating harmless larger ones. Instead, the report proposes a more adaptive, contextual framework.
This scoping approach takes into account several factors, including a model's emergent capabilities, the level of access it provides, the domains in which it is deployed, and how it behaves in adversarial testing. A smaller model that helps design malware or autonomous systems would trigger scrutiny. A larger model used only for enterprise analytics might not. Regulation should depend on behavior and context rather than size alone.
The report proposes tiered obligations based on this risk assessment. Low-risk systems would face light transparency rules. Medium-risk systems might be subject to red-teaming and disclosure mandates. High-risk frontier models would require full third-party evaluation and possibly usage restrictions. Thresholds would be revisited annually, or when major advances occur.
4. Public Engagement: Tools to Broaden Participation
The final section addresses public trust and engagement. The working group received over one hundred public comments on the draft report. Feedback came from civil society groups, AI developers, academics, and individual citizens, most of whom supported greater transparency and independent oversight. Civil society groups urged more attention to labor displacement, surveillance, and equity. Industry commenters emphasized the need for clear standards and warned against chilling open-source development.
The report stresses that public legitimacy is crucial for any regulatory regime. California should develop accessible tools for public input, maintain transparency portals, and ensure diverse voices are part of the policymaking process. The goal is to prevent a regulatory approach that is shaped solely by industry or a small group of technical experts.
Conclusion
While broad in scope and rich in analysis, the report is not itself legislation, nor does it propose specific new legislation. Instead, it offers a roadmap for policymakers that favors transparency over secrecy, verification over blind trust, and flexibility over rigidity.
Companies and other stakeholders should anticipate legislative activity and consider taking early, proactive steps, including engaging in public dialogue, contributing to the policy process, and establishing internal AI governance frameworks. While the report itself stops short of legal mandates, its guidance points directly toward a legislative framework to come.