A California-based policy group is urging lawmakers to take a proactive approach to regulating artificial intelligence, recommending laws that anticipate and mitigate potential risks from AI systems before they materialize. The 41-page interim report, released on Tuesday, emphasizes the need for transparency and accountability in the development of frontier AI models.
The report, led by AI pioneer Fei-Fei Li, along with co-authors Jennifer Chayes and Mariano-Florentino Cuéllar, suggests that lawmakers should consider AI risks that "have not yet been observed in the world" when crafting AI regulatory policies. This approach is a departure from traditional reactive regulation, which often responds to existing problems rather than anticipating potential ones.
The report's recommendations are the result of a collaborative effort involving stakeholders from across the ideological spectrum, including AI safety advocates like Yoshua Bengio as well as critics of California's controversial AI safety bill, SB 1047. The report argues that laws should increase transparency into what frontier AI labs, such as OpenAI, are building, including public reporting of safety tests, data acquisition practices, and security measures.
Furthermore, the report advocates for stronger standards around third-party evaluations of these metrics and corporate policies, as well as expanded whistleblower protections for AI company employees and contractors. This "trust but verify" approach aims to provide avenues for reporting on areas of public concern while ensuring accountability through independent verification.
The report's authors acknowledge that there is an "inconclusive level of evidence" for AI's potential to help carry out cyberattacks, create biological weapons, or bring about other "extreme" threats. However, they argue that AI policy should not only address current risks but also anticipate future consequences that might occur without sufficient safeguards. As the report states, "we do not need to observe a nuclear weapon [exploding] to predict reliably that it could and would cause extensive harm."
The report's recommendations have been well-received by experts on both sides of the AI policymaking debate. Dean Ball, an AI-focused research fellow at George Mason University, praised the report as a promising step for California's AI safety regulation. California State Senator Scott Wiener, who introduced SB 1047 last year, also welcomed the report, stating that it builds on "urgent conversations around AI governance we began in the legislature [in 2024]."
The report's final version is due out in June 2025. While it does not endorse specific legislation, it offers a framework for policymakers navigating the complex landscape of AI regulation, one grounded in transparency, accountability, and a proactive approach to mitigating potential risks.