In Short
- California State Assembly approves AI safety bill SB 1047, setting new standards for AI companies.
- Bill includes requirements for model shutdown, safeguards against unsafe modifications, and risk assessment procedures.
- Opposition from AI industry and politicians, leading to amendments that soften penalties and limit enforcement powers.
- SB 1047 awaits State Senate vote and potential enactment by Governor Gavin Newsom.
Summary of California’s AI Safety Bill
The California State Assembly has passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), a pioneering piece of legislation aimed at regulating the AI industry within the state. The bill requires AI companies to implement safety measures before training advanced foundation models, including mechanisms for immediate model shutdown, safeguards against unsafe post-training modifications, and comprehensive testing to prevent critical harm.
Senator Scott Wiener, the bill’s primary author, has worked with various stakeholders to refine the bill so that it addresses foreseeable AI risks. Despite its stated aim of promoting both innovation and safety, the bill has drawn criticism from AI companies such as OpenAI and Anthropic, as well as from politicians and the California Chamber of Commerce, who argue it could hinder small AI developers.
In response to these concerns, SB 1047 was amended to replace potential criminal penalties with civil ones, limit the attorney general’s enforcement powers, and modify the requirements for joining the “Board of Frontier Models.” The bill now moves to the State Senate for a vote and, if passed, will go to Governor Gavin Newsom for a final decision.
The passage of SB 1047 is significant as it could establish a precedent for AI regulation in the United States, potentially influencing the development and deployment of AI models nationwide.
For more detailed information, please visit the original source.