California Governor Gavin Newsom vetoed SB 1047, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, on September 30, arguing that the bill could stifle innovation while failing to protect the public from the “real” threats posed by AI technology.
The bill proposed mandatory safety testing requirements for AI models, but it faced opposition from many major tech companies, which argued it would slow innovation. In a statement, Newsom contended that the bill focused on regulating the leading AI companies without paying enough attention to real-world risks.
Authored by Democratic Senator Scott Wiener, the bill would have required major developers operating in California, including OpenAI, Meta, and Google, to implement “kill switches” — mechanisms to shut down AI systems when needed — and to publish risk mitigation plans. Had it become law, developers could also have faced lawsuits from the state attorney general over catastrophic threats, such as an AI system seizing control of the power grid.
Newsom said he has been working with leading AI safety experts to develop science-based safeguards and has directed state agencies to expand their risk assessments of AI. While vetoing SB 1047, he stressed the importance of establishing safety protocols for AI and noted that his administration had signed more than 18 AI-related bills in the past 30 days.
The bill failed to win support from a number of lawmakers and major tech companies. Former House Speaker Nancy Pelosi and companies including OpenAI expressed concern that it would hinder AI development. However, some tech leaders, including Elon Musk, backed the bill and broader AI regulation, though Musk admitted it was a “difficult decision.”