California governor Gavin Newsom will consider whether to sign into law or veto a controversial artificial intelligence bill proposing to enforce strict regulations on technology companies after it cleared its final hurdle in the state legislature on Thursday.
Newsom, a Democrat, has until September 30 to issue his decision on the bill, which has divided Silicon Valley. It would force tech groups and start-ups developing AI models in the state to adhere to a strict safety framework. All of the largest AI start-ups, including OpenAI, Anthropic and Cohere, as well as Big Tech companies with AI models, would fall under its remit.
Newsom is likely to face intense lobbying from both sides. Some of the largest technology and AI companies in the state, including Google, Meta and OpenAI, have expressed concerns about the bill in recent weeks, while others, such as Amazon-backed Anthropic and Elon Musk, who owns AI start-up xAI, have voiced their support.
The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, known as SB 1047, mandates safety testing for advanced AI models operating in the state that cost more than $100mn to develop or that require a high level of computing power. The US Congress has not yet established a federal framework for AI regulation, which has left an opening for California, a hub for tech innovation, to come up with its own plans.
According to the bill, developers would need to create a “kill switch” to turn off their models if they go awry, and they would face legal action by the state attorney-general if they are not compliant and their models are used to threaten public safety.
They would also have to guarantee they will not develop models with “a hazardous capability”, such as creating biological or nuclear weapons or aiding cyber attacks. Developers would be compelled to hire third-party auditors to assess their safety practices and to provide protections for whistleblowers who report potential AI abuses.
Its opponents, including some Silicon Valley tech groups and investors, claim it would stifle innovation and force AI companies to leave the state.
Meta’s chief AI scientist Yann LeCun wrote on X in July that the bill would be harmful to AI research efforts, while OpenAI warned it would create an uncertain legal environment for AI companies and could cause entrepreneurs and engineers to leave California.
Andreessen Horowitz and Y Combinator have intensified a lobbying campaign against the proposals in recent weeks, and Nancy Pelosi, the former US House Speaker from California, also published a statement in opposition to the bill, dubbing it “well-intentioned but ill-informed”. Opponents also claimed the bill focused on hypothetical risks and added an “extreme” liability risk on founders.
The bill was amended to soften some of those requirements in recent weeks, including limiting the civil liabilities that it had originally placed on AI developers and narrowing the scope of those who would need to adhere to the rules. However, critics argue that the bill still burdens start-ups with onerous and sometimes unrealistic requirements.
Supporters say the fast-developing technology needs a clear regulatory framework. Geoffrey Hinton, the AI pioneer who formerly worked at Google, and who supports the bill, said in a statement: “Powerful AI systems bring incredible promise, but the risks are also very real and should be taken extremely seriously.
“SB 1047 takes a very sensible approach to balance those concerns. I am still passionate about the potential for AI to save lives through improvements in science and medicine, but it’s critical that we have legislation with real teeth to address the risks.”
California state senator Scott Wiener, a Democrat who introduced the bill earlier this year, said after it passed the state Senate: “Innovation and safety can go hand in hand — and California is leading the way. The legislature has taken the truly historic step of working proactively to ensure an exciting new technology protects the public interest as it advances.”