California amends AI safety bill after Anthropic suggestions

16 Aug 2024


The bill previously came under fire from Y Combinator and a host of AI start-ups based in California amid concerns it could stifle innovation in the state.

California lawmakers have accepted amendments proposed by Anthropic and others to an AI safety bill, in a bid to address concerns raised by the AI industry and the open-source community.

The bill, known as Senate Bill 1047, passed through California’s Appropriations Committee yesterday (15 August) with several changes – bringing it a step closer to becoming law.

SB 1047 aims to ensure the safe development of AI systems by placing more responsibility on AI developers. The bill would require developers of large “frontier” AI models to take precautions such as conducting safety testing, implementing safeguards to prevent misuse and monitoring models after deployment.

After this week’s amendments, the bill no longer allows California’s attorney general to sue AI companies for negligent safety practices before a catastrophic event has occurred – a change suggested by Anthropic. Instead, the attorney general can sue a company after a catastrophic event caused by its AI model has occurred, and can ask a company to cease an operation deemed dangerous.

“We accepted a number of very reasonable amendments proposed, and I believe we’ve addressed the core concerns expressed by Anthropic and many others in the industry,” Senator Scott Wiener told TechCrunch in a statement. “These amendments build on significant changes to SB 1047 I made previously to accommodate the unique needs of the open-source community, which is an important source of innovation.”

The bill has previously come under fire from investors such as Y Combinator and a host of AI start-ups based in California amid concerns the new rules could stifle innovation and “inadvertently threaten the vibrancy of California’s technology economy and undermine competition”.

In June, Y Combinator argued in a letter signed by more than 100 start-ups that the responsibility for the misuse of large language models should rest “with those who abuse these tools, not with the developers who create them”.

“Developers often cannot predict all possible applications of their models and holding them liable for unintended misuse could stifle innovation and discourage investment in AI research,” the letter read.

“Furthermore, creating a penalty of perjury would mean that AI software developers could go to jail simply for failing to anticipate misuse of their software – a standard of product liability no other product in the world suffers from.”

According to the letter, the AI safety bill needs a more balanced approach, one that protects society from potential harm while fostering an environment conducive to technological advancement “that is not more burdensome than other technologies have previously enjoyed”.

“Open-source AI, in particular, plays a critical role in democratising access to cutting-edge technology and enabling a diverse range of contributors to drive progress,” it read.

Similar, albeit more comprehensive, rules were approved in the EU earlier this year in the form of the AI Act, which came into force this month.


Vish Gain was a journalist with Silicon Republic
