OpenAI says California AI safety bill will slow innovation

22 Aug 2024

California State Capitol. Image: © Christopher Boswell/Stock.adobe.com

State senator Scott Wiener responded by saying that the AI start-up ‘doesn’t criticise a single provision of the bill’ and acknowledges it is implementable.

OpenAI says the AI safety bill being considered in California could slow the pace of innovation in the US state and drive away talent.

In a letter sent yesterday (21 August) to state senator Scott Wiener, who introduced the bill known as SB 1047, OpenAI chief strategy officer Jason Kwon argued that only a “clear federal framework” of AI regulations will help the US maintain its competitive advantage over rivals such as China.

“The AI revolution is only just beginning, and California’s unique status as the global leader in AI is fuelling the state’s economic dynamism. SB 1047 would threaten that growth, slow the pace of innovation, and lead California’s world-class engineers and entrepreneurs to leave the state in search of greater opportunities elsewhere,” Kwon wrote.

“Given those risks, we must protect America’s AI edge with a set of federal policies – rather than state ones – that can provide clarity and certainty for AI labs and developers while also preserving public safety.”

SB 1047 aims to ensure the safe development of AI systems by putting more responsibilities on AI developers. The bill would require developers of large “frontier” AI models to take precautions such as safety testing, implementing safeguards to prevent misuse and monitoring models after deployment.

Wiener responded to OpenAI by saying that the San Francisco start-up “doesn’t criticise a single provision of the bill” and acknowledges its core provisions as “reasonable and implementable”.

“Instead of criticising what the bill actually does, OpenAI argues this issue should be left to Congress. As I’ve stated repeatedly, I agree that ideally Congress would handle this. However, Congress has not done so, and we are sceptical Congress will do so,” Wiener said.

“Under OpenAI’s argument about Congress, California never would have passed its data privacy law, and given Congress’s lack of action, Californians would have no protection whatsoever for their data.”

‘More harmful than helpful’

OpenAI joins a string of US politicians and companies, including Nancy Pelosi and Anthropic, that have opposed the bill on the grounds that it does more harm than good.

Earlier this week, Pelosi, former speaker of the House of Representatives, called SB 1047 “well-intentioned but ill-informed” and said that while she wants California to lead in AI in a way that protects consumers, data and intellectual property, the bill is “more harmful than helpful in that pursuit”.

“California has the intellectual resources that understand the technology, respect the intellectual property, and prioritise academia and entrepreneurship,” Pelosi wrote.

“There are many proposals in the California legislature in addition to SB 1047. Reviewing them all enables a comprehensive understanding of the best path forward for our great state.”

The bill has previously come under fire from Y Combinator and a host of AI start-ups based in California because of concerns the new rules could stifle innovation and “inadvertently threaten the vibrancy of California’s technology economy and undermine competition”.

In June, Y Combinator argued in a letter signed by more than 100 start-ups that the responsibility for the misuse of large language models should rest “with those who abuse these tools, not with the developers who create them”.

The bill passed through California’s Appropriations Committee last week with several changes suggested by tech companies including Anthropic – bringing it a step closer to becoming law.

After last week’s amendments, the bill no longer allows California’s attorney general to sue AI companies for negligent safety practices before a catastrophic event has occurred, a change suggested by Anthropic. Instead, the attorney general can sue a company after a catastrophic event has occurred because of its AI model, and can also request that a company cease a particular operation if it is found to be dangerous.


Vish Gain was a journalist with Silicon Republic
